ISO 31
ISO 31 (Quantities and units, International Organization for Standardization, 1992) is a superseded international standard concerning physical quantities, units of measurement, their interrelationships and their presentation. It was revised and replaced by ISO/IEC 80000.
Parts
The standard comes in 14 parts:
ISO 31-0: General principles (replaced by ISO/IEC 80000-1:2009)
ISO 31-1: Space and time (replaced by ISO/IEC 80000-3:2007)
ISO 31-2: Periodic and related phenomena (replaced by ISO/IEC 80000-3:2007)
ISO 31-3: Mechanics (replaced by ISO/IEC 80000-4:2006)
ISO 31-4: Heat (replaced by ISO/IEC 80000-5)
ISO 31-5: Electricity and magnetism (replaced by ISO/IEC 80000-6)
ISO 31-6: Light and related electromagnetic radiations (replaced by ISO/IEC 80000-7)
ISO 31-7: Acoustics (replaced by ISO/IEC 80000-8:2007)
ISO 31-8: Physical chemistry and molecular physics (replaced by ISO/IEC 80000-9)
ISO 31-9: Atomic and nuclear physics (replaced by ISO/IEC 80000-10)
ISO 31-10: Nuclear reactions and ionizing radiations (replaced by ISO/IEC 80000-10)
ISO 31-11: Mathematical signs and symbols for use in the physical sciences and technology (replaced by ISO 80000-2:2009)
ISO 31-12: Characteristic numbers (replaced by ISO/IEC 80000-11)
ISO 31-13: Solid state physics (replaced by ISO/IEC 80000-12)
A second international standard on quantities and units was IEC 60027. The ISO 31 and IEC 60027 standards were revised by the two standardization organizations in collaboration to integrate both standards into a joint standard, ISO/IEC 80000 - Quantities and Units, in which the quantities and equations used with the SI are referred to as the International System of Quantities (ISQ). ISO/IEC 80000 supersedes both ISO 31 and part of IEC 60027.
Coined words
ISO 31-0 introduced several new words into the English language that are direct spelling-calques from the French. Some of these words have been used in scientific literature.
Related national standards
Canada: CAN/CSA-Z234-1-89 Canadian Metric Practice Guide (covers some aspects of ISO 31-0, but is not a comprehensive list of physical quantities comparable to ISO 31)
United States: There are several national SI guidance documents, such as NIST SP 811, NIST SP 330, NIST SP 814, IEEE/ASTM SI 10, SAE J916. These cover many aspects of the ISO 31-0 standard, but lack the comprehensive list of quantities and units defined in the remaining parts of ISO 31.
See also
SI – the international system of units
BIPM – publishes freely available information on SI units, which overlaps with some of the material covered in ISO 31-0
IUPAP – much of the material in ISO 31 comes originally from Document IUPAP-25 of the Commission for Symbols, Units and Nomenclature (SUN Commission) of the International Union of Pure and Applied Physics
IUPAC – some of the material in ISO 31 originates from the Interdivisional Committee on Terminology, Nomenclature and Symbols of the International Union of Pure and Applied Chemistry
Quantities, Units and Symbols in Physical Chemistry – this IUPAC "Green Book" covers many ISO 31 definitions
IEC 60027 Letter symbols to be used in electrical technology
ISO 1000 SI Units and Recommendations for the use of their multiples and of certain other units (bundled with ISO 31 as the ISO Standards Handbook – Quantities and units)
Notes
References
External links
ISO TC12 standards – Quantities, units, symbols, conversion factors
Measurement
Holonomic constraints
In classical mechanics, holonomic constraints are relations between the position variables (and possibly time) that can be expressed in the following form:
f(u_1, u_2, u_3, …, u_n, t) = 0
where u_1, u_2, u_3, …, u_n are generalized coordinates that describe the system (in unconstrained configuration space). For example, the motion of a particle constrained to lie on the surface of a sphere is subject to a holonomic constraint, but if the particle is able to fall off the sphere under the influence of gravity, the constraint becomes non-holonomic. For the first case, the holonomic constraint may be given by the equation
r² − a² = 0
where r is the distance from the centre of a sphere of radius a, whereas the second non-holonomic case may be given by
r² − a² ≥ 0
Velocity-dependent constraints (also called semi-holonomic constraints) such as
f(u_1, …, u_n, du_1/dt, …, du_n/dt, t) = 0
are not usually holonomic.
Holonomic system
In classical mechanics a system may be defined as holonomic if all constraints of the system are holonomic. For a constraint to be holonomic it must be expressible as a function:
f(u_1, u_2, u_3, …, u_n, t) = 0
i.e. a holonomic constraint depends only on the coordinates u_j and maybe time t. It does not depend on the velocities or any higher-order derivative with respect to t. A constraint that cannot be expressed in the form shown above is a nonholonomic constraint.
Introduction
As described above, a holonomic system is (simply speaking) a system in which one can deduce the state of the system by knowing only the change of positions of its components over time, without needing to know the velocities or the order in which the components moved relative to each other. In contrast, a nonholonomic system is often a system where the velocities of the components over time must be known to determine the change of state of the system, or a system where a moving part is not able to be bound to a constraint surface, real or imaginary. Examples of holonomic systems are gantry cranes, pendulums, and robotic arms. Examples of nonholonomic systems are Segways, unicycles, and automobiles.
Terminology
The configuration space lists the displacement of the components of the system, one for each degree of freedom. A system that can be described using a configuration space is called scleronomic.
The event space is identical to the configuration space except for the addition of a variable to represent the change in the system over time (if needed to describe the system). A system that must be described using an event space, instead of only a configuration space, is called rheonomic. Many systems can be described either scleronomically or rheonomically. For example, the total allowable motion of a pendulum can be described with a scleronomic constraint, but the motion over time of a pendulum must be described with a rheonomic constraint.
The state space is the configuration space, plus terms describing the velocity of each term in the configuration space.
The state-time space adds time t.
Examples
Gantry crane
As shown on the right, a gantry crane is an overhead crane that is able to move its hook in 3 axes as indicated by the arrows. Intuitively, we can deduce that the crane should be a holonomic system as, for a given movement of its components, it doesn't matter what order or velocity the components move: as long as the total displacement of each component from a given starting condition is the same, all parts and the system as a whole will end up in the same state. Mathematically we can prove this as such:
We can define the configuration space of the system as:
We can say that the deflection of each component of the crane from its "zero" position are , , and , for the blue, green, and orange components, respectively. The orientation and placement of the coordinate system does not matter in whether a system is holonomic, but in this example the components happen to move parallel to its axes. If the origin of the coordinate system is at the back-bottom-left of the crane, then we can write the position constraint equation as:
Where is the height of the crane. Optionally, we may simplify to the standard form where all constants are placed after the variables:
Because we have derived a constraint equation in holonomic form, we can see that this system must be holonomic.
Pendulum
As shown on the right, a simple pendulum is a system composed of a weight and a string. The string is attached at the top end to a pivot and at the bottom end to a weight. Being inextensible, the string’s length is a constant. This system is holonomic because it obeys the holonomic constraint
x² + y² − L² = 0
where (x, y) is the position of the weight and L is the length of the string.
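For a quick numerical illustration (a minimal sketch in Python; the string length and sampled angles are assumed values, not taken from the text), parameterizing the weight's position by the swing angle shows that the constraint above is satisfied for every configuration:

```python
import numpy as np

L = 1.0  # assumed string length
theta = np.linspace(0.0, 2.0 * np.pi, 100)  # swing angle measured from the vertical

# Position of the weight expressed through the single generalized coordinate theta
x = L * np.sin(theta)
y = -L * np.cos(theta)

# The holonomic constraint x^2 + y^2 - L^2 = 0 holds identically
assert np.allclose(x**2 + y**2 - L**2, 0.0)
```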
Rigid body
The particles of a rigid body obey the holonomic constraint
|r_i − r_j| − L_ij = 0
where r_i, r_j are respectively the positions of particles P_i and P_j, and L_ij is the distance between them. If a given system is holonomic, rigidly attaching additional parts to components of the system in question cannot make it non-holonomic, assuming that the degrees of freedom are not reduced (in other words, assuming the configuration space is unchanged).
Pfaffian form
Consider the following differential form of a constraint:
∑_j A_ij du_j + A_i dt = 0
where A_ij and A_i are the coefficients of the differentials du_j and dt for the ith constraint equation. This form is called the Pfaffian form or the differential form.
If the differential form is integrable, i.e., if there is a function f_i(u_1, u_2, …, u_n, t) = 0 satisfying the equality
df_i = ∑_j A_ij du_j + A_i dt = 0
then this constraint is a holonomic constraint; otherwise, it is nonholonomic. Therefore, all holonomic and some nonholonomic constraints can be expressed using the differential form. Examples of nonholonomic constraints that cannot be expressed this way are those that are dependent on generalized velocities. With a constraint equation in Pfaffian form, whether the constraint is holonomic or nonholonomic depends on whether the Pfaffian form is integrable. See Universal test for holonomic constraints below for a description of a test to verify the integrability (or lack of) of a Pfaffian form constraint.
Universal test for holonomic constraints
When the constraint equation of a system is written in Pfaffian constraint form, there exists a mathematical test to determine whether the system is holonomic.
For a constraint equation, or a set of constraint equations, written in Pfaffian form (note that a variable representing time can be included as one of the variables, as above):
A_x dx + A_y dy + A_z dz = 0
we can use the test equation:
A_x (∂A_z/∂y − ∂A_y/∂z) + A_y (∂A_x/∂z − ∂A_z/∂x) + A_z (∂A_y/∂x − ∂A_x/∂y) = 0
where the test is applied once for every combination of three variables, giving n!/(3!(n − 3)!) test equations per constraint equation, for all sets of constraint equations.
In other words, a system of three variables would have to be tested once, with one test equation whose terms are the three variables of the constraint equation (in any order), but to test a system of four variables the test would have to be performed up to four times, with four different test equations whose terms are four different combinations of three of the variables (each in any order). For a system of five variables, ten tests would have to be performed on a holonomic system to verify that fact, and for a system of five variables with three sets of constraint equations, thirty tests (assuming a simplification like a change of variable could not be performed to reduce that number). For this reason, it is advisable when using this method on systems of more than three variables to use common sense as to whether the system in question is holonomic, and only pursue testing if the system likely is not. Additionally, it is likewise best to use mathematical intuition to try to predict which test would fail first and begin with that one, skipping tests at first that seem likely to succeed.
If every test equation is true for the entire set of combinations for all constraint equations, the system is holonomic. If it is untrue for even one test combination, the system is nonholonomic.
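The test can be automated with a computer algebra system. The following sketch is illustrative rather than code from the article; it evaluates the three-term test expression above for a Pfaffian constraint A_x dx + A_y dy + A_z dz = 0, and the two example constraints are assumed:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def holonomy_test(Ax, Ay, Az):
    """Return the three-variable test expression; zero means the
    Pfaffian constraint Ax*dx + Ay*dy + Az*dz = 0 passes the test."""
    expr = (Ax * (sp.diff(Az, y) - sp.diff(Ay, z))
            + Ay * (sp.diff(Ax, z) - sp.diff(Az, x))
            + Az * (sp.diff(Ay, x) - sp.diff(Ax, y)))
    return sp.simplify(expr)

# Assumed example: the differential of f = x*y*z gives an integrable (holonomic) form
print(holonomy_test(y * z, x * z, x * y))                        # 0 -> holonomic
# Assumed example: a classic non-integrable form, dz - y*dx = 0
print(holonomy_test(-y, sp.Integer(0), sp.Integer(1)))           # 1 -> nonholonomic
```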
Example
Consider this dynamical system described by a constraint equation in Pfaffian form.
The configuration space, by inspection, is . Because there are only three terms in the configuration space, there will be only one test equation needed.
We can organize the terms of the constraint equation as such, in preparation for substitution:
Substituting the terms, our test equation becomes:
After calculating all partial derivatives, we get:
Simplifying, we find that:
We see that our test equation is true, and thus, the system must be holonomic.
We have finished our test, but now knowing that the system is holonomic, we may wish to find the holonomic constraint equation. We can attempt to find it by integrating each term of the Pfaffian form and attempting to unify them into one equation, as such:
It's easy to see that we can combine the results of our integrations to find the holonomic constraint equation:
where C is the constant of integration.
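Since the article's own example equation did not survive here, the sketch below applies the same integrate-and-unify procedure to an assumed illustrative Pfaffian constraint, y z dx + x z dy + x y dz = 0, for which the procedure works directly:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Assumed illustrative Pfaffian constraint: y*z dx + x*z dy + x*y dz = 0
Ax, Ay, Az = y * z, x * z, x * y

# Integrate each coefficient with respect to its own variable
fx = sp.integrate(Ax, x)   # x*y*z
fy = sp.integrate(Ay, y)   # x*y*z
fz = sp.integrate(Az, z)   # x*y*z

# The three results unify into one function whose differential reproduces the form
f = fx
assert sp.diff(f, x) == Ax and sp.diff(f, y) == Ay and sp.diff(f, z) == Az
print(sp.Eq(f, sp.Symbol('C')))   # x*y*z = C, the holonomic constraint equation
```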
Constraints of constant coefficients
For a given Pfaffian constraint where every coefficient of every differential is a constant, in other words, a constraint in the form:
the constraint must be holonomic.
We may prove this as follows: consider a system of constraints in Pfaffian form where every coefficient of every differential is a constant, as described directly above. To test whether this system of constraints is holonomic, we use the universal test. We can see that in the test equation there are three terms that must sum to zero. Therefore, if each of those three terms in every possible test equation is zero, then all test equations are true and thus the system is holonomic. Each term of each test equation is in the form:
where:
, , and are some combination (with total combinations) of and for a given constraint .
, , and are the corresponding combination of and .
Additionally, there are sets of test equations.
We can see that, by definition, all of the coefficients are constants. It is well known in calculus that any derivative (full or partial) of any constant is 0. Hence, each partial derivative reduces to 0, each term is zero, the left side of each test equation is zero, each test equation is true, and the system is holonomic.
Configuration spaces of two or one variable
Any system that can be described by a Pfaffian constraint and has a configuration space or state space of only two variables or one variable is holonomic.
We may prove this as such: consider a dynamical system with a configuration space or state space described as:
if the system is described by a state space, we simply say that equals our time variable . This system will be described in Pfaffian form:
with sets of constraints. The system will be tested by using the universal test. However, the universal test requires three variables in the configuration or state space. To accommodate this, we simply add a dummy variable to the configuration or state space to form:
Because the dummy variable is by definition not a measure of anything in the system, its coefficient in the Pfaffian form must be 0. Thus we revise our Pfaffian form:
Now we may use the test as such, for a given constraint if there are a set of constraints:
Upon realizing that every partial derivative with respect to the dummy variable is 0, because the dummy variable cannot appear in the coefficients used to describe the system, we see that the test equation must be true for all sets of constraint equations and thus the system must be holonomic. A similar proof can be conducted with one actual variable in the configuration or state space and two dummy variables to confirm that one-degree-of-freedom systems describable in Pfaffian form are also always holonomic.
In conclusion, we realize that even though it is possible to model nonholonomic systems in Pfaffian form, any system modellable in Pfaffian form with two or fewer degrees of freedom (the number of degrees of freedom is equal to the number of terms in the configuration space) must be holonomic.
Important note: realize that the test equation is satisfied automatically, because the dummy variable, and hence the dummy differential included in the test, will differentiate anything that is a function of the actual configuration or state space variables to 0. Having a system with a configuration or state space of three actual variables, and a set of constraints where one or more constraints are in Pfaffian form with one differential carrying a coefficient of 0, does not guarantee the system is holonomic, as even though one differential has a coefficient of 0, there are still three degrees of freedom described in the configuration or state space.
Transformation to independent generalized coordinates
The holonomic constraint equations can help us easily remove some of the dependent variables in our system. For example, if we want to remove a dependent variable x_d, which is a parameter in the constraint equation f(x_1, x_2, …, x_N, t) = 0, we can rearrange the equation into the following form, assuming it can be done,
x_d = g(x_1, …, x_{d−1}, x_{d+1}, …, x_N, t)
and replace x_d in every equation of the system using the above function. This can always be done for general physical systems, provided that the constraint function is continuously differentiable: by the implicit function theorem, the solution for x_d is then guaranteed in some open set. Thus, it is possible to remove all occurrences of the dependent variable x_d.
Suppose that a physical system has N degrees of freedom. Now, h holonomic constraints are imposed on the system. Then, the number of degrees of freedom is reduced to m = N − h. We can use m independent generalized coordinates q_j to completely describe the motion of the system. The transformation equation can be expressed as follows:
x_i = x_i(q_1, q_2, …, q_m, t),  i = 1, 2, …, N
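As a worked instance of such a transformation (using the standard pendulum parameterization; the symbols follow the pendulum example above rather than the lost original equations), the two Cartesian coordinates of the simple pendulum reduce to one independent generalized coordinate:

```latex
\begin{aligned}
\text{constraint: } & x^{2} + y^{2} - L^{2} = 0 && (N = 2,\ h = 1,\ m = N - h = 1)\\
\text{transformation: } & x = L\sin\theta, \qquad y = -L\cos\theta && \text{with the single generalized coordinate } \theta .
\end{aligned}
```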
Classification of physical systems
In order to study classical physics rigorously and methodically, we need to classify systems. Based on previous discussion, we can classify physical systems into holonomic systems and non-holonomic systems. One of the conditions for the applicability of many theorems and equations is that the system must be a holonomic system. For example, if a physical system is a holonomic system and a monogenic system, then Hamilton's principle is the necessary and sufficient condition for the correctness of Lagrange's equation.
See also
Nonholonomic system
Goryachev–Chaplygin top
Pfaffian constraint
Udwadia–Kalaba equation
References
Classical mechanics
Field propulsion
Field propulsion is the concept of spacecraft propulsion where no propellant is necessary but instead the momentum of the spacecraft is changed by an interaction of the spacecraft with external force fields, such as gravitational and magnetic fields from stars and planets. Proposed drives that use field propulsion are often called reactionless or propellantless drives.
Types
Practical methods
Although not presently in wide use for space, there exist proven terrestrial examples of "field propulsion" in which electromagnetic fields act upon a conducting medium such as seawater or plasma for propulsion; this is known as magnetohydrodynamics (MHD). MHD is similar in operation to electric motors; however, rather than using moving parts or metal conductors, fluid or plasma conductors are employed. The EMS-1 and, more recently, the Yamato 1 are examples of such electromagnetic field propulsion systems, first described in 1994. There is potential to apply MHD to the space environment, such as in experiments like NASA's electrodynamic tether, Lorentz Actuated Orbits, the wingless electromagnetic air vehicle, and the magnetoplasmadynamic thruster (which does use propellant).
Electrohydrodynamics is another method, whereby electrically charged fluids are used for propulsion and boundary layer control, such as in ion propulsion.
Other practical methods which could be loosely considered as field propulsion include: The gravity assist trajectory, which uses planetary gravity fields and orbital momentum; Solar sails and magnetic sails use respectively the radiation pressure and solar wind for spacecraft thrust; aerobraking uses the atmosphere of a planet to change relative velocity of a spacecraft. The last two actually involve the exchange of momentum with physical particles and are not usually expressed as an interaction with fields, but they are sometimes included as examples of field propulsion since no spacecraft propellant is required. An example is the Magsail magnetic sail design.
Speculative methods
Other concepts that have been proposed are speculative, using "frontier physics" and concepts from modern physics. So far none of these methods have been unambiguously demonstrated, much less proven practical.
The Woodward effect is based on a controversial concept of inertia and certain solutions to the equations for General Relativity. Experiments attempting to conclusively demonstrate this effect have been conducted since the 1990s.
In contrast, examples of proposals for field propulsion that rely on physics outside the present paradigms are various schemes for faster-than-light travel, warp drives and antigravity, and often amount to little more than catchy descriptive phrases, with no known physical basis. Until it is shown that the conservation of energy and momentum breaks down under certain conditions (or scales), any such schemes worthy of discussion must rely on energy and momentum transfer to the spacecraft from some external source such as a local force field, which in turn must obtain it from still other momentum and/or energy sources in the cosmos (in order to satisfy conservation of both energy and momentum).
Several people have speculated that the Casimir effect could be used to create a propellantless drive, often described as the "Casimir Sail", or a "Quantum Sail".
Field propulsion based on physical structure of space
This concept is based on the general relativity theory and the quantum field theory from which the idea that space has a physical structure can be proposed. The macroscopic structure is described by the general relativity theory and the microscopic structure by the quantum field theory.
The idea is to deform the space around the spacecraft. By deforming the space it would be possible to create a region with higher pressure behind the spacecraft than in front of it. Due to the pressure gradient, a force would be exerted on the spacecraft, which in turn creates thrust for propulsion. Due to the purely theoretical nature of this propulsion concept, it is hard to determine the amount of thrust and the maximum velocity that could be achieved. Currently there are two different concepts for such a field propulsion system: one that is based purely on the general relativity theory and one based on the quantum field theory.
In the general relativistic field propulsion system, space is considered to be an elastic field similar to rubber, which means that space itself can be treated as an infinite elastic body. If the space-time curves, a normal inwards surface stress is generated which serves as a pressure field. By creating a great number of those curved surfaces behind the spacecraft it is possible to achieve a unidirectional surface force which can be used for the acceleration of the spacecraft.
For the quantum field theoretical propulsion system it is assumed, as stated by quantum field theory and quantum electrodynamics, that the quantum vacuum consists of a zero-radiating electromagnetic field in a non-radiating mode and at a zero-point energy state, the lowest possible energy state. It is also theorized that matter is composed of elementary primary charged entities, partons, which are bound together as elementary oscillators. By applying an electromagnetic zero-point field, a Lorentz force is applied on the partons. Using this on a dielectric material could affect the inertia of the mass and thereby create an acceleration of the material without creating stress or strain inside the material.
Conservation Laws
Conservation of momentum is a fundamental requirement of propulsion systems, because in experiments momentum is always conserved. This conservation law is implicit in the published work of Newton and Galileo, but arises on a fundamental level from the spatial translation symmetry of the laws of physics, as given by Noether's theorem. In each of the propulsion technologies, some form of energy exchange is required, with momentum directed backward at the speed of light c or some lesser velocity v to balance the forward change of momentum. In the absence of interaction with an external field, the power P that is required to create a thrust force F is given by P = Fv/2 when mass is ejected at velocity v, or P = Fc if mass-free energy (such as light) is ejected.
For a photon rocket the efficiency is too small to be competitive. Other technologies may have better efficiency if the ejection velocity is less than speed of light, or a local field can interact with another large scale field of the same type residing in space, which is the intent of field effect propulsion.
Advantages
The main advantage of a field propulsion system is that no propellant is needed, only an energy source. This means that no propellant has to be stored and transported with the spacecraft, which makes it attractive for long-term interplanetary or even interstellar crewed missions. With current technology a large amount of fuel meant for the way back has to be brought to the destination, which increases the overall mass of the spacecraft significantly. The increased mass of fuel thus requires more force to accelerate it, requiring even more fuel, which is the primary drawback of current rocket technology. Approximately 83% of a hydrogen–oxygen powered rocket that can achieve orbit is fuel.
Limits
The idea that with field propulsion no fuel tank would be required is technically inaccurate. The energy required to reach the high speeds involved becomes non-negligible for interstellar travel. For example, a 1-tonne spaceship traveling at 1/10 of the speed of light carries a kinetic energy of 4.5 × 10^17 joules, equal to 5 kg according to the mass–energy equivalence. This means that for accelerating to such a speed, no matter how this is achieved, the spaceship must have converted at least 5 kg of mass/energy into momentum, imagining 100% efficiency. Although such mass has not been "expelled", it has still been "disposed of".
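The figures quoted above can be checked with a few lines of arithmetic; this sketch uses the classical kinetic-energy formula, which is what the 4.5 × 10^17 J figure corresponds to:

```python
c = 299_792_458.0          # speed of light, m/s
m = 1000.0                 # 1-tonne spacecraft, kg
v = 0.1 * c                # cruise speed, m/s

kinetic_energy = 0.5 * m * v**2          # ~4.5e17 J (classical approximation)
mass_equivalent = kinetic_energy / c**2  # ~5 kg via E = m c^2

print(f"{kinetic_energy:.2e} J ~= {mass_equivalent:.1f} kg of mass-energy")
```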
See also
References
External links
Examples of current field propulsion systems for ships.
Example of a possible field propulsion system based on existing physics and links to papers on the topic. broken link
Y. Minami., An Introduction to Concepts of Field Propulsion, JBIS,56,350-359(2003).
Minami Y., Musha T., Field Propulsion Systems for Space Travel, the Seventh IAA Symposium on Realistic Near-Term Advanced Scientific Space Missions, 11–13 July 2011, Aosta, Italy
Ed.T.Musha, Y.Minami, Field Propulsion System for Space Travel: Physics of Non-Conventional Propulsion Methods for Interstellar Travel, 2011 .
Field Resonance Propulsion Concept - NASA
ASPS
Biasing Nature's Omni-Vector Tensors via Dense, Co-aligned, Asymmetric Angular-Acceleration of Energy
Spacecraft propulsion
Science fiction themes
Hypothetical technology
Mohr's circle
Mohr's circle is a two-dimensional graphical representation of the transformation law for the Cauchy stress tensor.
Mohr's circle is often used in calculations relating to mechanical engineering for materials' strength, geotechnical engineering for strength of soils, and structural engineering for strength of built structures. It is also used for calculating stresses in many planes by reducing them to vertical and horizontal components. These are called principal planes in which principal stresses are calculated; Mohr's circle can also be used to find the principal planes and the principal stresses in a graphical representation, and is one of the easiest ways to do so.
After performing a stress analysis on a material body assumed as a continuum, the components of the Cauchy stress tensor at a particular material point are known with respect to a coordinate system. The Mohr circle is then used to determine graphically the stress components acting on a rotated coordinate system, i.e., acting on a differently oriented plane passing through that point.
The abscissa and ordinate (σ_n, τ_n) of each point on the circle are the magnitudes of the normal stress and shear stress components, respectively, acting on the rotated coordinate system. In other words, the circle is the locus of points that represent the state of stress on individual planes at all their orientations, where the axes represent the principal axes of the stress element.
19th-century German engineer Karl Culmann was the first to conceive a graphical representation for stresses while considering longitudinal and vertical stresses in horizontal beams during bending. His work inspired fellow German engineer Christian Otto Mohr (the circle's namesake), who extended it to both two- and three-dimensional stresses and developed a failure criterion based on the stress circle.
Alternative graphical methods for the representation of the stress state at a point include the Lamé's stress ellipsoid and Cauchy's stress quadric.
The Mohr circle can be applied to any symmetric 2x2 tensor matrix, including the strain and moment of inertia tensors.
Motivation
Internal forces are produced between the particles of a deformable object, assumed as a continuum, as a reaction to applied external forces, i.e., either surface forces or body forces. This reaction follows from Euler's laws of motion for a continuum, which are equivalent to Newton's laws of motion for a particle. A measure of the intensity of these internal forces is called stress. Because the object is assumed as a continuum, these internal forces are distributed continuously within the volume of the object.
In engineering, e.g., structural, mechanical, or geotechnical, the stress distribution within an object, for instance stresses in a rock mass around a tunnel, airplane wings, or building columns, is determined through a stress analysis. Calculating the stress distribution implies the determination of stresses at every point (material particle) in the object. According to Cauchy, the stress at any point in an object (Figure 2), assumed as a continuum, is completely defined by the nine stress components σ_ij of a second order tensor of type (2,0) known as the Cauchy stress tensor, σ:
σ = [σ_xx σ_xy σ_xz; σ_yx σ_yy σ_yz; σ_zx σ_zy σ_zz]
After the stress distribution within the object has been determined with respect to a coordinate system , it may be necessary to calculate the components of the stress tensor at a particular material point with respect to a rotated coordinate system , i.e., the stresses acting on a plane with a different orientation passing through that point of interest —forming an angle with the coordinate system (Figure 3). For example, it is of interest to find the maximum normal stress and maximum shear stress, as well as the orientation of the planes where they act upon. To achieve this, it is necessary to perform a tensor transformation under a rotation of the coordinate system. From the definition of tensor, the Cauchy stress tensor obeys the tensor transformation law. A graphical representation of this transformation law for the Cauchy stress tensor is the Mohr circle for stress.
Mohr's circle for two-dimensional state of stress
In two dimensions, the stress tensor at a given material point P with respect to any two perpendicular directions is completely defined by only three stress components. For the particular coordinate system (x, y) these stress components are: the normal stresses σ_x and σ_y, and the shear stress τ_xy. From the balance of angular momentum, the symmetry of the Cauchy stress tensor can be demonstrated. This symmetry implies that τ_xy = τ_yx. Thus, the Cauchy stress tensor can be written as:
σ = [σ_x τ_xy; τ_xy σ_y]
The objective is to use the Mohr circle to find the stress components σ_n and τ_n on a rotated coordinate system (x', y'), i.e., on a differently oriented plane passing through P and perpendicular to the x-y plane (Figure 4). The rotated coordinate system makes an angle θ with the original coordinate system (x, y).
Equation of the Mohr circle
To derive the equation of the Mohr circle for the two-dimensional cases of plane stress and plane strain, first consider a two-dimensional infinitesimal material element around a material point P (Figure 4), with a unit area in the direction parallel to the x-y plane, i.e., perpendicular to the page or screen.
From equilibrium of forces on the infinitesimal element, the magnitudes of the normal stress σ_n and the shear stress τ_n are given by:
σ_n = ½(σ_x + σ_y) + ½(σ_x − σ_y) cos 2θ + τ_xy sin 2θ
τ_n = −½(σ_x − σ_y) sin 2θ + τ_xy cos 2θ
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of Mohr's circle parametric equations - Equilibrium of forces
|-
|From equilibrium of forces in the direction of (-axis) (Figure 4), and knowing that the area of the plane where acts is , we have:
However, knowing that
we obtain
Now, from equilibrium of forces in the direction of (-axis) (Figure 4), and knowing that the area of the plane where acts is , we have:
However, knowing that
we obtain
|}
Both equations can also be obtained by applying the tensor transformation law on the known Cauchy stress tensor, which is equivalent to performing the static equilibrium of forces in the direction of σ_n and τ_n.
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of Mohr's circle parametric equations - Tensor transformation
|-
|The stress tensor transformation law can be stated as
Expanding the right hand side, and knowing that and , we have:
However, knowing that
we obtain
However, knowing that
we obtain
It is not necessary at this moment to calculate the stress component acting on the plane perpendicular to the plane of action of as it is not required for deriving the equation for the Mohr circle.
|}
These two equations are the parametric equations of the Mohr circle. In these equations, 2θ is the parameter, and σ_n and τ_n are the coordinates. This means that by choosing a coordinate system with abscissa σ_n and ordinate τ_n, giving values to the parameter θ will place the points obtained lying on a circle.
Eliminating the parameter 2θ from these parametric equations will yield the non-parametric equation of the Mohr circle. This can be achieved by rearranging the equations for σ_n and τ_n, first transposing the first term in the first equation and squaring both sides of each of the equations, then adding them. Thus we have
[σ_n − ½(σ_x + σ_y)]² + τ_n² = [½(σ_x − σ_y)]² + τ_xy²
where
σ_avg = ½(σ_x + σ_y) and R = √([½(σ_x − σ_y)]² + τ_xy²)
This is the equation of a circle (the Mohr circle) of the form
(σ_n − σ_avg)² + τ_n² = R²
with radius R centered at a point with coordinates (σ_avg, 0) in the (σ_n, τ_n) coordinate system.
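A short numerical check (a sketch with assumed stress values in arbitrary consistent units) confirms that the parametric equations trace exactly this circle:

```python
import numpy as np

# Assumed plane-stress components (illustrative values)
sx, sy, txy = 50.0, -10.0, 40.0

s_avg = 0.5 * (sx + sy)                 # centre abscissa
R = np.hypot(0.5 * (sx - sy), txy)      # circle radius

theta = np.linspace(0.0, np.pi, 181)    # plane orientation angle
s_n = s_avg + 0.5 * (sx - sy) * np.cos(2 * theta) + txy * np.sin(2 * theta)
t_n = -0.5 * (sx - sy) * np.sin(2 * theta) + txy * np.cos(2 * theta)

# Every (s_n, t_n) pair lies on the Mohr circle (s_n - s_avg)^2 + t_n^2 = R^2
assert np.allclose((s_n - s_avg) ** 2 + t_n ** 2, R ** 2)
print(f"centre = ({s_avg}, 0), radius = {R}")
```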
Sign conventions
There are two separate sets of sign conventions that need to be considered when using the Mohr circle: one sign convention for stress components in the "physical space", and another for stress components in the "Mohr-circle space". In addition, within each of the two sets of sign conventions, the engineering mechanics (structural engineering and mechanical engineering) literature follows a different sign convention from the geomechanics literature. There is no standard sign convention, and the choice of a particular sign convention is influenced by convenience for calculation and interpretation for the particular problem at hand. A more detailed explanation of these sign conventions is presented below.
The previous derivation for the equation of the Mohr Circle using Figure 4 follows the engineering mechanics sign convention. The engineering mechanics sign convention will be used for this article.
Physical-space sign convention
From the convention of the Cauchy stress tensor (Figure 3 and Figure 4), the first subscript in the stress components denotes the face on which the stress component acts, and the second subscript indicates the direction of the stress component. Thus τ_xy is the shear stress acting on the face with normal vector in the positive direction of the x-axis, and acting in the positive direction of the y-axis.
In the physical-space sign convention, positive normal stresses are outward to the plane of action (tension), and negative normal stresses are inward to the plane of action (compression) (Figure 5).
In the physical-space sign convention, positive shear stresses act on positive faces of the material element in the positive direction of an axis. Also, positive shear stresses act on negative faces of the material element in the negative direction of an axis. A positive face has its normal vector in the positive direction of an axis, and a negative face has its normal vector in the negative direction of an axis. For example, the shear stresses τ_xy and τ_yx are positive because they act on positive faces, and they act as well in the positive direction of the y-axis and the x-axis, respectively (Figure 3). Similarly, the respective opposite shear stresses τ_xy and τ_yx acting on the negative faces have a negative sign because they act in the negative direction of the y-axis and x-axis, respectively.
Mohr-circle-space sign convention
In the Mohr-circle-space sign convention, normal stresses have the same sign as normal stresses in the physical-space sign convention: positive normal stresses act outward to the plane of action, and negative normal stresses act inward to the plane of action.
Shear stresses, however, have a different convention in the Mohr-circle space compared to the convention in the physical space. In the Mohr-circle-space sign convention, positive shear stresses rotate the material element in the counterclockwise direction, and negative shear stresses rotate the material in the clockwise direction. This way, the shear stress component τ_xy is positive in the Mohr-circle space, and the shear stress component τ_yx is negative in the Mohr-circle space.
Two options exist for drawing the Mohr-circle space, which produce a mathematically correct Mohr circle:
Positive shear stresses are plotted upward (Figure 5, sign convention #1)
Positive shear stresses are plotted downward, i.e., the τ_n-axis is inverted (Figure 5, sign convention #2).
Plotting positive shear stresses upward makes the angle 2θ on the Mohr circle have a positive rotation clockwise, which is opposite to the physical space convention. That is why some authors prefer plotting positive shear stresses downward, which makes the angle 2θ on the Mohr circle have a positive rotation counterclockwise, similar to the physical space convention for shear stresses.
To overcome the "issue" of having the shear stress axis downward in the Mohr-circle space, there is an alternative sign convention where positive shear stresses are assumed to rotate the material element in the clockwise direction and negative shear stresses are assumed to rotate the material element in the counterclockwise direction (Figure 5, option 3). This way, positive shear stresses are plotted upward in the Mohr-circle space and the angle 2θ has a positive rotation counterclockwise in the Mohr-circle space. This alternative sign convention produces a circle that is identical to sign convention #2 in Figure 5, because a positive shear stress is also a counterclockwise shear stress, and both are plotted downward. Also, a negative shear stress is a clockwise shear stress, and both are plotted upward.
This article follows the engineering mechanics sign convention for the physical space and the alternative sign convention for the Mohr-circle space (sign convention #3 in Figure 5).
Drawing Mohr's circle
Assuming we know the stress components σ_x, σ_y, and τ_xy at a point P in the object under study, as shown in Figure 4, the following are the steps to construct the Mohr circle for the state of stresses at P:
Draw the Cartesian coordinate system (σ_n, τ_n) with a horizontal σ_n-axis and a vertical τ_n-axis.
Plot two points A and B in the (σ_n, τ_n) space corresponding to the known stress components on the two perpendicular planes A and B, respectively (Figure 4 and 6), following the chosen sign convention.
Draw the diameter of the circle by joining points A and B with a straight line AB.
Draw the Mohr circle. The centre O of the circle is the midpoint of the diameter line AB, which corresponds to the intersection of this line with the σ_n axis.
Finding principal normal stresses
The magnitudes of the principal stresses are the abscissas of the points C and E (Figure 6) where the circle intersects the σ_n-axis. The magnitude of the major principal stress σ_1 is always the greatest absolute value of the abscissa of any of these two points. Likewise, the magnitude of the minor principal stress σ_2 is always the lowest absolute value of the abscissa of these two points. As expected, the ordinates of these two points are zero, corresponding to the magnitude of the shear stress components on the principal planes. Alternatively, the values of the principal stresses can be found by
σ_1 = σ_max = σ_avg + R
σ_2 = σ_min = σ_avg − R
where the magnitude of the average normal stress σ_avg is the abscissa of the centre O, given by
σ_avg = ½(σ_x + σ_y)
and the length of the radius R of the circle (based on the equation of a circle passing through two points) is given by
R = √([½(σ_x − σ_y)]² + τ_xy²)
Finding maximum and minimum shear stresses
The maximum and minimum shear stresses correspond to the ordinates of the highest and lowest points on the circle, respectively. These points are located at the intersection of the circle with the vertical line passing through the center of the circle, O. Thus, the magnitudes of the maximum and minimum shear stresses are equal to the value of the circle's radius:
τ_max, τ_min = ±R
Finding stress components on an arbitrary plane
As mentioned before, after the two-dimensional stress analysis has been performed we know the stress components σ_x, σ_y, and τ_xy at a material point P. These stress components act on two perpendicular planes A and B passing through P, as shown in Figure 5 and 6. The Mohr circle is used to find the stress components σ_n and τ_n, i.e., coordinates of any point on the circle, acting on any other plane passing through P making an angle θ with plane B. For this, two approaches can be used: the double angle, and the Pole or origin of planes.
Double angle
As shown in Figure 6, to determine the stress components (σ_n, τ_n) acting on a plane at an angle θ counterclockwise to the plane on which σ_x acts, we travel an angle 2θ in the same counterclockwise direction around the circle from the known stress point to the point representing the new plane, i.e., an angle 2θ between the corresponding radii of the Mohr circle.
The double angle approach relies on the fact that the angle θ between the normal vectors to any two physical planes passing through P (Figure 4) is half the angle between the two lines joining their corresponding stress points on the Mohr circle and the centre of the circle.
This double angle relation comes from the fact that the parametric equations for the Mohr circle are a function of 2θ. It can also be seen that the planes A and B in the material element around P of Figure 5 are separated by an angle θ = 90°, which in the Mohr circle is represented by a 180° angle (double the angle).
Pole or origin of planes
The second approach involves the determination of a point on the Mohr circle called the pole or the origin of planes. Any straight line drawn from the pole will intersect the Mohr circle at a point that represents the state of stress on a plane inclined at the same orientation (parallel) in space as that line. Therefore, knowing the stress components and on any particular plane, one can draw a line parallel to that plane through the particular coordinates and on the Mohr circle and find the pole as the intersection of such line with the Mohr circle. As an example, let's assume we have a state of stress with stress components , , and , as shown on Figure 7. First, we can draw a line from point parallel to the plane of action of , or, if we choose otherwise, a line from point parallel to the plane of action of . The intersection of any of these two lines with the Mohr circle is the pole. Once the pole has been determined, to find the state of stress on a plane making an angle with the vertical, or in other words a plane having its normal vector forming an angle with the horizontal plane, then we can draw a line from the pole parallel to that plane (See Figure 7). The normal and shear stresses on that plane are then the coordinates of the point of intersection between the line and the Mohr circle.
Finding the orientation of the principal planes
The orientation of the planes where the maximum and minimum principal stresses act, also known as principal planes, can be determined by measuring in the Mohr circle the angles ∠BOC and ∠BOE, respectively, and taking half of each of those angles. Thus, the angle ∠BOC between OB and OC is double the angle which the major principal plane makes with plane B.
The principal angles can also be found from the following equation
tan 2θ_p = 2τ_xy / (σ_x − σ_y)
This equation defines two values for θ_p which are 90° apart (Figure). This equation can be derived directly from the geometry of the circle, or by making the parametric equation of the circle for τ_n equal to zero (the shear stress in the principal planes is always zero).
Example
Assume a material element under a state of stress as shown in Figure 8 and Figure 9, with the plane of one of its sides oriented 10° with respect to the horizontal plane.
Using the Mohr circle, find:
The principal stresses and the orientation of their planes of action.
The maximum shear stresses and orientation of their planes of action.
The stress components on a horizontal plane.
Check the answers using the stress transformation formulas or the stress transformation law.
Solution:
Following the engineering mechanics sign convention for the physical space (Figure 5), the stress components for the material element in this example are:
.
Following the steps for drawing the Mohr circle for this particular state of stress, we first draw a Cartesian coordinate system with the -axis upward.
We then plot two points A(50,40) and B(-10,-40), representing the state of stress at plane A and B as shown in both Figure 8 and Figure 9. These points follow the engineering mechanics sign convention for the Mohr-circle space (Figure 5), which assumes positive normal stresses outward from the material element, and positive shear stresses on each plane rotating the material element clockwise. This way, the shear stress acting on plane B is negative and the shear stress acting on plane A is positive.
The diameter of the circle is the line joining point A and B. The centre of the circle is the intersection of this line with the σ_n-axis. Knowing both the location of the centre and the length of the diameter, we are able to plot the Mohr circle for this particular state of stress.
The abscissas of both points E and C (Figure 8 and Figure 9), where the circle intersects the σ_n-axis, are the magnitudes of the minimum and maximum normal stresses, respectively; the ordinates of both points E and C are the magnitudes of the shear stresses acting on both the minor and major principal planes, respectively, which is zero for principal planes.
Even though the idea for using the Mohr circle is to graphically find different stress components by actually measuring the coordinates for different points on the circle, it is more convenient to confirm the results analytically. Thus, the radius and the abscissa of the centre of the circle are
R = √([½(50 − (−10))]² + 40²) = 50
σ_avg = ½(50 + (−10)) = 20
and the principal stresses are
σ_1 = σ_avg + R = 70
σ_2 = σ_avg − R = −30
The coordinates for both points H and G (Figure 8 and Figure 9) are the magnitudes of the minimum and maximum shear stresses, respectively; the abscissas for both points H and G are the magnitudes for the normal stresses acting on the same planes where the minimum and maximum shear stresses act, respectively.
The magnitudes of the minimum and maximum shear stresses can be found analytically by
τ_max, τ_min = ±R = ±50
and the normal stresses acting on the same planes where the minimum and maximum shear stresses act are equal to
σ_avg = 20
We can choose to either use the double angle approach (Figure 8) or the Pole approach (Figure 9) to find the orientation of the principal normal stresses and principal shear stresses.
Using the double angle approach we measure the angles ∠BOC and ∠BOE in the Mohr circle (Figure 8) to find double the angle the major principal stress and the minor principal stress make with plane B in the physical space. To obtain a more accurate value for these angles, instead of manually measuring the angles, we can use the analytical expression
2θ_p = arctan[2τ_xy / (σ_x − σ_y)]
One solution is: 2θ = 53.13°.
From inspection of Figure 8, this value corresponds to the angle ∠BOE. Thus, the minor principal angle is
θ_2 = 26.565°
Then, the major principal angle is
θ_1 = 63.435°
Remember that in this particular example θ_1 and θ_2 are angles with respect to the plane of action of σ_B and not angles with respect to the plane of action of σ_A.
Using the Pole approach, we first localize the Pole or origin of planes. For this, we draw through point A on the Mohr circle a line inclined 10° with the horizontal, or, in other words, a line parallel to plane A where acts. The Pole is where this line intersects the Mohr circle (Figure 9). To confirm the location of the Pole, we could draw a line through point B on the Mohr circle parallel to the plane B where acts. This line would also intersect the Mohr circle at the Pole (Figure 9).
From the Pole, we draw lines to different points on the Mohr circle. The coordinates of the points where these lines intersect the Mohr circle indicate the stress components acting on a plane in the physical space having the same inclination as the line. For instance, the line from the Pole to point C in the circle has the same inclination as the plane in the physical space where acts. This plane makes an angle of 63.435° with plane B, both in the Mohr-circle space and in the physical space. In the same way, lines are traced from the Pole to points E, D, F, G and H to find the stress components on planes with the same orientation.
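The numbers quoted in this example can be reproduced numerically. The sketch below works directly from the two plotted points A(50, 40) and B(−10, −40), so it does not depend on how the components are labelled:

```python
import numpy as np

A = np.array([50.0, 40.0])    # stresses on plane A: (normal, shear) in Mohr-circle space
B = np.array([-10.0, -40.0])  # stresses on plane B

centre = 0.5 * (A + B)                  # (20, 0)
R = 0.5 * np.linalg.norm(A - B)         # 50

sigma_max = centre[0] + R               # 70
sigma_min = centre[0] - R               # -30
tau_max = R                             # 50

# Angle BOC between radius OB and the major-principal point C(sigma_max, 0);
# half of it is the angle the major principal plane makes with plane B.
OB = B - centre
OC = np.array([sigma_max, 0.0]) - centre
angle_BOC = np.degrees(np.arccos(np.dot(OB, OC) / (np.linalg.norm(OB) * np.linalg.norm(OC))))

print(sigma_max, sigma_min, tau_max)    # 70.0 -30.0 50.0
print(angle_BOC / 2)                    # ~63.435 degrees
```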
Mohr's circle for a general three-dimensional state of stresses
To construct the Mohr circle for a general three-dimensional case of stresses at a point, the values of the principal stresses and their principal directions must be first evaluated.
Considering the principal axes as the coordinate system, instead of the general x_1, x_2, x_3 coordinate system, and assuming that σ_1 > σ_2 > σ_3, then the normal and shear components of the stress vector T^(n), for a given plane with unit vector n, satisfy the following equations
σ_n = σ_1 n_1² + σ_2 n_2² + σ_3 n_3²
σ_n² + τ_n² = σ_1² n_1² + σ_2² n_2² + σ_3² n_3²
Knowing that n_1² + n_2² + n_3² = 1, we can solve for n_1², n_2², n_3², using the Gauss elimination method, which yields
n_1² = [τ_n² + (σ_n − σ_2)(σ_n − σ_3)] / [(σ_1 − σ_2)(σ_1 − σ_3)] ≥ 0
n_2² = [τ_n² + (σ_n − σ_3)(σ_n − σ_1)] / [(σ_2 − σ_3)(σ_2 − σ_1)] ≥ 0
n_3² = [τ_n² + (σ_n − σ_1)(σ_n − σ_2)] / [(σ_3 − σ_1)(σ_3 − σ_2)] ≥ 0
Since σ_1 > σ_2 > σ_3, and each n_i² is non-negative, the numerators from these equations satisfy
τ_n² + (σ_n − σ_2)(σ_n − σ_3) ≥ 0, as the denominator (σ_1 − σ_2)(σ_1 − σ_3) > 0
τ_n² + (σ_n − σ_3)(σ_n − σ_1) ≤ 0, as the denominator (σ_2 − σ_3)(σ_2 − σ_1) < 0
τ_n² + (σ_n − σ_1)(σ_n − σ_2) ≥ 0, as the denominator (σ_3 − σ_1)(σ_3 − σ_2) > 0
These expressions can be rewritten as
[σ_n − ½(σ_2 + σ_3)]² + τ_n² ≥ [½(σ_2 − σ_3)]²
[σ_n − ½(σ_1 + σ_3)]² + τ_n² ≤ [½(σ_1 − σ_3)]²
[σ_n − ½(σ_1 + σ_2)]² + τ_n² ≥ [½(σ_1 − σ_2)]²
which are the equations of the three Mohr's circles for stress, C_1, C_2, and C_3, with radii R_1 = ½(σ_2 − σ_3), R_2 = ½(σ_1 − σ_3), and R_3 = ½(σ_1 − σ_2), and their centres with coordinates (½(σ_2 + σ_3), 0), (½(σ_1 + σ_3), 0), and (½(σ_1 + σ_2), 0), respectively.
These equations for the Mohr circles show that all admissible stress points (σ_n, τ_n) lie on these circles or within the shaded area enclosed by them (see Figure 10). Stress points satisfying the equation for circle C_1 lie on, or outside, circle C_1. Stress points satisfying the equation for circle C_2 lie on, or inside, circle C_2. And finally, stress points satisfying the equation for circle C_3 lie on, or outside, circle C_3.
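A small sketch (with assumed principal stresses σ_1 ≥ σ_2 ≥ σ_3) computes the centres and radii of the three circles and checks that the stress point of an arbitrarily chosen plane respects the three inequalities:

```python
import numpy as np

s1, s2, s3 = 100.0, 40.0, -20.0   # assumed principal stresses, s1 >= s2 >= s3

# Centres (on the sigma_n axis) and radii of the three Mohr circles
C1, R1 = 0.5 * (s2 + s3), 0.5 * (s2 - s3)
C2, R2 = 0.5 * (s1 + s3), 0.5 * (s1 - s3)
C3, R3 = 0.5 * (s1 + s2), 0.5 * (s1 - s2)

# Stress point for a plane with unit normal n expressed in the principal frame
n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
s_n = s1 * n[0]**2 + s2 * n[1]**2 + s3 * n[2]**2
t_n = np.sqrt(s1**2 * n[0]**2 + s2**2 * n[1]**2 + s3**2 * n[2]**2 - s_n**2)

# The point lies outside (or on) circles C1 and C3, and inside (or on) circle C2
assert (s_n - C1)**2 + t_n**2 >= R1**2
assert (s_n - C2)**2 + t_n**2 <= R2**2
assert (s_n - C3)**2 + t_n**2 >= R3**2
```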
See also
Critical plane analysis
References
Bibliography
External links
Mohr's Circle and more circles by Rebecca Brannon
DoITPoMS Teaching and Learning Package- "Stress Analysis and Mohr's Circle"
Classical mechanics
Elasticity (physics)
Solid mechanics
Mechanics
Circles
Potential gradient
In physics and chemistry, a potential gradient is the local rate of change of the potential with respect to displacement, i.e. the spatial derivative, or gradient. This quantity frequently occurs in equations of physical processes because it leads to some form of flux.
Definition
One dimension
The simplest definition for a potential gradient F in one dimension is the following:
F = (φ_2 − φ_1) / (x_2 − x_1) = Δφ / Δx
where φ(x) is some type of scalar potential and x is displacement (not distance) in the x direction, the subscripts label two different positions x_1, x_2, and potentials at those points, φ_1 = φ(x_1), φ_2 = φ(x_2). In the limit of infinitesimal displacements, the ratio of differences becomes a ratio of differentials:
F = dφ / dx
The direction of the electric potential gradient is from x_1 to x_2.
Three dimensions
In three dimensions, Cartesian coordinates make it clear that the resultant potential gradient is the sum of the potential gradients in each direction:
F = e_x ∂φ/∂x + e_y ∂φ/∂y + e_z ∂φ/∂z
where e_x, e_y, e_z are unit vectors in the x, y, z directions. This can be compactly written in terms of the gradient operator ∇,
F = ∇φ
although this final form holds in any curvilinear coordinate system, not just Cartesian.
This expression represents a significant feature of any conservative vector field F, namely F has a corresponding potential φ.
Using Stokes' theorem, this is equivalently stated as
∇ × F = 0
meaning the curl, denoted ∇×, of the vector field vanishes.
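This property is easy to verify symbolically; the sketch below uses an arbitrary assumed potential and checks that the curl of its gradient vanishes identically:

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, curl

N = CoordSys3D('N')
# An arbitrary assumed scalar potential phi(x, y, z)
phi = N.x**2 * N.y + sp.sin(N.y * N.z)

F = gradient(phi)   # conservative field F = grad(phi)
print(curl(F))      # 0 (the zero vector): the curl of a gradient vanishes
```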
Physics
Newtonian gravitation
In the case of the gravitational field g, which can be shown to be conservative, it is equal to the gradient in gravitational potential Φ:
g = −∇Φ
There are opposite signs between gravitational field and potential, because the potential gradient and field are opposite in direction: as the potential increases, the gravitational field strength decreases and vice versa.
Electromagnetism
In electrostatics, the electric field E is independent of time t, so there is no induction of a time-dependent magnetic field B by Faraday's law of induction:
∇ × E = −∂B/∂t = 0
which implies E is the gradient of the electric potential V, identical to the classical gravitational field:
E = −∇V
In electrodynamics, the E field is time dependent and induces a time-dependent B field also (again by Faraday's law), so the curl of E is not zero like before, which implies the electric field is no longer the gradient of electric potential. A time-dependent term must be added:
E = −∇V − ∂A/∂t
where A is the electromagnetic vector potential. This last potential expression in fact reduces Faraday's law to an identity.
Fluid mechanics
In fluid mechanics, the velocity field v describes the fluid motion. An irrotational flow means the velocity field is conservative, or equivalently the vorticity pseudovector field ω is zero:
ω = ∇ × v = 0
This allows the velocity potential φ to be defined simply as:
v = ∇φ
Chemistry
In an electrochemical half-cell, at the interface between the electrolyte (an ionic solution) and the metal electrode, the standard electric potential difference is:
E_M = E_M^⊖ + (RT / z e N_A) ln(a_M+z)
where R = gas constant, T = temperature of solution, z = valency of the metal, e = elementary charge, N_A = Avogadro constant, and a_M+z is the activity of the ions in solution. Quantities with superscript ⊖ denote the measurement is taken under standard conditions. The potential gradient is relatively abrupt, since there is an almost definite boundary between the metal and the solution, hence the interface term.
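As a rough numerical illustration of the size of this interface term (the temperature, valency, and activity below are assumed placeholder values, and the expression used is the standard Nernst form with F = e N_A):

```python
import math

R = 8.314462618      # gas constant, J/(mol*K)
T = 298.15           # assumed temperature, K
z = 2                # assumed valency of the metal ion
F = 96485.33212      # Faraday constant = e * N_A, C/mol
a = 0.01             # assumed activity of the metal ions in solution

# Deviation of the half-cell potential from its standard value
delta_E = (R * T) / (z * F) * math.log(a)
print(f"{delta_E * 1000:.1f} mV")   # about -59 mV for these assumed values
```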
Biology
In biology, a potential gradient is the net difference in electric charge across a cell membrane.
Non-uniqueness of potentials
Since gradients in potentials correspond to physical fields, it makes no difference if a constant is added on (it is erased by the gradient operator which includes partial differentiation). This means there is no way to tell what the "absolute value" of the potential "is" – the zero value of potential is completely arbitrary and can be chosen anywhere by convenience (even "at infinity"). This idea also applies to vector potentials, and is exploited in classical field theory and also gauge field theory.
Absolute values of potentials are not physically observable, only gradients and path-dependent potential differences are. However, the Aharonov–Bohm effect is a quantum mechanical effect which illustrates that non-zero electromagnetic potentials along a closed loop (even when the E and B fields are zero everywhere in the region) lead to changes in the phase of the wave function of an electrically charged particle in the region, so the potentials appear to have measurable significance.
Potential theory
Field equations, such as Gauss's laws for electricity, for magnetism, and for gravity, can be written in the form:
∇ · F = α ρ
where ρ is the electric charge density, monopole density (should they exist), or mass density, and α is a constant (in terms of physical constants such as ε_0, μ_0, G and other numerical factors).
Scalar potential gradients lead to Poisson's equation:
∇ · (∇φ) = α ρ, i.e. ∇²φ = α ρ
A general theory of potentials has been developed to solve this equation for the potential. The gradient of that solution gives the physical field, solving the field equation.
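As a small symbolic check (illustrative; the point-source potential is an assumed example), the Coulomb- or Newton-type potential φ ∝ 1/r satisfies Laplace's equation away from the source, i.e. Poisson's equation there with zero density:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True, positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

phi = 1 / r   # assumed point-source potential (up to a constant factor)

laplacian = sp.diff(phi, x, 2) + sp.diff(phi, y, 2) + sp.diff(phi, z, 2)
print(sp.simplify(laplacian))   # 0 away from the origin, where the source density vanishes
```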
See also
Tensors in curvilinear coordinates
References
Concepts in physics
Spatial gradient
Convergent evolution
Convergent evolution is the independent evolution of similar features in species of different periods or epochs in time. Convergent evolution creates analogous structures that have similar form or function but were not present in the last common ancestor of those groups. The cladistic term for the same phenomenon is homoplasy. The recurrent evolution of flight is a classic example, as flying insects, birds, pterosaurs, and bats have independently evolved the useful capacity of flight. Functionally similar features that have arisen through convergent evolution are analogous, whereas homologous structures or traits have a common origin but can have dissimilar functions. Bird, bat, and pterosaur wings are analogous structures, but their forelimbs are homologous, sharing an ancestral state despite serving different functions.
The opposite of convergence is divergent evolution, where related species evolve different traits. Convergent evolution is similar to parallel evolution, which occurs when two independent species evolve in the same direction and thus independently acquire similar characteristics; for instance, gliding frogs have evolved in parallel from multiple types of tree frog.
Many instances of convergent evolution are known in plants, including the repeated development of C4 photosynthesis, seed dispersal by fleshy fruits adapted to be eaten by animals, and carnivory.
Overview
In morphology, analogous traits arise when different species live in similar ways and/or a similar environment, and so face the same environmental factors. When occupying similar ecological niches (that is, a distinctive way of life) similar problems can lead to similar solutions. The British anatomist Richard Owen was the first to identify the fundamental difference between analogies and homologies.
In biochemistry, physical and chemical constraints on mechanisms have caused some active site arrangements such as the catalytic triad to evolve independently in separate enzyme superfamilies.
In his 1989 book Wonderful Life, Stephen Jay Gould argued that if one could "rewind the tape of life [and] the same conditions were encountered again, evolution could take a very different course." Simon Conway Morris disputes this conclusion, arguing that convergence is a dominant force in evolution, and given that the same environmental and physical constraints are at work, life will inevitably evolve toward an "optimum" body plan, and at some point, evolution is bound to stumble upon intelligence, a trait presently identified with at least primates, corvids, and cetaceans.
Distinctions
Cladistics
In cladistics, a homoplasy is a trait shared by two or more taxa for any reason other than that they share a common ancestry. Taxa which do share ancestry are part of the same clade; cladistics seeks to arrange them according to their degree of relatedness to describe their phylogeny. Homoplastic traits caused by convergence are therefore, from the point of view of cladistics, confounding factors which could lead to an incorrect analysis.
Atavism
In some cases, it is difficult to tell whether a trait has been lost and then re-evolved convergently, or whether a gene has simply been switched off and then re-enabled later. Such a re-emerged trait is called an atavism. From a mathematical standpoint, an unused gene (selectively neutral) has a steadily decreasing probability of retaining potential functionality over time. The time scale of this process varies greatly in different phylogenies; in mammals and birds, there is a reasonable probability of remaining in the genome in a potentially functional state for around 6 million years.
Parallel vs. convergent evolution
When two species are similar in a particular character, evolution is defined as parallel if the ancestors were also similar, and convergent if they were not. Some scientists have argued that there is a continuum between parallel and convergent evolution, while others maintain that despite some overlap, there are still important distinctions between the two.
When the ancestral forms are unspecified or unknown, or the range of traits considered is not clearly specified, the distinction between parallel and convergent evolution becomes more subjective. For instance, the striking example of similar placental and marsupial forms is described by Richard Dawkins in The Blind Watchmaker as a case of convergent evolution, because mammals on each continent had a long evolutionary history prior to the extinction of the dinosaurs under which to accumulate relevant differences.
At molecular level
Proteins
Protease active sites
The enzymology of proteases provides some of the clearest examples of convergent evolution. These examples reflect the intrinsic chemical constraints on enzymes, leading evolution to converge on equivalent solutions independently and repeatedly.
Serine and cysteine proteases use different amino acid functional groups (alcohol or thiol) as a nucleophile. In order to activate that nucleophile, they orient an acidic and a basic residue in a catalytic triad. The chemical and physical constraints on enzyme catalysis have caused identical triad arrangements to evolve independently more than 20 times in different enzyme superfamilies.
Threonine proteases use the amino acid threonine as their catalytic nucleophile. Unlike cysteine and serine, threonine is a secondary alcohol (i.e. has a methyl group). The methyl group of threonine greatly restricts the possible orientations of triad and substrate, as the methyl clashes with either the enzyme backbone or the histidine base. Consequently, most threonine proteases use an N-terminal threonine in order to avoid such steric clashes.
Several evolutionarily independent enzyme superfamilies with different protein folds use the N-terminal residue as a nucleophile. This commonality of active site but difference of protein fold indicates that the active site evolved convergently in those families.
Cone snail and fish insulin
Conus geographus produces a distinct form of insulin that is more similar to fish insulin protein sequences than to insulin from more closely related molluscs, suggesting convergent evolution, though with the possibility of horizontal gene transfer.
Ferrous iron uptake via protein transporters in land plants and chlorophytes
Distant homologues of the metal ion transporters ZIP in land plants and chlorophytes have converged in structure, likely to take up Fe2+ efficiently. The IRT1 proteins from Arabidopsis thaliana and rice have extremely different amino acid sequences from Chlamydomonas's IRT1, but their three-dimensional structures are similar, suggesting convergent evolution.
Na+,K+-ATPase and Insect resistance to cardiotonic steroids
Many examples of convergent evolution exist in insects in terms of developing resistance at a molecular level to toxins. One well-characterized example is the evolution of resistance to cardiotonic steroids (CTSs) via amino acid substitutions at well-defined positions of the α-subunit of Na+,K+-ATPase (ATPalpha). Variation in ATPalpha has been surveyed in various CTS-adapted species spanning six insect orders. Among 21 CTS-adapted species, 58 (76%) of 76 amino acid substitutions at sites implicated in CTS resistance occur in parallel in at least two lineages. 30 of these substitutions (40%) occur at just two sites in the protein (positions 111 and 122). CTS-adapted species have also recurrently evolved neo-functionalized duplications of ATPalpha, with convergent tissue-specific expression patterns.
Nucleic acids
Convergence occurs at the level of DNA and the amino acid sequences produced by translating structural genes into proteins. Studies have found convergence in amino acid sequences in echolocating bats and the dolphin; among marine mammals; between giant and red pandas; and between the thylacine and canids. Convergence has also been detected in a type of non-coding DNA, cis-regulatory elements, such as in their rates of evolution; this could indicate either positive selection or relaxed purifying selection.
In animal morphology
Bodyplans
Swimming animals including fish such as herrings, marine mammals such as dolphins, and ichthyosaurs (of the Mesozoic) all converged on the same streamlined shape. A similar shape and swimming adaptations are even present in molluscs, such as Phylliroe. The fusiform bodyshape (a tube tapered at both ends) adopted by many aquatic animals is an adaptation to enable them to travel at high speed in a high drag environment. Similar body shapes are found in the earless seals and the eared seals: they still have four legs, but these are strongly modified for swimming.
The marsupial fauna of Australia and the placental mammals of the Old World have several strikingly similar forms, developed in two clades, isolated from each other. The body, and especially the skull shape, of the thylacine (Tasmanian tiger or Tasmanian wolf) converged with those of Canidae such as the red fox, Vulpes vulpes.
Echolocation
As a sensory adaptation, echolocation has evolved separately in cetaceans (dolphins and whales) and bats, but from the same genetic mutations.
Electric fishes
The Gymnotiformes of South America and the Mormyridae of Africa independently evolved passive electroreception (around 119 and 110 million years ago, respectively). Around 20 million years after acquiring that ability, both groups evolved active electrogenesis, producing weak electric fields to help them detect prey.
Eyes
One of the best-known examples of convergent evolution is the camera eye of cephalopods (such as squid and octopus), vertebrates (including mammals) and cnidaria (such as jellyfish). Their last common ancestor had at most a simple photoreceptive spot, but a range of processes led to the progressive refinement of camera eyes—with one sharp difference: the cephalopod eye is "wired" in the opposite direction, with blood and nerve vessels entering from the back of the retina, rather than the front as in vertebrates. As a result, vertebrates have a blind spot.
Flight
Birds and bats have homologous limbs because they are both ultimately derived from terrestrial tetrapods, but their flight mechanisms are only analogous, so their wings are examples of functional convergence. The two groups have independently evolved their own means of powered flight. Their wings differ substantially in construction. The bat wing is a membrane stretched across four extremely elongated fingers and the legs. The airfoil of the bird wing is made of feathers, strongly attached to the forearm (the ulna) and the highly fused bones of the wrist and hand (the carpometacarpus), with only tiny remnants of two fingers remaining, each anchoring a single feather. So, while the wings of bats and birds are functionally convergent, they are not anatomically convergent. Birds and bats also share a high concentration of cerebrosides in the skin of their wings. This improves skin flexibility, a trait useful for flying animals; other mammals have a far lower concentration. The extinct pterosaurs independently evolved wings from their fore- and hindlimbs, while insects have wings that evolved separately from different organs.
Flying squirrels and sugar gliders are much alike in their body plans, with gliding wings stretched between their limbs, but flying squirrels are placental mammals while sugar gliders are marsupials, widely separated within the mammal lineage from the placentals.
Hummingbird hawk-moths and hummingbirds have evolved similar flight and feeding patterns.
Insect mouthparts
Insect mouthparts show many examples of convergent evolution. The mouthparts of different insect groups consist of a set of homologous organs, specialised for the dietary intake of that insect group. Convergent evolution of many groups of insects led from original biting-chewing mouthparts to different, more specialised, derived function types. These include, for example, the proboscis of flower-visiting insects such as bees and flower beetles, or the biting-sucking mouthparts of blood-sucking insects such as fleas and mosquitos.
Opposable thumbs
Opposable thumbs allowing the grasping of objects are most often associated with primates, like humans and other apes, monkeys, and lemurs. Opposable thumbs also evolved in giant pandas, but these are completely different in structure, having six fingers including the thumb, which develops from a wrist bone entirely separately from other fingers.
Primates
Convergent evolution in humans includes blue eye colour and light skin colour. When humans migrated out of Africa, they moved to more northern latitudes with less intense sunlight. It was beneficial to them to reduce their skin pigmentation. It appears certain that there was some lightening of skin colour before European and East Asian lineages diverged, as there are some skin-lightening genetic differences that are common to both groups. However, after the lineages diverged and became genetically isolated, the skin of both groups lightened more, and that additional lightening was due to different genetic changes.
Lemurs and humans are both primates. Ancestral primates had brown eyes, as most primates do today. The genetic basis of blue eyes in humans has been studied in detail and much is known about it. It is not the case that one gene locus is responsible, say with brown dominant to blue eye colour. However, a single locus is responsible for about 80% of the variation. In lemurs, the differences between blue and brown eyes are not completely known, but the same gene locus is not involved.
In plants
The annual life-cycle
While most plant species are perennial, about 6% follow an annual life cycle, living for only one growing season. The annual life cycle independently emerged in over 120 plant families of angiosperms. The prevalence of annual species increases under hot-dry summer conditions in the four species-rich families of annuals (Asteraceae, Brassicaceae, Fabaceae, and Poaceae), indicating that the annual life cycle is adaptive.
Carbon fixation
C4 photosynthesis, one of the three major carbon-fixing biochemical processes, has arisen independently up to 40 times. About 7,600 plant species of angiosperms use C4 carbon fixation, with many monocots including 46% of grasses such as maize and sugar cane, and dicots including several species in the Chenopodiaceae and the Amaranthaceae.
Fruits
Fruits with a wide variety of structural origins have converged to become edible. Apples are pomes with five carpels; their accessory tissues form the apple's core, surrounded by structures from outside the botanical fruit, the receptacle or hypanthium. Other edible fruits include other plant tissues; the fleshy part of a tomato is the walls of the pericarp. This implies convergent evolution under selective pressure, in this case the competition for seed dispersal by animals through consumption of fleshy fruits.
Seed dispersal by ants (myrmecochory) has evolved independently more than 100 times, and is present in more than 11,000 plant species. It is one of the most dramatic examples of convergent evolution in biology.
Carnivory
Carnivory has evolved multiple times independently in plants in widely separated groups. In three species studied, Cephalotus follicularis, Nepenthes alata and Sarracenia purpurea, there has been convergence at the molecular level. Carnivorous plants secrete enzymes into the digestive fluid they produce. By studying phosphatase, glycoside hydrolase, glucanase, RNAse and chitinase enzymes as well as a pathogenesis-related protein and a thaumatin-related protein, the authors found many convergent amino acid substitutions. These changes were not at the enzymes' catalytic sites, but rather on the exposed surfaces of the proteins, where they might interact with other components of the cell or the digestive fluid. The authors also found that homologous genes in the non-carnivorous plant Arabidopsis thaliana tend to have their expression increased when the plant is stressed, leading the authors to suggest that stress-responsive proteins have often been co-opted in the repeated evolution of carnivory.
Methods of inference
Phylogenetic reconstruction and ancestral state reconstruction proceed by assuming that evolution has occurred without convergence. Convergent patterns may, however, appear at higher levels in a phylogenetic reconstruction, and are sometimes explicitly sought by investigators. The methods applied to infer convergent evolution depend on whether pattern-based or process-based convergence is expected. Pattern-based convergence is the broader term, for when two or more lineages independently evolve patterns of similar traits. Process-based convergence is when the convergence is due to similar forces of natural selection.
Pattern-based measures
Earlier methods for measuring convergence incorporate ratios of phenotypic and phylogenetic distance by simulating evolution with a Brownian motion model of trait evolution along a phylogeny. More recent methods also quantify the strength of convergence. One drawback to keep in mind is that these methods can confuse long-term stasis with convergence due to phenotypic similarities. Stasis occurs when there is little evolutionary change among taxa.
Distance-based measures assess the degree of similarity between lineages over time. Frequency-based measures assess the number of lineages that have evolved in a particular trait space.
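A minimal sketch of the idea behind a distance-based, pattern-based measure, reduced to a toy two-lineage case (the two-lineage setup, the parameter values, and the simple tail-probability summary below are illustrative assumptions, not any particular published method): simulate Brownian-motion trait evolution many times and ask how often drift alone produces tips as similar as the ones observed.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_two_lineages(t_split=1.0, t_tip=2.0, sigma=1.0, n_sims=10_000):
    """Simulate a trait evolving by Brownian motion on two lineages that split
    from a common ancestor at time t_split and are sampled at time t_tip.
    Returns the absolute trait difference between the two tips for each run."""
    # After the split, each branch drifts independently for (t_tip - t_split),
    # so the tip-to-tip difference is normal with variance 2*sigma^2*(t_tip - t_split).
    var = 2.0 * sigma**2 * (t_tip - t_split)
    return np.abs(rng.normal(0.0, np.sqrt(var), size=n_sims))

null_distances = simulate_two_lineages()

observed = 0.1  # hypothetical present-day trait distance between the two lineages
fraction = np.mean(null_distances <= observed)
print(f"Fraction of Brownian simulations at least this similar: {fraction:.3f}")
# A very small fraction means the lineages are more similar than drift alone
# would typically produce -- a pattern consistent with convergence (or with
# stasis, which is exactly the confounding issue mentioned above).
```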
Process-based measures
Methods to infer process-based convergence fit models of selection to a phylogeny and continuous trait data to determine whether the same selective forces have acted upon lineages. This uses the Ornstein–Uhlenbeck process to test different scenarios of selection. Other methods rely on an a priori specification of where shifts in selection have occurred.
See also
: the presence of multiple alleles in ancestral populations might lead to the impression that convergent evolution has occurred.
Iterative evolution – The repeated evolution of a specific trait or body plan from the same ancestral lineage at different points in time.
Breeding back – A form of selective breeding to recreate the traits of an extinct species, but the genome will differ from the original species.
Orthogenesis (contrastable with convergent evolution; involves teleology)
Contingency (evolutionary biology) – effect of evolutionary history on outcomes
Notes
References
Further reading
External links
Convergent evolution
Evolutionary biology terminology
Euler angles
The Euler angles are three angles introduced by Leonhard Euler to describe the orientation of a rigid body with respect to a fixed coordinate system.
They can also represent the orientation of a mobile frame of reference in physics or the orientation of a general basis in three dimensional linear algebra.
Classic Euler angles usually take the inclination angle in such a way that zero degrees represent the vertical orientation. Alternative forms were later introduced by Peter Guthrie Tait and George H. Bryan intended for use in aeronautics and engineering in which zero degrees represent the horizontal position.
Chained rotations equivalence
Euler angles can be defined by elemental geometry or by composition of rotations (i.e. chained rotations). The geometrical definition demonstrates that three composed elemental rotations (rotations about the axes of a coordinate system) are always sufficient to reach any target frame.
The three elemental rotations may be extrinsic (rotations about the axes xyz of the original coordinate system, which is assumed to remain motionless), or intrinsic (rotations about the axes of the rotating coordinate system XYZ, solidary with the moving body, which changes its orientation with respect to the extrinsic frame after each elemental rotation).
In the sections below, an axis designation with a prime mark superscript (e.g., z″) denotes the new axis after an elemental rotation.
Euler angles are typically denoted as α, β, γ, or ψ, θ, φ. Different authors may use different sets of rotation axes to define Euler angles, or different names for the same angles. Therefore, any discussion employing Euler angles should always be preceded by their definition.
Without considering the possibility of using two different conventions for the definition of the rotation axes (intrinsic or extrinsic), there exist twelve possible sequences of rotation axes, divided into two groups:
Proper Euler angles (z-x-z, x-y-x, y-z-y, z-y-z, x-z-x, y-x-y)
Tait–Bryan angles (x-y-z, y-z-x, z-x-y, x-z-y, z-y-x, y-x-z).
Tait–Bryan angles are also called Cardan angles; nautical angles; heading, elevation, and bank; or yaw, pitch, and roll. Sometimes, both kinds of sequences are called "Euler angles". In that case, the sequences of the first group are called proper or classic Euler angles.
Classic Euler angles
The Euler angles are three angles introduced by Swiss mathematician Leonhard Euler (1707–1783) to describe the orientation of a rigid body with respect to a fixed coordinate system.
Geometrical definition
The axes of the original frame are denoted as x, y, z and the axes of the rotated frame as X, Y, Z. The geometrical definition (sometimes referred to as static) begins by defining the line of nodes (N) as the intersection of the planes xy and XY (it can also be defined as the common perpendicular to the axes z and Z and then written as the vector product N = z × Z). Using it, the three Euler angles can be defined as follows:
α (or φ) is the signed angle between the x axis and the N axis (x-convention – it could also be defined between y and N, called y-convention).
β (or θ) is the angle between the z axis and the Z axis.
γ (or ψ) is the signed angle between the N axis and the X axis (x-convention).
Euler angles between two reference frames are defined only if both frames have the same handedness.
Conventions by intrinsic rotations
Intrinsic rotations are elemental rotations that occur about the axes of a coordinate system XYZ attached to a moving body. Therefore, they change their orientation after each elemental rotation. The XYZ system rotates, while xyz is fixed. Starting with XYZ overlapping xyz, a composition of three intrinsic rotations can be used to reach any target orientation for XYZ.
Euler angles can be defined by intrinsic rotations. The rotated frame XYZ may be imagined to be initially aligned with xyz, before undergoing the three elemental rotations represented by Euler angles. Its successive orientations may be denoted as follows:
x-y-z or x0-y0-z0 (initial)
x′-y′-z′ or x1-y1-z1 (after first rotation)
x″-y″-z″ or x2-y2-z2 (after second rotation)
X-Y-Z or x3-y3-z3 (final)
For the above-listed sequence of rotations, the line of nodes N can be simply defined as the orientation of X after the first elemental rotation. Hence, N can be simply denoted x′. Moreover, since the third elemental rotation occurs about Z, it does not change the orientation of Z. Hence Z coincides with z″. This allows us to simplify the definition of the Euler angles as follows:
α (or φ) represents a rotation around the z axis,
β (or θ) represents a rotation around the x′ axis,
γ (or ψ) represents a rotation around the z″ axis.
Conventions by extrinsic rotations
Extrinsic rotations are elemental rotations that occur about the axes of the fixed coordinate system xyz. The XYZ system rotates, while xyz is fixed. Starting with XYZ overlapping xyz, a composition of three extrinsic rotations can be used to reach any target orientation for XYZ. The Euler or Tait–Bryan angles (α, β, γ) are the amplitudes of these elemental rotations. For instance, the target orientation can be reached as follows (note the reversed order of Euler angle application):
The XYZ system rotates about the z axis by γ. The X axis is now at angle γ with respect to the x axis.
The XYZ system rotates again, but this time about the x axis by β. The Z axis is now at angle β with respect to the z axis.
The XYZ system rotates a third time, about the z axis again, by angle α.
In sum, the three elemental rotations occur about z, x and z. Indeed, this sequence is often denoted z-x-z (or 3-1-3). Sets of rotation axes associated with both proper Euler angles and Tait–Bryan angles are commonly named using this notation (see above for details).
If each step of the rotation acts on the rotating coordinate system XYZ, the rotation is intrinsic (Z-X'-Z''). Intrinsic rotation can also be denoted 3-1-3.
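This reversed-order relationship can be checked numerically (a sketch using NumPy; the elemental matrices are the standard active rotations acting on column vectors, matching the conventions of the matrix table later in this article):

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

alpha, beta, gamma = 0.3, 1.1, -0.7

# Intrinsic z-x'-z'': each new rotation acts about an already-rotated axis,
# which for matrices acting on column vectors means post-multiplication.
R_intrinsic = np.eye(3)
for step in (Rz(alpha), Rx(beta), Rz(gamma)):
    R_intrinsic = R_intrinsic @ step

# Extrinsic z-x-z with the angles applied in reverse order: each new rotation
# acts about a fixed axis, which means pre-multiplication.
R_extrinsic = np.eye(3)
for step in (Rz(gamma), Rx(beta), Rz(alpha)):
    R_extrinsic = step @ R_extrinsic

print(np.allclose(R_intrinsic, R_extrinsic))  # True: both give Rz(alpha) Rx(beta) Rz(gamma)
```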
Signs, ranges and conventions
Angles are commonly defined according to the right-hand rule. Namely, they have positive values when they represent a rotation that appears clockwise when looking in the positive direction of the axis, and negative values when the rotation appears counter-clockwise. The opposite convention (left hand rule) is less frequently adopted.
About the ranges (using interval notation):
for α and γ, the range is defined modulo 2π radians. For instance, a valid range could be [−π, π].
for β, the range covers π radians (but cannot be said to be modulo π). For example, it could be [0, π] or [−π/2, π/2].
The angles α, β and γ are uniquely determined except for the singular case that the xy and the XY planes are identical, i.e. when the z axis and the Z axis have the same or opposite directions. Indeed, if the z axis and the Z axis are the same, β = 0 and only (α + γ) is uniquely defined (not the individual values), and, similarly, if the z axis and the Z axis are opposite, β = π and only (α − γ) is uniquely defined (not the individual values). These ambiguities are known as gimbal lock in applications.
There are six possibilities of choosing the rotation axes for proper Euler angles. In all of them, the first and third rotation axes are the same. The six possible sequences are:
z1-x′-z2″ (intrinsic rotations) or z2-x-z1 (extrinsic rotations)
x1-y′-x2″ (intrinsic rotations) or x2-y-x1 (extrinsic rotations)
y1-z′-y2″ (intrinsic rotations) or y2-z-y1 (extrinsic rotations)
z1-y′-z2″ (intrinsic rotations) or z2-y-z1 (extrinsic rotations)
x1-z′-x2″ (intrinsic rotations) or x2-z-x1 (extrinsic rotations)
y1-x′-y2″ (intrinsic rotations) or y2-x-y1 (extrinsic rotations)
Precession, nutation and intrinsic rotation
Precession, nutation, and intrinsic rotation (spin) are defined as the movements obtained by changing one of the Euler angles while leaving the other two constant. These motions are not expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. They constitute a mixed axes of rotation system, where the first angle moves the line of nodes around the external axis z, the second rotates around the line of nodes N and the third one is an intrinsic rotation around Z, an axis fixed in the body that moves.
The static definition implies that:
α (precession) represents a rotation around the z axis,
β (nutation) represents a rotation around the N or x′ axis,
γ (intrinsic rotation) represents a rotation around the Z or z″ axis.
If β is zero, there is no rotation about N. As a consequence, Z coincides with z, α and γ represent rotations about the same axis (z), and the final orientation can be obtained with a single rotation about z, by an angle equal to α + γ.
As an example, consider a top. The top spins around its own axis of symmetry; this corresponds to its intrinsic rotation. It also rotates around its pivotal axis, with its center of mass orbiting the pivotal axis; this rotation is a precession. Finally, the top can wobble up and down; the inclination angle is the nutation angle. The same example can be seen with the movements of the earth.
Though all three movements can be represented by a rotation operator with constant coefficients in some frame, they cannot be represented by these operators all at the same time. Given a reference frame, at most one of them will be coefficient-free. Only precession can be expressed in general as a matrix in the basis of the space without dependencies of the other angles.
These movements also behave as a gimbal set. Given a set of frames, able to move each with respect to the former according to just one angle, like a gimbal, there will exist an external fixed frame, one final frame and two frames in the middle, which are called "intermediate frames". The two in the middle work as two gimbal rings that allow the last frame to reach any orientation in space.
Tait–Bryan angles
Figure (Tait–Bryan angles): z-y′-x″ sequence (intrinsic rotations; N coincides with y). The angle rotation sequence is ψ, θ, φ; note that in this case θ is a negative angle.
The second type of formalism is called Tait–Bryan angles, after Scottish mathematical physicist Peter Guthrie Tait (1831–1901) and English applied mathematician George H. Bryan (1864–1928). It is the convention normally used for aerospace applications, so that zero degrees elevation represents the horizontal attitude. Tait–Bryan angles represent the orientation of the aircraft with respect to the world frame. When dealing with other vehicles, different axes conventions are possible.
Definitions
The definitions and notations used for Tait–Bryan angles are similar to those described above for proper Euler angles (geometrical definition, intrinsic rotation definition, extrinsic rotation definition). The only difference is that Tait–Bryan angles represent rotations about three distinct axes (e.g. x-y-z, or x-y′-z″), while proper Euler angles use the same axis for both the first and third elemental rotations (e.g., z-x-z, or z-x′-z″).
This implies a different definition for the line of nodes in the geometrical construction. In the proper Euler angles case it was defined as the intersection between two homologous Cartesian planes (parallel when Euler angles are zero; e.g. xy and XY). In the Tait–Bryan angles case, it is defined as the intersection of two non-homologous planes (perpendicular when Euler angles are zero; e.g. xy and YZ).
Conventions
The three elemental rotations may occur either about the axes of the original coordinate system, which remains motionless (extrinsic rotations), or about the axes of the rotating coordinate system, which changes its orientation after each elemental rotation (intrinsic rotations).
There are six possibilities of choosing the rotation axes for Tait–Bryan angles. The six possible sequences are:
x-y′-z″ (intrinsic rotations) or z-y-x (extrinsic rotations)
y-z′-x″ (intrinsic rotations) or x-z-y (extrinsic rotations)
z-x′-y″ (intrinsic rotations) or y-x-z (extrinsic rotations)
x-z′-y″ (intrinsic rotations) or y-z-x (extrinsic rotations)
z-y′-x″ (intrinsic rotations) or x-y-z (extrinsic rotations): the intrinsic rotations are known as: yaw, pitch and roll
y-x′-z″ (intrinsic rotations) or z-x-y (extrinsic rotations)
Signs and ranges
Tait–Bryan convention is widely used in engineering with different purposes. There are several axes conventions in practice for choosing the mobile and fixed axes, and these conventions determine the signs of the angles. Therefore, signs must be studied in each case carefully.
The range for the angles ψ and φ covers 2π radians. For θ the range covers π radians.
Alternative names
These angles are normally taken as one in the external reference frame (heading, bearing), one in the intrinsic moving frame (bank) and one in a middle frame, representing an elevation or inclination with respect to the horizontal plane, which is equivalent to the line of nodes for this purpose.
As chained rotations
For an aircraft, they can be obtained with three rotations around its principal axes if done in the proper order and starting from a frame coincident with the reference frame.
A yaw will obtain the bearing,
a pitch will yield the elevation, and
a roll gives the bank angle.
Therefore, in aerospace they are sometimes called yaw, pitch, and roll. Notice that this will not work if the rotations are applied in any other order or if the airplane axes start in any position non-equivalent to the reference frame.
Tait–Bryan angles, following z-y′-x″ (intrinsic rotations) convention, are also known as nautical angles, because they can be used to describe the orientation of a ship or aircraft, or Cardan angles, after the Italian mathematician and physicist Gerolamo Cardano, who first described in detail the Cardan suspension and the Cardan joint.
Angles of a given frame
A common problem is to find the Euler angles of a given frame. The fastest way to get them is to write the three given vectors as columns of a matrix and compare it with the expression of the theoretical matrix (see later table of matrices). Hence the three Euler Angles can be calculated. Nevertheless, the same result can be reached avoiding matrix algebra and using only elemental geometry. Here we present the results for the two most commonly used conventions: ZXZ for proper Euler angles and ZYX for Tait–Bryan. Notice that any other convention can be obtained just changing the name of the axes.
Proper Euler angles
Assuming a frame with unit vectors (X, Y, Z) given by their coordinates as in the main diagram, it can be seen that:
And, since
for we have
As is the double projection of a unitary vector,
There is a similar construction for , projecting it first over the plane defined by the axis z and the line of nodes. As the angle between the planes is and , this leads to:
and finally, using the inverse cosine function,
Tait–Bryan angles
Assuming a frame with unit vectors (X, Y, Z) given by their coordinates as in this new diagram (notice that the angle theta is negative), it can be seen that:
As before,
for we have
in a way analogous to the former one:
Looking for similar expressions to the former ones:
Last remarks
Note that the inverse sine and cosine functions yield two possible values for the argument. In this geometrical description, only one of the solutions is valid. When Euler angles are defined as a sequence of rotations, all the solutions can be valid, but there will be only one inside the angle ranges. This is because the sequence of rotations to reach the target frame is not unique if the ranges are not previously defined.
For computational purposes, it may be useful to represent the angles using atan2(y, x). For example, in the case of proper Euler angles:
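A sketch of that atan2-based extraction for the z-x-z (proper Euler) case, assuming the active-rotation, column-vector convention used in the matrix table below and the non-degenerate range 0 < β < π; the element indices follow from writing out R = Rz(α)·Rx(β)·Rz(γ):

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def euler_zxz_from_matrix(R):
    """Recover (alpha, beta, gamma) from R = Rz(alpha) @ Rx(beta) @ Rz(gamma),
    assuming 0 < beta < pi (i.e. away from gimbal lock)."""
    beta = np.arctan2(np.hypot(R[0, 2], R[1, 2]), R[2, 2])   # sin(beta) >= 0 here
    alpha = np.arctan2(R[0, 2], -R[1, 2])                    # R[0,2] = sa*sb, R[1,2] = -ca*sb
    gamma = np.arctan2(R[2, 0], R[2, 1])                     # R[2,0] = sb*sg, R[2,1] = sb*cg
    return alpha, beta, gamma

angles = (0.4, 1.2, -2.0)
R = Rz(angles[0]) @ Rx(angles[1]) @ Rz(angles[2])
print(euler_zxz_from_matrix(R))   # approximately (0.4, 1.2, -2.0)
```

Using atan2 rather than arccos alone keeps each angle in the correct quadrant and behaves better numerically near the range boundaries.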
Conversion to other orientation representations
Euler angles are one way to represent orientations. There are others, and it is possible to change to and from other conventions. Three parameters are always required to describe orientations in a 3-dimensional Euclidean space. They can be given in several ways, Euler angles being one of them; see charts on SO(3) for others.
The most common orientation representations are the rotation matrices, the axis-angle and the quaternions, also known as Euler–Rodrigues parameters, which provide another mechanism for representing 3D rotations. This is equivalent to the special unitary group description.
Expressing rotations in 3D as unit quaternions instead of matrices has some advantages:
Concatenating rotations is computationally faster and numerically more stable.
Extracting the angle and axis of rotation is simpler.
Interpolation is more straightforward. See for example slerp.
Quaternions do not suffer from gimbal lock as Euler angles do.
Regardless, the rotation matrix calculation is the first step for obtaining the other two representations.
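Quaternions can also be built directly from the angles without forming the matrix first; the following is a sketch for the common intrinsic z-y′-x″ (yaw, pitch, roll) sequence, assuming scalar-first (w, x, y, z) ordering:

```python
import numpy as np

def quaternion_from_yaw_pitch_roll(yaw, pitch, roll):
    """Unit quaternion (w, x, y, z) for the intrinsic z-y'-x'' (yaw, pitch, roll) sequence."""
    cy, sy = np.cos(yaw / 2), np.sin(yaw / 2)
    cp, sp = np.cos(pitch / 2), np.sin(pitch / 2)
    cr, sr = np.cos(roll / 2), np.sin(roll / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return np.array([w, x, y, z])

q = quaternion_from_yaw_pitch_roll(0.1, 0.2, 0.3)
print(q, np.linalg.norm(q))   # the norm is 1 up to rounding error
```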
Rotation matrix
Any orientation can be achieved by composing three elemental rotations, starting from a known standard orientation. Equivalently, any rotation matrix R can be decomposed as a product of three elemental rotation matrices. For instance:
R = X Y Z is a rotation matrix that may be used to represent a composition of extrinsic rotations about axes z, y, x (in that order), or a composition of intrinsic rotations about axes x-y′-z″ (in that order). However, both the definition of the elemental rotation matrices X, Y, Z, and their multiplication order depend on the choices taken by the user about the definition of both rotation matrices and Euler angles (see, for instance, Ambiguities in the definition of rotation matrices). Unfortunately, different sets of conventions are adopted by users in different contexts. The following table was built according to this set of conventions:
Each matrix is meant to operate by pre-multiplying column vectors (see Ambiguities in the definition of rotation matrices)
Each matrix is meant to represent an active rotation (the composing and composed matrices are supposed to act on the coordinates of vectors defined in the initial fixed reference frame and give as a result the coordinates of a rotated vector defined in the same reference frame).
Each matrix is meant to represent, primarily, a composition of intrinsic rotations (around the axes of the rotating reference frame) and, secondarily, the composition of three extrinsic rotations (which corresponds to the constructive evaluation of the R matrix by the multiplication of three truly elemental matrices, in reverse order).
Right handed reference frames are adopted, and the right hand rule is used to determine the sign of the angles α, β, γ.
For the sake of simplicity, the following table of matrix products uses the following nomenclature:
X, Y, Z are the matrices representing the elemental rotations about the axes x, y, z of the fixed frame (e.g., Xα represents a rotation about x by an angle α).
s and c represent sine and cosine (e.g., sα represents the sine of α).
These tabular results are available in numerous textbooks. For each column the last row constitutes the most commonly used convention.
To change the formulas for passive rotations (or find reverse active rotation), transpose the matrices (then each matrix transforms the initial coordinates of a vector remaining fixed to the coordinates of the same vector measured in the rotated reference system; same rotation axis, same angles, but now the coordinate system rotates, rather than the vector).
The following table contains formulas for angles α, β and γ from the elements of a rotation matrix R.
Properties
The Euler angles form a chart on all of SO(3), the special orthogonal group of rotations in 3D space. The chart is smooth except for a polar coordinate style singularity along β = 0. See charts on SO(3) for a more complete treatment.
The space of rotations is called in general "The Hypersphere of rotations", though this is a misnomer: the group Spin(3) is isometric to the hypersphere S³, but the rotation space SO(3) is instead isometric to the real projective space RP³, which is a 2-fold quotient space of the hypersphere. This 2-to-1 ambiguity is the mathematical origin of spin in physics.
A similar three angle decomposition applies to SU(2), the special unitary group of rotations in complex 2D space, with the difference that β ranges from 0 to 2π. These are also called Euler angles.
The Haar measure for SO(3) in Euler angles is given by the Hopf angle parametrisation of SO(3), sin β dα dβ dγ, where (β, α) parametrise S², the space of rotation axes.
For example, to generate uniformly randomized orientations, let α and γ be uniform from 0 to 2π, let z be uniform from −1 to 1, and let β = arccos(z).
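A sketch of that recipe in code (the z-x-z angle names are illustrative; any proper Euler convention works the same way):

```python
import numpy as np

rng = np.random.default_rng(42)

def random_euler_zxz(n):
    """Draw n orientations uniform with respect to the Haar measure on SO(3),
    returned as proper Euler angles (alpha, beta, gamma) in the z-x-z convention."""
    alpha = rng.uniform(0.0, 2.0 * np.pi, n)
    gamma = rng.uniform(0.0, 2.0 * np.pi, n)
    z = rng.uniform(-1.0, 1.0, n)
    beta = np.arccos(z)          # gives beta a density proportional to sin(beta)
    return alpha, beta, gamma

a, b, g = random_euler_zxz(5)
print(np.column_stack([a, b, g]))
```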
Geometric algebra
Other properties of Euler angles and rotations in general can be found from the geometric algebra, a higher level abstraction, in which the quaternions are an even subalgebra. The principal tool in geometric algebra is the rotor R = exp(−(θ/2) I û), where θ is the angle of rotation, û is the rotation axis (unit vector) and I is the pseudoscalar (the trivector in three-dimensional space).
Higher dimensions
It is possible to define parameters analogous to the Euler angles in dimensions higher than three.
In four dimensions and above, the concept of "rotation about an axis" loses meaning and instead becomes "rotation in a plane." The number of Euler angles needed to represent the group SO(n) is n(n − 1)/2, equal to the number of planes containing two distinct coordinate axes in n-dimensional Euclidean space.
In SO(4) a rotation matrix is defined by two unit quaternions, and therefore has six degrees of freedom, three from each quaternion.
Applications
Vehicles and moving frames
Their main advantage over other orientation descriptions is that they are directly measurable from a gimbal mounted in a vehicle. As gyroscopes keep their rotation axis constant, angles measured in a gyro frame are equivalent to angles measured in the lab frame. Therefore, gyros are used to know the actual orientation of moving spacecraft, and Euler angles are directly measurable. Intrinsic rotation angle cannot be read from a single gimbal, so there has to be more than one gimbal in a spacecraft. Normally there are at least three for redundancy. There is also a relation to the well-known gimbal lock problem of mechanical engineering.
When studying rigid bodies in general, one calls the xyz system space coordinates, and the XYZ system body coordinates. The space coordinates are treated as unmoving, while the body coordinates are considered embedded in the moving body. Calculations involving acceleration, angular acceleration, angular velocity, angular momentum, and kinetic energy are often easiest in body coordinates, because then the moment of inertia tensor does not change in time. If one also diagonalizes the rigid body's moment of inertia tensor (with nine components, six of which are independent), then one has a set of coordinates (called the principal axes) in which the moment of inertia tensor has only three components.
The angular velocity of a rigid body takes a simple form using Euler angles in the moving frame. Also the Euler's rigid body equations are simpler because the inertia tensor is constant in that frame.
Crystallographic texture
In materials science, crystallographic texture (or preferred orientation) can be described using Euler angles. In texture analysis, the Euler angles provide a mathematical depiction of the orientation of individual crystallites within a polycrystalline material, allowing for the quantitative description of the macroscopic material.
The most common definition of the angles is due to Bunge and corresponds to the ZXZ convention. It is important to note, however, that the application generally involves axis transformations of tensor quantities, i.e. passive rotations. Thus the matrix that corresponds to the Bunge Euler angles is the transpose of that shown in the table above.
Others
Euler angles, normally in the Tait–Bryan convention, are also used in robotics for speaking about the degrees of freedom of a wrist. They are also used in electronic stability control in a similar way.
Gun fire control systems require corrections to gun-order angles (bearing and elevation) to compensate for deck tilt (pitch and roll). In traditional systems, a stabilizing gyroscope with a vertical spin axis corrects for deck tilt, and stabilizes the optical sights and radar antenna. However, gun barrels point in a direction different from the line of sight to the target, to anticipate target movement and fall of the projectile due to gravity, among other factors. Gun mounts roll and pitch with the deck plane, but also require stabilization. Gun orders include angles computed from the vertical gyro data, and those computations involve Euler angles.
Euler angles are also used extensively in the quantum mechanics of angular momentum. In quantum mechanics, explicit descriptions of the representations of SO(3) are very important for calculations, and almost all the work has been done using Euler angles. In the early history of quantum mechanics, when physicists and chemists had a sharply negative reaction towards abstract group theoretic methods (called the Gruppenpest''), reliance on Euler angles was also essential for basic theoretical work.
Many mobile computing devices contain accelerometers which can determine these devices' Euler angles with respect to the earth's gravitational attraction. These are used in applications such as games, bubble level simulations, and kaleidoscopes.
Computer graphics libraries like three.js use them to point the camera
See also
3D projection
Axis-angle representation
Conversion between quaternions and Euler angles
Davenport chained rotations
Euler's rotation theorem
Gimbal lock
Quaternion
Quaternions and spatial rotation
Rotation formalisms in three dimensions
Spherical coordinate system
References
Bibliography
External links
David Eberly. Euler Angle Formulas, Geometric Tools
An interactive tutorial on Euler angles available at https://www.mecademic.com/en/how-is-orientation-in-space-represented-with-euler-angles
EulerAngles an iOS app for visualizing in 3D the three rotations associated with Euler angles
Orientation Library "orilib", a collection of routines for rotation / orientation manipulation, including special tools for crystal orientations
Online tool to convert rotation matrices available at rotation converter (numerical conversion)
Online tool to convert symbolic rotation matrices (dead, but still available from the Wayback Machine) symbolic rotation converter
Rotation, Reflection, and Frame Change: Orthogonal tensors in computational engineering mechanics, IOP Publishing
Euler Angles, Quaternions, and Transformation Matrices for Space Shuttle Analysis, NASA
Rotation in three dimensions
Euclidean symmetries
Angle
Analytic geometry
Universal testing machine
A universal testing machine (UTM), also known as a universal tester, universal tensile machine, materials testing machine, or materials test frame, is used to test the tensile strength (pulling), compressive strength (pushing), flexural strength, bending, shear, hardness, and torsion of materials, providing valuable data for designing and ensuring the quality of materials. An earlier name for a tensile testing machine is a tensometer. The "universal" part of the name reflects that it can perform many standard test applications on materials, components, and structures (in other words, that it is versatile).
Electromechanical and Hydraulic Testing System
An electromechanical UTM utilizes an electric motor to apply a controlled force, while a hydraulic UTM uses hydraulic systems for force application. Electromechanical UTMs are favored for their precision, speed, and ease of use, making them suitable for a wide range of applications, including tensile, compression, and flexural testing.
On the other hand, hydraulic UTMs are capable of generating higher forces and are often used for testing high-strength materials such as metals and alloys, where extreme force applications are required. Both types of UTMs play critical roles in various industries including aerospace, automotive, construction, and materials science, enabling engineers and researchers to accurately assess the mechanical properties of materials for design, quality control, and research purposes.
Components
Several variations are in use. Common components include:
Load frame - Usually consisting of two strong supports for the machine. Some small machines have a single support.
Load cell - A force transducer or other means of measuring the load is required. Periodic calibration is usually required by governing regulations or quality system.
Cross head - A movable cross head (crosshead) is controlled to move up or down. Usually this is at a constant speed: sometimes called a constant rate of extension (CRE) machine. Some machines can program the crosshead speed or conduct cyclical testing, testing at constant force, testing at constant deformation, etc. Electromechanical, servo-hydraulic, linear drive, and resonance drive are used.
Means of measuring extension or deformation - Many tests require a measure of the response of the test specimen to the movement of the cross head. Extensometers are sometimes used.
Control panel and software - Provides the test results, with parameters set by the user, for data acquisition and analysis. Some older machines have dial or digital displays and chart recorders. Many newer machines have a computer interface for analysis and printing.
Conditioning - Many tests require controlled conditioning (temperature, humidity, pressure, etc.). The machine can be in a controlled room or a special environmental chamber can be placed around the test specimen for the test.
Test fixtures, specimen holding jaws, and related sample making equipment are called for in many test methods.
Use
The set-up and usage are detailed in a test method, often published by a standards organization. This specifies the sample preparation, fixturing, gauge length (the length which is under study or observation), analysis, etc.
The specimen is placed in the machine between the grips and an extensometer if required can automatically record the change in gauge length during the test. If an extensometer is not fitted, the machine itself can record the displacement between its cross heads on which the specimen is held. However, this method not only records the change in length of the specimen but also all other extending / elastic components of the testing machine and its drive systems including any slipping of the specimen in the grips.
Once the machine is started it begins to apply an increasing load on the specimen. Throughout the test, the control system and its associated software record the load and extension or compression of the specimen.
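As an illustration of what is typically done with the recorded data afterwards (a sketch with made-up numbers; the specimen dimensions, data values, and the simple straight-line fit are assumptions for illustration, not taken from any standard):

```python
import numpy as np

# Hypothetical load-extension record from a tensile test (illustrative values only).
load_N = np.array([0, 500, 1000, 1500, 2000, 2400], dtype=float)   # applied force, N
extension_mm = np.array([0.0, 0.05, 0.10, 0.15, 0.20, 0.26])       # gauge-length change, mm

# Assumed specimen geometry.
gauge_length_mm = 50.0        # original gauge length L0
cross_section_mm2 = 20.0      # original cross-sectional area A0

# Engineering stress and strain.
stress_MPa = load_N / cross_section_mm2    # N/mm^2 is the same as MPa
strain = extension_mm / gauge_length_mm    # dimensionless

# Estimate the elastic modulus from the initial, linear portion of the curve.
slope_MPa = np.polyfit(strain[:4], stress_MPa[:4], 1)[0]
print(f"Estimated Young's modulus: about {slope_MPa / 1000:.0f} GPa")
```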
Machines range from very small table top systems to ones with over 53 MN (12 million lbf) capacity.
See also
Modulus of elasticity
Stress-strain curve
Young's modulus
Necking (engineering)
Fatigue testing
Hydraulic press
References
ASTM E74 - Practice for Calibration of Force Measuring Instruments for Verifying the Force Indication of Testing Machines
ASTM E83 - Practice for Verification and Classification on Extensometer Systems
ASTM E1012 - Practice for Verification of Test Frame and Specimen Alignment Under Tensile and Compressive Axial Force Application
ASTM E1856 - Standard Guide for Evaluating Computerized Data Acquisition Systems Used to Acquire Data from Universal Testing Machines
JIS K7171 - Standard for determining the flexural strength of plastic materials and products
External links
Materials science
Tests
Measuring instruments
FFF system
The furlong–firkin–fortnight (FFF) system is a humorous system of units based on unusual or impractical measurements. The length unit of the system is the furlong, the mass unit is the mass of a firkin of water, and the time unit is the fortnight. Like the SI or metre–kilogram–second systems, there are derived units for velocity, volume, mass and weight, etc. It is sometimes referred to as the FFFF system where the fourth 'F' is degrees Fahrenheit for temperature.
While the FFF system is not used in practice it has been used as an example in discussions of the relative merits of different systems of units. Some of the FFF units, notably the microfortnight, have been used jokingly in computer science. Besides having the meaning "any obscure unit", the derived unit furlongs per fortnight has also served frequently in classroom examples of unit conversion and dimensional analysis.
Base units and definitions
Multiples and derived units
Microfortnight and other decimal prefixes
One microfortnight is equal to 1.2096 seconds. This has become a joke in computer science because in the VMS operating system, the TIMEPROMPTWAIT variable, which holds the time the system will wait for an operator to set the correct date and time at boot if it realizes that the current value is invalid, is set in microfortnights. This is because the computer uses a loop instead of the internal clock, which has not been activated yet to run the timer. The documentation notes that "[t]he time unit of micro-fortnights is approximated as seconds in the implementation".
The Jargon File reports that the millifortnight (about 20 minutes) and nanofortnight have been occasionally used.
Furlong per fortnight
One furlong per fortnight is a speed that would be barely noticeable to the naked eye. It converts to the following (a short conversion check in code follows the list):
1.663×10⁻⁴ m/s (i.e. 0.1663 mm/s),
roughly 1 cm/min (to within 1 part in 400),
5.987×10⁻⁴ km/h,
roughly 0.4 in/min,
3.720×10⁻⁴ mph,
the speed of the tip of a minute hand roughly 3.75 inches (9.5 cm) long.
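These conversions can be checked directly (a short sketch; the furlong and fortnight values are their standard definitions):

```python
# One furlong per fortnight, converted to a few everyday units.
furlong_m = 201.168            # 1 furlong = 660 ft = 201.168 m
fortnight_s = 14 * 24 * 3600   # 1,209,600 s

fpf = furlong_m / fortnight_s                 # metres per second
print(f"{fpf:.4e} m/s")                       # ~ 1.6631e-04 m/s
print(f"{fpf * 1000 * 60:.3f} mm/min")        # ~ 9.979 mm/min (roughly 1 cm/min)
print(f"{fpf * 3600 / 1000:.4e} km/h")        # ~ 5.987e-04 km/h
print(f"{fpf / 0.44704:.4e} mph")             # ~ 3.720e-04 mph
```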
Speed of light
The speed of light is about 1.8026×10¹² furlongs per fortnight (1.8026 terafurlongs per fortnight). By mass–energy equivalence, 1 firkin of mass corresponds (via E = mc², with the firkin taken as the mass of a firkin of water, about 41 kg) to roughly 3.7×10¹⁸ joules.
Others
In the FFF system, heat transfer coefficients are conventionally reported as BTU per foot-fathom per degree Fahrenheit per fortnight. Thermal conductivity has units of BTU per fortnight per furlong per degree Fahrenheit.
Like the more common furlong per fortnight, a firkin per fortnight can refer to "any obscure unit".
See also
List of unusual units of measurement
List of humorous units of measurement
Footnotes
References
Systems of units
Tech humour
Master equation
In physics, chemistry, and related fields, master equations are used to describe the time evolution of a system that can be modeled as being in a probabilistic combination of states at any given time, and the switching between states is determined by a transition rate matrix. The equations are a set of differential equations – over time – of the probabilities that the system occupies each of the different states.
The name was proposed in 1940:
Introduction
A master equation is a phenomenological set of first-order differential equations describing the time evolution of (usually) the probability of a system to occupy each one of a discrete set of states with regard to a continuous time variable t. The most familiar form of a master equation is a matrix form:
dP/dt = A P,
where P is a column vector (whose elements are the probabilities of occupying each state), and A is the matrix of connections. The way connections among states are made determines the dimension of the problem; it is either
a d-dimensional system (where d is 1,2,3,...), where any state is connected with exactly its 2d nearest neighbors, or
a network, where every pair of states may have a connection (depending on the network's properties).
When the connections are time-independent rate constants, the master equation represents a kinetic scheme, and the process is Markovian (any jumping time probability density function for state i is an exponential, with a rate equal to the value of the connection). When the connections depend on the actual time (i.e. the matrix A depends on time, A → A(t)), the process is not stationary and the master equation reads
dP(t)/dt = A(t) P(t).
When the connections represent multi-exponential jumping time probability density functions, the process is semi-Markovian, and the equation of motion is an integro-differential equation termed the generalized master equation:
dP(t)/dt = ∫₀ᵗ A(t − τ) P(τ) dτ.
The matrix can also represent birth and death, meaning that probability is injected (birth) or taken from (death) the system, and then the process is not in equilibrium.
Detailed description of the matrix and properties of the system
Let A be the matrix describing the transition rates (also known as kinetic rates or reaction rates). As always, the first subscript represents the row, the second subscript the column. That is, the source is given by the second subscript, and the destination by the first subscript. This is the opposite of what one might expect, but is appropriate for conventional matrix multiplication.
For each state k, the increase in occupation probability depends on the contribution from all other states to k, and is given by:
dP_k/dt = Σ_ℓ A_kℓ P_ℓ,
where P_ℓ is the probability for the system to be in the state ℓ, while the matrix A is filled with a grid of transition-rate constants. Similarly, P_k contributes to the occupation of all other states.
In probability theory, this identifies the evolution as a continuous-time Markov process, with the integrated master equation obeying a Chapman–Kolmogorov equation.
The master equation can be simplified so that the terms with ℓ = k do not appear in the summation. This allows calculations even if the main diagonal of A is not defined or has been assigned an arbitrary value:
dP_k/dt = Σ_ℓ A_kℓ P_ℓ = Σ_{ℓ ≠ k} A_kℓ P_ℓ + A_kk P_k = Σ_{ℓ ≠ k} (A_kℓ P_ℓ − A_ℓk P_k).
The final equality arises from the fact that
Σ_ℓ dP_ℓ/dt = d/dt (Σ_ℓ P_ℓ) = 0
because the summation over the probabilities yields one, a constant function. Since this has to hold for any probability (and in particular for any probability of the form P_ℓ = δ_ℓk for some k) we get
Σ_ℓ A_ℓk = 0.
Using this we can write the diagonal elements as
A_kk = −Σ_{ℓ ≠ k} A_ℓk.
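A small numerical sketch of these relations (the rates below are made-up example values; the transition-rate matrix is called A as above, with the source state given by the column index): the diagonal is filled so that every column sums to zero, and the probability vector is then evolved with the matrix exponential, here via SciPy.

```python
import numpy as np
from scipy.linalg import expm

# Off-diagonal rates A[k, l]: transition rate from state l (column) to state k (row).
A = np.array([[0.0, 2.0, 0.5],
              [1.0, 0.0, 1.5],
              [0.3, 0.7, 0.0]])

# Fill the diagonal so that each column sums to zero: A[l, l] = -(sum of its off-diagonal column).
np.fill_diagonal(A, -A.sum(axis=0))

P0 = np.array([1.0, 0.0, 0.0])     # start in state 0 with certainty
P_t = expm(A * 5.0) @ P0           # P(t) = exp(A t) P(0)

print(P_t, P_t.sum())              # probabilities at t = 5; the total stays 1
print(np.linalg.eigvals(A))        # one eigenvalue ~ 0; the others have negative real part
```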
The master equation exhibits detailed balance if each of the terms of the summation disappears separately at equilibrium—i.e. if, for all states k and ℓ having equilibrium probabilities π_k and π_ℓ,
A_kℓ π_ℓ = A_ℓk π_k.
These symmetry relations were proved on the basis of the time reversibility of microscopic dynamics (microscopic reversibility) as Onsager reciprocal relations.
Examples of master equations
Many physical problems in classical, quantum mechanics and problems in other sciences, can be reduced to the form of a master equation, thereby performing a great simplification of the problem (see mathematical model).
The Lindblad equation in quantum mechanics is a generalization of the master equation describing the time evolution of a density matrix. Though the Lindblad equation is often referred to as a master equation, it is not one in the usual sense, as it governs not only the time evolution of probabilities (diagonal elements of the density matrix), but also of variables containing information about quantum coherence between the states of the system (non-diagonal elements of the density matrix).
Another special case of the master equation is the Fokker–Planck equation which describes the time evolution of a continuous probability distribution. Complicated master equations which resist analytic treatment can be cast into this form (under various approximations), by using approximation techniques such as the system size expansion.
Stochastic chemical kinetics provide yet another example of the use of the master equation. A master equation may be used to model a set of chemical reactions when the number of molecules of one or more species is small (of the order of 100 or 1000 molecules). The chemical master equation can also be solved for very large models, such as the DNA damage signal from the fungal pathogen Candida albicans.
Quantum master equations
A quantum master equation is a generalization of the idea of a master equation. Rather than just a system of differential equations for a set of probabilities (which only constitutes the diagonal elements of a density matrix), quantum master equations are differential equations for the entire density matrix, including off-diagonal elements. A density matrix with only diagonal elements can be modeled as a classical random process, therefore such an "ordinary" master equation is considered classical. Off-diagonal elements represent quantum coherence which is a physical characteristic that is intrinsically quantum mechanical.
The Redfield equation and Lindblad equation are examples of approximate quantum master equations assumed to be Markovian. More accurate quantum master equations for certain applications include the polaron transformed quantum master equation, and the VPQME (variational polaron transformed quantum master equation).
Theorem about eigenvalues of the matrix and time evolution
Because $\mathbf{A}$ fulfills
$$\sum_{\ell} A_{\ell k} = 0$$
and
$$A_{\ell k} \geq 0 \quad \text{for } \ell \neq k,$$
one can show that:
There is at least one eigenvector with a vanishing eigenvalue, exactly one if the graph of $\mathbf{A}$ is strongly connected.
All other eigenvalues $\lambda$ fulfill $\operatorname{Re}\lambda < 0$.
All eigenvectors $v$ with a non-zero eigenvalue fulfill $\sum_{k} v_k = 0$.
This has important consequences for the time evolution of a state.
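The following minimal numerical sketch (illustrative only; the rates are invented for the example) constructs a small transition-rate matrix with the diagonal convention above and checks the stated properties: columns sum to zero, one eigenvalue vanishes while the others have negative real part, and total probability is conserved under the evolution $P(t) = e^{\mathbf{A}t}P(0)$.

```python
import numpy as np
from scipy.linalg import expm

# Off-diagonal rates A[k, l]: transition rate from state l (column) into state k (row).
A = np.array([[0.0, 2.0, 1.0],
              [3.0, 0.0, 4.0],
              [1.0, 5.0, 0.0]])

# Diagonal convention A_kk = -sum_{l != k} A_lk, so that every column sums to zero.
A[np.diag_indices_from(A)] = -A.sum(axis=0)

print("column sums:", A.sum(axis=0))                           # numerically zero
print("eigenvalues:", np.sort_complex(np.linalg.eigvals(A)))   # one ~0, the rest with Re < 0

# Time evolution of the occupation probabilities: P(t) = expm(A t) @ P(0).
P0 = np.array([1.0, 0.0, 0.0])
for t in (0.1, 1.0, 10.0):
    Pt = expm(A * t) @ P0
    print(f"t = {t:5.1f}   P(t) = {Pt}   total = {Pt.sum():.6f}")  # total stays 1
```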
See also
Kolmogorov equations (Markov jump process)
Continuous-time Markov process
Quantum master equation
Fermi's golden rule
Detailed balance
Boltzmann's H-theorem
References
External links
Timothy Jones, A Quantum Optics Derivation (2006)
Statistical mechanics
Stochastic calculus
Equations
Equations of physics
Game physics
Computer animation physics or game physics are laws of physics as they are defined within a simulation or video game, and the programming logic used to implement these laws. Game physics vary greatly in their degree of similarity to real-world physics. Sometimes, the physics of a game may be designed to mimic the physics of the real world as accurately as is feasible, in order to appear realistic to the player or observer. In other cases, games may intentionally deviate from actual physics for gameplay purposes. Common examples in platform games include the ability to start moving horizontally or change direction in mid-air and the double jump ability found in some games. Setting the values of physical parameters, such as the amount of gravity present, is also a part of defining the game physics of a particular game.
There are several elements that form components of simulation physics including the physics engine, program code that is used to simulate Newtonian physics within the environment, and collision detection, used to solve the problem of determining when any two or more physical objects in the environment cross each other's path.
Physics simulations
There are two central types of physics simulations: rigid-body and soft-body simulators. In a rigid-body simulation, objects are grouped into categories based on how they should interact; this approach is less performance-intensive. Soft-body physics involves simulating individual sections of each object such that it behaves in a more realistic way.
Particle systems
A common aspect of computer games that model some type of conflict is the explosion. Early computer games used the simple expedient of repeating the same explosion in each circumstance. However, in the real world an explosion can vary depending on the terrain, altitude of the explosion, and the type of solid bodies being impacted. Depending on the processing power available, the effects of the explosion can be modeled as the split and shattered components propelled by the expanding gas. This is modelled by means of a particle system simulation. A particle system model allows a variety of other physical phenomena to be simulated, including smoke, moving water, precipitation, and so forth. The individual particles within the system are modelled using the other elements of the physics simulation rules, with the limitation that the number of particles that can be simulated is restricted by the computing power of the hardware. Thus explosions may need to be modelled as a small set of large particles, rather than the more accurate huge number of fine particles.
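A minimal sketch of the particle-system idea described above (names and parameters are invented for illustration): each particle carries a position, velocity and remaining lifetime, the emitter spawns particles radially from the explosion centre, and the whole system is advanced with a simple Euler step under gravity.

```python
import math
import random
from dataclasses import dataclass

GRAVITY = -9.81  # m/s^2, acting along the y axis

@dataclass
class Particle:
    x: float
    y: float
    vx: float
    vy: float
    life: float  # seconds of lifetime remaining

def spawn_explosion(x, y, count=200, speed=8.0):
    """Emit `count` particles radially outward from (x, y) with randomized speed and lifetime."""
    particles = []
    for _ in range(count):
        angle = random.uniform(0.0, 2.0 * math.pi)
        s = speed * random.uniform(0.3, 1.0)
        particles.append(Particle(x, y, s * math.cos(angle), s * math.sin(angle),
                                  life=random.uniform(0.5, 2.0)))
    return particles

def update(particles, dt):
    """One Euler integration step; expired particles are dropped."""
    alive = []
    for p in particles:
        p.vy += GRAVITY * dt
        p.x += p.vx * dt
        p.y += p.vy * dt
        p.life -= dt
        if p.life > 0.0:
            alive.append(p)
    return alive

system = spawn_explosion(0.0, 0.0)
for _ in range(60):                 # simulate one second at 60 steps per second
    system = update(system, 1.0 / 60.0)
print(len(system), "particles still alive")
```

The particle count per explosion is the main knob trading visual fidelity against processing cost, which is exactly the hardware limitation noted above.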
Ragdoll physics
This is a procedural animation and simulation technique to display the movement of a character when killed. It treats the character's body as a series of rigid bones connected together with hinges at the joints. The simulation models what happens to the body as it collapses to the ground. More sophisticated physics models of creature movement and collision interactions require a greater level of computing power and a more accurate simulation of solids, liquids, and hydrodynamics. The modelled articulated systems can then reproduce the effects of skeleton, muscles, tendons, and other physiological components. Some games, such as Boneworks and Half-Life 2, apply forces to individual joints that allow ragdolls to move and behave like humanoids with fully procedural animations. This allows the player to, for example, knock an enemy down or grab each individual joint and move it around, and the physics-based animation adapts accordingly, which wouldn't be possible with conventional means. This method is called active ragdolls and is often used in combination with inverse kinematics.
Projectiles
Projectiles, such as arrows or bullets, often travel at very high speeds. This creates problems with collisions - sometimes the projectile travels so fast that it simply goes past a thin object without ever detecting that it has collided with it. Before, this was solved with ray-casting, which does not require the creation of a physical projectile. However, simply shooting a ray in the direction that the weapon is aiming at is not particularly realistic, which is why modern games often create a physical projectile that can be affected by gravity and other forces. This projectile uses a form of continuous collision detection to make sure that the above-stated problem will not occur (at the cost of inferior performance), since more complex calculations are required to perform such a task.
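A simplified sketch of the swept (continuous) collision test mentioned above, here against a thin wall modelled as an infinite plane; production engines use more general swept-volume queries, and all names below are illustrative.

```python
import numpy as np

def swept_hit_plane(p_start, p_end, plane_point, plane_normal):
    """Return the first intersection of the segment p_start -> p_end with a plane,
    or None. Testing the whole swept segment (instead of only the end position)
    prevents a fast projectile from tunnelling through thin geometry."""
    p_start = np.asarray(p_start, dtype=float)
    d = np.asarray(p_end, dtype=float) - p_start
    n = np.asarray(plane_normal, dtype=float)
    denom = n @ d
    if abs(denom) < 1e-12:            # moving parallel to the wall: no crossing
        return None
    t = (n @ (np.asarray(plane_point, dtype=float) - p_start)) / denom
    if 0.0 <= t <= 1.0:               # the crossing happens within this frame's motion
        return p_start + t * d
    return None

# A bullet at 900 m/s covers 15 m in a single 1/60 s frame and would skip a thin
# wall at x = 5 m under a discrete per-frame position check; the sweep still finds it.
p0 = np.array([0.0, 1.5, 0.0])
p1 = p0 + np.array([900.0, 0.0, 0.0]) * (1.0 / 60.0)
print(swept_hit_plane(p0, p1, plane_point=[5.0, 0.0, 0.0], plane_normal=[1.0, 0.0, 0.0]))
```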
Games such as FIFA 14 require accurate projectile physics for objects such as the soccer ball. In FIFA 14, developers were required to fix code related to the drag coefficient which was inaccurate in previous games, leading to a much more realistic simulation of a real ball.
Books
See also
Physics engine
Physics game
Real-time simulation
References
External links
Computer physics engines
Video game development
Relativistic quantum mechanics
In physics, relativistic quantum mechanics (RQM) is any Poincaré covariant formulation of quantum mechanics (QM). This theory is applicable to massive particles propagating at all velocities up to those comparable to the speed of light c, and can accommodate massless particles. The theory has application in high energy physics, particle physics and accelerator physics, as well as atomic physics, chemistry and condensed matter physics. Non-relativistic quantum mechanics refers to the mathematical formulation of quantum mechanics applied in the context of Galilean relativity, more specifically quantizing the equations of classical mechanics by replacing dynamical variables by operators. Relativistic quantum mechanics (RQM) is quantum mechanics applied with special relativity. Although the earlier formulations, like the Schrödinger picture and Heisenberg picture were originally formulated in a non-relativistic background, a few of them (e.g. the Dirac or path-integral formalism) also work with special relativity.
Key features common to all RQMs include: the prediction of antimatter, spin magnetic moments of elementary spin fermions, fine structure, and quantum dynamics of charged particles in electromagnetic fields. The key result is the Dirac equation, from which these predictions emerge automatically. By contrast, in non-relativistic quantum mechanics, terms have to be introduced artificially into the Hamiltonian operator to achieve agreement with experimental observations.
The most successful (and most widely used) RQM is relativistic quantum field theory (QFT), in which elementary particles are interpreted as field quanta. A unique consequence of QFT that has been tested against other RQMs is the failure of conservation of particle number, for example in matter creation and annihilation.
Paul Dirac's work between 1927 and 1933 shaped the synthesis of special relativity and quantum mechanics. His work was instrumental, as he formulated the Dirac equation and also originated quantum electrodynamics, both of which were successful in combining the two theories.
In this article, the equations are written in familiar 3D vector calculus notation and use hats for operators (not necessarily in the literature), and where space and time components can be collected, tensor index notation is shown also (frequently used in the literature), in addition the Einstein summation convention is used. SI units are used here; Gaussian units and natural units are common alternatives. All equations are in the position representation; for the momentum representation the equations have to be Fourier transformed – see position and momentum space.
Combining special relativity and quantum mechanics
One approach is to modify the Schrödinger picture to be consistent with special relativity.
A postulate of quantum mechanics is that the time evolution of any quantum system is given by the Schrödinger equation:
using a suitable Hamiltonian operator corresponding to the system. The solution is a complex-valued wavefunction , a function of the 3D position vector of the particle at time , describing the behavior of the system.
Every particle has a non-negative spin quantum number $s$. The number $2s$ is an integer, odd for fermions and even for bosons. Each $s$ has $2s + 1$ z-projection quantum numbers; $\sigma = s, s - 1, \dots, -s + 1, -s$. This is an additional discrete variable the wavefunction requires; $\psi(\mathbf{r}, t, \sigma)$.
Historically, in the early 1920s Pauli, Kronig, Uhlenbeck and Goudsmit were the first to propose the concept of spin. The inclusion of spin in the wavefunction incorporates the Pauli exclusion principle (1925) and the more general spin–statistics theorem (1939) due to Fierz, rederived by Pauli a year later. This is the explanation for a diverse range of subatomic particle behavior and phenomena: from the electronic configurations of atoms, nuclei (and therefore all elements on the periodic table and their chemistry), to the quark configurations and colour charge (hence the properties of baryons and mesons).
A fundamental prediction of special relativity is the relativistic energy–momentum relation; for a particle of rest mass $m_0$, and in a particular frame of reference with energy $E$ and 3-momentum $\mathbf{p}$ with magnitude $|\mathbf{p}| = \sqrt{\mathbf{p}\cdot\mathbf{p}}$ in terms of the dot product, it is:
These equations are used together with the energy and momentum operators, which are respectively:
to construct a relativistic wave equation (RWE): a partial differential equation consistent with the energy–momentum relation, and is solved for to predict the quantum dynamics of the particle. For space and time to be placed on equal footing, as in relativity, the orders of space and time partial derivatives should be equal, and ideally as low as possible, so that no initial values of the derivatives need to be specified. This is important for probability interpretations, exemplified below. The lowest possible order of any differential equation is the first (zeroth order derivatives would not form a differential equation).
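For reference, the standard forms of the relation and operators just referred to are, in the position representation:

```latex
% Relativistic energy–momentum relation for rest mass m_0:
E^{2} = c^{2}\,\mathbf{p}\cdot\mathbf{p} + \left(m_0 c^{2}\right)^{2}
% Energy and 3-momentum operators:
\hat{E} = i\hbar\,\frac{\partial}{\partial t}\,, \qquad
\hat{\mathbf{p}} = -i\hbar\,\nabla
```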
The Heisenberg picture is another formulation of QM, in which case the wavefunction is time-independent, and the operators contain the time dependence, governed by the equation of motion:
This equation is also true in RQM, provided the Heisenberg operators are modified to be consistent with SR.
Historically, around 1926, Schrödinger and Heisenberg showed that wave mechanics and matrix mechanics are equivalent; this was later furthered by Dirac using transformation theory.
A more modern approach to RWEs, first introduced during the time RWEs were developing for particles of any spin, is to apply representations of the Lorentz group.
Space and time
In classical mechanics and non-relativistic QM, time is an absolute quantity all observers and particles can always agree on, "ticking away" in the background independent of space. Thus in non-relativistic QM one has for a many particle system .
In relativistic mechanics, the spatial coordinates and coordinate time are not absolute; any two observers moving relative to each other can measure different locations and times of events. The position and time coordinates combine naturally into a four-dimensional spacetime position corresponding to events, and the energy and 3-momentum combine naturally into the four-momentum of a dynamic particle, as measured in some reference frame, change according to a Lorentz transformation as one measures in a different frame boosted and/or rotated relative the original frame in consideration. The derivative operators, and hence the energy and 3-momentum operators, are also non-invariant and change under Lorentz transformations.
Under a proper orthochronous Lorentz transformation in Minkowski space, all one-particle quantum states locally transform under some representation of the Lorentz group:
where is a finite-dimensional representation, in other words a square matrix . Again, is thought of as a column vector containing components with the allowed values of . The quantum numbers and as well as other labels, continuous or discrete, representing other quantum numbers are suppressed. One value of may occur more than once depending on the representation.
Non-relativistic and relativistic Hamiltonians
The classical Hamiltonian for a particle in a potential is the kinetic energy plus the potential energy , with the corresponding quantum operator in the Schrödinger picture:
and substituting this into the above Schrödinger equation gives a non-relativistic QM equation for the wavefunction: the procedure is a straightforward substitution of a simple expression. By contrast this is not as easy in RQM; the energy–momentum equation is quadratic in energy and momentum leading to difficulties. Naively setting:
is not helpful for several reasons. The square root of the operators cannot be used as it stands; it would have to be expanded in a power series before the momentum operator, raised to a power in each term, could act on . As a result of the power series, the space and time derivatives are completely asymmetric: infinite-order in space derivatives but only first order in the time derivative, which is inelegant and unwieldy. Again, there is the problem of the non-invariance of the energy operator, equated to the square root which is also not invariant. Another problem, less obvious and more severe, is that it can be shown to be nonlocal and can even violate causality: if the particle is initially localized at a point so that is finite and zero elsewhere, then at any later time the equation predicts delocalization everywhere, even for which means the particle could arrive at a point before a pulse of light could. This would have to be remedied by the additional constraint .
There is also the problem of incorporating spin in the Hamiltonian, which isn't a prediction of the non-relativistic Schrödinger theory. Particles with spin have a corresponding spin magnetic moment quantized in units of , the Bohr magneton:
where is the (spin) g-factor for the particle, and the spin operator, so they interact with electromagnetic fields. For a particle in an externally applied magnetic field , the interaction term
has to be added to the above non-relativistic Hamiltonian. On the contrary; a relativistic Hamiltonian introduces spin automatically as a requirement of enforcing the relativistic energy-momentum relation.
Relativistic Hamiltonians are analogous to those of non-relativistic QM in the following respect; there are terms including rest mass and interaction terms with externally applied fields, similar to the classical potential energy term, as well as momentum terms like the classical kinetic energy term. A key difference is that relativistic Hamiltonians contain spin operators in the form of matrices, in which the matrix multiplication runs over the spin index , so in general a relativistic Hamiltonian:
is a function of space, time, and the momentum and spin operators.
The Klein–Gordon and Dirac equations for free particles
Substituting the energy and momentum operators directly into the energy–momentum relation may at first sight seem appealing, to obtain the Klein–Gordon equation:
and was discovered by many people because of the straightforward way of obtaining it, notably by Schrödinger in 1925 before he found the non-relativistic equation named after him, and by Klein and Gordon in 1927, who included electromagnetic interactions in the equation. This is relativistically invariant, yet this equation alone isn't a sufficient foundation for RQM for at least two reasons: one is that negative-energy states are solutions, another is the density (given below), and this equation as it stands is only applicable to spinless particles. This equation can be factored into the form:
where and are not simply numbers or vectors, but 4 × 4 Hermitian matrices that are required to anticommute for :
and square to the identity matrix:
so that terms with mixed second-order derivatives cancel while the second-order derivatives purely in space and time remain. The first factor:
is the Dirac equation. The other factor is also the Dirac equation, but for a particle of negative mass. Each factor is relativistically invariant. The reasoning can be done the other way round: propose the Hamiltonian in the above form, as Dirac did in 1928, then pre-multiply the equation by the other factor of operators , and comparison with the KG equation determines the constraints on and . The positive mass equation can continue to be used without loss of continuity. The matrices multiplying suggest it isn't a scalar wavefunction as permitted in the KG equation, but must instead be a four-component entity. The Dirac equation still predicts negative energy solutions, so Dirac postulated that negative energy states are always occupied, because according to the Pauli principle, electronic transitions from positive to negative energy levels in atoms would be forbidden. See Dirac sea for details.
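In standard notation, and consistent with the α and β matrices described above, the two free-particle equations discussed in this subsection read:

```latex
% Klein–Gordon equation (free particle, spin 0):
\frac{1}{c^{2}}\frac{\partial^{2}\psi}{\partial t^{2}} - \nabla^{2}\psi
  + \left(\frac{m_0 c}{\hbar}\right)^{2}\psi = 0
% Dirac equation in Hamiltonian form:
i\hbar\,\frac{\partial \psi}{\partial t}
  = \left( c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} + \beta\, m_0 c^{2} \right)\psi
```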
Densities and currents
In non-relativistic quantum mechanics, the square modulus of the wavefunction gives the probability density function . This is the Copenhagen interpretation, circa 1927. In RQM, while is a wavefunction, the probability interpretation is not the same as in non-relativistic QM. Some RWEs do not predict a probability density or probability current (really meaning probability current density) because they are not positive-definite functions of space and time. The Dirac equation does:
where the dagger denotes the Hermitian adjoint (authors usually write for the Dirac adjoint) and is the probability four-current, while the Klein–Gordon equation does not:
where is the four-gradient. Since the initial values of both and may be freely chosen, the density can be negative.
Instead, what appears look at first sight a "probability density" and "probability current" has to be reinterpreted as charge density and current density when multiplied by electric charge. Then, the wavefunction is not a wavefunction at all, but reinterpreted as a field. The density and current of electric charge always satisfy a continuity equation:
as charge is a conserved quantity. Probability density and current also satisfy a continuity equation because probability is conserved, however this is only possible in the absence of interactions.
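The density and current expressions referred to in this section take the standard forms below (for the Dirac and Klein–Gordon cases respectively), together with the continuity equation they satisfy:

```latex
% Dirac: positive-definite density and associated current
\rho = \psi^{\dagger}\psi\,, \qquad \mathbf{j} = c\,\psi^{\dagger}\boldsymbol{\alpha}\,\psi
% Klein–Gordon: density that is not positive-definite
\rho = \frac{i\hbar}{2 m_0 c^{2}}
       \left( \psi^{*}\frac{\partial \psi}{\partial t} - \psi\,\frac{\partial \psi^{*}}{\partial t} \right)
% Continuity equation
\frac{\partial \rho}{\partial t} + \nabla\cdot\mathbf{j} = 0
```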
Spin and electromagnetically interacting particles
Including interactions in RWEs is generally difficult. Minimal coupling is a simple way to include the electromagnetic interaction. For one charged particle of electric charge in an electromagnetic field, given by the magnetic vector potential defined by the magnetic field , and electric scalar potential , this is:
where is the four-momentum that has a corresponding 4-momentum operator, and the four-potential. In the following, the non-relativistic limit refers to the limiting cases:
that is, the total energy of the particle is approximately the rest energy for small electric potentials, and the momentum is approximately the classical momentum.
Spin 0
In RQM, the KG equation admits the minimal coupling prescription;
In the case where the charge is zero, the equation reduces trivially to the free KG equation, so nonzero charge is assumed below. This is a scalar equation that is invariant under the irreducible one-dimensional scalar representation of the Lorentz group. This means that all of its solutions will belong to a direct sum of representations. Solutions that do not belong to the irreducible representation will have two or more independent components. Such solutions cannot in general describe particles with nonzero spin since spin components are not independent. Other constraints will have to be imposed for that, e.g. the Dirac equation for spin 1/2; see below. Thus if a system satisfies the KG equation only, it can only be interpreted as a system with zero spin.
The electromagnetic field is treated classically according to Maxwell's equations and the particle is described by a wavefunction, the solution to the KG equation. The equation is, as it stands, not always very useful, because massive spinless particles, such as the π-mesons, experience the much stronger strong interaction in addition to the electromagnetic interaction. It does, however, correctly describe charged spinless bosons in the absence of other interactions.
The KG equation is applicable to spinless charged bosons in an external electromagnetic potential. As such, the equation cannot be applied to the description of atoms, since the electron is a spin-1/2 particle. In the non-relativistic limit the equation reduces to the Schrödinger equation for a spinless charged particle in an electromagnetic field:
Spin 1/2
Non-relativistically, spin was phenomenologically introduced in the Pauli equation by Pauli in 1927 for particles in an electromagnetic field:
by means of the 2 × 2 Pauli matrices, and is not just a scalar wavefunction as in the non-relativistic Schrödinger equation, but a two-component spinor field:
where the subscripts ↑ and ↓ refer to the "spin up" and "spin down" states.
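In its usual form (for a particle of mass m and charge q in a field given by the potentials A and φ), the Pauli equation and its two-component spinor read:

```latex
% Pauli equation
i\hbar\,\frac{\partial \psi}{\partial t}
  = \left[ \frac{1}{2m}\left(\boldsymbol{\sigma}\cdot\left(\hat{\mathbf{p}} - q\mathbf{A}\right)\right)^{2}
           + q\phi \right]\psi\,,
\qquad
\psi = \begin{pmatrix} \psi_{\uparrow} \\ \psi_{\downarrow} \end{pmatrix}
```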
In RQM, the Dirac equation can also incorporate minimal coupling, rewritten from above;
and was the first equation to accurately predict spin, a consequence of the 4 × 4 gamma matrices . There is a 4 × 4 identity matrix pre-multiplying the energy operator (including the potential energy term), conventionally not written for simplicity and clarity (i.e. treated like the number 1). Here is a four-component spinor field, which is conventionally split into two two-component spinors in the form:
The 2-spinor corresponds to a particle with 4-momentum and charge and two spin states (as before). The other 2-spinor corresponds to a similar particle with the same mass and spin states, but negative 4-momentum and negative charge, that is, negative energy states, time-reversed momentum, and negated charge. This was the first interpretation and prediction of a particle and corresponding antiparticle. See Dirac spinor and bispinor for further description of these spinors. In the non-relativistic limit the Dirac equation reduces to the Pauli equation (see Dirac equation for how). When applied to a one-electron atom or ion, setting $\mathbf{A} = \mathbf{0}$ and $\phi$ to the appropriate electrostatic potential, additional relativistic terms include the spin–orbit interaction, electron gyromagnetic ratio, and Darwin term. In ordinary QM these terms have to be put in by hand and treated using perturbation theory. The positive energies do account accurately for the fine structure.
Within RQM, for massless particles the Dirac equation reduces to:
the first of which is the Weyl equation, a considerable simplification applicable for massless neutrinos. This time there is a 2 × 2 identity matrix pre-multiplying the energy operator conventionally not written. In RQM it is useful to take this as the zeroth Pauli matrix which couples to the energy operator (time derivative), just as the other three matrices couple to the momentum operator (spatial derivatives).
The Pauli and gamma matrices were introduced here, in theoretical physics, rather than in pure mathematics itself. They have applications to quaternions and to the SU(2) and SO(3) Lie groups, because they satisfy the important commutator and anticommutator relations, respectively:
where is the three-dimensional Levi-Civita symbol. The gamma matrices form bases in Clifford algebra, and have a connection to the components of the flat spacetime Minkowski metric in the anticommutation relation:
(This can be extended to curved spacetime by introducing vierbeins, but is not the subject of special relativity).
In 1929, the Breit equation was found to describe two or more electromagnetically interacting massive spin-1/2 fermions to first-order relativistic corrections; one of the first attempts to describe such a relativistic quantum many-particle system. This is, however, still only an approximation, and the Hamiltonian includes numerous long and complicated sums.
Helicity and chirality
The helicity operator is defined by;
where p is the momentum operator, S the spin operator for a particle of spin s, E is the total energy of the particle, and m0 its rest mass. Helicity indicates the orientations of the spin and translational momentum vectors. Helicity is frame-dependent because of the 3-momentum in the definition, and is quantized due to spin quantization, which has discrete positive values for parallel alignment, and negative values for antiparallel alignment.
An automatic occurrence in the Dirac equation (and the Weyl equation) is the projection of the spin operator on the 3-momentum (times c), , which is the helicity (for the spin case) times .
For massless particles the helicity simplifies to:
Higher spins
The Dirac equation can only describe particles of spin 1/2. Beyond the Dirac equation, RWEs have been applied to free particles of various spins. In 1936, Dirac extended his equation to all fermions; three years later Fierz and Pauli rederived the same equation. The Bargmann–Wigner equations were found in 1948 using Lorentz group theory, applicable for all free particles with any spin. Considering the factorization of the KG equation above, and more rigorously by Lorentz group theory, it becomes apparent to introduce spin in the form of matrices.
The wavefunctions are multicomponent spinor fields, which can be represented as column vectors of functions of space and time:
where the expression on the right is the Hermitian conjugate. For a massive particle of spin , there are components for the particle, and another for the corresponding antiparticle (there are possible values in each case), altogether forming a -component spinor field:
with the + subscript indicating the particle and − subscript for the antiparticle. However, for massless particles of spin s, there are only ever two-component spinor fields; one is for the particle in one helicity state corresponding to +s and the other for the antiparticle in the opposite helicity state corresponding to −s:
According to the relativistic energy-momentum relation, all massless particles travel at the speed of light, so particles traveling at the speed of light are also described by two-component spinors. Historically, Élie Cartan found the most general form of spinors in 1913, prior to the spinors revealed in the RWEs following the year 1927.
For equations describing higher-spin particles, the inclusion of interactions is nowhere near as simple as minimal coupling; they lead to incorrect predictions and self-inconsistencies. For spin greater than 1/2, the RWE is not fixed by the particle's mass, spin, and electric charge; the electromagnetic moments (electric dipole moments and magnetic dipole moments) allowed by the spin quantum number are arbitrary. (Theoretically, magnetic charge would contribute also). For example, the spin-1/2 case only allows a magnetic dipole, but for spin-1 particles magnetic quadrupoles and electric dipoles are also possible. For more on this topic, see multipole expansion and (for example) Cédric Lorcé (2009).
Velocity operator
The Schrödinger/Pauli velocity operator can be defined for a massive particle using the classical definition , and substituting quantum operators in the usual way:
which has eigenvalues that take any value. In RQM, the Dirac theory, it is:
which must have eigenvalues between ±c. See Foldy–Wouthuysen transformation for more theoretical background.
Relativistic quantum Lagrangians
The Hamiltonian operators in the Schrödinger picture are one approach to forming the differential equations for . An equivalent alternative is to determine a Lagrangian (really meaning Lagrangian density), then generate the differential equation by the field-theoretic Euler–Lagrange equation:
For some RWEs, a Lagrangian can be found by inspection. For example, the Dirac Lagrangian is:
and Klein–Gordon Lagrangian is:
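In one common convention (factors of ħ and c differ between texts), the two Lagrangian densities referred to above are:

```latex
% Dirac Lagrangian density, with \bar{\psi} = \psi^{\dagger}\gamma^{0}:
\mathcal{L}_{\text{Dirac}}
  = \bar{\psi}\left( i\hbar c\,\gamma^{\mu}\partial_{\mu} - m_0 c^{2} \right)\psi
% Klein–Gordon Lagrangian density:
\mathcal{L}_{\text{KG}}
  = \hbar^{2}\,\partial^{\mu}\psi^{*}\,\partial_{\mu}\psi - \left(m_0 c\right)^{2}\psi^{*}\psi
```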
This is not possible for all RWEs; and is one reason the Lorentz group theoretic approach is important and appealing: fundamental invariance and symmetries in space and time can be used to derive RWEs using appropriate group representations. The Lagrangian approach with field interpretation of is the subject of QFT rather than RQM: Feynman's path integral formulation uses invariant Lagrangians rather than Hamiltonian operators, since the latter can become extremely complicated, see (for example) Weinberg (1995).
Relativistic quantum angular momentum
In non-relativistic QM, the angular momentum operator is formed from the classical pseudovector definition . In RQM, the position and momentum operators are inserted directly where they appear in the orbital relativistic angular momentum tensor defined from the four-dimensional position and momentum of the particle, equivalently a bivector in the exterior algebra formalism:
which are six components altogether: three are the non-relativistic 3-orbital angular momenta; , , , and the other three , , are boosts of the centre of mass of the rotating object. An additional relativistic-quantum term has to be added for particles with spin. For a particle of rest mass , the total angular momentum tensor is:
where the star denotes the Hodge dual, and
is the Pauli–Lubanski pseudovector. For more on relativistic spin, see (for example) Troshin & Tyurin (1994).
Thomas precession and spin–orbit interactions
In 1926, the Thomas precession was discovered: relativistic corrections to the spin of elementary particles with application in the spin–orbit interaction of atoms and rotation of macroscopic objects. In 1939 Wigner derived the Thomas precession.
In classical electromagnetism and special relativity, an electron moving with a velocity through an electric field but not a magnetic field , will in its own frame of reference experience a Lorentz-transformed magnetic field :
In the non-relativistic limit :
so the non-relativistic spin interaction Hamiltonian becomes:
where the first term is already the non-relativistic magnetic moment interaction, and the second term the relativistic correction of order $1/c^{2}$, but this disagrees with experimental atomic spectra by a factor of 2. It was pointed out by L. Thomas that there is a second relativistic effect: an electric field component perpendicular to the electron velocity causes an additional acceleration of the electron perpendicular to its instantaneous velocity, so the electron moves in a curved path. The electron moves in a rotating frame of reference, and this additional precession of the electron is called the Thomas precession. It can be shown that the net result of this effect is that the spin–orbit interaction is reduced by half, as if the magnetic field experienced by the electron has only one-half the value, and the relativistic correction in the Hamiltonian is:
In the case of RQM, the factor of 1/2 is predicted by the Dirac equation.
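With the Thomas half included, the resulting spin–orbit term for an electron in a central potential V(r) takes the standard form:

```latex
\hat{H}_{\text{SO}}
  = \frac{1}{2 m^{2} c^{2}}\,\frac{1}{r}\frac{\mathrm{d}V}{\mathrm{d}r}\;
    \hat{\mathbf{S}}\cdot\hat{\mathbf{L}}
```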
History
The events which led to and established RQM, and the continuation beyond into quantum electrodynamics (QED), are summarized below [see, for example, R. Resnick and R. Eisberg (1985), and P. W. Atkins (1974)]. More than half a century of experimental and theoretical research, from the 1890s through to the 1950s, in the new and still mysterious quantum theory revealed that a number of phenomena cannot be explained by QM alone. SR, formulated at the turn of the 20th century, was found to be a necessary component, leading to unification: RQM. Theoretical predictions and experiments mainly focused on the newly found atomic physics, nuclear physics, and particle physics, by considering spectroscopy, diffraction and scattering of particles, and the electrons and nuclei within atoms and molecules. Numerous results are attributed to the effects of spin.
Relativistic description of particles in quantum phenomena
Albert Einstein in 1905 explained the photoelectric effect: a particle description of light as photons. In 1916, Sommerfeld explains fine structure: the splitting of the spectral lines of atoms due to first-order relativistic corrections. The Compton effect of 1923 provided more evidence that special relativity does apply; in this case to a particle description of photon–electron scattering. de Broglie extends wave–particle duality to matter: the de Broglie relations, which are consistent with special relativity and quantum mechanics. By 1927, Davisson and Germer and separately G. Thomson successfully diffract electrons, providing experimental evidence of wave-particle duality.
Experiments
1897 J. J. Thomson discovers the electron and measures its mass-to-charge ratio. Discovery of the Zeeman effect: the splitting of a spectral line into several components in the presence of a static magnetic field.
1908 Millikan measures the charge on the electron and finds experimental evidence of its quantization, in the oil drop experiment.
1911 Alpha particle scattering in the Geiger–Marsden experiment, led by Rutherford, showed that atoms possess an internal structure: the atomic nucleus.
1913 The Stark effect is discovered: splitting of spectral lines due to a static electric field (compare with the Zeeman effect).
1922 Stern–Gerlach experiment: experimental evidence of spin and its quantization.
1924 Stoner studies splitting of energy levels in magnetic fields.
1932 Experimental discovery of the neutron by Chadwick, and positrons by Anderson, confirming the theoretical prediction of positrons.
1958 Discovery of the Mössbauer effect: resonant and recoil-free emission and absorption of gamma radiation by atomic nuclei bound in a solid, useful for accurate measurements of gravitational redshift and time dilation, and in the analysis of nuclear electromagnetic moments in hyperfine interactions.
Quantum non-locality and relativistic locality
In 1935, Einstein, Rosen, Podolsky published a paper concerning quantum entanglement of particles, questioning quantum nonlocality and the apparent violation of causality upheld in SR: particles can appear to interact instantaneously at arbitrary distances. This was a misconception since information is not and cannot be transferred in the entangled states; rather the information transmission is in the process of measurement by two observers (one observer has to send a signal to the other, which cannot exceed c). QM does not violate SR. In 1959, Bohm and Aharonov publish a paper on the Aharonov–Bohm effect, questioning the status of electromagnetic potentials in QM. The EM field tensor and EM 4-potential formulations are both applicable in SR, but in QM the potentials enter the Hamiltonian (see above) and influence the motion of charged particles even in regions where the fields are zero. In 1964, Bell's theorem was published in a paper on the EPR paradox, showing that QM cannot be derived from local hidden-variable theories if locality is to be maintained.
The Lamb shift
In 1947, the Lamb shift was discovered: a small difference in the 2S and 2P levels of hydrogen, due to the interaction between the electron and vacuum. Lamb and Retherford experimentally measured stimulated radio-frequency transitions between the 2S and 2P hydrogen levels by microwave radiation. An explanation of the Lamb shift is presented by Bethe. Papers on the effect were published in the early 1950s.
Development of quantum electrodynamics
1927 Dirac establishes the field of QED, also coining the term "quantum electrodynamics".
1943 Tomonaga begins work on renormalization, influential in QED.
1947 Schwinger calculates the anomalous magnetic moment of the electron. Kusch measures the anomalous electron magnetic moment, confirming one of QED's great predictions.
See also
Atomic physics and chemistry
Relativistic quantum chemistry
Breit equation
Electron spin resonance
Fine-structure constant
Mathematical physics
Quantum spacetime
Spin connection
Spinor bundle
Dirac equation in the algebra of physical space
Casimir invariant
Casimir operator
Wigner D-matrix
Particle physics and quantum field theory
Zitterbewegung
Two-body Dirac equations
Relativistic Heavy Ion Collider
Symmetry (physics)
Parity
CPT invariance
Chirality (physics)
Standard model
Gauge theory
Tachyon
Modern searches for Lorentz violation
Footnotes
References
Selected books
Group theory in quantum physics
Selected papers
Further reading
Relativistic quantum mechanics and field theory
Quantum theory and applications in general
External links
Quantum mechanics
Mathematical physics
Electromagnetism
Particle physics
Atomic physics
Theory of relativity
Anisotropy
Anisotropy is the structural property of non-uniformity in different directions, as opposed to isotropy. An anisotropic object or pattern has properties that differ according to direction of measurement. For example, many materials exhibit very different physical or mechanical properties when measured along different axes, e.g. absorbance, refractive index, conductivity, and tensile strength.
An example of anisotropy is light coming through a polarizer. Another is wood, which is easier to split along its grain than across it because of the directional non-uniformity of the grain (the grain is the same in one direction, not all directions).
Fields of interest
Computer graphics
In the field of computer graphics, an anisotropic surface changes in appearance as it rotates about its geometric normal, as is the case with velvet.
Anisotropic filtering (AF) is a method of enhancing the image quality of textures on surfaces that are far away and steeply angled with respect to the point of view. Older techniques, such as bilinear and trilinear filtering, do not take into account the angle a surface is viewed from, which can result in aliasing or blurring of textures. By reducing detail in one direction more than another, these effects can be reduced easily.
Chemistry
A chemical anisotropic filter, as used to filter particles, is a filter with increasingly smaller interstitial spaces in the direction of filtration so that the proximal regions filter out larger particles and distal regions increasingly remove smaller particles, resulting in greater flow-through and more efficient filtration.
In fluorescence spectroscopy, the fluorescence anisotropy, calculated from the polarization properties of fluorescence from samples excited with plane-polarized light, is used, e.g., to determine the shape of a macromolecule. Anisotropy measurements reveal the average angular displacement of the fluorophore that occurs between absorption and subsequent emission of a photon.
In NMR spectroscopy, the orientation of nuclei with respect to the applied magnetic field determines their chemical shift. In this context, anisotropic systems refer to the electron distribution of molecules with abnormally high electron density, like the pi system of benzene. This abnormal electron density affects the applied magnetic field and causes the observed chemical shift to change.
Real-world imagery
Images of a gravity-bound or man-made environment are particularly anisotropic in the orientation domain, with more image structure located at orientations parallel with or orthogonal to the direction of gravity (vertical and horizontal).
Physics
Physicists from the University of California, Berkeley reported their detection of the cosmic anisotropy in the cosmic microwave background radiation in 1977. Their experiment demonstrated the Doppler shift caused by the movement of the Earth with respect to the early Universe matter, the source of the radiation. Cosmic anisotropy has also been seen in the alignment of galaxies' rotation axes and polarization angles of quasars.
Physicists use the term anisotropy to describe direction-dependent properties of materials. Magnetic anisotropy, for example, may occur in a plasma, so that its magnetic field is oriented in a preferred direction. Plasmas may also show "filamentation" (such as that seen in lightning or a plasma globe) that is directional.
An anisotropic liquid has the fluidity of a normal liquid, but its molecules have an average structural order relative to each other along the molecular axis, unlike water or chloroform, which contain no structural ordering of the molecules. Liquid crystals are examples of anisotropic liquids.
Some materials conduct heat in a way that is isotropic, that is independent of spatial orientation around the heat source. Heat conduction is more commonly anisotropic, which implies that detailed geometric modeling of typically diverse materials being thermally managed is required. The materials used to transfer and reject heat from the heat source in electronics are often anisotropic.
Many crystals are anisotropic to light ("optical anisotropy"), and exhibit properties such as birefringence. Crystal optics describes light propagation in these media. An "axis of anisotropy" is defined as the axis along which isotropy is broken (or an axis of symmetry, such as normal to crystalline layers). Some materials can have multiple such optical axes.
Geophysics and geology
Seismic anisotropy is the variation of seismic wavespeed with direction. Seismic anisotropy is an indicator of long range order in a material, where features smaller than the seismic wavelength (e.g., crystals, cracks, pores, layers, or inclusions) have a dominant alignment. This alignment leads to a directional variation of elasticity wavespeed. Measuring the effects of anisotropy in seismic data can provide important information about processes and mineralogy in the Earth; significant seismic anisotropy has been detected in the Earth's crust, mantle, and inner core.
Geological formations with distinct layers of sedimentary material can exhibit electrical anisotropy; electrical conductivity in one direction (e.g. parallel to a layer), is different from that in another (e.g. perpendicular to a layer). This property is used in the gas and oil exploration industry to identify hydrocarbon-bearing sands in sequences of sand and shale. Sand-bearing hydrocarbon assets have high resistivity (low conductivity), whereas shales have lower resistivity. Formation evaluation instruments measure this conductivity or resistivity, and the results are used to help find oil and gas in wells. The mechanical anisotropy measured for some of the sedimentary rocks like coal and shale can change with corresponding changes in their surface properties like sorption when gases are produced from the coal and shale reservoirs.
The hydraulic conductivity of aquifers is often anisotropic for the same reason. When calculating groundwater flow to drains or to wells, the difference between horizontal and vertical permeability must be taken into account; otherwise the results may be subject to error.
Most common rock-forming minerals are anisotropic, including quartz and feldspar. Anisotropy in minerals is most reliably seen in their optical properties. An example of an isotropic mineral is garnet.
Igneous rock like granite also shows the anisotropy due to the orientation of the minerals during the solidification process.
Medical acoustics
Anisotropy is also a well-known property in medical ultrasound imaging describing a different resulting echogenicity of soft tissues, such as tendons, when the angle of the transducer is changed. Tendon fibers appear hyperechoic (bright) when the transducer is perpendicular to the tendon, but can appear hypoechoic (darker) when the transducer is angled obliquely. This can be a source of interpretation error for inexperienced practitioners.
Materials science and engineering
Anisotropy, in materials science, is a material's directional dependence of a physical property. This is a critical consideration for materials selection in engineering applications. A material with physical properties that are symmetric about an axis that is normal to a plane of isotropy is called a transversely isotropic material. Tensor descriptions of material properties can be used to determine the directional dependence of that property. For a monocrystalline material, anisotropy is associated with the crystal symmetry in the sense that more symmetric crystal types have fewer independent coefficients in the tensor description of a given property. When a material is polycrystalline, the directional dependence on properties is often related to the processing techniques it has undergone. A material with randomly oriented grains will be isotropic, whereas materials with texture will often be anisotropic. Textured materials are often the result of processing techniques like cold rolling, wire drawing, and heat treatment.
Mechanical properties of materials such as Young's modulus, ductility, yield strength, and high-temperature creep rate, are often dependent on the direction of measurement. Fourth-rank tensor properties, like the elastic constants, are anisotropic, even for materials with cubic symmetry. The Young's modulus relates stress and strain when an isotropic material is elastically deformed; to describe elasticity in an anisotropic material, stiffness (or compliance) tensors are used instead.
In metals, anisotropic elasticity behavior is present in all single crystals with three independent coefficients for cubic crystals, for example. For face-centered cubic materials such as nickel and copper, the stiffness is highest along the <111> direction, normal to the close-packed planes, and smallest parallel to <100>. Tungsten is so nearly isotropic at room temperature that it can be considered to have only two stiffness coefficients; aluminium is another metal that is nearly isotropic.
For an isotropic material, $G = \dfrac{E}{2(1+\nu)}$, where $G$ is the shear modulus, $E$ is the Young's modulus, and $\nu$ is the material's Poisson's ratio. Therefore, for cubic materials, we can think of anisotropy, $a_r$, as the ratio between the empirically determined shear modulus for the cubic material and its (isotropic) equivalent:
$$a_r = \frac{C_{44}}{\dfrac{C_{11}-C_{12}}{2}} = \frac{2\,C_{44}}{C_{11}-C_{12}}.$$
The latter expression is known as the Zener ratio, $a_r$, where the $C_{ij}$ refer to elastic constants in Voigt (vector-matrix) notation. For an isotropic material, the ratio is one.
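As a small numerical illustration of the Zener ratio just defined (the elastic constants below are approximate room-temperature literature values for copper, quoted only for illustration):

```python
def zener_ratio(c11, c12, c44):
    """Zener anisotropy ratio a_r = 2*C44 / (C11 - C12); equals 1 for an isotropic solid."""
    return 2.0 * c44 / (c11 - c12)

# Approximate single-crystal elastic constants of copper, in GPa.
C11, C12, C44 = 168.4, 121.4, 75.4
print(f"Zener ratio for Cu: {zener_ratio(C11, C12, C44):.2f}")  # about 3.2 -> strongly anisotropic
```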
The limitation of the Zener ratio to cubic materials is waived in the tensorial anisotropy index AT, which takes into consideration all 27 components of the fully anisotropic stiffness tensor. It is composed of two major parts, the former referring to components existing in the cubic tensor and the latter in the anisotropic tensor, so that the index is the sum of the two. The first component includes the modified Zener ratio and additionally accounts for directional differences in the material, which exist in orthotropic material, for instance. The second component of this index covers the influence of stiffness coefficients that are nonzero only for non-cubic materials and remains zero otherwise.
Fiber-reinforced or layered composite materials exhibit anisotropic mechanical properties, due to orientation of the reinforcement material. In many fiber-reinforced composites like carbon fiber or glass fiber based composites, the weave of the material (e.g. unidirectional or plain weave) can determine the extent of the anisotropy of the bulk material. The tunability of orientation of the fibers allows for application-based designs of composite materials, depending on the direction of stresses applied onto the material.
Amorphous materials such as glass and polymers are typically isotropic. Due to the highly randomized orientation of macromolecules in polymeric materials, polymers are in general described as isotropic. However, mechanically gradient polymers can be engineered to have directionally dependent properties through processing techniques or introduction of anisotropy-inducing elements. Researchers have built composite materials with aligned fibers and voids to generate anisotropic hydrogels, in order to mimic hierarchically ordered biological soft matter. 3D printing, especially Fused Deposition Modeling, can introduce anisotropy into printed parts. This is due to the fact that FDM is designed to extrude and print layers of thermoplastic materials. This creates materials that are strong when tensile stress is applied in parallel to the layers and weak when the material is perpendicular to the layers.
Microfabrication
Anisotropic etching techniques (such as deep reactive-ion etching) are used in microfabrication processes to create well defined microscopic features with a high aspect ratio. These features are commonly used in MEMS (microelectromechanical systems) and microfluidic devices, where the anisotropy of the features is needed to impart desired optical, electrical, or physical properties to the device. Anisotropic etching can also refer to certain chemical etchants used to etch a certain material preferentially over certain crystallographic planes (e.g., KOH etching of silicon [100] produces pyramid-like structures)
Neuroscience
Diffusion tensor imaging is an MRI technique that involves measuring the fractional anisotropy of the random motion (Brownian motion) of water molecules in the brain. Water molecules located in fiber tracts are more likely to move anisotropically, since they are restricted in their movement (they move more in the dimension parallel to the fiber tract rather than in the two dimensions orthogonal to it), whereas water molecules dispersed in the rest of the brain have less restricted movement and therefore display more isotropy. This difference in fractional anisotropy is exploited to create a map of the fiber tracts in the brains of the individual.
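A sketch of how the fractional anisotropy mentioned above is computed from the three eigenvalues of the diffusion tensor (the eigenvalues used here are invented for illustration):

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy from diffusion-tensor eigenvalues: 0 for fully isotropic
    diffusion, approaching 1 when diffusion is confined to a single direction."""
    lam = np.array([l1, l2, l3], dtype=float)
    num = np.sqrt(((lam - lam.mean()) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

print(fractional_anisotropy(1.0, 1.0, 1.0))   # 0.0: isotropic, e.g. free water
print(fractional_anisotropy(1.7, 0.3, 0.2))   # ~0.84: highly anisotropic, fiber-tract-like
```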
Remote sensing and radiative transfer modeling
Radiance fields (see Bidirectional reflectance distribution function (BRDF)) from a reflective surface are often not isotropic in nature. This makes calculations of the total energy being reflected from any scene a difficult quantity to calculate. In remote sensing applications, anisotropy functions can be derived for specific scenes, immensely simplifying the calculation of the net reflectance or (thereby) the net irradiance of a scene.
For example, let the BRDF be where 'i' denotes incident direction and 'v' denotes viewing direction (as if from a satellite or other instrument). And let P be the Planar Albedo, which represents the total reflectance from the scene.
It is of interest because, with knowledge of the anisotropy function as defined, a measurement of the BRDF from a single viewing direction (say, ) yields a measure of the total scene reflectance (planar albedo) for that specific incident geometry (say, ).
See also
Circular symmetry
References
External links
"Overview of Anisotropy"
DoITPoMS Teaching and Learning Package: "Introduction to Anisotropy"
"Gauge, and knitted fabric generally, is an anisotropic phenomenon"
Orientation (geometry)
Asymmetry
Exergy
Exergy, often referred to as "available energy" or "useful work potential", is a fundamental concept in the field of thermodynamics and engineering. It plays a crucial role in understanding and quantifying the quality of energy within a system and its potential to perform useful work. Exergy analysis has widespread applications in various fields, including energy engineering, environmental science, and industrial processes.
From a scientific and engineering perspective, second-law-based exergy analysis is valuable because it provides a number of benefits over energy analysis alone. These benefits include the basis for determining energy quality (or exergy content), enhancing the understanding of fundamental physical phenomena, and improving design, performance evaluation and optimization efforts. In thermodynamics, the exergy of a system is the maximum useful work that can be produced as the system is brought into equilibrium with its environment by an ideal process. The specification of an "ideal process" allows the determination of "maximum work" production. From a conceptual perspective, exergy is the "ideal" potential of a system to do work or cause a change as it achieves equilibrium with its environment. Exergy is also known as "availability". Exergy is non-zero when there is dis-equilibrium between the system and its environment, and exergy is zero when equilibrium is established (the state of maximum entropy for the system plus its environment).
Determining exergy was one of the original goals of thermodynamics. The term "exergy" was coined in 1956 by Zoran Rant (1904–1972) by using the Greek ex and ergon, meaning "from work", but the concept had been earlier developed by J. Willard Gibbs (the namesake of Gibbs free energy) in 1873.
Energy is neither created nor destroyed, but is simply converted from one form to another (see First law of thermodynamics). In contrast to energy, exergy is always destroyed when a process is non-ideal or irreversible (see Second law of thermodynamics). To illustrate, when someone states that "I used a lot of energy running up that hill", the statement contradicts the first law. Although the energy is not consumed, intuitively we perceive that something is. The key point is that energy has quality or measures of usefulness, and this energy quality (or exergy content) is what is consumed or destroyed. This occurs because everything, all real processes, produce entropy and the destruction of exergy or the rate of "irreversibility" is proportional to this entropy production (Gouy–Stodola theorem). Where entropy production may be calculated as the net increase in entropy of the system together with its surroundings. Entropy production is due to things such as friction, heat transfer across a finite temperature difference and mixing. In distinction from "exergy destruction", "exergy loss" is the transfer of exergy across the boundaries of a system, such as with mass or heat loss, where the exergy flow or transfer is potentially recoverable. The energy quality or exergy content of these mass and energy losses are low in many situations or applications, where exergy content is defined as the ratio of exergy to energy on a percentage basis. For example, while the exergy content of electrical work produced by a thermal power plant is 100%, the exergy content of low-grade heat rejected by the power plant, at say, 41 degrees Celsius, relative to an environment temperature of 25 degrees Celsius, is only 5%.
Definitions
Exergy is a combination property of a system and its environment because it depends on the state of both and is a consequence of dis-equilibrium between them. Exergy is neither a thermodynamic property of matter nor a thermodynamic potential of a system. Exergy and energy always have the same units, and the joule (symbol: J) is the unit of energy in the International System of Units (SI). The internal energy of a system is always measured from a fixed reference state and is therefore always a state function. Some authors define the exergy of the system to be changed when the environment changes, in which case it is not a state function. Other writers prefer a slightly alternate definition of the available energy or exergy of a system where the environment is firmly defined, as an unchangeable absolute reference state, and in this alternate definition, exergy becomes a property of the state of the system alone.
However, from a theoretical point of view, exergy may be defined without reference to any environment. If the intensive properties of different finitely extended elements of a system differ, there is always the possibility to extract mechanical work from the system. Yet, with such an approach one has to abandon the requirement that the environment is large enough relative to the "system" such that its intensive properties, such as temperature, are unchanged due to its interaction with the system. So that exergy is defined in an absolute sense, it will be assumed in this article that, unless otherwise stated, that the environment's intensive properties are unchanged by its interaction with the system.
For a heat engine, the exergy can be simply defined in an absolute sense, as the energy input times the Carnot efficiency, assuming the low-temperature heat reservoir is at the temperature of the environment. Since many systems can be modeled as a heat engine, this definition can be useful for many applications.
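Under this heat-engine definition, the exergy of heat Q supplied at temperature T_H relative to an environment at T_0 is the heat multiplied by the Carnot factor; using the temperatures quoted earlier (a 41 °C source and a 25 °C environment) reproduces the roughly 5% exergy content mentioned above:

```latex
X_{\text{heat}} = Q\left(1 - \frac{T_0}{T_H}\right), \qquad
1 - \frac{298.15\ \text{K}}{314.15\ \text{K}} \approx 0.051 \;\;(\approx 5\%)
```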
Terminology
The term exergy is also used, by analogy with its physical definition, in information theory related to reversible computing. Exergy is also synonymous with available energy, exergic energy, essergy (considered archaic), utilizable energy, available useful work, maximum (or minimum) work, maximum (or minimum) work content, reversible work, ideal work, availability or available work.
Implications
The exergy destruction of a cycle is the sum of the exergy destruction of the processes that compose that cycle. The exergy destruction of a cycle can also be determined without tracing the individual processes by considering the entire cycle as a single process and using one of the exergy destruction equations.
Examples
For two thermal reservoirs at temperatures TH and TC < TH, as considered by Carnot, the exergy is the work W that can be done by a reversible engine. Specifically, with QH the heat provided by the hot reservoir, Carnot's analysis gives W/QH = (TH − TC)/TH. Although exergy or maximum work is determined by conceptually utilizing an ideal process, it is a property of a system in a given environment. Exergy analysis is not restricted to reversible cycles: it applies to all cycles (including non-ideal ones), and indeed to all thermodynamic processes, cyclic or not.
As an example, consider the non-cyclic process of expansion of an ideal gas. For free expansion in an isolated system, the energy and temperature do not change, so by energy conservation no work is done. On the other hand, for expansion against a movable wall that always matches the (varying) pressure of the expanding gas (so the wall develops negligible kinetic energy), with no heat transfer (adiabatic wall), the maximum work is done. This maximum work corresponds to the exergy. Thus, in terms of exergy, Carnot considered the exergy for a cyclic process with two thermal reservoirs (fixed temperatures). Just as the work done depends on the process, so the exergy depends on the process, reducing to Carnot's result for Carnot's case.
W. Thomson (from 1892, Lord Kelvin) was, as early as 1849, exercised by what he called "lost energy", which appears to be the same as "destroyed energy" and what has been called "anergy". In 1874 he wrote that "lost energy" is the same as the energy dissipated by, e.g., friction, electrical conduction (electric field-driven charge diffusion), heat conduction (temperature-driven thermal diffusion), viscous processes (transverse momentum diffusion) and particle diffusion (ink in water). Kelvin, however, did not indicate how to compute the "lost energy"; this awaited the 1931 and 1932 works of Onsager on irreversible processes.
Mathematical description
An application of the second law of thermodynamics
Exergy uses system boundaries in a way that is unfamiliar to many. We imagine the presence of a Carnot engine between the system and its reference environment even though this engine does not exist in the real world. Its only purpose is to measure the results of a "what-if" scenario to represent the most efficient work interaction possible between the system and its surroundings.
If a real-world reference environment is chosen that behaves like an unlimited reservoir that remains unaltered by the system, then Carnot's speculation about the consequences of a system heading towards equilibrium with time is addressed by two equivalent mathematical statements: B, the exergy or available work, decreases with time, and Stotal, the entropy of the system and its reference environment enclosed together in a larger isolated system, increases with time:

dB/dt ≤ 0,   dStotal/dt ≥ 0
For macroscopic systems (above the thermodynamic limit), these statements are both expressions of the second law of thermodynamics if the following expression is used for the exergy:

B = U + PR V − TR S − Σi μi,R Ni
where the extensive quantities for the system are: U = internal energy, V = volume, S = entropy, and Ni = moles of component i. The intensive quantities for the surroundings are: PR = pressure, TR = temperature, μi,R = chemical potential of component i. Indeed, the total entropy of the universe reads:

Stotal = S − (U + PR V − Σi μi,R Ni)/TR = −B/TR,
the second term being the entropy of the surroundings to within a constant.
Individual terms also often have names attached to them: PR V is called "available PV work", TR S is called "entropic loss" or "heat loss", and the final term Σi μi,R Ni is called "available chemical energy".
Other thermodynamic potentials may be used to replace internal energy so long as proper care is taken in recognizing which natural variables correspond to which potential. For the recommended nomenclature of these potentials, see (Alberty, 2001). The expression above is useful for processes where system volume, entropy, and the number of moles of various components change, because internal energy is also a function of these variables and no others.
An alternative definition of internal energy does not separate available chemical potential from U. The resulting expression (when substituted into the equation above) is useful for processes where system volume and entropy change, but no chemical reaction occurs:
In this case, a given set of chemicals at a given entropy and volume will have a single numerical value for this thermodynamic potential. A multi-state system may complicate or simplify the problem because the Gibbs phase rule predicts that intensive quantities will no longer be completely independent from each other.
A historical and cultural tangent
In 1848, William Thomson, 1st Baron Kelvin, asked (and immediately answered) the question
Is there any principle on which an absolute thermometric scale can be founded? It appears to me that Carnot's theory of the motive power of heat enables us to give an affirmative answer.
With the benefit of the hindsight contained in the exergy expression above, we are able to understand the historical impact of Kelvin's idea on physics. Kelvin suggested that the best temperature scale would describe a constant ability for a unit of temperature in the surroundings to alter the available work from Carnot's engine. From the expression above:
Rudolf Clausius recognized the presence of a proportionality constant in Kelvin's analysis and gave it the name entropy in 1865 from the Greek for "transformation" because it quantifies the amount of energy lost during the conversion from heat to work. The available work from a Carnot engine is at its maximum when the surroundings are at a temperature of absolute zero.
Physicists then, as now, often look at a property with the word "available" or "utilizable" in its name with a certain unease. The idea of what is available raises the question of "available to what?" and raises a concern about whether such a property is anthropocentric. Laws derived using such a property may not describe the universe but instead, describe what people wish to see.
The field of statistical mechanics (beginning with the work of Ludwig Boltzmann in developing the Boltzmann equation) relieved many physicists of this concern. From this discipline, we now know that macroscopic properties may all be determined from properties on a microscopic scale where entropy is more "real" than temperature itself (see Thermodynamic temperature). Microscopic kinetic fluctuations among particles cause entropic loss, and this energy is unavailable for work because these fluctuations occur randomly in all directions. The anthropocentric act is taken, in the eyes of some physicists and engineers today, when someone draws a hypothetical boundary and in effect says: "This is my system. What occurs beyond it is surroundings." In this context, exergy is sometimes described as an anthropocentric property, both by some who use it and by some who don't. However, exergy is based on the dis-equilibrium between a system and its environment, so it is very real and necessary to define the system distinctly from its environment. It can be agreed that entropy is generally viewed as a more fundamental property of matter than exergy.
A potential for every thermodynamic situation
In addition to the expressions above, the other thermodynamic potentials are frequently used to determine exergy. For a given set of chemicals at a given entropy and pressure, the enthalpy H is used in the expression:
For a given set of chemicals at a given temperature and volume, Helmholtz free energy A is used in the expression:
For a given set of chemicals at a given temperature and pressure, Gibbs free energy G is used in the expression:
where the potential is evaluated at the isothermal system temperature, and the exergy is defined with respect to the isothermal temperature of the system's environment. The exergy is the energy reduced by the product of the environment temperature and the entropy, the environment temperature being the slope (partial derivative) of the internal energy with respect to entropy in the environment. That is, higher entropy reduces the exergy or free energy available relative to the energy level.
Work can be produced from this energy, such as in an isothermal process, but any entropy generation during the process will cause the destruction of exergy (irreversibility) and the reduction of these thermodynamic potentials. Further, exergy losses can occur if mass and energy are transferred out of the system at non-ambient or elevated temperature, pressure or chemical potential. Exergy losses are potentially recoverable, though, because the exergy has not been destroyed; this is what occurs in waste heat recovery systems (although the energy quality or exergy content is typically low). As a special case, an isothermal process operating at ambient temperature will have no thermally related exergy losses.
Exergy Analysis involving Radiative Heat Transfer
All matter emits radiation continuously as a result of its non-zero (absolute) temperature. This emitted energy flow is proportional to the material’s temperature raised to the fourth power. As a result, any radiation conversion device that seeks to absorb and convert radiation (while reflecting a fraction of the incoming source radiation) inherently emits its own radiation. Also, given that reflected and emitted radiation can occupy the same direction or solid angle, the entropy flows, and as a result, the exergy flows, are generally not independent. The entropy and exergy balance equations for a control volume (CV), re-stated to correctly apply to situations involving radiative transfer, are expressed as,
where the production term denotes the entropy generated within the control volume, and,
This rate equation for the exergy X within an open system takes into account the exergy transfer rates across the system boundary by heat transfer (by conduction and convection, and by radiative fluxes), by mechanical or electrical work transfer, and by mass transfer, as well as the exergy destruction that occurs within the system due to irreversibilities or non-ideal processes. Note that chemical exergy, kinetic energy, and gravitational potential energy have been excluded for simplicity.
The exergy irradiance or flux M, and the exergy radiance N (where M = πN for isotropic radiation), depend on the spectral and directional distribution of the radiation (for example, see the next section on 'Exergy Flux of Radiation with an Arbitrary Spectrum'). Sunlight can be crudely approximated as blackbody radiation or, more accurately, as graybody radiation. Note that, although a graybody spectrum looks similar to a blackbody spectrum, its entropy and exergy are very different.
Petela determined that the exergy of isotropic blackbody radiation is given by the expression

X = (4σ/c) V T^4 [1 − (4/3)x + (1/3)x^4],

where the exergy within the enclosed system is X, σ is the Stefan–Boltzmann constant, c is the speed of light, V is the volume occupied by the enclosed radiation system or void, T is the material emission temperature, To is the environmental temperature, and x is the dimensionless temperature ratio To/T.
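A minimal Python sketch of this result, written in terms of the dimensionless Petela factor 1 − (4/3)x + (1/3)x^4 that multiplies the radiation energy; the 5762 K emission temperature and 298.15 K environment temperature used below are illustrative values consistent with the solar figures quoted later in this article:

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
C = 299792458.0          # speed of light, m/s

def petela_factor(T, T0):
    # Exergy-to-energy ratio of enclosed isotropic blackbody radiation (Petela).
    x = T0 / T
    return 1.0 - 4.0 * x / 3.0 + x**4 / 3.0

def blackbody_radiation_exergy(V, T, T0):
    # Exergy (J) of blackbody radiation filling a void of volume V (m^3):
    # energy density (4*sigma/c)*T^4 times the volume, times the Petela factor.
    energy = 4.0 * SIGMA * V * T**4 / C
    return energy * petela_factor(T, T0)

print(f"Petela factor for T = 5762 K, T0 = 298.15 K: {petela_factor(5762.0, 298.15):.1%}")
# about 93.1%, consistent with the exergy content quoted below for the AM0 spectrum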
However, for decades this result was contested in terms of its relevance to the conversion of radiation fluxes, and in particular, solar radiation. For example, Bejan stated that “Petela’s efficiency is no more than a convenient, albeit artificial way, of non-dimensionalizing the calculated work output” and that Petela’s efficiency “is not a ‘conversion efficiency.’ ” However, it has been shown that Petela’s result represents the exergy of blackbody radiation. This was done by resolving a number of issues, including that of inherent irreversibility, defining the environment in terms of radiation, the effect of inherent emission by the conversion device and the effect of concentrating source radiation.
Exergy Flux of Radiation with an Arbitrary Spectrum (including Sunlight)
In general, terrestrial solar radiation has an arbitrary, non-blackbody spectrum. Ground-level spectra can vary greatly due to reflection, scattering and absorption in the atmosphere, and the emission spectra of thermal radiation in engineering systems can vary widely as well.
In determining the exergy of radiation with an arbitrary spectrum, it must be considered whether reversible or ideal conversion (zero entropy production) is possible. It has been shown that reversible conversion of blackbody radiation fluxes across an infinitesimal temperature difference is theoretically possible. This reversible conversion can be achieved, even in theory, only because equilibrium can exist between blackbody radiation and matter. Non-blackbody radiation, in contrast, cannot exist in equilibrium even with itself, nor with its own emitting material.
Unlike blackbody radiation, non-blackbody radiation cannot exist in equilibrium with matter, so it appears likely that the interaction of non-blackbody radiation with matter is always an inherently irreversible process. For example, an enclosed non-blackbody radiation system (such as a void inside a solid mass) is unstable and will spontaneously equilibrate to blackbody radiation unless the enclosure is perfectly reflecting (i.e., unless there is no thermal interaction of the radiation with its enclosure, which is not possible in actual, or real, non-ideal systems). Consequently, a cavity initially devoid of thermal radiation inside a non-blackbody material will spontaneously and rapidly (due to the high velocity of the radiation), through a series of absorption and emission interactions, become filled with blackbody radiation rather than non-blackbody radiation.
The approaches by Petela and Karlsson both assume, without addressing or considering the issue, that reversible conversion of non-blackbody radiation is theoretically possible. Exergy is not a property of the system alone; it is a property of both the system and its environment. It is therefore of key importance that non-blackbody radiation cannot exist in equilibrium with matter, indicating that the interaction of non-blackbody radiation with matter is an inherently irreversible process.
The flux (irradiance) of radiation with an arbitrary spectrum, based on the inherent irreversibility of non-blackbody radiation conversion, is given by the expression,
The exergy flux is expressed as a function of only the energy flux or irradiance and the environment temperature. For graybody radiation, the exergy flux is given by the expression,
As one would expect, the exergy flux of non-blackbody radiation reduces to the result for blackbody radiation when emissivity is equal to one.
Note that the exergy flux of graybody radiation can be a small fraction of the energy flux. For example, the ratio of exergy flux to energy flux for graybody radiation with emissivity is equal to 40.0%, for and . That is, a maximum of only 40% of the graybody energy flux can be converted to work in this case (already only 50% of that of the blackbody energy flux with the same emission temperature). Graybody radiation has a spectrum that looks similar to the blackbody spectrum, but the entropy and exergy flux cannot be accurately approximated as that of blackbody radiation with the same emission temperature. However, it can be reasonably approximated by the entropy flux of blackbody radiation with the same energy flux (lower emission temperature).
Blackbody radiation has the highest entropy-to-energy ratio of all radiation with the same energy flux, but the lowest entropy-to-energy ratio, and the highest exergy content, of all radiation with the same emission temperature. For example, the exergy content of graybody radiation is lower than that of blackbody radiation with the same emission temperature and decreases as emissivity decreases. For the example above with the exergy flux of the blackbody radiation source flux is 52.5% of the energy flux compared to 40.0% for graybody radiation with , or compared to 15.5% for graybody radiation with .
The Exergy Flux of Sunlight
In addition to the production of power directly from sunlight, solar radiation provides most of the exergy for processes on Earth, including processes that sustain living systems directly, as well as all fuels and energy sources that are used for transportation and electric power production (directly or indirectly); the main exceptions are nuclear fission power plants and geothermal energy (which derives from natural radioactive decay). Solar energy is, for the most part, thermal radiation from the Sun with an emission temperature near 5762 K, but it also includes small amounts of higher-energy radiation from the fusion reaction or from higher thermal emission temperatures within the Sun. In this sense, the source of most energy on Earth is nuclear in origin.
The figure below depicts typical solar radiation spectra under clear-sky conditions for AM0 (extraterrestrial solar radiation), AM1 (terrestrial solar radiation with a solar zenith angle of 0 degrees) and AM4 (terrestrial solar radiation with a solar zenith angle of 75.5 degrees). The solar spectrum at sea level (the terrestrial solar spectrum) depends on a number of factors, including the position of the Sun in the sky, atmospheric turbidity, the level of local atmospheric pollution, and the amount and type of cloud cover. These spectra are for relatively clear air (α = 1.3, β = 0.04), assuming a U.S. standard atmosphere with 20 mm of precipitable water vapor and 3.4 mm of ozone. The figure shows the spectral energy irradiance (W/(m²·μm)), which does not provide information regarding the directional distribution of the solar radiation. The exergy content of the solar radiation, assuming that it is subtended by the solid angle of the ball of the Sun (no circumsolar radiation), is 93.1%, 92.3% and 90.8%, respectively, for the AM0, AM1 and AM4 spectra.
The exergy content of terrestrial solar radiation is also reduced because of the diffuse component caused by the complex interaction of solar radiation, originally in a very small solid-angle beam, with material in the Earth's atmosphere. The characteristics and magnitude of diffuse terrestrial solar radiation depend on a number of factors, as mentioned, including the position of the Sun in the sky, atmospheric turbidity, the level of local atmospheric pollution, and the amount and type of cloud cover. Solar radiation under clear-sky conditions exhibits a maximum intensity towards the Sun (circumsolar radiation) but also exhibits an increase in intensity towards the horizon (horizon brightening). In contrast, for opaque overcast skies the solar radiation can be completely diffuse, with a maximum intensity in the direction of the zenith that decreases monotonically towards the horizon. The magnitude of the diffuse component generally varies with frequency, being highest in the ultraviolet region.
The dependence of the exergy content on directional distribution can be illustrated by considering, for example, the AM1 and AM4 terrestrial spectrums depicted in the figure, with the following simplified cases of directional distribution:
• For AM1: 80% of the solar radiation is contained in the solid angle subtended by the Sun, 10% is contained and isotropic in a solid angle 0.008 sr (this field of view includes circumsolar radiation), while the remaining 10% of the solar radiation is diffuse and isotropic in the solid angle 2π sr.
• For AM4: 65% of the solar radiation is contained in the solid angle subtended by the Sun, 20% of the solar radiation is contained and isotropic in a solid angle 0.008 sr, while the remaining 15% of the solar radiation is diffuse and isotropic in the solid angle 2π sr. Note that when the Sun is low in the sky the diffuse component can be the dominant part of the incident solar radiation.
For these cases of directional distribution, the exergy content of the terrestrial solar radiation for the AM1 and AM4 spectra depicted is 80.8% and 74.0%, respectively. From these sample calculations it is evident that the exergy content of terrestrial solar radiation is strongly dependent on the directional distribution of the radiation. This result is interesting because one might expect that the performance of a conversion device would depend on the incoming rate of photons and their spectral distribution, but not on the directional distribution of the incoming photons. However, for a given incoming flux of photons with a certain spectral distribution, the entropy (level of disorder) is higher the more diffuse the directional distribution. From the second law of thermodynamics, the incoming entropy of the solar radiation cannot be destroyed and consequently reduces the maximum work output that can be obtained by a conversion device.
Chemical exergy
Similar to thermomechanical exergy, chemical exergy depends on the temperature and pressure of a system as well as on its composition. The key difference between evaluating chemical exergy and thermomechanical exergy is that thermomechanical exergy does not take into account the difference in chemical composition between the system and the environment. If the temperature, pressure or composition of a system differs from the environment's state, then the overall system will have exergy.
The definition of chemical exergy resembles the standard definition of thermomechanical exergy, but with a few differences. Chemical exergy is defined as the maximum work that can be obtained when the considered system is brought into reaction with reference substances present in the environment. Defining the exergy reference environment is one of the most vital parts of analyzing chemical exergy. In general, the environment is defined as the composition of air at 25 °C and 1 atm of pressure. At these conditions, air consists (on a molar basis) of N2 = 75.67%, O2 = 20.35%, H2O(g) = 3.12%, CO2 = 0.03% and other gases = 0.83%. These molar fractions will become of use when applying Equation 8 below.
CaHbOc denotes the substance entering the system for which one wants to find the maximum theoretical work. By using the following equations, one can calculate the chemical exergy of the substance in a given system. Below, Equation 9 uses the Gibbs function of the applicable element or compound to calculate the chemical exergy. Equation 10 is similar but uses standard molar chemical exergy, which has been determined based on several criteria, including the ambient temperature and pressure at which a system is analyzed and the concentrations of the most common components. These values can be found in thermodynamics books or in online tables.
Important equations
where:
is the Gibbs function of the specific substance in the system at . ( refers to the substance that is entering the system)
is the Universal gas constant (8.314462 J/mol•K)
is the temperature that the system is being evaluated at in absolute temperature
is the molar fraction of the given substance in the environment, i.e. air
where the standard molar chemical exergy is taken from a table for the specific conditions at which the system is being evaluated.
Equation 10 is more commonly used due to the simplicity of only having to look up the standard chemical exergy for given substances. Using a standard table works well for most cases; even if the environmental conditions vary slightly, the difference is most likely negligible.
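As a hedged illustration of how the environmental mole fractions enter such calculations, consider the special case of a substance that is itself a component of the reference atmosphere: its molar chemical exergy is then simply the work of expanding it reversibly and isothermally from the environmental pressure down to its partial pressure, R̄ T0 ln(1/ye). A minimal Python sketch using the mole fractions listed above (the function name is illustrative):

import math

R_BAR = 8.314462   # universal gas constant, J/(mol K)
T0 = 298.15        # reference environment temperature, K

# Reference-atmosphere mole fractions listed earlier in this section.
Y_ENV = {"N2": 0.7567, "O2": 0.2035, "H2O": 0.0312, "CO2": 0.0003}

def chemical_exergy_environment_gas(species):
    # Molar chemical exergy (J/mol) of a gas that is itself a component of the
    # reference environment: reversible isothermal expansion work from p0 down
    # to its partial pressure y_e * p0.
    return R_BAR * T0 * math.log(1.0 / Y_ENV[species])

for gas in Y_ENV:
    print(f"{gas}: {chemical_exergy_environment_gas(gas) / 1000:.2f} kJ/mol")
# O2 comes out near 3.9 kJ/mol and CO2 near 20 kJ/mol, the order of magnitude
# of tabulated standard molar chemical exergies for these environment gases.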
Total exergy
After finding the chemical exergy in a given system, one can find the total exergy by adding it to the thermomechanical exergy. Depending on the situation, the amount of chemical exergy added can be very small. If the system being evaluated involves combustion, the amount of chemical exergy is very large and necessary to find the total exergy of the system.
Irreversibility
Irreversibility accounts for the amount of exergy destroyed in a closed system, or in other words, the wasted work potential; this is also called dissipated energy. For highly efficient systems, the value of the irreversibility I is low, and vice versa. The equation to calculate the irreversibility of a closed system, as it relates to the exergy of that system, follows from the Gouy–Stodola theorem:

I = To Sgen

where To is the temperature of the surroundings and Sgen is the entropy generated by processes within the system. If Sgen > 0 then there are irreversibilities present in the system; if Sgen = 0 then there are none. The value of I, the irreversibility, cannot be negative, as a negative value would imply entropy destruction, a direct violation of the second law of thermodynamics.
Exergy analysis also relates the actual work of a work-producing device to the maximal work that could be obtained in the reversible or ideal process:

I = Wrev − Wactual
That is, the irreversibility is the ideal maximum work output minus the actual work production. For a work-consuming device such as a refrigerator or heat pump, the irreversibility is instead the actual work input minus the ideal minimum work input.
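A minimal Python sketch of this bookkeeping, using the Gouy–Stodola relation I = To Sgen from above and the work-based form I = Wrev − Wactual; the turbine numbers are purely illustrative:

T0 = 298.15  # temperature of the surroundings, K

def irreversibility_from_entropy(S_gen, T0=T0):
    # Gouy-Stodola: exergy destroyed equals the surroundings temperature
    # times the entropy generated (S_gen in J/K, result in J).
    if S_gen < 0:
        raise ValueError("Entropy generation cannot be negative (second law).")
    return T0 * S_gen

def irreversibility_from_work(W_reversible, W_actual):
    # Work-producing device: ideal (reversible) work output minus actual output.
    # For a work-consuming device the roles are swapped.
    return W_reversible - W_actual

# Illustrative turbine: 1000 kJ of reversible work potential, 850 kJ delivered.
print(irreversibility_from_work(1000e3, 850e3))   # prints 150000.0 J, i.e. 150 kJ destroyed
print(irreversibility_from_entropy(150e3 / T0))   # the same 150 kJ, via the entropy route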
The first term on the right-hand side is related to the difference in exergy at the inlet and outlet of the system:
where is also denoted by .
For an isolated system there are no heat or work interactions or transfers of exergy between the system and its surroundings. The exergy of an isolated system can therefore only decrease, by a magnitude equal to the irreversibility of that system or process:

ΔXisolated = −I ≤ 0
Applications
Applying the exergy expression above to a subsystem yields:
This expression applies equally well for theoretical ideals in a wide variety of applications: electrolysis (decrease in G), galvanic cells and fuel cells (increase in G), explosives (increase in A), heating and refrigeration (exchange of H), motors (decrease in U) and generators (increase in U).
Utilization of the exergy concept often requires careful consideration of the choice of reference environment because, as Carnot knew, unlimited reservoirs do not exist in the real world. A system may be maintained at a constant temperature to simulate an unlimited reservoir in the lab or in a factory, but those systems cannot then be isolated from a larger surrounding environment. However, with a proper choice of system boundaries, a reasonable constant reservoir can be imagined. A process sometimes must be compared to "the most realistic impossibility," and this invariably involves a certain amount of guesswork.
Engineering applications
One goal of energy and exergy methods in engineering is to compute what comes into and out of several possible designs before a design is built. Energy input and output will always balance according to the First Law of Thermodynamics or the energy conservation principle. Exergy output will not equal the exergy input for real processes since a part of the exergy input is always destroyed according to the Second Law of Thermodynamics for real processes. After the input and output are calculated, an engineer will often want to select the most efficient process. An energy efficiency or first law efficiency will determine the most efficient process based on wasting as little energy as possible relative to energy inputs. An exergy efficiency or second-law efficiency will determine the most efficient process based on wasting and destroying as little available work as possible from a given input of available work, per unit of whatever the desired output is.
Exergy has been applied in a number of design applications in order to optimize systems or identify components or subsystems with the greatest potential for improvement. For instance, an exergy analysis of environmental control systems on the International Space Station revealed the oxygen generation assembly as the subsystem that destroyed the most exergy.
Exergy is particularly useful for broad engineering analyses with many systems of varied nature, since it can account for mechanical, electrical, nuclear, chemical, or thermal systems. For this reason, exergy analysis has also been used to optimize the performance of rocket vehicles. Exergy analysis affords additional insight, relative to energy analysis alone, because it incorporates the second law and considers both the system and its relationship with its environment. For example, exergy analysis has been used to compare possible power generation and storage systems on the Moon, since exergy analysis is conducted in reference to the unique environmental operating conditions of a specific application, such as the lunar surface.
Application of exergy to unit operations in chemical plants was partially responsible for the huge growth of the chemical industry during the 20th century.
As a simple example of exergy, air at atmospheric conditions of temperature, pressure, and composition contains energy but no exergy when it is chosen as the thermodynamic reference state known as ambient. Individual processes on Earth such as combustion in a power plant often eventually result in products that are incorporated into the atmosphere, so defining this reference state for exergy is useful even though the atmosphere itself is not at equilibrium and is full of long and short term variations.
If standard ambient conditions are used for calculations during chemical plant operation when the actual weather is very cold or hot, then certain parts of a chemical plant might seem to have an exergy efficiency of greater than 100%. Without taking into account the non-standard atmospheric temperature variation, these calculations can give an impression of being a perpetual motion machine. Using actual conditions will give actual values, but standard ambient conditions are useful for initial design calculations.
Applications in natural resource utilization
In recent decades, utilization of exergy has spread outside of physics and engineering to the fields of industrial ecology, ecological economics, systems ecology, and energetics. Defining where one field ends and the next begins is a matter of semantics, but applications of exergy can be placed into rigid categories.
After the milestone work of Jan Szargut, who emphasized the relation between exergy and availability, Göran Wall's short essay "Exergy, Ecology and Democracy" highlighted the close relation between exergy destruction and environmental and social disruption.
From this work, a substantial research activity has developed in ecological economics and environmental accounting, performing exergy-cost analyses in order to evaluate the impact of human activity on the current and future natural environment. As with ambient air, this often requires the unrealistic substitution of properties from a natural environment in place of the reference state environment of Carnot. For example, ecologists and others have developed reference conditions for the ocean and for the Earth's crust. Exergy values for human activity using this information can be useful for comparing policy alternatives based on the efficiency of utilizing natural resources to perform work. Typical questions that may be answered are:
Does the human production of one unit of an economic good by method A utilize more of a resource's exergy than by method B?
Does the human production of economic good A utilize more of a resource's exergy than the production of good B?
Does the human production of economic good A utilize a resource's exergy more efficiently than the production of good B?
There has been some progress in standardizing and applying these methods.
Measuring exergy requires the evaluation of a system's reference state environment. With respect to the applications of exergy to natural resource utilization, quantifying a system requires the assignment of value (both utilized and potential) to resources that are not always easily dissected into typical cost-benefit terms. However, to fully realize the potential of a system to do work, it is becoming increasingly imperative to understand the exergetic potential of natural resources and how human interference alters this potential.
Referencing the inherent qualities of a system in place of a reference state environment is the most direct way that ecologists determine the exergy of a natural resource. Specifically, it is easiest to examine the thermodynamic properties of a system, and the reference substances that are acceptable within the reference environment. This determination allows for the assumption of qualities in a natural state: deviation from these levels may indicate a change in the environment caused by outside sources. There are three kinds of reference substances that are acceptable, due to their proliferation on the planet: gases within the atmosphere, solids within the Earth's crust, and molecules or ions in seawater. By understanding these basic models, it's possible to determine the exergy of multiple earth systems interacting, like the effects of solar radiation on plant life. These basic categories are utilized as the main components of a reference environment when examining how exergy can be defined through natural resources.
Other qualities within a reference state environment include temperature, pressure, and any number of combinations of substances within a defined area. Again, the exergy of a system is determined by the potential of that system to do work, so it is necessary to determine the baseline qualities of a system before it is possible to understand the potential of that system. The thermodynamic value of a resource can be found by multiplying the exergy of the resource by the cost of obtaining the resource and processing it.
Today, it is becoming increasingly popular to analyze the environmental impacts of natural resource utilization, especially for energy usage. To understand the ramifications of these practices, exergy is utilized as a tool for determining the impact potential of emissions, fuels, and other sources of energy. Combustion of fossil fuels, for example, is examined with respect to assessing the environmental impacts of burning coal, oil, and natural gas. The current methods for analyzing the emissions from these three products can be compared to the process of determining the exergy of the systems affected; specifically, it is useful to examine these with regard to the reference state environment of gases within the atmosphere. In this way, it is easier to determine how human action is affecting the natural environment.
Applications in sustainability
In systems ecology, researchers sometimes consider the exergy of the current formation of natural resources from a small number of exergy inputs (usually solar radiation, tidal forces, and geothermal heat). This application not only requires assumptions about reference states, but it also requires assumptions about the real environments of the past that might have been close to those reference states. Can we decide which is the most "realistic impossibility" over such a long period of time when we are only speculating about the reality?
For instance, comparing oil exergy to coal exergy using a common reference state would require geothermal exergy inputs to describe the transition from biological material to fossil fuels during millions of years in the Earth's crust, and solar radiation exergy inputs to describe the material's history before then when it was part of the biosphere. This would need to be carried out mathematically backwards through time, to a presumed era when the oil and coal could be assumed to be receiving the same exergy inputs from these sources. A speculation about a past environment is different from assigning a reference state with respect to known environments today. Reasonable guesses about real ancient environments may be made, but they are untestable guesses, and so some regard this application as pseudoscience or pseudo-engineering.
The field describes this accumulated exergy in a natural resource over time as embodied energy with units of the "embodied joule" or "emjoule".
The important application of this research is to address sustainability issues in a quantitative fashion through a sustainability measurement:
Does the human production of an economic good deplete the exergy of Earth's natural resources more quickly than those resources are able to receive exergy?
If so, how does this compare to the depletion caused by producing the same good (or a different one) using a different set of natural resources?
Exergy and environmental policy
Today, environmental policies do not consider exergy as an instrument for a more equitable and effective environmental policy. Recently, exergy analysis has revealed an important shortcoming in current governmental greenhouse-gas emission balances, which often do not consider emissions related to international transport, so that the impacts of imports and exports are not accounted for. Preliminary case studies of the impacts of import/export transportation and of technology have provided evidence of the opportunity to introduce an effective exergy-based taxation that could reduce the fiscal impact on citizens. In addition, exergy can be a valuable instrument for estimating progress toward the UN Sustainable Development Goals (SDGs).
Assigning one thermodynamically obtained value to an economic good
A technique proposed by systems ecologists is to consolidate the three exergy inputs described in the last section into the single exergy input of solar radiation, and to express the total input of exergy into an economic good as a solar embodied joule or sej. (See Emergy) Exergy inputs from solar, tidal, and geothermal forces all at one time had their origins at the beginning of the solar system under conditions which could be chosen as an initial reference state, and other speculative reference states could in theory be traced back to that time. With this tool we would be able to answer:
What fraction of the total human depletion of the Earth's exergy is caused by the production of a particular economic good?
What fraction of the total human and non-human depletion of the Earth's exergy is caused by the production of a particular economic good?
No additional thermodynamic laws are required for this idea, and the principles of energetics may confuse many issues for those outside the field. The combination of untestable hypotheses, unfamiliar jargon that contradicts accepted jargon, intense advocacy among its supporters, and some degree of isolation from other disciplines have contributed to this protoscience being regarded by many as a pseudoscience. However, its basic tenets are only a further utilization of the exergy concept.
Implications in the development of complex physical systems
A common hypothesis in systems ecology is that the design engineer's observation that a greater capital investment is needed to create a process with increased exergy efficiency is actually the economic result of a fundamental law of nature. By this view, exergy is the analogue of economic currency in the natural world. The analogy to capital investment is the accumulation of exergy into a system over long periods of time resulting in embodied energy. The analogy of capital investment resulting in a factory with high exergy efficiency is an increase in natural organizational structures with high exergy efficiency. (See Maximum power). Researchers in these fields describe biological evolution in terms of increases in organism complexity due to the requirement for increased exergy efficiency because of competition for limited sources of exergy.
Some biologists have a similar hypothesis. A biological system (or a chemical plant) with a number of intermediate compartments and intermediate reactions is more efficient because the process is divided up into many small substeps, and this is closer to the reversible ideal of an infinite number of infinitesimal substeps. Of course, an excessively large number of intermediate compartments comes at a capital cost that may be too high.
Testing this idea in living organisms or ecosystems is impossible for all practical purposes because of the large time scales and small exergy inputs involved for changes to take place. However, if this idea is correct, it would not be a new fundamental law of nature. It would simply be living systems and ecosystems maximizing their exergy efficiency by utilizing laws of thermodynamics developed in the 19th century.
Philosophical and cosmological implications
Some proponents of utilizing exergy concepts describe them as a biocentric or ecocentric alternative for terms like quality and value. The "deep ecology" movement views economic usage of these terms as an anthropocentric philosophy which should be discarded. A possible universal thermodynamic concept of value or utility appeals to those with an interest in monism.
For some, the result of this line of thinking about tracking exergy into the deep past is a restatement of the cosmological argument that the universe was once at equilibrium and an input of exergy from some First Cause created a universe full of available work. Current science is unable to describe the first 10−43 seconds of the universe (See Timeline of the Big Bang). An external reference state is not able to be defined for such an event, and (regardless of its merits), such an argument may be better expressed in terms of entropy.
Quality of energy types
The ratio of exergy to energy in a substance can be considered a measure of energy quality. Forms of energy such as macroscopic kinetic energy, electrical energy, and chemical Gibbs free energy are 100% recoverable as work, and therefore have exergy equal to their energy. However, forms of energy such as radiation and thermal energy can not be converted completely to work, and have exergy content less than their energy content. The exact proportion of exergy in a substance depends on the amount of entropy relative to the surrounding environment as determined by the Second Law of Thermodynamics.
Exergy is useful when measuring the efficiency of an energy conversion process. The exergetic, or 2nd Law, efficiency is a ratio of the exergy output divided by the exergy input. This formulation takes into account the quality of the energy, often offering a more accurate and useful analysis than efficiency estimates only using the First Law of Thermodynamics.
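As a hedged illustration of the difference between the two efficiencies, consider electric resistance heating: essentially all of the electrical energy becomes heat, so the first-law efficiency is close to 100%, but the exergy of heat delivered at a modest room temperature is only a small fraction of the electrical exergy supplied. A minimal Python sketch with illustrative temperatures:

def first_law_efficiency(energy_out, energy_in):
    return energy_out / energy_in

def second_law_efficiency(exergy_out, exergy_in):
    return exergy_out / exergy_in

Q_in = 1000.0        # electrical energy supplied, J (exergy content 100%)
Q_out = 1000.0       # heat delivered to the room, J
T_room = 294.0       # temperature at which heat is delivered, K (illustrative)
T_outdoor = 273.0    # environment (reference) temperature, K (illustrative)

exergy_in = Q_in                                  # electricity is pure exergy
exergy_out = Q_out * (1.0 - T_outdoor / T_room)   # Carnot factor of the delivered heat

print(f"First-law efficiency:  {first_law_efficiency(Q_out, Q_in):.0%}")             # 100%
print(f"Second-law efficiency: {second_law_efficiency(exergy_out, exergy_in):.1%}")  # about 7%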
Work can also be extracted from bodies colder than the surroundings. When the flow of energy is coming into the body, work is performed by this energy obtained from the large reservoir, the surroundings. A quantitative treatment of the notion of energy quality rests on the definition of energy. According to the standard definition, energy is a measure of the ability to do work. Work can involve the movement of a mass by a force that results from a transformation of energy. If there is an energy transformation, the second law says that this process must involve the dissipation of some energy as heat. Measuring the amount of heat released is one way of quantifying the energy, or ability to do work and apply a force over a distance.
Exergy of heat available at a temperature
The maximal possible conversion of heat to work, or the exergy content of heat, depends on the temperature at which the heat is available and the temperature level at which the reject heat can be disposed of, that is, the temperature of the surroundings. The upper limit for conversion is known as the Carnot efficiency and was discovered by Nicolas Léonard Sadi Carnot in 1824. See also Carnot heat engine.
Carnot efficiency is

η = (TH − TC)/TH = 1 − TC/TH
where TH is the higher temperature and TC is the lower temperature, both as absolute temperature. From Equation 15 it is clear that in order to maximize efficiency one should maximize TH and minimize TC.
The exergy exchanged is then:

X = Q (1 − To/Tsource)

where Q is the heat transferred, Tsource is the temperature of the heat source, and To is the temperature of the surroundings.
Connection with economic value
Exergy can in a sense be understood as a measure of the value of energy. Since high-exergy energy carriers can be used for more versatile purposes, due to their ability to do more work, they can be postulated to hold more economic value. This can be seen in the prices of energy carriers: high-exergy carriers such as electricity tend to be more valuable than low-exergy ones such as various fuels or heat. This has led to the substitution of more valuable high-exergy energy carriers with low-exergy energy carriers, when possible. An example is heating systems, where a higher investment in the heating system allows the use of low-exergy energy sources. Thus high-exergy content is being substituted with capital investment.
Exergy based Life Cycle Assessment (LCA)
Exergy of a system is the maximum useful work possible during a process that brings the system into equilibrium with a heat reservoir. Wall clearly states the relation between exergy analysis and resource accounting. This intuition, confirmed by Dewulf and Sciubba, led to exergo-economic accounting and to methods specifically dedicated to LCA such as exergetic material input per unit of service (EMIPS). The concept of material input per unit of service (MIPS) is quantified in terms of the second law of thermodynamics, allowing the calculation of both resource input and service output in exergy terms. This exergetic material input per unit of service (EMIPS) has been elaborated for transport technology. The service takes into account not only the total mass to be transported and the total distance, but also the mass per single transport and the delivery time. The applicability of the EMIPS methodology relates specifically to the transport system and allows an effective coupling with life cycle assessment. The exergy analysis according to EMIPS allowed the definition of a precise strategy for reducing the environmental impacts of transport toward more sustainable transport. Such a strategy requires reducing the weight of vehicles, sustainable styles of driving, reducing the friction of tires, encouraging electric and hybrid vehicles, improving the walking and cycling environment in cities, and enhancing the role of public transport, especially electric rail.
History
Carnot
In 1824, Sadi Carnot studied the improvements developed for steam engines by James Watt and others. Carnot utilized a purely theoretical perspective for these engines and developed new ideas. He wrote:
The question has often been raised whether the motive power of heat is unbounded, whether the possible improvements in steam engines have an assignable limit—a limit which the nature of things will not allow to be passed by any means whatever... In order to consider in the most general way the principle of the production of motion by heat, it must be considered independently of any mechanism or any particular agent. It is necessary to establish principles applicable not only to steam-engines but to all imaginable heat-engines... The production of motion in steam-engines is always accompanied by a circumstance on which we should fix our attention. This circumstance is the re-establishing of equilibrium... Imagine two bodies A and B, kept each at a constant temperature, that of A being higher than that of B. These two bodies, to which we can give or from which we can remove the heat without causing their temperatures to vary, exercise the functions of two unlimited reservoirs...
Carnot next described what is now called the Carnot engine, and proved by a thought experiment that any heat engine performing better than this engine would be a perpetual motion machine. Even in the 1820s, there was a long history of science forbidding such devices. According to Carnot, "Such a creation is entirely contrary to ideas now accepted, to the laws of mechanics and of sound physics. It is inadmissible."
This description of an upper bound to the work that may be done by an engine was the earliest modern formulation of the second law of thermodynamics. Because it involves no mathematics, it still often serves as the entry point for a modern understanding of both the second law and entropy. Carnot's focus on heat engines, equilibrium, and heat reservoirs is also the best entry point for understanding the closely related concept of exergy.
Carnot believed in the incorrect caloric theory of heat that was popular during his time, but his thought experiment nevertheless described a fundamental limit of nature. As kinetic theory replaced caloric theory through the early and mid-19th century (see Timeline of thermodynamics), several scientists added mathematical precision to the first and second laws of thermodynamics and developed the concept of entropy. Carnot's focus on processes at the human scale (above the thermodynamic limit) led to the most universally applicable concepts in physics. Entropy and the second law are applied today in fields ranging from quantum mechanics to physical cosmology.
Gibbs
In the 1870s, Josiah Willard Gibbs unified a large quantity of 19th-century thermochemistry into one compact theory. Gibbs's theory incorporated the new concept of a chemical potential, which drives change when a system is away from chemical equilibrium, into the older work begun by Carnot describing thermal and mechanical equilibrium and their potentials for change. Gibbs's unifying theory resulted in the thermodynamic potential state functions describing differences from thermodynamic equilibrium.
In 1873, Gibbs derived the mathematics of "available energy of the body and medium" into the form it has today. (See the equations above). The physics describing exergy has changed little since that time.
Helmholtz
In the 1880s, German scientist Hermann von Helmholtz derived the equation for the maximum work which can be reversibly obtained from a closed system.
Rant
In 1956, the Yugoslav scholar Zoran Rant proposed the concept of exergy, extending the work of Gibbs and Helmholtz. Since then, continuous development in exergy analysis has found many applications in thermodynamics, and exergy has been accepted as the maximum theoretical useful work which can be obtained from a system with respect to its environment.
See also
Thermodynamic free energy
Entropy production
Energy: world resources and consumption
Emergy
Notes
References
Further reading
Stephen Jay Kline (1999). The Low-Down on Entropy and Interpretive Thermodynamics, La Cañada, CA: DCW Industries.
External links
Energy, Incorporating Exergy, An International Journal
An Annotated Bibliography of Exergy/Availability
Exergy – a useful concept by Göran Wall
Exergetics textbook for self-study by Göran Wall
Exergy by Isidoro Martinez
Exergy calculator by The Exergoecology Portal
Global Exergy Resource Chart
Guidebook to IEA ECBCS Annex 37, Low Exergy Systems for Heating and Cooling of Buildings
Introduction to the Concept of Exergy
Thermodynamic free energy
State functions
Ecological economics
Potential flow | In fluid dynamics, potential flow or irrotational flow refers to a description of a fluid flow with no vorticity in it. Such a description typically arises in the limit of vanishing viscosity, i.e., for an inviscid fluid and with no vorticity present in the flow.
Potential flow describes the velocity field as the gradient of a scalar function: the velocity potential. As a result, a potential flow is characterized by an irrotational velocity field, which is a valid approximation for several applications. The irrotationality of a potential flow is due to the curl of the gradient of a scalar always being equal to zero.
In the case of an incompressible flow the velocity potential satisfies Laplace's equation, and potential theory is applicable. However, potential flows also have been used to describe compressible flows and Hele-Shaw flows. The potential flow approach occurs in the modeling of both stationary as well as nonstationary flows.
For flows (or parts thereof) with strong vorticity effects, the potential flow approximation is not applicable: in flow regions where vorticity is known to be important, such as wakes and boundary layers, potential flow theory cannot provide reasonable predictions of the flow. Fortunately, there are often large regions of a flow where the assumption of irrotationality is valid, which is why potential flow is used for a wide variety of applications, for instance the outer flow field for aerofoils, flow around aircraft, groundwater flow, acoustics, water waves, and electroosmotic flow.
Description and characteristics
In potential or irrotational flow, the vorticity vector field is zero, i.e.,

ω ≡ ∇ × v = 0,

where v is the velocity field and ω is the vorticity field. Like any vector field having zero curl, the velocity field can be expressed as the gradient of a certain scalar, say φ, which is called the velocity potential, since the curl of the gradient is always zero. We therefore have

v = ∇φ
The velocity potential is not uniquely defined since one can add to it an arbitrary function of time, say f(t), without affecting the relevant physical quantity, which is v. The non-uniqueness is usually removed by suitably selecting appropriate initial or boundary conditions satisfied by φ, and as such the procedure may vary from one problem to another.
In potential flow, the circulation Γ around any simply connected contour is zero. This can be shown using the Stokes theorem,

Γ ≡ ∮C v ⋅ dl = ∫S (∇ × v) ⋅ dS = 0,

where dl is the line element on the contour and dS is the area element of any surface bounded by the contour. In multiply connected space (say, around a contour enclosing a solid body in two dimensions or around a contour enclosing a torus in three dimensions) or in the presence of concentrated vortices (say, in the so-called irrotational vortices or point vortices, or in smoke rings), the circulation need not be zero. In the former case, Stokes theorem cannot be applied, and in the latter case, the vorticity is non-zero within the region bounded by the contour. Around a contour encircling an infinitely long solid cylinder, with which the contour loops N times, we have

Γ = Nκ
where κ is a cyclic constant. This example belongs to a doubly connected space. In an n-tuply connected space, there are n − 1 such cyclic constants, namely, κ1, κ2, ..., κn−1.
Incompressible flow
In the case of an incompressible flow (for instance of a liquid, or a gas at low Mach numbers, but not for sound waves) the velocity has zero divergence:

∇ ⋅ v = 0.

Substituting v = ∇φ here shows that φ satisfies the Laplace equation

∇²φ = 0,

where ∇² is the Laplace operator (sometimes also written Δ). Since solutions of the Laplace equation are harmonic functions, every harmonic function represents a potential flow solution. As is evident, in the incompressible case the velocity field is determined completely by its kinematics: the assumptions of irrotationality and zero divergence of the flow. Dynamics, in the form of the momentum equations, only has to be applied afterwards, if one is interested in computing the pressure field: for instance for flow around airfoils through the use of Bernoulli's principle.
In incompressible flows, contrary to common misconception, the potential flow indeed satisfies the full Navier–Stokes equations, not just the Euler equations, because the viscous term

μ∇²v = μ∇(∇ ⋅ v) − μ∇ × (∇ × v) = 0

is identically zero. It is the inability of the potential flow to satisfy the required boundary conditions, especially near solid boundaries, that makes it invalid in representing the required flow field. If the potential flow satisfies the necessary conditions, then it is the required solution of the incompressible Navier–Stokes equations.
In two dimensions, with the help of the harmonic function φ and its conjugate harmonic function ψ (the stream function), incompressible potential flow reduces to a very simple system that is analyzed using complex analysis (see below).
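As a brief sketch of the two-dimensional incompressible case, the classical complex potential w(z) = U(z + R^2/z), a uniform stream plus a doublet, describes flow past a circular cylinder of radius R; its real part is the velocity potential, its imaginary part the stream function, and the velocity follows from dw/dz = u − iv. The Python snippet below (with illustrative values of U and R) checks numerically that the resulting velocity field is, to discretization accuracy, divergence-free and irrotational away from the cylinder:

import numpy as np

U, R = 1.0, 1.0   # free-stream speed and cylinder radius (illustrative)

def velocity(x, y):
    # u, v from the complex potential w(z) = U*(z + R^2/z); dw/dz = u - i*v.
    z = x + 1j * y
    dwdz = U * (1.0 - R**2 / z**2)
    return dwdz.real, -dwdz.imag

# Sample a grid well outside the cylinder and check the flow is
# incompressible (zero divergence) and irrotational (zero curl).
x = np.linspace(1.5, 4.0, 200)
y = np.linspace(1.5, 4.0, 200)
X, Y = np.meshgrid(x, y)
u, v = velocity(X, Y)

dx, dy = x[1] - x[0], y[1] - y[0]
du_dy, du_dx = np.gradient(u, dy, dx)
dv_dy, dv_dx = np.gradient(v, dy, dx)

print("max |div v| :", np.abs(du_dx + dv_dy).max())   # ~0, up to finite-difference error
print("max |curl v|:", np.abs(dv_dx - du_dy).max())   # ~0, up to finite-difference error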
Compressible flow
Steady flow
Potential flow theory can also be used to model irrotational compressible flow. The derivation of the governing equation for φ from the Euler equations is quite straightforward. The continuity and the (potential flow) momentum equations for steady flows are given by

∇ ⋅ (ρv) = 0,   (v ⋅ ∇)v = −(1/ρ)∇p = −(c²/ρ)∇ρ,

where the last equation follows from the fact that entropy is constant for a fluid particle and that the square of the sound speed is c² = (∂p/∂ρ)s. Eliminating ∇ρ from the two governing equations results in

c² ∇ ⋅ v − v ⋅ ∇(v²/2) = 0.
The incompressible version emerges in the limit c → ∞. Substituting v = ∇φ here results in

c² ∇²φ − ∇φ ⋅ ∇(|∇φ|²/2) = 0,
where c is expressed as a function of the velocity magnitude v = |∇φ|. For a polytropic gas, c² = (γ − 1)(h0 − v²/2), where γ is the specific heat ratio and h0 is the stagnation enthalpy. In two dimensions, the equation simplifies to

(c² − φx²)φxx − 2φxφyφxy + (c² − φy²)φyy = 0,

where subscripts denote partial derivatives.
Validity: As it stands, the equation is valid for any inviscid potential flow, irrespective of whether the flow is subsonic or supersonic (e.g. Prandtl–Meyer flow). However, in supersonic and also in transonic flows, shock waves can occur which can introduce entropy and vorticity into the flow, making the flow rotational. Nevertheless, there are two cases for which potential flow prevails even in the presence of shock waves, which are explained from the (not necessarily potential) momentum equation written in the following form
where the terms involve the specific enthalpy, the vorticity field, the temperature and the specific entropy. Since in front of the leading shock wave we have a potential flow, Bernoulli's equation shows that the stagnation enthalpy is constant; it is also constant across the shock wave (Rankine–Hugoniot conditions) and therefore we can write
1) When the shock wave is of constant intensity, the entropy discontinuity across the shock wave is also constant, and therefore vorticity production is zero. Shock waves at the pointed leading edge of a two-dimensional wedge or a three-dimensional cone (Taylor–Maccoll flow) have constant intensity. 2) For weak shock waves, the entropy jump across the shock wave is a third-order quantity in terms of the shock wave strength and therefore can be neglected. Shock waves on slender bodies lie nearly parallel to the body and are weak.
Nearly parallel flows: When the flow is predominantly unidirectional with small deviations, such as in flow past slender bodies, the full equation can be further simplified. Consider a uniform mainstream and small deviations from this velocity field. The corresponding velocity potential can be written as the sum of the uniform-flow potential and a perturbation potential that characterizes the small departure from the uniform flow and satisfies the linearized version of the full equation. This is given by
where the constant Mach number of the uniform flow enters as a parameter. This equation is valid provided this Mach number is not close to unity. When the Mach number is close to unity (transonic flow), we have the following nonlinear equation
where the coefficient involves the critical value of the Landau derivative and the specific volume. The transonic flow is completely characterized by this single parameter, which for a polytropic gas is fixed by the specific heat ratio. Under the hodograph transformation, the transonic equation in two dimensions becomes the Euler–Tricomi equation.
Unsteady flow
The continuity and the (potential flow) momentum equations for unsteady flows are given by
The first integral of the (potential flow) momentum equation is given by
where the right-hand side contains an arbitrary function of time. Without loss of generality, this arbitrary function can be set to zero, since the velocity potential is not uniquely defined. Combining these equations, we obtain
Substituting the velocity potential here results in
Nearly parallel flows: As before, for nearly parallel flows we can write (after introducing a rescaled time)
provided the constant Mach number is not close to unity. When the Mach number is close to unity (transonic flow), we have the following nonlinear equation
Sound waves: In sound waves, the velocity magnitude (or the Mach number) is very small, although the unsteady term is now comparable to the other leading terms in the equation. Thus, neglecting all quadratic and higher-order terms and noting that in the same approximation the sound speed is a constant (for example, in a polytropic gas it is set by the undisturbed state), we have
which is a linear wave equation for the velocity potential. Again, the oscillatory part of the velocity vector is obtained as the gradient of the velocity potential, while as before the equation involves the Laplace operator and the average speed of sound in the homogeneous medium. Note that the oscillatory parts of the pressure and density also each individually satisfy the wave equation, in this approximation.
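To make the sound-wave limit concrete, here is a minimal one-dimensional sketch of the linear wave equation for the velocity potential, integrated with an explicit leapfrog finite-difference scheme. The sound speed, domain, pulse shape and boundary treatment are illustrative assumptions, not taken from the source.

```python
import numpy as np

# Minimal 1-D sketch of the acoustic wave equation  d2(phi)/dt2 = c^2 * d2(phi)/dx2.
# The sound speed, grid, Gaussian initial pulse and fixed-end boundaries are
# illustrative assumptions.
c = 340.0                      # average speed of sound (m/s)
nx, length = 400, 4.0          # number of grid points and domain length (m)
x = np.linspace(0.0, length, nx)
dx = x[1] - x[0]
dt = 0.9 * dx / c              # CFL-stable time step

phi = np.exp(-((x - length / 2) ** 2) / 0.01)  # initial potential pulse
phi_old = phi.copy()                            # zero initial time derivative

for _ in range(500):
    lap = np.zeros_like(phi)
    lap[1:-1] = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2
    phi_new = 2.0 * phi - phi_old + (c * dt) ** 2 * lap   # leapfrog update
    phi_new[0] = phi_new[-1] = 0.0                        # potential held fixed at the ends
    phi_old, phi = phi, phi_new

# The oscillatory velocity is the spatial gradient of the potential.
v = np.gradient(phi, dx)
```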
Applicability and limitations
Potential flow does not include all the characteristics of flows that are encountered in the real world. Potential flow theory cannot be applied for viscous internal flows, except for flows between closely spaced plates. Richard Feynman considered potential flow to be so unphysical that the only fluid to obey the assumptions was "dry water" (quoting John von Neumann). Incompressible potential flow also makes a number of invalid predictions, such as d'Alembert's paradox, which states that the drag on any object moving through an infinite fluid otherwise at rest is zero. More precisely, potential flow cannot account for the behaviour of flows that include a boundary layer. Nevertheless, understanding potential flow is important in many branches of fluid mechanics. In particular, simple potential flows (called elementary flows) such as the free vortex and the point source possess ready analytical solutions. These solutions can be superposed to create more complex flows satisfying a variety of boundary conditions. These flows correspond closely to real-life flows over the whole of fluid mechanics; in addition, many valuable insights arise when considering the deviation (often slight) between an observed flow and the corresponding potential flow. Potential flow finds many applications in fields such as aircraft design. For instance, in computational fluid dynamics, one technique is to couple a potential flow solution outside the boundary layer to a solution of the boundary layer equations inside the boundary layer. The absence of boundary layer effects means that any streamline can be replaced by a solid boundary with no change in the flow field, a technique used in many aerodynamic design approaches. Another technique would be the use of Riabouchinsky solids.
Analysis for two-dimensional incompressible flow
Potential flow in two dimensions is simple to analyze using conformal mapping, by the use of transformations of the complex plane. However, use of complex numbers is not required, as for example in the classical analysis of fluid flow past a cylinder. It is not possible to solve a potential flow using complex numbers in three dimensions.
The basic idea is to use a holomorphic (also called analytic) or meromorphic function, which maps the physical domain to the transformed domain. While the coordinates and the transformed variables are all real valued, it is convenient to define the complex quantities
Now, if we write the mapping as
Then, because the mapping is a holomorphic or meromorphic function, it has to satisfy the Cauchy–Riemann equations
The velocity components in the two coordinate directions can be obtained directly from the complex potential by differentiating it with respect to the complex coordinate. That is
So the velocity field is specified by
Both and then satisfy Laplace's equation:
So can be identified as the velocity potential and is called the stream function. Lines of constant are known as streamlines and lines of constant are known as equipotential lines (see equipotential surface).
Streamlines and equipotential lines are orthogonal to each other, since
Thus the flow occurs along the lines of constant and at right angles to the lines of constant .
The symmetry of the mixed second derivatives of the velocity potential is also satisfied, this relation being equivalent to the vanishing of the curl of the velocity. So the flow is irrotational. The analogous automatic condition on the stream function then gives the incompressibility constraint of zero divergence.
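Because the derivative of the complex potential packs both velocity components into one complex number (dw/dz = u − iv is the standard relation), the velocity field of the elementary flows discussed below can be evaluated in a few lines of code. The following is a hedged sketch; the particular potentials and the finite-difference step are assumptions chosen for illustration.

```python
import numpy as np

def velocity_from_potential(w, z, h=1e-6):
    """Velocity components (u, v) from a complex potential w(z), using the
    standard relation dw/dz = u - i*v.

    w : callable returning the complex potential at a complex point z.
    h : step of the central difference approximating dw/dz (illustrative choice).
    """
    dw_dz = (w(z + h) - w(z - h)) / (2.0 * h)
    return dw_dz.real, -dw_dz.imag

# Hypothetical elementary potentials of the kind discussed in the following sections:
uniform = lambda z: 1.0 * z     # uniform flow parallel to the x-axis
stagnation = lambda z: z ** 2   # flow near a stagnation point / into a 90-degree corner
doublet = lambda z: 1.0 / z     # doublet (source-sink pair of infinite strength)

z0 = 1.0 + 0.5j
print(velocity_from_potential(uniform, z0))     # (1.0, 0.0): the same everywhere
print(velocity_from_potential(stagnation, z0))  # (2x, -2y) at z0 = x + i*y
print(velocity_from_potential(doublet, z0))     # magnitude decays like 1/|z|^2
```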
Examples of two-dimensional incompressible flows
Any differentiable function may be used for the complex potential. The examples that follow use a variety of elementary functions; special functions may also be used. Note that multi-valued functions such as the natural logarithm may be used, but attention must be confined to a single Riemann surface.
Power laws
In case the following power-law conformal map, with exponent n, is applied from the physical plane to the complex potential:
then, writing the physical coordinate in polar form, we have
In the figures to the right, examples are given for several values of n. The black line is the boundary of the flow, while the darker blue lines are streamlines, and the lighter blue lines are equi-potential lines. Some interesting powers n are:
n = 1/2: this corresponds with flow around a semi-infinite plate,
n = 2/3: flow around a right corner,
n = 1: a trivial case of uniform flow,
n = 2: flow through a corner, or near a stagnation point, and
n = -1: flow due to a source doublet
The constant is a scaling parameter: its absolute value determines the scale, while its argument introduces a rotation (if non-zero).
Power laws with n = 1: uniform flow
If n = 1, that is, a power law with unit exponent, the streamlines (i.e. lines of constant stream function) are a system of straight lines parallel to the x-axis. This is easiest to see by writing the map in terms of real and imaginary components:
thus giving the velocity potential and stream function as linear functions of the coordinates. This flow may be interpreted as uniform flow parallel to the x-axis.
Power laws with n = 2
If n = 2, then the streamlines corresponding to a particular value of the stream function are those points satisfying
which is a system of rectangular hyperbolae. This may be seen by again rewriting in terms of real and imaginary components. Expressing the sine and cosine of the polar angle in terms of the Cartesian coordinates, it is seen (on simplifying) that the streamlines are given by
The velocity field is given by the derivative of the complex potential, or
In fluid dynamics, the flowfield near the origin corresponds to a stagnation point. Note that the fluid at the origin is at rest (this follows on differentiation of the complex potential at the origin). The zero streamline is particularly interesting: it has two (or four) branches, following the coordinate axes. As no fluid flows across the x-axis, it (the x-axis) may be treated as a solid boundary. It is thus possible to ignore the flow in the lower half-plane and to focus on the flow in the upper half-plane. With this interpretation, the flow is that of a vertically directed jet impinging on a horizontal flat plate. The flow may also be interpreted as flow into a 90 degree corner if the regions outside (say) the first quadrant are ignored.
Power laws with n = 3
If n = 3, the resulting flow is a sort of hexagonal version of the n = 2 case considered above. Streamlines are given by the corresponding cubic expression, and the flow in this case may be interpreted as flow into a 60° corner.
Power laws with n = -1: doublet
If n = -1, the streamlines are given by
This is more easily interpreted in terms of real and imaginary components:
Thus the streamlines are circles that are tangent to the x-axis at the origin. The circles in the upper half-plane thus flow clockwise, those in the lower half-plane flow anticlockwise. Note that the velocity components are proportional to the inverse square of the distance from the origin, and their values at the origin are infinite. This flow pattern is usually referred to as a doublet, or dipole, and can be interpreted as the combination of a source-sink pair of infinite strength kept an infinitesimally small distance apart. The velocity field is given by
or in polar coordinates:
Power laws with n = -2: quadrupole
If n = -2, the streamlines are given by
This is the flow field associated with a quadrupole.
Line source and sink
A line source or sink of prescribed strength (positive for a source and negative for a sink) is given by the potential
where the strength is in fact the volume flux per unit length across a surface enclosing the source or sink. The velocity field in polar coordinates is
i.e., a purely radial flow.
Line vortex
A line vortex of prescribed strength is given by
where the strength is the circulation around any simple closed contour enclosing the vortex. The velocity field in polar coordinates is
i.e., a purely azimuthal flow.
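Since the Laplace equation is linear, the elementary potentials above can be superposed; a classic combination of a uniform stream, a doublet and a line vortex reproduces the flow past a circular cylinder with circulation mentioned earlier. The sketch below checks that the cylinder surface is a streamline; the free-stream speed, radius and circulation values are assumptions chosen for illustration.

```python
import numpy as np

# Superposition sketch: uniform stream + doublet + line vortex gives flow past a
# circular cylinder with circulation. U (free-stream speed), a (radius) and
# Gamma (circulation) are assumed values for illustration only.
U, a, Gamma = 1.0, 1.0, 2.0

def w(z):
    return (U * (z + a**2 / z)                         # uniform flow plus doublet
            - 1j * Gamma / (2.0 * np.pi) * np.log(z))  # line vortex at the origin

def velocity(z, h=1e-6):
    dw_dz = (w(z + h) - w(z - h)) / (2.0 * h)          # dw/dz = u - i*v
    return dw_dz.real, -dw_dz.imag

# On the cylinder surface |z| = a the radial velocity component should vanish,
# i.e. the surface is a streamline.
theta = np.linspace(0.1, 2.0 * np.pi, 50, endpoint=False)
z_surface = a * np.exp(1j * theta)
vels = np.array([velocity(z) for z in z_surface])
u, v = vels[:, 0], vels[:, 1]
radial = u * np.cos(theta) + v * np.sin(theta)
print(np.allclose(radial, 0.0, atol=1e-5))             # True
```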
Analysis for three-dimensional incompressible flows
For three-dimensional flows, a complex potential cannot be obtained.
Point source and sink
The velocity potential of a point source or sink of prescribed strength (positive for a source and negative for a sink) in spherical polar coordinates is given by
where the strength is in fact the volume flux across a closed surface enclosing the source or sink. The velocity field in spherical polar coordinates is
See also
Potential flow around a circular cylinder
Aerodynamic potential-flow code
Conformal mapping
Darwin drift
Flownet
Laplacian field
Laplace equation for irrotational flow
Potential theory
Stream function
Velocity potential
Helmholtz decomposition
Notes
References
Further reading
External links
— Java applets for exploring conformal maps
Potential Flow Visualizations - Interactive WebApps
Fluid dynamics
Akinetopsia
Akinetopsia (from Greek akinesia 'absence of movement' and opsis 'seeing'), also known as cerebral akinetopsia or motion blindness, is a term introduced by Semir Zeki to describe an extremely rare neuropsychological disorder, having only been documented in a handful of medical cases, in which a patient cannot perceive motion in their visual field, despite being able to see stationary objects without issue. The syndrome is the result of damage to visual area V5, whose cells are specialized to detect directional visual motion. There are varying degrees of akinetopsia: from seeing motion as frames of a cinema reel to an inability to discriminate any motion. There is currently no effective treatment or cure for akinetopsia.
Signs and symptoms
Akinetopsia can manifest in a spectrum of severity and some cases may be episodic or temporary. It may range from "inconspicuous akinetopsia" to "gross akinetopsia", based on symptom severity and the degree to which the akinetopsia affects the patient's quality of life.
Inconspicuous akinetopsia
Inconspicuous akinetopsia is often described as seeing motion as a cinema reel or a multiple exposure photograph. This is the most common kind of akinetopsia and many patients consider the stroboscopic vision a nuisance. The akinetopsia often occurs with visual trailing (palinopsia), with afterimages being left at each frame of the motion. It is caused by prescription drugs, hallucinogen persisting perception disorder (HPPD), and persistent aura without infarction. The pathophysiology of akinetopsia palinopsia is not known, but it has been hypothesized to be due to inappropriate activation of physiological motion suppression mechanisms which are normally used to maintain visual stability during eye movements (e.g. saccadic suppression).
Gross akinetopsia
Gross akinetopsia is an extremely rare condition. Patients have profound motion blindness and struggle in performing the activities of daily living. Instead of seeing vision as a cinema reel, these patients have trouble perceiving gross motion. Most of what is known about this extremely rare condition was learned through the case study of one patient, LM. LM described pouring a cup of tea or coffee as difficult "because the fluid appeared to be frozen, like a glacier". She did not know when to stop pouring, because she could not perceive the movement of the fluid rising. LM and other patients have also complained of having trouble following conversations, because lip movements and changing facial expressions were missed. LM stated she felt insecure when more than two people were walking around in a room: "people were suddenly here or there but I have not seen them moving". Movement is inferred by comparing the change in position of an object or person. LM and others have described crossing the street and driving cars to also be of great difficulty. The patient was still able to perceive movement of auditory and tactile stimuli.
A change in brain structure (typically lesions) disturbs the psychological process of understanding sensory information, in this case visual information. Disturbance of only visual motion is possible due to the anatomical separation of visual motion processing from other functions. Like akinetopsia, perception of color can also be selectively disturbed as in achromatopsia. There is an inability to see motion despite normal spatial acuity, flicker detection, stereo and color vision. Other intact functions include visual space perception and visual identification of shapes, objects, and faces. Besides simple perception, akinetopsia also disturbs visuomotor tasks, such as reaching for objects and catching objects. When doing tasks, feedback of one's own motion appears to be important.
Causes
Brain lesions
Akinetopsia may be an acquired deficit from lesions in the posterior side of the visual cortex. Lesions more often cause gross akinetopsia. The neurons of the middle temporal cortex respond to moving stimuli and hence the middle temporal cortex is the motion-processing area of the cerebral cortex. In the case of LM, the brain lesion was bilateral and symmetrical, and at the same time small enough not to affect other visual functions. Some unilateral lesions have been reported to impair motion perception as well. Akinetopsia through lesions is rare, because damage to the occipital lobe usually disturbs more than one visual function. Akinetopsia has also been reported as a result of traumatic brain injury.
Transcranial magnetic stimulation
Inconspicuous akinetopsia can be selectively and temporarily induced using transcranial magnetic stimulation (TMS) of area V5 of the visual cortex in healthy subjects. It is performed on a 1 cm² surface of the head, corresponding in position to area V5. With an 800-microsecond TMS pulse and a 28 ms stimulus at 11 degrees per second, V5 is incapacitated for about 20–30 ms. It is effective between −20 ms and +10 ms before and after onset of a moving visual stimulus. Inactivating V1 with TMS could induce some degree of akinetopsia 60–70 ms after the onset of the visual stimulus. TMS of V1 is not nearly as effective in inducing akinetopsia as TMS of V5.
Alzheimer's disease
Besides memory problems, Alzheimer's patients may have varying degrees of akinetopsia. This could contribute to their marked disorientation. While Pelak and Hoyt have recorded an Alzheimer's case study, there has not been much research done on the subject yet.
Antidepressants
Inconspicuous akinetopsia can be triggered by high doses of certain antidepressants with vision returning to normal once the dosage is reduced.
Areas of visual perception
Two relevant visual areas for motion processing are V5 and V1. These areas are separated by their function in vision. A functional area is a set of neurons with common selectivity and stimulation of this area, specifically behavioral influences. There have been over 30 specialized processing areas found in the visual cortex.
V5
V5, also known as visual area MT (middle temporal), is located laterally and ventrally in the temporal lobe, near the intersection of the ascending limb of the inferior temporal sulcus and the lateral occipital sulcus. All of the neurons in V5 are motion selective, and most are directionally selective. Evidence of functional specialization of V5 was first found in primates. Patients with akinetopsia tend to have unilateral or bilateral damage to V5.
V1
V1, also known as the primary visual cortex, is located in Brodmann area 17. V1 is known for its pre-processing capabilities of visual information; however, it is no longer considered the only perceptually effective gateway to the cortex. Motion information can reach V5 without passing through V1 and a return input from V5 to V1 is not required for seeing simple visual motion. Motion-related signals arrive at V1 (60–70 ms) and V5 (< 30 ms) at different times, with V5 acting independently of V1. Patients with blindsight have damage to V1, but because V5 is intact, they can still sense motion. Inactivating V1 limits motion vision, but does not stop it completely.
Ventral and dorsal streams
Another thought on visual brain organization is the theory of streams for spatial vision, the ventral stream for perception and the dorsal stream for action. Since LM has impairment in both perception and action (such as grasping and catching actions), it has been suggested that V5 provides input to both perception and action processing streams.
Case studies
Potzl and Redlich's patient
In 1911, Potzl and Redlich reported a 58-year-old female patient with bilateral damage to her posterior brain. She described motion as if the object remained stationary but appeared at different successive positions. Additionally, she also lost a significant amount of her visual field and had anomic aphasia.
Goldstein and Gelb's patient
In 1918, Goldstein and Gelb reported a 24-year-old male who suffered a gunshot wound in the posterior brain. The patient reported no impression of movement. He could state the new position of the object (left, right, up, down), but saw "nothing in between". While Goldstein and Gelb believed the patient had damaged the lateral and medial parts of the left occipital lobe, it was later indicated that both occipital lobes were probably affected, due to the bilateral, concentric loss of his visual field. He lost his visual field beyond a 30-degree eccentricity and could not identify visual objects by their proper names.
"LM"
Most of what is known about akinetopsia was learned from LM, a 43-year-old female admitted into the hospital October 1978 complaining of headache and vertigo. LM was diagnosed with thrombosis of the superior sagittal sinus which resulted in bilateral, symmetrical lesions posterior of the visual cortex. These lesions were verified by PET and MRI in 1994. LM had minimal motion perception that was preserved as perhaps a function of V1, as a function of a "higher" order visual cortical area, or some functional sparing of V5.
LM found no effective treatment, so she learned to avoid conditions with multiple visual motion stimuli, i.e. by not looking at or fixating them. She developed very efficient coping strategies to do this and nevertheless lived her life. In addition, she estimated the distance of moving vehicles by means of sound detection in order to continue to cross the street.
LM was tested in three areas against a 24-year-old female subject with normal vision:
Visual functions other than movement vision
LM had no evidence of a color discrimination deficit in either center or periphery of visual fields. Her recognition time for visual objects and words was slightly higher than the control, but not statistically significant. There was no restriction in her visual field and no scotoma.
Disturbance of movement vision
LM's impression of movement depended on the direction of the movement (horizontal vs vertical), the velocity, and whether she fixated in the center of the motion path or tracked the object with her eyes. Circular light targets were used as stimuli.
In studies, LM reported some impression of horizontal movement at a speed of 14 degrees of her predetermined visual field per second (deg/s) while fixating in the middle of the motion path, with difficulty seeing motion both below and above this velocity. When allowed to track the moving spot, she had some horizontal movement vision up to 18 deg/s. For vertical movement, the patient could only see motion below 10 deg/s fixated or 13 deg/s when tracking the target. The patient described her perceptual experience for stimulus velocities higher than 18 and 13 deg/s, respectively as "one light spot left or right" or "one light spot up or down" and "sometimes at successive positions in between", but never as motion.
Motion in depth
To determine perception of motion in depth, studies were done in which the experimenter moved a black painted wooden cube on a tabletop either towards the patient or away in line of sight. After 20 trials at 3 or 6 deg/s, the patient had no clear impression of movement. However she knew the object had changed in position, she knew the size of the cube, and she could correctly judge the distance of the cube in relation to other nearby objects.
Inner and outer visual fields
Detection of movement in the inner and outer visual fields was tested. Within her inner visual field, LM could detect some motion, with horizontal motion more easily distinguished than vertical motion. In her peripheral visual field, the patient was never able to detect any direction of movement. LM's ability to judge velocities was also tested. LM underestimated velocities over 12 deg/s.
Motion aftereffect and Phi phenomenon
Motion aftereffects of vertical stripes moving in a horizontal direction and of a rotating spiral were tested. She was able to detect motion in both patterns, but reported a motion aftereffect in only 3 of the 10 trials for the stripes, and no effect for the rotating spiral. She also never reported any impression of motion in depth of the spiral. In the phi phenomenon, two circular spots of light are shown in alternation, so that it appears as if the spot moves from one location to the other. Under no combination of conditions did the patient report any apparent movement. She always reported two independent light spots.
Visually guided pursuit eye and finger movements
LM was to follow the path of a wire mounted onto a board with her right index finger. The test was performed under purely tactile (blindfolded), purely visual (glass over the board), or tactile-visual condition. The patient performed best in the purely tactile condition and very poorly in the visual condition. She did not benefit from the visual information in the tactile-visual condition either. The patient reported that the difficulty was between her finger and her eyes. She could not follow her finger with her eyes if she moved her finger too fast.
Additional experiments
In 1994, several other observations of LM's capabilities were made using a stimulus with a random distribution of light squares on a dark background that moved coherently. With this stimulus, LM could always determine the axis of motion (vertical, horizontal), but not always the direction. If a few static squares were added to the moving display, identification of direction fell to chance, but identification of the axis of motion was still accurate. If a few squares were moving opposite and orthogonal to the predominant direction, her performance on both direction and axis fell to chance. She was also unable to identify motion in oblique directions, such as 45, 135, 225, and 315 degrees, and always gave answers in cardinal directions, 0, 90, 180, and 270 degrees.
"TD"
In 2019, Heutink and colleagues described a 37-year-old female patient (TD) with akinetopsia, who was admitted to Royal Dutch Visio, Centre of Expertise for blind and partially sighted people. TD suffered an ischaemic infarction of the occipitotemporal region in the right hemisphere and a smaller infarction in the left occipital hemisphere. MRI confirmed that the damaged brain areas contained area V5 in both hemispheres. TD experienced problems with perceiving visual motion and also reported that bright colours and sharp contrasts made her feel sick. TD also had problems perceiving objects that were more than ± 5 meters away from her. Although TD had some impairments of lower visual functions, these could not explain the problems she experienced with regard to motion perception. Neuropsychological assessment revealed no evidence of Balint's Syndrome, hemispatial neglect or visual extinction, prosopagnosia or object agnosia. There was some evidence for impaired spatial processing. On several behavioural tests, TD showed a specific and selective impairment of motion perception that was comparable to LM's performance.
TD's ability to determine the direction of movement was tested using a task in which small grey blocks all moved in the same direction with the same speed against a black background. The blocks could move in four directions: right to left, left to right, upward and downward. Speed of movement was varied from 2, 4.5, 9, 15 and 24 degrees per second. Speed and direction were varied randomly across trials. TD had perfect perception of motion direction at speed up to 9 degrees per second. When speed of targets was above 9 degrees per second, TD's performance dropped dramatically to 50% correct at a speed of 15 degrees per second and 0% correct at 24 degrees per second. When the blocks moved at 24 degrees per second, TD consistently reported the exact opposite direction of the actual movement.
Pelak and Hoyt's Alzheimer's patient
In 2000, a 70-year-old man presented with akinetopsia. He had stopped driving two years prior because he could no longer "see movement while driving". His wife noted that he could not judge the speed of another car or how far away it was. He had difficulty watching television with significant action or movement, such as sporting events or action-filled TV shows. He frequently commented to his wife that he could not "see anything going on". When objects began to move they would disappear. He could, however, watch the news, because no significant action occurred. In addition he had signs of Balint's syndrome (mild simultanagnosia, optic ataxia, and optic apraxia).
Pelak and Hoyt's TBI patient
In 2003, a 60-year-old man complained of the inability to perceive visual motion following a traumatic brain injury, two years prior, in which a large cedar light pole fell and struck his head. He gave examples of his difficulty as a hunter. He was unable to notice game, to track other hunters, or to see his dog coming towards him. Instead, these objects would appear in one location and then another, without any movement being seen between the two locations. He had difficulties driving and following a group conversation. He lost his place when vertically or horizontally scanning a written document and was unable to visualize three-dimensional images from two-dimensional blueprints.
References
Visual disturbances and blindness
Agnosia
Visual perception
Neuropsychology
Dementia
Time of flight
Time of flight (ToF) is the measurement of the time taken by an object, particle or wave (be it acoustic, electromagnetic, etc.) to travel a distance through a medium. This information can then be used to measure velocity or path length, or as a way to learn about the particle or medium's properties (such as composition or flow rate). The traveling object may be detected directly (direct time of flight, dToF, e.g., via an ion detector in mass spectrometry) or indirectly (indirect time of flight, iToF, e.g., by light scattered from an object in laser doppler velocimetry). Time of flight technology has found valuable applications in the monitoring and characterization of material and biomaterials, hydrogels included.
Overview
In electronics, some of the earliest devices using the principle were ultrasonic distance-measuring devices, which emit an ultrasonic pulse and are able to measure the distance to a solid object based on the time taken for the wave to bounce back to the emitter. The ToF method is also used to estimate the electron mobility. Originally, it was designed for the measurement of low-conductivity thin films, and was later adjusted for common semiconductors. This experimental technique is used for metal-dielectric-metal structures as well as organic field-effect transistors. The excess charges are generated by application of a laser or voltage pulse.
For Magnetic Resonance Angiography (MRA), ToF is a major underlying method. In this method, blood entering the imaged area is not yet saturated, giving it a much higher signal when using short echo time and flow compensation. It can be used in the detection of aneurysm, stenosis or dissection.
In time-of-flight mass spectrometry, ions are accelerated by an electrical field to the same kinetic energy with the velocity of the ion depending on the mass-to-charge ratio. Thus the time-of-flight is used to measure velocity, from which the mass-to-charge ratio can be determined. The time-of-flight of electrons is used to measure their kinetic energy.
In near-infrared spectroscopy, the ToF method is used to measure the media-dependent optical pathlength over a range of optical wavelengths, from which composition and properties of the media can be analyzed.
In ultrasonic flow meter measurement, ToF is used to measure speed of signal propagation upstream and downstream of flow of a media, in order to estimate total flow velocity. This measurement is made in a collinear direction with the flow.
In planar Doppler velocimetry (optical flow meter measurement), ToF measurements are made perpendicular to the flow by timing when individual particles cross two or more locations along the flow (collinear measurements would require generally high flow velocities and extremely narrow-band optical filters).
In optical interferometry, the pathlength difference between sample and reference arms can be measured by ToF methods, such as frequency modulation followed by phase shift measurement or cross correlation of signals. Such methods are used in laser radar and laser tracker systems for medium-long range distance measurement.
In neutron time-of-flight scattering, a pulsed monochromatic neutron beam is scattered by a sample. The energy spectrum of the scattered neutrons is measured via time of flight.
In kinematics, ToF is the duration in which a projectile is traveling through the air. Given the initial velocity of a particle launched from the ground, the downward (i.e. gravitational) acceleration, and the projectile's angle of projection θ (measured relative to the horizontal), a simple rearrangement of the SUVAT equations results in an expression for the time of flight of the projectile.
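For a projectile launched from and returning to ground level, that rearrangement gives t = 2·v0·sin(θ)/g. A small sketch with assumed launch values illustrates it:

```python
import math

def time_of_flight(v0, theta_deg, g=9.81):
    """Time of flight t = 2 * v0 * sin(theta) / g for launch and landing at the
    same height; v0 in m/s, theta in degrees, g in m/s^2."""
    return 2.0 * v0 * math.sin(math.radians(theta_deg)) / g

# Illustrative (assumed) values: 20 m/s at 45 degrees.
print(round(time_of_flight(20.0, 45.0), 2))  # about 2.88 s
```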
In mass spectrometry
The time-of-flight principle can be applied for mass spectrometry. Ions are accelerated by an electric field of known strength. This acceleration results in an ion having the same kinetic energy as any other ion that has the same charge. The velocity of the ion depends on the mass-to-charge ratio. The time that it subsequently takes for the particle to reach a detector at a known distance is measured. This time will depend on the mass-to-charge ratio of the particle (heavier particles reach lower speeds). From this time and the known experimental parameters one can find the mass-to-charge ratio of the ion. The time of flight is thus the elapsed time from the instant a particle leaves the source to the instant it reaches the detector.
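A simplified sketch of this relation: if an ion of mass m and charge q = z·e is accelerated through a potential U and then drifts through a field-free tube of length L, equating q·U with the kinetic energy gives t = L·sqrt(m/(2·q·U)), so the measured time yields m/z. The numbers below (acceleration voltage, tube length, ion mass) are illustrative assumptions; real instruments include further corrections.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge (C)
AMU = 1.66053906660e-27      # atomic mass unit (kg)

def drift_time(m_u, z, U, L):
    """Idealized drift time of an ion of mass m_u (in u) and charge state z
    accelerated through a potential U (V) into a field-free tube of length L (m):
    t = L * sqrt(m / (2 * z * e * U)). Corrections for the acceleration region
    and detector response are ignored in this sketch."""
    m = m_u * AMU
    return L * math.sqrt(m / (2.0 * z * E_CHARGE * U))

def mass_to_charge(t, U, L):
    """Invert the same relation: m/z (in u) from a measured drift time t (s)."""
    return 2.0 * E_CHARGE * U * t**2 / (L**2 * AMU)

# Illustrative assumptions: singly charged ion of 500 u, 20 kV acceleration, 1 m tube.
t = drift_time(500.0, 1, 20000.0, 1.0)
print(t * 1e6)                          # drift time in microseconds (about 11)
print(mass_to_charge(t, 20000.0, 1.0))  # recovers roughly 500
```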
In flow meters
An ultrasonic flow meter measures the velocity of a liquid or gas through a pipe using acoustic sensors. This has some advantages over other measurement techniques. The results are only slightly affected by temperature, density or conductivity. Maintenance is inexpensive because there are no moving parts.
Ultrasonic flow meters come in three different types: transmission (contrapropagating transit time) flowmeters, reflection (Doppler) flowmeters, and open-channel flowmeters. Transit time flowmeters work by measuring the time difference between an ultrasonic pulse sent in the flow direction and an ultrasonic pulse sent opposite the flow direction. Doppler flowmeters measure the Doppler shift resulting from reflecting an ultrasonic beam off either small particles in the fluid, air bubbles in the fluid, or the flowing fluid's turbulence. Open channel flow meters measure upstream levels in front of flumes or weirs.
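For the transit-time type, the flow speed follows from the difference between the upstream and downstream travel times; notably, the speed of sound cancels out of the result. The sketch below uses the standard relation under the assumption of a single straight acoustic path; the path length, beam angle and flow values are illustrative.

```python
import math

def flow_velocity(t_up, t_down, path_length, angle_deg):
    """Axial flow velocity from upstream/downstream ultrasonic transit times,
    assuming a single straight acoustic path of length `path_length` inclined at
    `angle_deg` to the pipe axis:
        v = L * (t_up - t_down) / (2 * cos(angle) * t_up * t_down)
    The speed of sound cancels out of this relation."""
    cos_a = math.cos(math.radians(angle_deg))
    return path_length * (t_up - t_down) / (2.0 * cos_a * t_up * t_down)

# Illustrative (assumed) numbers: 0.2 m path at 45 degrees in water (c ~ 1480 m/s),
# true axial velocity 2 m/s; the synthetic transit times below are then inverted.
c, v_true, L, ang = 1480.0, 2.0, 0.2, 45.0
t_down = L / (c + v_true * math.cos(math.radians(ang)))
t_up = L / (c - v_true * math.cos(math.radians(ang)))
print(flow_velocity(t_up, t_down, L, ang))  # approximately 2.0 m/s
```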
Optical time-of-flight sensors consist of two light beams projected into the fluid whose detection is either interrupted or instigated by the passage of small particles (which are assumed to be following the flow). This is not dissimilar from the optical beams used as safety devices in motorized garage doors or as triggers in alarm systems. The speed of the particles is calculated by knowing the spacing between the two beams. If there is only one detector, then the time difference can be measured via autocorrelation. If there are two detectors, one for each beam, then direction can also be known. Since the location of the beams is relatively easy to determine, the precision of the measurement depends primarily on how small the setup can be made. If the beams are too far apart, the flow could change substantially between them, thus the measurement becomes an average over that space. Moreover, multiple particles could reside between them at any given time, and this would corrupt the signal since the particles are indistinguishable. For such a sensor to provide valid data, it must be small relative to the scale of the flow and the seeding density. MOEMS approaches yield extremely small packages, making such sensors applicable in a variety of situations.
In physics
Usually the time-of-flight tube used in mass spectrometry is praised for its simplicity, but for precision measurements of charged low-energy particles the electric and the magnetic field in the tube have to be controlled to within 10 mV and 1 nT respectively.
The work function homogeneity of the tube can be controlled by a Kelvin probe. The magnetic field can be measured by a fluxgate compass. High frequencies are passively shielded and damped by radar-absorbent material. To generate arbitrary low-frequency fields, the screen is divided into plates (overlapping and connected by capacitors) with a bias voltage on each plate and a bias current on a coil behind each plate, whose flux is closed by an outer core. In this way the tube can be configured to act as a weak achromatic quadrupole lens with an aperture, with a grid and a delay line detector in the diffraction plane for angle-resolved measurements. By changing the field, the angle of the field of view can be changed, and a deflecting bias can be superimposed to scan through all angles.
When no delay line detector is used, focusing the ions onto a detector can be accomplished through the use of two or three einzel lenses placed in the vacuum tube located between the ion source and the detector.
The sample should be immersed into the tube, with holes and apertures for and against stray light, in order to allow magnetic experiments and to control the electrons from their start.
Camera
Detector
See also
Propagation delay
Round-trip time
Time of arrival
Time of transmission
References
Mass spectrometry
Spectroscopy
Time measurement systems
Ecology
Ecology is the natural science of the relationships among living organisms, including humans, and their physical environment. Ecology considers organisms at the individual, population, community, ecosystem, and biosphere levels. Ecology overlaps with the closely related sciences of biogeography, evolutionary biology, genetics, ethology, and natural history.
Ecology is a branch of biology, and is the study of abundance, biomass, and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations; movement of materials and energy through living communities; successional development of ecosystems; cooperation, competition, and predation within and between species; and patterns of biodiversity and its effect on ecosystem processes.
Ecology has practical applications in conservation biology, wetland management, natural resource management (agroecology, agriculture, forestry, agroforestry, fisheries, mining, tourism), urban planning (urban ecology), community health, economics, basic and applied science, and human social interaction (human ecology).
The word ecology was coined in 1866 by the German scientist Ernst Haeckel. The science of ecology as we know it today began with a group of American botanists in the 1890s. Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory.
Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living (abiotic) components of their environment. Ecosystem processes, such as primary production, nutrient cycling, and niche construction, regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living (biotic) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production (food, fuel, fiber, and medicine), the regulation of climate, global biogeochemical cycles, water filtration, soil formation, erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value.
Levels, scope, and scale of organization
The scope of ecology contains a wide array of interacting levels of organization spanning micro-level (e.g., cells) to a planetary scale (e.g., biosphere) phenomena. Ecosystems, for example, contain abiotic resources and interacting life forms (i.e., individual organisms that aggregate into populations which aggregate into distinct ecological communities). Because ecosystems are dynamic and do not necessarily follow a linear successional route, changes might occur quickly or slowly over thousands of years before specific forest successional stages are brought about by biological processes. An ecosystem's area can vary greatly, from tiny to vast. A single tree is of little consequence to the classification of a forest ecosystem, but is critically relevant to organisms living in and on it. Several generations of an aphid population can exist over the lifespan of a single leaf. Each of those aphids, in turn, supports diverse bacterial communities. The nature of connections in ecological communities cannot be explained by knowing the details of each species in isolation, because the emergent pattern is neither revealed nor predicted until the ecosystem is studied as an integrated whole. Some ecological principles, however, do exhibit collective properties where the sum of the components explain the properties of the whole, such as birth rates of a population being equal to the sum of individual births over a designated time frame.
The main subdisciplines of ecology, population (or community) ecology and ecosystem ecology, exhibit a difference not only in scale but also in two contrasting paradigms in the field. The former focuses on organisms' distribution and abundance, while the latter focuses on materials and energy fluxes.
Hierarchy
The scale of ecological dynamics can operate like a closed system, such as aphids migrating on a single tree, while at the same time remaining open about broader scale influences, such as atmosphere or climate. Hence, ecologists classify ecosystems hierarchically by analyzing data collected from finer scale units, such as vegetation associations, climate, and soil types, and integrate this information to identify emergent patterns of uniform organization and processes that operate on local to regional, landscape, and chronological scales.
To structure the study of ecology into a conceptually manageable framework, the biological world is organized into a nested hierarchy, ranging in scale from genes, to cells, to tissues, to organs, to organisms, to species, to populations, to guilds, to communities, to ecosystems, to biomes, and up to the level of the biosphere. This framework forms a panarchy and exhibits non-linear behaviors; this means that "effect and cause are disproportionate, so that small changes to critical variables, such as the number of nitrogen fixers, can lead to disproportionate, perhaps irreversible, changes in the system properties."
Biodiversity
Biodiversity (an abbreviation of "biological diversity") describes the diversity of life from genes to ecosystems and spans every level of biological organization. The term has several interpretations, and there are many ways to index, measure, characterize, and represent its complex organization. Biodiversity includes species diversity, ecosystem diversity, and genetic diversity and scientists are interested in the way that this diversity affects the complex ecological processes operating at and among these respective levels. Biodiversity plays an important role in ecosystem services which by definition maintain and improve human quality of life. Conservation priorities and management techniques require different approaches and considerations to address the full ecological scope of biodiversity. Natural capital that supports populations is critical for maintaining ecosystem services and species migration (e.g., riverine fish runs and avian insect control) has been implicated as one mechanism by which those service losses are experienced. An understanding of biodiversity has practical applications for species and ecosystem-level conservation planners as they make management recommendations to consulting firms, governments, and industry.
Habitat
The habitat of a species describes the environment over which a species is known to occur and the type of community that is formed as a result. More specifically, "habitats can be defined as regions in environmental space that are composed of multiple dimensions, each representing a biotic or abiotic environmental variable; that is, any component or characteristic of the environment related directly (e.g. forage biomass and quality) or indirectly (e.g. elevation) to the use of a location by the animal." For example, a habitat might be an aquatic or terrestrial environment that can be further categorized as a montane or alpine ecosystem. Habitat shifts provide important evidence of competition in nature where one population changes relative to the habitats that most other individuals of the species occupy. For example, one population of a species of tropical lizard (Tropidurus hispidus) has a flattened body relative to the main populations that live in open savanna. The population that lives in an isolated rock outcrop hides in crevasses where its flattened body offers a selective advantage. Habitat shifts also occur in the developmental life history of amphibians, and in insects that transition from aquatic to terrestrial habitats. Biotope and habitat are sometimes used interchangeably, but the former applies to a community's environment, whereas the latter applies to a species' environment.
Niche
Definitions of the niche date back to 1917, but G. Evelyn Hutchinson made conceptual advances in 1957 by introducing a widely adopted definition: "the set of biotic and abiotic conditions in which a species is able to persist and maintain stable population sizes." The ecological niche is a central concept in the ecology of organisms and is sub-divided into the fundamental and the realized niche. The fundamental niche is the set of environmental conditions under which a species is able to persist. The realized niche is the set of environmental plus ecological conditions under which a species persists. The Hutchinsonian niche is defined more technically as a "Euclidean hyperspace whose dimensions are defined as environmental variables and whose size is a function of the number of values that the environmental values may assume for which an organism has positive fitness."
Biogeographical patterns and range distributions are explained or predicted through knowledge of a species' traits and niche requirements. Species have functional traits that are uniquely adapted to the ecological niche. A trait is a measurable property, phenotype, or characteristic of an organism that may influence its survival. Genes play an important role in the interplay of development and environmental expression of traits. Resident species evolve traits that are fitted to the selection pressures of their local environment. This tends to afford them a competitive advantage and discourages similarly adapted species from having an overlapping geographic range. The competitive exclusion principle states that two species cannot coexist indefinitely by living off the same limiting resource; one will always out-compete the other. When similarly adapted species overlap geographically, closer inspection reveals subtle ecological differences in their habitat or dietary requirements. Some models and empirical studies, however, suggest that disturbances can stabilize the co-evolution and shared niche occupancy of similar species inhabiting species-rich communities. The habitat plus the niche is called the ecotope, which is defined as the full range of environmental and biological variables affecting an entire species.
Niche construction
Organisms are subject to environmental pressures, but they also modify their habitats. The regulatory feedback between organisms and their environment can affect conditions from local (e.g., a beaver pond) to global scales, over time and even after death, such as decaying logs or silica skeleton deposits from marine organisms. The process and concept of ecosystem engineering are related to niche construction, but the former relates only to the physical modifications of the habitat whereas the latter also considers the evolutionary implications of physical changes to the environment and the feedback this causes on the process of natural selection. Ecosystem engineers are defined as: "organisms that directly or indirectly modulate the availability of resources to other species, by causing physical state changes in biotic or abiotic materials. In so doing they modify, maintain and create habitats."
The ecosystem engineering concept has stimulated a new appreciation for the influence that organisms have on the ecosystem and evolutionary process. The term "niche construction" is more often used in reference to the under-appreciated feedback mechanisms of natural selection imparting forces on the abiotic niche. An example of natural selection through ecosystem engineering occurs in the nests of social insects, including ants, bees, wasps, and termites. There is an emergent homeostasis or homeorhesis in the structure of the nest that regulates, maintains and defends the physiology of the entire colony. Termite mounds, for example, maintain a constant internal temperature through the design of air-conditioning chimneys. The structure of the nests themselves is subject to the forces of natural selection. Moreover, a nest can survive over successive generations, so that progeny inherit both genetic material and a legacy niche that was constructed before their time.
Biome
Biomes are larger units of organization that categorize regions of the Earth's ecosystems, mainly according to the structure and composition of vegetation. There are different methods to define the continental boundaries of biomes dominated by different functional types of vegetative communities that are limited in distribution by climate, precipitation, weather, and other environmental variables. Biomes include tropical rainforest, temperate broadleaf and mixed forest, temperate deciduous forest, taiga, tundra, hot desert, and polar desert. Other researchers have recently categorized other biomes, such as the human and oceanic microbiomes. To a microbe, the human body is a habitat and a landscape. Microbiomes were discovered largely through advances in molecular genetics, which have revealed a hidden richness of microbial diversity on the planet. The oceanic microbiome plays a significant role in the ecological biogeochemistry of the planet's oceans.
Biosphere
The largest scale of ecological organization is the biosphere: the total sum of ecosystems on the planet. Ecological relationships regulate the flux of energy, nutrients, and climate all the way up to the planetary scale. For example, the dynamic history of the planetary atmosphere's CO2 and O2 composition has been affected by the biogenic flux of gases coming from respiration and photosynthesis, with levels fluctuating over time in relation to the ecology and evolution of plants and animals. Ecological theory has also been used to explain self-emergent regulatory phenomena at the planetary scale: for example, the Gaia hypothesis is an example of holism applied in ecological theory. The Gaia hypothesis states that there is an emergent feedback loop generated by the metabolism of living organisms that maintains the core temperature of the Earth and atmospheric conditions within a narrow self-regulating range of tolerance.
Population ecology
Population ecology studies the dynamics of species populations and how these populations interact with the wider environment. A population consists of individuals of the same species that live, interact, and migrate through the same niche and habitat.
A primary law of population ecology is the Malthusian growth model which states, "a population will grow (or decline) exponentially as long as the environment experienced by all individuals in the population remains constant." Simplified population models usually start with four variables: death, birth, immigration, and emigration.
An example of an introductory population model describes a closed population, such as on an island, where immigration and emigration do not take place. Hypotheses are evaluated with reference to a null hypothesis which states that random processes create the observed data. In these island models, the rate of population change is described by:
dN/dt = bN − dN = (b − d)N = rN,
where N is the total number of individuals in the population, b and d are the per capita rates of birth and death respectively, and r is the per capita rate of population change.
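A minimal sketch of this island model, using the closed-form solution N(t) = N0·exp(r·t) with r = b − d; the rates and time points below are illustrative assumptions.

```python
import numpy as np

def exponential_growth(n0, b, d, t):
    """Closed-population Malthusian model dN/dt = (b - d) * N, i.e. N(t) = N0 * exp(r * t).

    n0 : initial population size; b, d : per capita birth and death rates (1/time)."""
    r = b - d
    return n0 * np.exp(r * np.asarray(t, dtype=float))

# Illustrative (assumed) rates: b = 0.55 and d = 0.50 per year, starting from 100 individuals.
years = np.arange(0, 51, 10)
print(exponential_growth(100, 0.55, 0.50, years))  # grows about 5% per year
```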
Using these modeling techniques, Malthus' population principle of growth was later transformed into a model known as the logistic equation by Pierre Verhulst:
dN(t)/dt = rN(1 − αN),
where N(t) is the number of individuals measured as biomass density as a function of time t, r is the maximum per-capita rate of change commonly known as the intrinsic rate of growth, and α is the crowding coefficient, which represents the reduction in population growth rate per individual added. The formula states that the rate of change in population size will grow to approach equilibrium, where, when the rates of increase and crowding are balanced, dN(t)/dt = 0. A common, analogous model fixes the equilibrium as K, which is known as the "carrying capacity."
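The carrying-capacity form of the logistic model, dN/dt = rN(1 − N/K), has a closed-form solution, used in the sketch below; the parameter values are illustrative assumptions.

```python
import numpy as np

def logistic(n0, r, K, t):
    """Closed-form solution of the logistic equation dN/dt = r * N * (1 - N / K),
    where K is the carrying capacity and r the intrinsic rate of growth."""
    t = np.asarray(t, dtype=float)
    return K / (1.0 + (K - n0) / n0 * np.exp(-r * t))

# Illustrative (assumed) parameters: r = 0.3 per year, K = 1000, N0 = 10.
t = np.linspace(0.0, 50.0, 6)
print(logistic(10.0, 0.3, 1000.0, t))  # rises sigmoidally and levels off near K
```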
Population ecology builds upon these introductory models to further understand demographic processes in real study populations. Commonly used types of data include life history, fecundity, and survivorship, and these are analyzed using mathematical techniques such as matrix algebra. The information is used for managing wildlife stocks and setting harvest quotas. In cases where basic models are insufficient, ecologists may adopt different kinds of statistical methods, such as the Akaike information criterion, or use models that can become mathematically complex as "several competing hypotheses are simultaneously confronted with the data."
Metapopulations and migration
The concept of metapopulations was defined in 1969 as "a population of populations which go extinct locally and recolonize". Metapopulation ecology is another statistical approach that is often used in conservation research. Metapopulation models simplify the landscape into patches of varying levels of quality, and metapopulations are linked by the migratory behaviours of organisms. Animal migration is set apart from other kinds of movement because it involves the seasonal departure and return of individuals from a habitat. Migration is also a population-level phenomenon, as with the migration routes followed by plants as they occupied northern post-glacial environments. Plant ecologists use pollen records that accumulate and stratify in wetlands to reconstruct the timing of plant migration and dispersal relative to historic and contemporary climates. These migration routes involved an expansion of the range as plant populations expanded from one area to another. There is a larger taxonomy of movement, such as commuting, foraging, territorial behavior, stasis, and ranging. Dispersal is usually distinguished from migration because it involves the one-way permanent movement of individuals from their birth population into another population.
In metapopulation terminology, migrating individuals are classed as emigrants (when they leave a region) or immigrants (when they enter a region), and sites are classed either as sources or sinks. A site is a generic term that refers to places where ecologists sample populations, such as ponds or defined sampling areas in a forest. Source patches are productive sites that generate a seasonal supply of juveniles that migrate to other patch locations. Sink patches are unproductive sites that only receive migrants; the population at the site will disappear unless rescued by an adjacent source patch or environmental conditions become more favorable. Metapopulation models examine patch dynamics over time to answer potential questions about spatial and demographic ecology. The ecology of metapopulations is a dynamic process of extinction and colonization. Small patches of lower quality (i.e., sinks) are maintained or rescued by a seasonal influx of new immigrants. A dynamic metapopulation structure evolves from year to year, where some patches are sinks in dry years and are sources when conditions are more favorable. Ecologists use a mixture of computer models and field studies to explain metapopulation structure.
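One common formalization of this extinction-colonization balance is the classic Levins model, in which the fraction of occupied patches p obeys dp/dt = c·p·(1 − p) − e·p. The sketch below integrates it numerically; the colonization and extinction rates are illustrative assumptions, and the model is offered as a textbook example rather than as any of the specific metapopulation models discussed above.

```python
def levins_occupancy(p0, c, e, dt=0.01, steps=10000):
    """Forward-Euler integration of the Levins metapopulation model
        dp/dt = c * p * (1 - p) - e * p,
    where p is the fraction of occupied patches, c the colonization rate and
    e the local-extinction rate. Parameter values are illustrative."""
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1.0 - p) - e * p)
    return p

# Starting from 5% occupancy, the occupied fraction approaches 1 - e/c = 0.75.
print(levins_occupancy(0.05, c=0.4, e=0.1))
```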
Community ecology
Community ecology is the study of the interactions among a collection of species that inhabit the same geographic area. Community ecologists study the determinants of patterns and processes for two or more interacting species. Research in community ecology might measure species diversity in grasslands in relation to soil fertility. It might also include the analysis of predator-prey dynamics, competition among similar plant species, or mutualistic interactions between crabs and corals.
Ecosystem ecology
Ecosystems may be habitats within biomes that form an integrated whole and a dynamically responsive system having both physical and biological complexes. Ecosystem ecology is the science of determining the fluxes of materials (e.g. carbon, phosphorus) between different pools (e.g., tree biomass, soil organic material). Ecosystem ecologists attempt to determine the underlying causes of these fluxes. Research in ecosystem ecology might measure primary production (g C/m^2) in a wetland in relation to decomposition and consumption rates (g C/m^2/y). This requires an understanding of the community connections between plants (i.e., primary producers) and the decomposers (e.g., fungi and bacteria).
The underlying concept of an ecosystem can be traced back to 1864 in the published work of George Perkins Marsh ("Man and Nature"). Within an ecosystem, organisms are linked to the physical and biological components of their environment to which they are adapted. Ecosystems are complex adaptive systems where the interaction of life processes forms self-organizing patterns across different scales of time and space. Ecosystems are broadly categorized as terrestrial, freshwater, atmospheric, or marine. Differences stem from the nature of the unique physical environments that shape the biodiversity within each. A more recent addition to ecosystem ecology is the study of technoecosystems, which are affected by or primarily the result of human activity.
Food webs
A food web is the archetypal ecological network. Plants capture solar energy and use it to synthesize simple sugars during photosynthesis. As plants grow, they accumulate nutrients and are eaten by grazing herbivores, and the energy is transferred through a chain of organisms by consumption. A simplified linear feeding pathway that moves from a basal trophic species to a top consumer is called a food chain. The interlinked food chains of an ecological community create a complex food web. Food webs are a type of concept map that is used to illustrate and study pathways of energy and material flows.
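Because a food web is in essence a directed graph of feeding links, it can be represented compactly as an adjacency structure, as in the minimal sketch below; the species and links are generic illustrative examples rather than data from a real community.

```python
# A food web as a directed graph: each consumer maps to the set of things it eats.
# Species and feeding links here are generic illustrative examples.
food_web = {
    "grass":  set(),               # basal producer
    "algae":  set(),
    "insect": {"grass"},
    "snail":  {"algae"},
    "frog":   {"insect", "snail"},
    "hawk":   {"frog", "insect"},  # feeds on more than one trophic level
}

def food_chains(web, top, chain=None):
    """Enumerate linear feeding pathways (food chains) from a top consumer down to producers."""
    chain = (chain or []) + [top]
    prey = web[top]
    if not prey:
        return [chain]
    chains = []
    for p in prey:
        chains.extend(food_chains(web, p, chain))
    return chains

for c in food_chains(food_web, "hawk"):
    print(" -> ".join(c))
```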
Empirical measurements are generally restricted to a specific habitat, such as a cave or a pond, and principles gleaned from small-scale studies are extrapolated to larger systems. Feeding relations require extensive investigation, for example into the gut contents of organisms, which can be difficult to decipher; alternatively, stable isotopes can be used to trace the flow of nutrients and energy through a food web. Despite these limitations, food webs remain a valuable tool in understanding community ecosystems.
Food webs illustrate important principles of ecology: some species have many weak feeding links (e.g., omnivores) while some are more specialized with fewer stronger feeding links (e.g., primary predators). Such linkages explain how ecological communities remain stable over time and eventually can illustrate a "complete" web of life.
The disruption of food webs may have a dramatic impact on the ecology of individual species or whole ecosystems. For instance, the replacement of an ant species by another (invasive) ant species has been shown to affect how elephants reduce tree cover and thus the predation of lions on zebras.
Trophic levels
A trophic level (from Greek τροφή, trophē, meaning "food" or "feeding") is "a group of organisms acquiring a considerable majority of its energy from the lower adjacent level (according to ecological pyramids) nearer the abiotic source." Links in food webs primarily connect feeding relations or trophism among species. Biodiversity within ecosystems can be organized into trophic pyramids, in which the vertical dimension represents feeding relations that become further removed from the base of the food chain up toward top predators, and the horizontal dimension represents the abundance or biomass at each level. When the relative abundance or biomass of each species is sorted into its respective trophic level, they naturally sort into a 'pyramid of numbers'.
Species are broadly categorized as autotrophs (or primary producers), heterotrophs (or consumers), and detritivores (or decomposers). Autotrophs are organisms that produce their own food (production is greater than respiration) by photosynthesis or chemosynthesis. Heterotrophs are organisms that must feed on others for nourishment and energy (respiration exceeds production). Heterotrophs can be further sub-divided into different functional groups, including primary consumers (strict herbivores), secondary consumers (carnivorous predators that feed exclusively on herbivores), and tertiary consumers (predators that feed on a mix of herbivores and predators). Omnivores do not fit neatly into a functional category because they eat both plant and animal tissues. It has been suggested that omnivores have a greater functional influence as predators because, compared to herbivores, they are relatively inefficient at grazing.
Trophic levels are part of the holistic or complex systems view of ecosystems. Each trophic level contains unrelated species that are grouped together because they share common ecological functions, giving a macroscopic view of the system. While the notion of trophic levels provides insight into energy flow and top-down control within food webs, it is troubled by the prevalence of omnivory in real ecosystems. This has led some ecologists to "reiterate that the notion that species clearly aggregate into discrete, homogeneous trophic levels is fiction." Nonetheless, recent studies have shown that real trophic levels do exist, but "above the herbivore trophic level, food webs are better characterized as a tangled web of omnivores."
Keystone species
A keystone species is a species that is connected to a disproportionately large number of other species in the food-web. Keystone species have lower levels of biomass in the trophic pyramid relative to the importance of their role. The many connections that a keystone species holds means that it maintains the organization and structure of entire communities. The loss of a keystone species results in a range of dramatic cascading effects (termed trophic cascades) that alters trophic dynamics, other food web connections, and can cause the extinction of other species. The term keystone species was coined by Robert Paine in 1969 and is a reference to the keystone architectural feature as the removal of a keystone species can result in a community collapse just as the removal of the keystone in an arch can result in the arch's loss of stability.
Sea otters (Enhydra lutris) are commonly cited as an example of a keystone species because they limit the density of sea urchins that feed on kelp. If sea otters are removed from the system, the urchins graze until the kelp beds disappear, and this has a dramatic effect on community structure. Hunting of sea otters, for example, is thought to have led indirectly to the extinction of the Steller's sea cow (Hydrodamalis gigas). While the keystone species concept has been used extensively as a conservation tool, it has been criticized for being poorly defined from an operational stance. It is difficult to experimentally determine what species may hold a keystone role in each ecosystem. Furthermore, food web theory suggests that keystone species may not be common, so it is unclear how generally the keystone species model can be applied.
Complexity
Complexity is understood as a large computational effort needed to piece together numerous interacting parts exceeding the iterative memory capacity of the human mind. Global patterns of biological diversity are complex. This biocomplexity stems from the interplay among ecological processes that operate and influence patterns at different scales that grade into each other, such as transitional areas or ecotones spanning landscapes. Complexity stems from the interplay among levels of biological organization as energy and matter are integrated into larger units that superimpose onto the smaller parts. "What were wholes on one level become parts on a higher one." Small-scale patterns do not necessarily explain large-scale phenomena, an idea otherwise captured in the maxim (attributed to Aristotle) that the whole is greater than the sum of its parts.
"Complexity in ecology is of at least six distinct types: spatial, temporal, structural, process, behavioral, and geometric." From these principles, ecologists have identified emergent and self-organizing phenomena that operate at different environmental scales of influence, ranging from molecular to planetary, and these require different explanations at each integrative level. Ecological complexity relates to the dynamic resilience of ecosystems that transition to multiple shifting steady-states directed by random fluctuations of history. Long-term ecological studies provide important track records to better understand the complexity and resilience of ecosystems over longer temporal and broader spatial scales. These studies are managed by the International Long Term Ecological Network (LTER). The longest experiment in existence is the Park Grass Experiment, which was initiated in 1856. Another example is the Hubbard Brook study, which has been in operation since 1960.
Holism
Holism remains a critical part of the theoretical foundation in contemporary ecological studies. Holism addresses the biological organization of life that self-organizes into layers of emergent whole systems that function according to non-reducible properties. This means that higher-order patterns of a whole functional system, such as an ecosystem, cannot be predicted or understood by a simple summation of the parts. "New properties emerge because the components interact, not because the basic nature of the components is changed."
Ecological studies are necessarily holistic as opposed to reductionistic. Holism has three scientific meanings or uses that identify with ecology: 1) the mechanistic complexity of ecosystems, 2) the practical description of patterns in quantitative reductionist terms where correlations may be identified but nothing is understood about the causal relations without reference to the whole system, which leads to 3) a metaphysical hierarchy whereby the causal relations of larger systems are understood without reference to the smaller parts. Scientific holism differs from mysticism that has appropriated the same term. An example of metaphysical holism is identified in the trend of increased exterior thickness in shells of different species. The reason for a thickness increase can be understood through reference to principles of natural selection via predation without the need to reference or understand the biomolecular properties of the exterior shells.
Relation to evolution
Ecology and evolutionary biology are considered sister disciplines of the life sciences. Natural selection, life history, development, adaptation, populations, and inheritance are examples of concepts that thread equally into ecological and evolutionary theory. Morphological, behavioural, and genetic traits, for example, can be mapped onto evolutionary trees to study the historical development of a species in relation to their functions and roles in different ecological circumstances. In this framework, the analytical tools of ecologists and evolutionists overlap as they organize, classify, and investigate life through common systematic principles, such as phylogenetics or the Linnaean system of taxonomy. The two disciplines often appear together, such as in the title of the journal Trends in Ecology and Evolution. There is no sharp boundary separating ecology from evolution, and they differ more in their areas of applied focus. Both disciplines discover and explain emergent and unique properties and processes operating across different spatial or temporal scales of organization. While the boundary between ecology and evolution is not always clear, ecologists study the abiotic and biotic factors that influence evolutionary processes, and evolution can be rapid, occurring on ecological timescales as short as one generation.
Behavioural ecology
All organisms can exhibit behaviours. Even plants express complex behaviour, including memory and communication. Behavioural ecology is the study of an organism's behaviour in its environment and its ecological and evolutionary implications. Ethology is the study of observable movement or behaviour in animals. This could include investigations of motile sperm of plants, mobile phytoplankton, zooplankton swimming toward the female egg, the cultivation of fungi by weevils, the mating dance of a salamander, or social gatherings of amoeba.
Adaptation is the central unifying concept in behavioural ecology. Behaviours can be recorded as traits and inherited in much the same way that eye and hair colour can. Behaviours can evolve by means of natural selection as adaptive traits conferring functional utilities that increase reproductive fitness.
Predator-prey interactions are an introductory concept into food-web studies as well as behavioural ecology. Prey species can exhibit different kinds of behavioural adaptations to predators, such as avoiding, fleeing, or defending themselves. Many prey species are faced with multiple predators that differ in the degree of danger posed. To be adapted to their environment and face predatory threats, organisms must balance their energy budgets as they invest in different aspects of their life history, such as growth, feeding, mating, socializing, or modifying their habitat. Hypotheses posited in behavioural ecology are generally based on adaptive principles of conservation, optimization, or efficiency. For example, "[t]he threat-sensitive predator avoidance hypothesis predicts that prey should assess the degree of threat posed by different predators and match their behaviour according to current levels of risk" or "[t]he optimal flight initiation distance occurs where expected postencounter fitness is maximized, which depends on the prey's initial fitness, benefits obtainable by not fleeing, energetic escape costs, and expected fitness loss due to predation risk."
Elaborate sexual displays and posturing are encountered in the behavioural ecology of animals. The birds-of-paradise, for example, sing and display elaborate ornaments during courtship. These displays serve a dual purpose of signalling healthy or well-adapted individuals and desirable genes. The displays are driven by sexual selection as an advertisement of quality of traits among suitors.
Cognitive ecology
Cognitive ecology integrates theory and observations from evolutionary ecology and neurobiology, primarily cognitive science, in order to understand the effect that animal interaction with their habitat has on their cognitive systems and how those systems restrict behavior within an ecological and evolutionary framework. "Until recently, however, cognitive scientists have not paid sufficient attention to the fundamental fact that cognitive traits evolved under particular natural settings. With consideration of the selection pressure on cognition, cognitive ecology can contribute intellectual coherence to the multidisciplinary study of cognition." As a study involving the 'coupling' or interactions between organism and environment, cognitive ecology is closely related to enactivism, a field based upon the view that "...we must see the organism and environment as bound together in reciprocal specification and selection...".
Social ecology
Social-ecological behaviours are notable in the social insects, slime moulds, social spiders, human society, and naked mole-rats, where eusociality has evolved. Social behaviours include reciprocally beneficial behaviours among kin and nest mates and evolve from kin and group selection. Kin selection explains altruism through genetic relationships, whereby an altruistic behaviour leading to death is rewarded by the survival of genetic copies distributed among surviving relatives. The social insects, including ants, bees, and wasps, are most famously studied for this type of relationship because the male drones are clones that share the same genetic make-up as every other male in the colony. In contrast, group selectionists find examples of altruism among non-genetic relatives and explain this through selection acting on the group, whereby it becomes selectively advantageous for groups if their members express altruistic behaviours to one another. Groups with predominantly altruistic members survive better than groups with predominantly selfish members.
Coevolution
Ecological interactions can be classified broadly into a host and an associate relationship. A host is any entity that harbours another that is called the associate. Relationships between species that are mutually or reciprocally beneficial are called mutualisms. Examples of mutualism include fungus-growing ants employing agricultural symbiosis, bacteria living in the guts of insects and other organisms, the fig wasp and yucca moth pollination complex, lichens with fungi and photosynthetic algae, and corals with photosynthetic algae. If there is a physical connection between host and associate, the relationship is called symbiosis. Approximately 60% of all plants, for example, have a symbiotic relationship with arbuscular mycorrhizal fungi living in their roots forming an exchange network of carbohydrates for mineral nutrients.
Indirect mutualisms occur where the organisms live apart. For example, trees living in the equatorial regions of the planet supply oxygen into the atmosphere that sustains species living in distant polar regions of the planet. This relationship is called commensalism because many others receive the benefits of clean air at no cost or harm to the trees supplying the oxygen. If the associate benefits while the host suffers, the relationship is called parasitism. Although parasites impose a cost to their host (e.g., via damage to their reproductive organs or propagules, denying the services of a beneficial partner), their net effect on host fitness is not necessarily negative and, thus, becomes difficult to forecast. Co-evolution is also driven by competition among species or among members of the same species under the banner of reciprocal antagonism, such as grasses competing for growth space. The Red Queen Hypothesis, for example, posits that parasites track down and specialize on the locally common genetic defense systems of their hosts, which drives the evolution of sexual reproduction to diversify the genetic constituency of populations responding to the antagonistic pressure.
Biogeography
Biogeography (an amalgamation of biology and geography) is the comparative study of the geographic distribution of organisms and the corresponding evolution of their traits in space and time. The Journal of Biogeography was established in 1974. Biogeography and ecology share many of their disciplinary roots. For example, the theory of island biogeography, published by Robert MacArthur and Edward O. Wilson in 1967, is considered one of the fundamentals of ecological theory.
Biogeography has a long history in the natural sciences concerning the spatial distribution of plants and animals. Ecology and evolution provide the explanatory context for biogeographical studies. Biogeographical patterns result from ecological processes that influence range distributions, such as migration and dispersal, and from historical processes that split populations or species into different areas. The biogeographic processes that result in the natural splitting of species explain much of the modern distribution of the Earth's biota. The splitting of lineages in a species is called vicariance biogeography, and it is a sub-discipline of biogeography. There are also practical applications in the field of biogeography concerning ecological systems and processes. For example, the range and distribution of biodiversity and invasive species responding to climate change is a serious concern and active area of research in the context of global warming.
r/K selection theory
A population ecology concept is r/K selection theory, one of the first predictive models in ecology used to explain life-history evolution. The premise behind the r/K selection model is that natural selection pressures change according to population density. For example, when an island is first colonized, density of individuals is low. The initial increase in population size is not limited by competition, leaving an abundance of available resources for rapid population growth. These early phases of population growth experience density-independent forces of natural selection, which is called r-selection. As the population becomes more crowded, it approaches the island's carrying capacity, thus forcing individuals to compete more heavily for fewer available resources. Under crowded conditions, the population experiences density-dependent forces of natural selection, called K-selection.
In the r/K-selection model, the first variable r is the intrinsic rate of natural increase in population size and the second variable K is the carrying capacity of a population. Different species evolve different life-history strategies spanning a continuum between these two selective forces. An r-selected species is one that has high birth rates, low levels of parental investment, and high rates of mortality before individuals reach maturity. Evolution favours high rates of fecundity in r-selected species. Many kinds of insects and invasive species exhibit r-selected characteristics. In contrast, a K-selected species has low rates of fecundity, high levels of parental investment in the young, and low rates of mortality as individuals mature. Humans and elephants are examples of species exhibiting K-selected characteristics, including longevity and efficiency in the conversion of more resources into fewer offspring.
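The roles of r and K can be illustrated with the logistic growth equation dN/dt = rN(1 - N/K), from which the r/K terminology derives; the sketch below uses arbitrary parameter values purely for illustration.

```python
# Logistic growth, the model from which the r/K terminology derives:
# dN/dt = r * N * (1 - N/K), where r is the intrinsic rate of increase
# and K is the carrying capacity. Parameter values are illustrative.

def logistic_growth(r=0.5, K=1000.0, N0=10.0, dt=0.1, steps=400):
    N = N0
    sizes = [N]
    for _ in range(steps):
        N += r * N * (1 - N / K) * dt
        sizes.append(N)
    return sizes

trajectory = logistic_growth()
# Growth is nearly exponential while N << K (density-independent, "r-selected"
# conditions) and slows as N approaches K (density-dependent, "K-selected" conditions).
print(f"population after simulation: {trajectory[-1]:.0f}")
```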
Molecular ecology
The important relationship between ecology and genetic inheritance predates modern techniques for molecular analysis. Molecular ecological research became more feasible with the development of rapid and accessible genetic technologies, such as the polymerase chain reaction (PCR). The rise of molecular technologies and the influx of research questions into this new ecological field resulted in the publication Molecular Ecology in 1992. Molecular ecology uses various analytical techniques to study genes in an evolutionary and ecological context. In 1994, John Avise also played a leading role in this area of science with the publication of his book, Molecular Markers, Natural History and Evolution. Newer technologies opened a wave of genetic analysis into organisms once difficult to study from an ecological or evolutionary standpoint, such as bacteria, fungi, and nematodes. Molecular ecology engendered a new research paradigm for investigating ecological questions considered otherwise intractable. Molecular investigations revealed previously obscured details in the tiny intricacies of nature and improved resolution into probing questions about behavioural and biogeographical ecology. For example, molecular ecology revealed promiscuous sexual behaviour and multiple male partners in tree swallows previously thought to be socially monogamous. In a biogeographical context, the marriage between genetics, ecology, and evolution resulted in a new sub-discipline called phylogeography.
Human ecology
Ecology is as much a biological science as it is a human science. Human ecology is an interdisciplinary investigation into the ecology of our species. "Human ecology may be defined: (1) from a bioecological standpoint as the study of man as the ecological dominant in plant and animal communities and systems; (2) from a bioecological standpoint as simply another animal affecting and being affected by his physical environment; and (3) as a human being, somehow different from animal life in general, interacting with physical and modified environments in a distinctive and creative way. A truly interdisciplinary human ecology will most likely address itself to all three." The term was formally introduced in 1921, but many sociologists, geographers, psychologists, and scholars in other disciplines had been interested in human relations to natural systems centuries prior, especially in the late 19th century.
The ecological complexities human beings are facing through the technological transformation of the planetary biome have brought on the Anthropocene. This unique set of circumstances has generated the need for a new unifying science called coupled human and natural systems, which builds upon, but moves beyond, the field of human ecology. Ecosystems tie into human societies through the critical and all-encompassing life-supporting functions they sustain. In recognition of these functions, and of the inability of traditional economic valuation methods to see the value in ecosystems, there has been a surge of interest in social-natural capital, which provides the means to put a value on the stock and use of information and materials stemming from ecosystem goods and services. Ecosystems produce, regulate, maintain, and supply services that are critically necessary and beneficial to human health (cognitive and physiological) and economies; they even provide an information or reference function as a living library, giving opportunities for science and cognitive development in children engaged in the complexity of the natural world. Ecosystems relate importantly to human ecology because they are the ultimate base foundation of global economics: every commodity, and the capacity for exchange, ultimately stems from the ecosystems on Earth.
Restoration ecology
Ecology is an employed science of restoration, repairing disturbed sites through human intervention, in natural resource management, and in environmental impact assessments. Edward O. Wilson predicted in 1992 that the 21st century "will be the era of restoration in ecology". Ecological science has boomed in the industrial investment of restoring ecosystems and their processes in abandoned sites after disturbance. Natural resource managers, in forestry for example, employ ecologists to develop, adapt, and implement ecosystem-based methods into the planning, operation, and restoration phases of land-use. Another example of conservation is seen on the east coast of the United States in Boston, MA. The city of Boston implemented its Wetland Ordinance, improving the stability of its wetland environments by implementing soil amendments that improve groundwater storage and flow and by trimming or removing vegetation that could harm water quality. Ecological science is used in the methods of sustainable harvesting, disease and fire outbreak management, fisheries stock management, the integration of land-use with protected areas and communities, and conservation in complex geo-political landscapes.
Relation to the environment
The environment of ecosystems includes both physical parameters and biotic attributes. It is dynamically interlinked and contains resources for organisms at any time throughout their life cycle. Like ecology, the term environment has different conceptual meanings and overlaps with the concept of nature. Environment "includes the physical world, the social world of human relations and the built world of human creation." The physical environment is external to the level of biological organization under investigation, including abiotic factors such as temperature, radiation, light, chemistry, climate and geology. The biotic environment includes genes, cells, organisms, members of the same species (conspecifics) and other species that share a habitat.
The distinction between external and internal environments, however, is an abstraction parsing life and environment into units or facts that are inseparable in reality. There is an interpenetration of cause and effect between the environment and life. The laws of thermodynamics, for example, apply to ecology by means of its physical state. With an understanding of metabolic and thermodynamic principles, a complete accounting of energy and material flow can be traced through an ecosystem. In this way, the environmental and ecological relations are studied through reference to conceptually manageable and isolated material parts. Once the effective environmental components are understood through reference to their causes, however, they conceptually link back together as an integrated whole, or holocoenotic system as it was once called. This is known as the dialectical approach to ecology. The dialectical approach examines the parts but integrates the organism and the environment into a dynamic whole (or umwelt). Change in one ecological or environmental factor can concurrently affect the dynamic state of an entire ecosystem.
Disturbance and resilience
A disturbance is any process that changes or removes biomass from a community, such as a fire, flood, drought, or predation. Disturbances are both the cause and product of natural fluctuations within an ecological community. Biodiversity can protect ecosystems from disturbances.
The effect of a disturbance is often hard to predict, but there are numerous examples in which a single species can massively disturb an ecosystem. For example, a single-celled protozoan has been able to kill up to 100% of sea urchins in some coral reefs in the Red Sea and Western Indian Ocean. Sea urchins enable complex reef ecosystems to thrive by eating algae that would otherwise inhibit coral growth. Similarly, invasive species can wreak havoc on ecosystems. For instance, invasive Burmese pythons have caused a 98% decline of small mammals in the Everglades.
Metabolism and the early atmosphere
The Earth was formed approximately 4.5 billion years ago. As it cooled and a crust and oceans formed, its atmosphere transformed from being dominated by hydrogen to one composed mostly of methane and ammonia. Over the next billion years, the metabolic activity of life transformed the atmosphere into a mixture of carbon dioxide, nitrogen, and water vapor. These gases changed the way that light from the sun hit the Earth's surface and greenhouse effects trapped heat. There were untapped sources of free energy within the mixture of reducing and oxidizing gases that set the stage for primitive ecosystems to evolve and, in turn, the atmosphere also evolved.
Throughout history, the Earth's atmosphere and biogeochemical cycles have been in a dynamic equilibrium with planetary ecosystems. The history is characterized by periods of significant transformation followed by millions of years of stability. The evolution of the earliest organisms, likely anaerobic methanogen microbes, started the process by converting atmospheric hydrogen into methane (4H2 + CO2 → CH4 + 2H2O). Anoxygenic photosynthesis reduced hydrogen concentrations and increased atmospheric methane, by converting hydrogen sulfide into water or other sulfur compounds (for example, 2H2S + CO2 + hv → CH2O + H2O + 2S). Early forms of fermentation also increased levels of atmospheric methane. The transition to an oxygen-dominant atmosphere (the Great Oxidation) did not begin until approximately 2.4–2.3 billion years ago, but photosynthetic processes started 0.3 to 1 billion years prior.
Radiation: heat, temperature and light
The biology of life operates within a certain range of temperatures. Heat is a form of energy that regulates temperature. Heat affects growth rates, activity, behaviour, and primary production. Temperature is largely dependent on the incidence of solar radiation. The latitudinal and longitudinal spatial variation of temperature greatly affects climates and consequently the distribution of biodiversity and levels of primary production in different ecosystems or biomes across the planet. Heat and temperature relate importantly to metabolic activity. Poikilotherms, for example, have a body temperature that is largely regulated and dependent on the temperature of the external environment. In contrast, homeotherms regulate their internal body temperature by expending metabolic energy.
There is a relationship between light, primary production, and ecological energy budgets. Sunlight is the primary input of energy into the planet's ecosystems. Light is composed of electromagnetic energy of different wavelengths. Radiant energy from the sun generates heat, provides photons of light measured as active energy in the chemical reactions of life, and also acts as a catalyst for genetic mutation. Plants, algae, and some bacteria absorb light and assimilate the energy through photosynthesis. Organisms capable of assimilating energy by photosynthesis or through inorganic fixation of H2S are autotrophs. Autotrophs—responsible for primary production—assimilate light energy which becomes metabolically stored as potential energy in the form of biochemical enthalpic bonds.
Physical environments
Water
Diffusion of carbon dioxide and oxygen is approximately 10,000 times slower in water than in air. When soils are flooded, they quickly lose oxygen, becoming hypoxic (an environment with O2 concentration below 2 mg/liter) and eventually completely anoxic, where anaerobic bacteria thrive among the roots. Water also influences the intensity and spectral composition of light as it reflects off the water surface and submerged particles. Aquatic plants exhibit a wide variety of morphological and physiological adaptations that allow them to survive, compete, and diversify in these environments. For example, their roots and stems contain large air spaces (aerenchyma) that regulate the efficient transportation of gases (for example, CO2 and O2) used in respiration and photosynthesis. Salt water plants (halophytes) have additional specialized adaptations, such as the development of special organs for shedding salt and osmoregulating their internal salt (NaCl) concentrations, to live in estuarine, brackish, or oceanic environments. Anaerobic soil microorganisms in aquatic environments use nitrate, manganese ions, ferric ions, sulfate, carbon dioxide, and some organic compounds; other microorganisms are facultative anaerobes and use oxygen during respiration when the soil becomes drier. The activity of soil microorganisms and the chemistry of the water reduce the oxidation-reduction potentials of the water. Carbon dioxide, for example, is reduced to methane (CH4) by methanogenic bacteria. The physiology of fish is also specially adapted to compensate for environmental salt levels through osmoregulation. Their gills form electrochemical gradients that mediate salt excretion in salt water and uptake in fresh water.
Gravity
The shape and energy of the land are significantly affected by gravitational forces. On a large scale, the distribution of gravitational forces on the earth is uneven and influences the shape and movement of tectonic plates as well as influencing geomorphic processes such as orogeny and erosion. These forces govern many of the geophysical properties and distributions of ecological biomes across the Earth. On the organismal scale, gravitational forces provide directional cues for plant and fungal growth (gravitropism), orientation cues for animal migrations, and influence the biomechanics and size of animals. Ecological traits, such as allocation of biomass in trees during growth are subject to mechanical failure as gravitational forces influence the position and structure of branches and leaves. The cardiovascular systems of animals are functionally adapted to overcome the pressure and gravitational forces that change according to the features of organisms (e.g., height, size, shape), their behaviour (e.g., diving, running, flying), and the habitat occupied (e.g., water, hot deserts, cold tundra).
Pressure
Climatic and osmotic pressures place physiological constraints on organisms, especially those that fly and respire at high altitudes, or dive to deep ocean depths. These constraints influence vertical limits of ecosystems in the biosphere, as organisms are physiologically sensitive and adapted to atmospheric and osmotic water pressure differences. For example, oxygen levels decrease with decreasing pressure and are a limiting factor for life at higher altitudes. Water transportation by plants is another important ecophysiological process affected by osmotic pressure gradients. Water pressure in the depths of oceans requires that organisms adapt to these conditions. For example, diving animals such as whales, dolphins, and seals are specially adapted to deal with changes in sound due to water pressure differences. Differences between hagfish species provide another example of adaptation to deep-sea pressure through specialized protein adaptations.
Wind and turbulence
Turbulent forces in air and water affect the environment and ecosystem distribution, form, and dynamics. On a planetary scale, ecosystems are affected by circulation patterns in the global trade winds. Wind power and the turbulent forces it creates can influence heat, nutrient, and biochemical profiles of ecosystems. For example, wind running over the surface of a lake creates turbulence, mixing the water column and influencing the environmental profile to create thermally layered zones, affecting how fish, algae, and other parts of the aquatic ecosystem are structured. Wind speed and turbulence also influence evapotranspiration rates and energy budgets in plants and animals. Wind speed, temperature and moisture content can vary as winds travel across different land features and elevations. For example, the westerlies come into contact with the coastal and interior mountains of western North America to produce a rain shadow on the leeward side of the mountain. The air expands and moisture condenses as the winds increase in elevation; this is called orographic lift and can cause precipitation. This environmental process produces spatial divisions in biodiversity, as species adapted to wetter conditions are range-restricted to the coastal mountain valleys and unable to migrate across the xeric ecosystems (e.g., of the Columbia Basin in western North America) to intermix with sister lineages that are segregated to the interior mountain systems.
Fire
Plants convert carbon dioxide into biomass and emit oxygen into the atmosphere. By approximately 350 million years ago (the end of the Devonian period), photosynthesis had brought the concentration of atmospheric oxygen above 17%, which allowed combustion to occur. Fire releases CO2 and converts fuel into ash and tar. Fire is a significant ecological parameter that raises many issues pertaining to its control and suppression. While the issue of fire in relation to ecology and plants has been recognized for a long time, Charles Cooper brought attention to the issue of forest fires in relation to the ecology of forest fire suppression and management in the 1960s.
Native North Americans were among the first to influence fire regimes by controlling their spread near their homes or by lighting fires to stimulate the production of herbaceous foods and basketry materials. Fire creates a heterogeneous ecosystem age and canopy structure, and the altered soil nutrient supply and cleared canopy structure opens new ecological niches for seedling establishment. Most ecosystems are adapted to natural fire cycles. Plants, for example, are equipped with a variety of adaptations to deal with forest fires. Some species (e.g., Pinus halepensis) cannot germinate until after their seeds have lived through a fire or been exposed to certain compounds from smoke. The environmentally triggered release of seeds is called serotiny. Fire plays a major role in the persistence and resilience of ecosystems.
Soils
Soil is the living top layer of mineral and organic dirt that covers the surface of the planet. It is the chief organizing centre of most ecosystem functions, and it is of critical importance in agricultural science and ecology. The decomposition of dead organic matter (for example, leaves on the forest floor) results in soils containing minerals and nutrients that feed into plant production. The whole of the planet's soil ecosystems is called the pedosphere, where a large biomass of the Earth's biodiversity organizes into trophic levels. Invertebrates that feed and shred larger leaves, for example, create smaller bits for smaller organisms in the feeding chain. Collectively, these organisms are the detritivores that regulate soil formation. Tree roots, fungi, bacteria, worms, ants, beetles, centipedes, spiders, mammals, birds, reptiles, amphibians, and other less familiar creatures all work to create the trophic web of life in soil ecosystems. Soils form composite phenotypes where inorganic matter is enveloped into the physiology of a whole community. As organisms feed and migrate through soils, they physically displace materials, an ecological process called bioturbation. This aerates soils and stimulates heterotrophic growth and production. Soil microorganisms are influenced by and feed back into the trophic dynamics of the ecosystem. No single axis of causality can be discerned to segregate the biological from geomorphological systems in soils. Paleoecological studies of soils place the origin of bioturbation before the Cambrian period. Other events, such as the evolution of trees and the colonization of land in the Devonian period, played a significant role in the early development of ecological trophism in soils.
Biogeochemistry and climate
Ecologists study and measure nutrient budgets to understand how these materials are regulated, flow, and recycled through the environment. This research has led to an understanding that there is global feedback between ecosystems and the physical parameters of this planet, including minerals, soil, pH, ions, water, and atmospheric gases. Six major elements (hydrogen, carbon, nitrogen, oxygen, sulfur, and phosphorus; H, C, N, O, S, and P) form the constitution of all biological macromolecules and feed into the Earth's geochemical processes. From the smallest scale of biology, the combined effect of billions upon billions of ecological processes amplify and ultimately regulate the biogeochemical cycles of the Earth. Understanding the relations and cycles mediated between these elements and their ecological pathways has significant bearing toward understanding global biogeochemistry.
The ecology of global carbon budgets gives one example of the linkage between biodiversity and biogeochemistry. It is estimated that the Earth's oceans hold 40,000 gigatonnes (Gt) of carbon, that vegetation and soil hold 2070 Gt, and that fossil fuel emissions are 6.3 Gt carbon per year. There have been major restructurings in these global carbon budgets during the Earth's history, regulated to a large extent by the ecology of the land. For example, through the early-mid Eocene volcanic outgassing, the oxidation of methane stored in wetlands, and seafloor gases increased atmospheric CO2 (carbon dioxide) concentrations to levels as high as 3500 ppm.
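To put these figures in proportion, the short calculation below relates the annual fossil fuel flux to the standing pools quoted in the text; it is simple arithmetic on those estimates, not an independent carbon budget.

```python
# Relating the quoted carbon pools and fluxes (values taken from the text above).
ocean_pool_gt = 40_000       # Gt C held in the oceans
land_pool_gt = 2_070         # Gt C in vegetation and soil
fossil_flux_gt_per_yr = 6.3  # Gt C emitted from fossil fuels per year

print(f"annual fossil emissions are {fossil_flux_gt_per_yr / land_pool_gt:.2%} "
      f"of the vegetation-and-soil pool")
print(f"and {fossil_flux_gt_per_yr / ocean_pool_gt:.3%} of the oceanic pool")
```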
In the Oligocene, from twenty-five to thirty-two million years ago, there was another significant restructuring of the global carbon cycle as grasses evolved a new mechanism of photosynthesis, C4 photosynthesis, and expanded their ranges. This new pathway evolved in response to the drop in atmospheric CO2 concentrations below 550 ppm. The relative abundance and distribution of biodiversity alters the dynamics between organisms and their environment such that ecosystems can be both cause and effect in relation to climate change. Human-driven modifications to the planet's ecosystems (e.g., disturbance, biodiversity loss, agriculture) contributes to rising atmospheric greenhouse gas levels. Transformation of the global carbon cycle in the next century is projected to raise planetary temperatures, lead to more extreme fluctuations in weather, alter species distributions, and increase extinction rates. The effect of global warming is already being registered in melting glaciers, melting mountain ice caps, and rising sea levels. Consequently, species distributions are changing along waterfronts and in continental areas where migration patterns and breeding grounds are tracking the prevailing shifts in climate. Large sections of permafrost are also melting to create a new mosaic of flooded areas having increased rates of soil decomposition activity that raises methane (CH4) emissions. There is concern over increases in atmospheric methane in the context of the global carbon cycle, because methane is a greenhouse gas that is 23 times more effective at absorbing long-wave radiation than CO2 on a 100-year time scale. Hence, there is a relationship between global warming, decomposition and respiration in soils and wetlands producing significant climate feedbacks and globally altered biogeochemical cycles.
History
Early beginnings
Ecology has a complex origin, due in large part to its interdisciplinary nature. Ancient Greek philosophers such as Hippocrates and Aristotle were among the first to record observations on natural history. However, they viewed life in terms of essentialism, where species were conceptualized as static unchanging things while varieties were seen as aberrations of an idealized type. This contrasts against the modern understanding of ecological theory where varieties are viewed as the real phenomena of interest and having a role in the origins of adaptations by means of natural selection. Early conceptions of ecology, such as a balance and regulation in nature can be traced to Herodotus (died c. 425 BC), who described one of the earliest accounts of mutualism in his observation of "natural dentistry". Basking Nile crocodiles, he noted, would open their mouths to give sandpipers safe access to pluck leeches out, giving nutrition to the sandpiper and oral hygiene for the crocodile. Aristotle was an early influence on the philosophical development of ecology. He and his student Theophrastus made extensive observations on plant and animal migrations, biogeography, physiology, and their behavior, giving an early analogue to the modern concept of an ecological niche.
Ernst Haeckel (left) and Eugenius Warming (right), two founders of ecology
Ecological concepts such as food chains, population regulation, and productivity were first developed in the 1700s, through the published works of microscopist Antonie van Leeuwenhoek (1632–1723) and botanist Richard Bradley (1688?–1732). Biogeographer Alexander von Humboldt (1769–1859) was an early pioneer in ecological thinking and was among the first to recognize ecological gradients, where species are replaced or altered in form along environmental gradients, such as a cline forming along a rise in elevation. Humboldt drew inspiration from Isaac Newton, as he developed a form of "terrestrial physics". In Newtonian fashion, he brought a scientific exactitude for measurement into natural history and even alluded to concepts that are the foundation of a modern ecological law on species-to-area relationships. Natural historians, such as Humboldt, James Hutton, and Jean-Baptiste Lamarck (among others) laid the foundations of the modern ecological sciences. The term "ecology" was coined by Ernst Haeckel in his book Generelle Morphologie der Organismen (1866). Haeckel was a zoologist, artist, writer, and later in life a professor of comparative anatomy.
Opinions differ on who was the founder of modern ecological theory. Some mark Haeckel's definition as the beginning; others say it was Eugenius Warming with the writing of Oecology of Plants: An Introduction to the Study of Plant Communities (1895), or Carl Linnaeus' principles on the economy of nature that matured in the early 18th century. Linnaeus founded an early branch of ecology that he called the economy of nature. His works influenced Charles Darwin, who adopted Linnaeus' phrase on the economy or polity of nature in The Origin of Species. Linnaeus was the first to frame the balance of nature as a testable hypothesis. Haeckel, who admired Darwin's work, defined ecology in reference to the economy of nature, which has led some to question whether ecology and the economy of nature are synonymous.
From Aristotle until Darwin, the natural world was predominantly considered static and unchanging. Prior to The Origin of Species, there was little appreciation or understanding of the dynamic and reciprocal relations between organisms, their adaptations, and the environment. An exception is the 1789 publication Natural History of Selborne by Gilbert White (1720–1793), considered by some to be one of the earliest texts on ecology. While Charles Darwin is mainly noted for his treatise on evolution, he was one of the founders of soil ecology, and he made note of the first ecological experiment in The Origin of Species. Evolutionary theory changed the way that researchers approached the ecological sciences.
Since 1900
Modern ecology is a young science that first attracted substantial scientific attention toward the end of the 19th century (around the same time that evolutionary studies were gaining scientific interest). The scientist Ellen Swallow Richards adopted the term "oekology" (which eventually morphed into home economics) in the U.S. as early as 1892.
In the early 20th century, ecology transitioned from a more descriptive form of natural history to a more analytical form of scientific natural history. Frederic Clements published the first American ecology book in 1905, presenting the idea of plant communities as a superorganism. This publication launched a debate between ecological holism and individualism that lasted until the 1970s. Clements' superorganism concept proposed that ecosystems progress through regular and determined stages of seral development that are analogous to the developmental stages of an organism. The Clementsian paradigm was challenged by Henry Gleason, who stated that ecological communities develop from the unique and coincidental association of individual organisms. This perceptual shift placed the focus back onto the life histories of individual organisms and how this relates to the development of community associations.
The Clementsian superorganism theory was an overextended application of an idealistic form of holism. The term "holism" was coined in 1926 by Jan Christiaan Smuts, a South African general and polarizing historical figure who was inspired by Clements' superorganism concept. Around the same time, Charles Elton pioneered the concept of food chains in his classical book Animal Ecology. Elton defined ecological relations using concepts of food chains, food cycles, and food size, and described numerical relations among different functional groups and their relative abundance. Elton's 'food cycle' was replaced by 'food web' in a subsequent ecological text. Alfred J. Lotka brought in many theoretical concepts applying thermodynamic principles to ecology.
In 1942, Raymond Lindeman wrote a landmark paper on the trophic dynamics of ecology, which was published posthumously after initially being rejected for its theoretical emphasis. Trophic dynamics became the foundation for much of the work to follow on energy and material flow through ecosystems. Robert MacArthur advanced mathematical theory, predictions, and tests in ecology in the 1950s, which inspired a resurgent school of theoretical mathematical ecologists. Ecology also has developed through contributions from other nations, including Russia's Vladimir Vernadsky and his founding of the biosphere concept in the 1920s and Japan's Kinji Imanishi and his concepts of harmony in nature and habitat segregation in the 1950s. Scientific recognition of contributions to ecology from non-English-speaking cultures is hampered by language and translation barriers.
Ecology surged in popular and scientific interest during the 1960–1970s environmental movement. There are strong historical and scientific ties between ecology, environmental management, and protection. The historical emphasis and poetic naturalistic writings advocating the protection of wild places by notable ecologists in the history of conservation biology, such as Aldo Leopold and Arthur Tansley, have been seen as far removed from urban centres where, it is claimed, the concentration of pollution and environmental degradation is located. Palamar (2008) notes an overshadowing by mainstream environmentalism of pioneering women in the early 1900s who fought for urban health ecology (then called euthenics) and brought about changes in environmental legislation. Women such as Ellen Swallow Richards and Julia Lathrop, among others, were precursors to the more popularized environmental movements after the 1950s.
In 1962, marine biologist and ecologist Rachel Carson's book Silent Spring helped to mobilize the environmental movement by alerting the public to toxic pesticides, such as DDT, bioaccumulating in the environment. Carson used ecological science to link the release of environmental toxins to human and ecosystem health. Since then, ecologists have worked to bridge their understanding of the degradation of the planet's ecosystems with environmental politics, law, restoration, and natural resources management.
See also
Carrying capacity
Chemical ecology
Climate justice
Circles of Sustainability
Cultural ecology
Dialectical naturalism
Ecological death
Ecological empathy
Ecological overshoot
Ecological psychology
Ecology movement
Ecosophy
Ecopsychology
Human ecology
Industrial ecology
Information ecology
Landscape ecology
Natural resource
Normative science
Philosophy of ecology
Political ecology
Theoretical ecology
Sensory ecology
Sexecology
Spiritual ecology
Sustainable development
Lists
Glossary of ecology
Index of biology articles
List of ecologists
Outline of biology
Terminology of ecology
Notes
References
External links
The Nature Education Knowledge Project: Ecology
Biogeochemistry
Emergence
MAgPIE
MAgPIE is a non-linear, recursive, dynamic-optimization, global land- and water-use model with a cost-minimization objective function.
MAgPIE was developed and is employed by the land-use group working at the Potsdam Institute for Climate Impact Research (PIK). It links regional economic information with grid-based biophysical constraints simulated by the dynamic vegetation and hydrology model LPJmL. MAgPIE considers spatially-explicit patterns of production, land use change and water constraints in different world regions, consistently linking economic development with food and energy demand.
The Model
The model is based on static yield functions in order to model potential crop productivity and its related water use. For the biophysical supply simulation, spatially explicit 0.5° data are aggregated to a consistent number of clusters. Ten world regions represent the demand side of the model. Required calories for the demand categories (food and non-food energy intake) are determined by a cross-sectional country regression based on population and income projections. In order to fulfill the demand, the model allocates 19 cropping and 5 livestock activities to the spatially explicit land and water resources, subject to resource, management, and cost constraints. Starting in 1995, MAgPIE simulates time-steps of 10 years; for each period, the optimal land-use pattern from the previous period is used as the starting point.
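The core allocation logic of meeting fixed regional demand by assigning production to land clusters at minimum cost, subject to land availability, can be illustrated with a deliberately simplified linear program; the cluster areas, yields, and costs below are invented numbers, and the actual model is non-linear, recursive-dynamic, and far richer.

```python
# Toy land-allocation problem in the spirit of MAgPIE's cost minimization
# (illustrative numbers; the actual model is non-linear and recursive-dynamic).
from scipy.optimize import linprog

yields = [4.0, 2.5, 1.5]             # t per ha in three land clusters
cost_per_ha = [300.0, 180.0, 120.0]  # production cost in US$ per ha
area_available = [50.0, 80.0, 200.0] # ha available per cluster
demand = 400.0                       # t of crop demanded in the region

# Decision variables: hectares cultivated in each cluster.
# Minimize total production cost subject to (i) meeting demand and
# (ii) not exceeding the available area in any cluster.
res = linprog(
    c=cost_per_ha,
    A_ub=[[-y for y in yields]],   # -sum(yield_i * area_i) <= -demand
    b_ub=[-demand],
    bounds=[(0, a) for a in area_available],
)

for i, ha in enumerate(res.x):
    print(f"cluster {i}: cultivate {ha:.1f} ha")
print(f"minimum total cost: {res.fun:.0f} US$")
```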
Demand
The demand for agricultural products is fixed for every region and every time-step. The drivers of agricultural demand are time, income, and population growth. Total demand is composed of food demand, material demand, feed demand, and seed demand. Food demand depends on food energy demand and on the share of crop and livestock products in the diet. Within livestock products, the share of the different products (ruminant meat, chicken meat, other meat, milk, eggs) is fixed at 1995 levels. The same holds for the share of individual crops within total food calories and material demand. The share of livestock products in total consumed food calories is an important driver for the land-use sector, and different statistical models are used to estimate plausible future scenarios. A calibration step is used to reproduce the livestock shares of the 1995 Food Balance Sheets for each region.
Feed for livestock is produced as a mixture of concentrates, fodder, livestock products (e.g. bone meal), pasture, crop residues and conversion by-products (e.g. rapeseed cake) at predefined proportions. These differences in the livestock systems cause different emission levels from livestock.
Biophysical Inputs
The biophysical inputs for the simulations are obtained from the grid-based model LPJmL. The global vegetation model with managed land (LPJmL) also delivers values for water availability and requirements for each grid cell as well as the carbon content of the different vegetation types. Cropland, pasture, and irrigation water are fixed inputs in limited supply in each grid cell.
Cost Types
MAgPIE takes four different cost types into account: production costs for crop and livestock production, investments in technological change, land conversion costs and intra-regional transport costs. By minimizing these four cost components on a global scale for the current time step, the model solution is obtained. Production costs in MAgPIE imply costs for labor, capital and intermediate inputs. They are specific for all crop and livestock types and are implemented as costs per area for crops (US$/ha) and costs per production unit of livestock (US$/ton).
MAgPIE has two options to increase total agricultural production at additional cost: land expansion and intensification. In MAgPIE the latter is achieved by investments in technological change (TC). Investing in technological change triggers yield increases, which then lead to higher total production. At the same time, the corresponding increase in agricultural land-use intensity raises the costs of further yield increases, because intensification on land that is already used intensively is more expensive than intensification on extensively used land.
Another way to increase production is to expand cropland into non-agricultural land. The conversion causes additional costs for the preparation of new land and basic infrastructure investments, which are also taken into account. Intra-regional transport costs arise for each commodity unit as a function of the distance to intra-regional markets, and therefore restrict land expansion in MAgPIE. These costs depend on the quality and accessibility of infrastructure: intra-regional transport costs are higher for less accessible areas than for well-connected ones, which raises the overall costs of cropland expansion in such cases.
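As a rough illustration of the cost-minimization logic described above, the sketch below solves a drastically simplified land-allocation problem with a linear-programming solver. All clusters, costs, yields and demand figures are invented example numbers; this is not MAgPIE code or data.

```python
# Minimal, hypothetical illustration of cost-minimizing land allocation in the
# spirit of MAgPIE's objective function; all numbers are invented examples.
import numpy as np
from scipy.optimize import linprog

# Two spatial clusters, one crop; decision variables are cropped areas (million ha).
production_cost = np.array([300.0, 480.0])   # US$ per ha in each cluster
yield_per_ha = np.array([4.0, 6.0])          # tons per ha in each cluster
land_available = np.array([50.0, 30.0])      # million ha available per cluster
regional_demand = 250.0                      # million tons of the crop required

# Minimize total production cost subject to meeting demand and land limits:
#   yield . area >= demand   (written as -yield . area <= -demand for linprog)
#   0 <= area_i <= land_available_i
result = linprog(
    c=production_cost,
    A_ub=[-yield_per_ha],
    b_ub=[-regional_demand],
    bounds=list(zip([0.0, 0.0], land_available)),
    method="highs",
)
print("Cropped area per cluster (million ha):", result.x)
print("Total production cost (million US$):", result.fun)
```

In this toy setting the cheaper cluster is used up to its land limit before the more expensive cluster is brought in, mirroring how land scarcity and cost differences shape the optimal pattern.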
References
Land use
Mathematical modeling | 0.762791 | 0.999483 | 0.762397 |
DPSIR | DPSIR (drivers, pressures, state, impact, and response model of intervention) is a causal framework used to describe the interactions between society and the environment. It seeks to analyze and assess environmental problems by bringing together various scientific disciplines, environmental managers, and stakeholders, and to solve them by incorporating sustainable development. First, the indicators are categorized into "drivers", which exert "pressures" on the "state" of the system, which in turn results in certain "impacts" that lead to various "responses" to maintain or recover the system under consideration. This is followed by the organization of available data and suggestions of procedures to collect missing data for future analysis. Since its formulation in the late 1990s, it has been widely adopted by international organizations for ecosystem-based study in various fields like biodiversity, soil erosion, and groundwater depletion and contamination. In recent times, the framework has been used in combination with other analytical methods and models to compensate for its shortcomings. It is employed to evaluate environmental changes in ecosystems, identify the social and economic pressures on a system, predict potential challenges and improve management practices. The flexibility and general applicability of the framework make it a resilient tool that can be applied in social, economic, and institutional domains as well.
History
The Driver-Pressure-State-Impact-Response framework was developed by the European Environment Agency (EEA) in 1999. It was built upon several existing environmental reporting frameworks, like the Pressure-State-Response (PSR) framework developed by the Organization for Economic Co-operation and Development (OECD) in 1993, which itself was an extension of Rapport and Friend's Stress-Response (SR) framework (1979). The PSR framework simplified environmental problems and solutions into variables that stress the cause-effect relationship between human activities that exert pressure on the environment, the state of the environment, and society's response to the condition. Since it focused on anthropocentric pressures and responses, it did not effectively factor natural variability into the pressure category. This led to the development of the expanded Driving Force-State-Response (DSR) framework, by the United Nations Commission on Sustainable Development (CSD) in 1997. A primary modification was the expansion of the concept of “pressure” to include social, political, economic, demographic, and natural system pressures. However, by replacing “pressure” with “driving force”, the model failed to account for the underlying reasons for the pressure, much like its antecedent. It also did not address the motivations behind responses to changes in the state of the environment. The refined DPSIR model sought to address these shortcomings of its predecessors by addressing root causes of the human activities that impact the environment, by incorporating natural variability as a pressure on the current state and addressing responses to the impact of changes in state on human well-being. Unlike PSR and DSR, DPSIR is not a model, but a means of classifying and disseminating information related to environmental challenges. Since its conception, it has evolved into modified frameworks like Driver-Pressure-Chemical State-Ecological State-Response (DPCER), Driver-Pressure-State-Welfare-Response (DPSWR), and Driver-Pressure-State-Ecosystem-Response (DPSER).
The DPSIR Framework
Driver (Driving Force)
Driver refers to the social, demographic, and economic developments which influence the human activities that have a direct impact on the environment. They can further be subdivided into primary and secondary driving forces. Primary driving forces refer to technological and societal actors that motivate human activities like population growth and distribution of wealth. The developments induced by these drivers give rise to secondary driving forces, which are human activities triggering “pressures” and “impacts”, like land-use changes, urban expansion and industrial developments. Drivers can also be identified as underlying or immediate, physical or socio-economic, and natural or anthropogenic, based on the scope and sector in which they are being used.
Pressure
Pressure represents the consequence of the driving force, which in turn affects the state of the environment. Pressures are usually depicted as unwanted and negative, based on the notion that any change in the environment caused by human activities is damaging and degrading. Pressures can have effects in the short run (e.g.: deforestation) or the long run (e.g.: climate change), which, if known with sufficient certainty, can be expressed as a probability. They can be both human-induced, like emissions, fuel extraction, and solid waste generation, and natural processes, like solar radiation and volcanic eruptions. Pressures can also be sub-categorized as endogenic managed pressures, when they stem from within the system and can be controlled (e.g.: land claim, power generation), and as exogenic unmanaged pressures, when they stem from outside the system and cannot be controlled (e.g.: climate change, geomorphic activities).
State
State describes the physical, chemical and biological condition of the environment or observable temporal changes in the system. It may refer to natural systems (e.g.: atmospheric CO2 concentrations, temperature), socio-economic systems (e.g.: living conditions of humans, economic situations of an industry), or a combination of both (e.g.: number of tourists, size of current population). It includes a wide range of features, like physico-chemical characteristics of ecosystems, quantity and quality of resources or “carrying capacity”, management of fragile species and ecosystems, living conditions for humans, and exposure or the effects of pressures on humans. It is not intended to just be static, but to reflect current trends as well, like increasing eutrophication and change in biodiversity.
Impact
Impact refers to how changes in the state of the system affect human well-being. It is often measured in terms of damages to the environment or human health, like migration, poverty, and increased vulnerability to diseases, but can also be identified and quantified without any positive or negative connotation, by simply indicating a change in the environmental parameters. Impact can be ecologic (e.g.: reduction of wetlands, biodiversity loss), socio-economic (e.g.: reduced tourism), or a combination of both. Its definition may vary depending on the discipline and methodology applied. For instance, it refers to the effect on living beings and non-living domains of ecosystems in biosciences (e.g.: modifications in the chemical composition of air or water), whereas it is associated with the effects on human systems related to changes in the environmental functions in socio-economic sciences (e.g.: physical and mental health).
Response
Response refers to actions taken to correct the problems of the previous stages, by adjusting the drivers, reducing the pressure on the system, bringing the system back to its initial state, and mitigating the impacts. It can be associated uniquely with policy action, or to different levels of the society, including groups and/or individuals from the private, government or non-governmental sectors. Responses are mostly designed and/or implemented as political actions of protection, mitigation, conservation, or promotion. A mix of effective top-down political action and bottom-up social awareness can also be developed as responses, such as eco-communities or improved waste recycling rates.
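As a purely illustrative sketch (the indicators, categories and causal links below are invented examples, not part of any official DPSIR indicator set), the framework can be viewed as a way of tagging indicators with one of the five categories and recording the causal links between them:

```python
# Illustrative sketch of classifying indicators with the DPSIR categories.
# The example indicators and links are invented for illustration only.
from dataclasses import dataclass, field

CATEGORIES = ("driver", "pressure", "state", "impact", "response")

@dataclass
class Indicator:
    name: str
    category: str                                    # one of CATEGORIES
    influences: list = field(default_factory=list)   # names of downstream indicators

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown DPSIR category: {self.category}")

chain = [
    Indicator("population growth", "driver", ["nutrient emissions"]),
    Indicator("nutrient emissions", "pressure", ["lake nutrient concentration"]),
    Indicator("lake nutrient concentration", "state", ["algal blooms reduce fisheries"]),
    Indicator("algal blooms reduce fisheries", "impact", ["wastewater treatment policy"]),
    Indicator("wastewater treatment policy", "response", []),
]

for indicator in chain:
    print(f"{indicator.category:>9}: {indicator.name} -> {indicator.influences}")
```

The same indicator can legitimately be tagged differently depending on the system boundary chosen, which is one source of the ambiguity discussed in the criticisms below.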
Criticisms and Limitations
Despite the adaptability of the framework, it has faced several criticisms. One of the main goals of the framework is to provide environmental managers, scientists of various disciplines, and stakeholders with a common forum and language to identify, analyze and assess environmental problems and consequences. However, several notable authors have pointed out that it lacks a well-defined set of categories, which undermines the comparability between studies, even similar ones. For instance, climate change can be considered a natural driver, but is primarily caused by greenhouse gases (GHG) produced by human activities, which may be categorized under "pressure". A wastewater treatment plant is considered a response when dealing with water pollution, but a pressure when effluent runoff leading to eutrophication is taken into account. This ambivalence of the variables associated with the framework has been criticized for hampering communication among researchers and between stakeholders and policymakers. Another criticism is the misleading simplicity of the framework, which ignores the complex synergies between the categories. For instance, an impact can be caused by various different state conditions and by responses to other impacts, which is not addressed by DPSIR. Some authors also argue that the framework is flawed because it does not clearly illustrate the cause-effect linkage for environmental problems. The reasons behind these contextual differences appear to be differences in opinion, the characteristics of specific case studies, misunderstanding of the concepts and inadequate knowledge of the system under consideration.
DPSIR was initially proposed as a conceptual framework rather than a practical guidance, by global organizations. This means that at a local level, analyses using the framework can cause some significant problems. DPSIR does not encourage the examination of locally specific attributes for individual decisions, which when aggregated, could have potentially large impacts on sustainability. For instance, a farmer who chooses a particular way of livelihood may not create any consequential alterations on the system, but the aggregation of farmers making similar choices will have a measurable and tangible effect. Any efforts to evaluate sustainability without considering local knowledge could lead to misrepresentations of local situations, misunderstandings of what works in particular areas and even project failure.
While there is no explicit hierarchy of authority in the DPSIR framework, the power difference between “developers” and the “developing” could be perceived as the contributor to the lack of focus on local, informal responses at the scale of drivers and pressures, thus compromising the validity of any analysis conducted using it. The “developers” refer to the Non-Governmental Organizations (NGOs), State mechanisms and other international organizations with the privilege to access various resources and power to use knowledge to change the world, and the “developing” refers to local communities. According to this criticism, the latter is less capable of responding to environmental problems than the former. This undermines valuable indigenous knowledge about various components of the framework in a particular region, since the inclusion of the knowledge is almost exclusively left at the discretion of the “developers”.
Another limitation of the framework is its exclusion of the effects of social and economic developments on the environment, particularly for future scenarios. Furthermore, DPSIR does not explicitly prioritize responses and fails to determine the effectiveness of each response individually when working with complex systems. This has been one of the most criticized drawbacks of the framework, since it fails to capture the dynamic nature of real-world problems, which cannot be expressed by simple causal relations.
Applications
Despite its criticisms, DPSIR continues to be widely used to frame and assess environmental problems to identify appropriate responses. Its main objective is to support sustainable management of natural resources. DPSIR structures indicators related to the environmental problem addressed with reference to the political objectives and focuses on supposed causal relationships effectively, such that it appeals to policy actors. Some examples include the assessment of the pressure of alien species, evaluation of impacts of developmental activities on the coastal environment and society, identification of economic elements affecting global wildfire activities, and cost-benefit analysis (CBA) and gross domestic product (GDP) correction.
To compensate for its shortcomings, DPSIR is also used in conjunction with several analytical methods and models. It has been used with Multiple-Criteria Decision Making (MCDM) for desertification risk management, with the Analytic Hierarchy Process (AHP) to study urban green electricity power, and with the Tobit model to assess freshwater ecosystems. The framework itself has also been modified to assess specific systems, like DPSWR, which focuses on the impacts on human welfare alone by shifting ecological impact to the state category. Another approach is a differential DPSIR (ΔDPSIR), which evaluates the changes in drivers, pressures and state after implementing a management response, making it valuable both as a scientific output and as a system-management tool. The flexibility offered by the framework makes it an effective tool with numerous applications, provided the system is properly studied and understood by the stakeholders.
References
External links
DPSIR-Model of the European Environment Agency (EEA)
Environmental terminology
Industrial ecology | 0.778238 | 0.979639 | 0.762392 |
Cauchy stress tensor | In continuum mechanics, the Cauchy stress tensor (symbol , named after Augustin-Louis Cauchy), also called true stress tensor or simply stress tensor, completely defines the state of stress at a point inside a material in the deformed state, placement, or configuration. The second order tensor consists of nine components and relates a unit-length direction vector e to the traction vector T(e) across an imaginary surface perpendicular to e:
The SI units of both the stress tensor and the traction vector are newtons per square metre (N/m2), i.e. pascals (Pa), the same as for the scalar stress. The direction vector is a dimensionless unit vector.
The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is the Mohr's circle for stress.
The Cauchy stress tensor is used for stress analysis of material bodies experiencing small deformations: it is a central concept in the linear theory of elasticity. For large deformations, also called finite deformations, other measures of stress are required, such as the Piola–Kirchhoff stress tensor, the Biot stress tensor, and the Kirchhoff stress tensor.
According to the principle of conservation of linear momentum, if the continuum body is in static equilibrium it can be demonstrated that the components of the Cauchy stress tensor in every material point in the body satisfy the equilibrium equations (Cauchy's equations of motion for zero acceleration). At the same time, according to the principle of conservation of angular momentum, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, thus having only six independent stress components, instead of the original nine. However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This also is the case when the Knudsen number is close to one, , or the continuum is a non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers.
There are certain invariants associated with the stress tensor, whose values do not depend upon the coordinate system chosen, or the area element upon which the stress tensor operates. These are the three eigenvalues of the stress tensor, which are called the principal stresses.
Euler–Cauchy stress principle – stress vector
The Euler–Cauchy stress principle states that upon any surface (real or imaginary) that divides the body, the action of one part of the body on the other is equivalent (equipollent) to the system of distributed forces and couples on the surface dividing the body, and it is represented by a field , called the traction vector, defined on the surface and assumed to depend continuously on the surface's unit vector .
To formulate the Euler–Cauchy stress principle, consider an imaginary surface passing through an internal material point dividing the continuous body into two segments, as seen in Figure 2.1a or 2.1b (one may use either the cutting plane diagram or the diagram with the arbitrary volume inside the continuum enclosed by the surface ).
Following the classical dynamics of Newton and Euler, the motion of a material body is produced by the action of externally applied forces which are assumed to be of two kinds: surface forces and body forces . Thus, the total force applied to a body or to a portion of the body can be expressed as:
Only surface forces will be discussed in this article as they are relevant to the Cauchy stress tensor.
When the body is subjected to external surface forces or contact forces , following Euler's equations of motion, internal contact forces and moments are transmitted from point to point in the body, and from one segment to the other through the dividing surface , due to the mechanical contact of one portion of the continuum onto the other (Figure 2.1a and 2.1b). On an element of area containing , with normal vector , the force distribution is equipollent to a contact force exerted at point P and surface moment . In particular, the contact force is given by
where is the mean surface traction.
Cauchy's stress principle asserts that as the area element becomes very small and tends to zero, the ratio of the contact force to the area tends to a definite limit (the stress vector), while the couple stress vector vanishes. In specific fields of continuum mechanics the couple stress is assumed not to vanish; however, classical branches of continuum mechanics address non-polar materials which do not consider couple stresses and body moments.
The limiting vector is defined as the surface traction, also called the stress vector, traction, or traction vector, defined at the point and associated with a plane with a normal vector :
This equation means that the stress vector depends on its location in the body and the orientation of the plane on which it is acting.
This implies that the balancing action of internal contact forces generates a contact force density or Cauchy traction field that represents a distribution of internal contact forces throughout the volume of the body in a particular configuration of the body at a given time . It is not a vector field because it depends not only on the position of a particular material point, but also on the local orientation of the surface element as defined by its normal vector .
Depending on the orientation of the plane under consideration, the stress vector may not necessarily be perpendicular to that plane, i.e. parallel to , and can be resolved into two components (Figure 2.1c):
one normal to the plane, called normal stress
where is the normal component of the force to the differential area
and the other parallel to this plane, called the shear stress
where is the tangential component of the force to the differential surface area . The shear stress can be further decomposed into two mutually perpendicular vectors.
Cauchy's postulate
According to the Cauchy Postulate, the stress vector remains unchanged for all surfaces passing through the point and having the same normal vector at , i.e., having a common tangent at . This means that the stress vector is a function of the normal vector only, and is not influenced by the curvature of the internal surfaces.
Cauchy's fundamental lemma
A consequence of Cauchy's postulate is Cauchy's Fundamental Lemma, also called the Cauchy reciprocal theorem, which states that the stress vectors acting on opposite sides of the same surface are equal in magnitude and opposite in direction. Cauchy's fundamental lemma is equivalent to Newton's third law of motion of action and reaction, and is expressed as
Cauchy's stress theorem—stress tensor
The state of stress at a point in the body is then defined by all the stress vectors T(n) associated with all planes (infinite in number) that pass through that point. However, according to Cauchy's fundamental theorem, also called Cauchy's stress theorem, merely by knowing the stress vectors on three mutually perpendicular planes, the stress vector on any other plane passing through that point can be found through coordinate transformation equations.
Cauchy's stress theorem states that there exists a second-order tensor field σ(x, t), called the Cauchy stress tensor, independent of n, such that T is a linear function of n:
This equation implies that the stress vector T(n) at any point P in a continuum associated with a plane with normal unit vector n can be expressed as a function of the stress vectors on the planes perpendicular to the coordinate axes, i.e. in terms of the components σij of the stress tensor σ.
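With the index convention described below (first index for the plane's normal, second for the direction in which the stress acts), a standard form of this relation is:

```latex
T^{(\mathbf{n})}_j = \sigma_{ij}\, n_i ,
\qquad \text{or, in matrix form,} \qquad
\mathbf{T}^{(\mathbf{n})} = \boldsymbol{\sigma}^{\mathsf{T}} \mathbf{n} .
```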
To prove this expression, consider a tetrahedron with three faces oriented in the coordinate planes, and with an infinitesimal area dA oriented in an arbitrary direction specified by a normal unit vector n (Figure 2.2). The tetrahedron is formed by slicing the infinitesimal element along an arbitrary plane with unit normal n. The stress vector on this plane is denoted by T(n). The stress vectors acting on the faces of the tetrahedron are denoted as T(e1), T(e2), and T(e3), and are by definition the components σij of the stress tensor σ. This tetrahedron is sometimes called the Cauchy tetrahedron. The equilibrium of forces, i.e. Euler's first law of motion (Newton's second law of motion), gives:
where the right-hand-side represents the product of the mass enclosed by the tetrahedron and its acceleration: ρ is the density, a is the acceleration, and h is the height of the tetrahedron, considering the plane n as the base. The area of the faces of the tetrahedron perpendicular to the axes can be found by projecting dA into each face (using the dot product):
and then substituting into the equation to cancel out dA:
To consider the limiting case as the tetrahedron shrinks to a point, h must go to 0 (intuitively, the plane n is translated along n toward O). As a result, the right-hand-side of the equation approaches 0, so
Assuming a material element (see figure at the top of the page) with planes perpendicular to the coordinate axes of a Cartesian coordinate system, the stress vectors associated with each of the element planes, i.e. T(e1), T(e2), and T(e3) can be decomposed into a normal component and two shear components, i.e. components in the direction of the three coordinate axes. For the particular case of a surface with normal unit vector oriented in the direction of the x1-axis, denote the normal stress by σ11, and the two shear stresses as σ12 and σ13:
In index notation this is
The nine components σij of the stress vectors are the components of a second-order Cartesian tensor called the Cauchy stress tensor, which can be used to completely define the state of stress at a point and is given by
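Written out as an array, the standard arrangement of these components is:

```latex
\boldsymbol{\sigma} =
\begin{bmatrix}
\sigma_{11} & \sigma_{12} & \sigma_{13} \\
\sigma_{21} & \sigma_{22} & \sigma_{23} \\
\sigma_{31} & \sigma_{32} & \sigma_{33}
\end{bmatrix} .
```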
where σ11, σ22, and σ33 are normal stresses, and σ12, σ13, σ21, σ23, σ31, and σ32 are shear stresses. The first index i indicates that the stress acts on a plane normal to the Xi-axis, and the second index j denotes the direction in which the stress acts (for example, σ12 implies that the stress is acting on the plane that is normal to the 1st axis, i.e. X1, and acts along the 2nd axis, i.e. X2). A stress component is positive if it acts in the positive direction of the coordinate axes, and if the plane where it acts has an outward normal vector pointing in the positive coordinate direction.
Thus, using the components of the stress tensor
or, equivalently,
Alternatively, in matrix form we have
The Voigt notation representation of the Cauchy stress tensor takes advantage of the symmetry of the stress tensor to express the stress as a six-dimensional vector of the form:
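One common ordering of the six independent components in Voigt notation is the following (conventions for ordering, and sometimes scaling, the shear terms vary between texts):

```latex
[\boldsymbol{\sigma}] =
\begin{bmatrix}
\sigma_{11} & \sigma_{22} & \sigma_{33} & \sigma_{23} & \sigma_{13} & \sigma_{12}
\end{bmatrix}^{\mathsf{T}} .
```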
The Voigt notation is used extensively in representing stress–strain relations in solid mechanics and for computational efficiency in numerical structural mechanics software.
Transformation rule of the stress tensor
It can be shown that the stress tensor is a contravariant second order tensor, which is a statement of how it transforms under a change of the coordinate system. From an xi-system to an xi' -system, the components σij in the initial system are transformed into the components σij' in the new system according to the tensor transformation rule (Figure 2.4):
where A is a rotation matrix with components aij. In matrix form this is
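Standard index and matrix forms of this transformation rule are:

```latex
\sigma'_{ij} = a_{ik}\, a_{jl}\, \sigma_{kl} ,
\qquad
\boldsymbol{\sigma}' = \mathbf{A}\, \boldsymbol{\sigma}\, \mathbf{A}^{\mathsf{T}} .
```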
Expanding the matrix operation, and simplifying terms using the symmetry of the stress tensor, gives
The Mohr circle for stress is a graphical representation of this transformation of stresses.
Normal and shear stresses
The magnitude of the normal stress component σn of any stress vector T(n) acting on an arbitrary plane with normal unit vector n at a given point, in terms of the components σij of the stress tensor σ, is the dot product of the stress vector and the normal unit vector:
The magnitude of the shear stress component τn, acting orthogonal to the vector n, can then be found using the Pythagorean theorem, as shown in the expressions below.
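Standard expressions for these two components are:

```latex
\sigma_\mathrm{n} = \mathbf{T}^{(\mathbf{n})} \cdot \mathbf{n} = \sigma_{ij}\, n_i\, n_j ,
\qquad
\tau_\mathrm{n} = \sqrt{\left| \mathbf{T}^{(\mathbf{n})} \right|^{2} - \sigma_\mathrm{n}^{2}} .
```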
Balance laws – Cauchy's equations of motion
Cauchy's first law of motion
According to the principle of conservation of linear momentum, if the continuum body is in static equilibrium it can be demonstrated that the components of the Cauchy stress tensor in every material point in the body satisfy the equilibrium equations:
,
where
For example, for a hydrostatic fluid in equilibrium conditions, the stress tensor takes on the form:
where p is the hydrostatic pressure and δij is the Kronecker delta.
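In standard notation this hydrostatic form reads:

```latex
\sigma_{ij} = -p\, \delta_{ij} .
```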
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of equilibrium equations
|-
|Consider a continuum body (see Figure 4) occupying a volume , having a surface area , with defined traction or surface forces per unit area acting on every point of the body surface, and body forces per unit of volume on every point within the volume . Thus, if the body is in equilibrium the resultant force acting on the volume is zero, thus:
By definition the stress vector is , then
Using the Gauss's divergence theorem to convert a surface integral to a volume integral gives
For an arbitrary volume the integral vanishes, and we have the equilibrium equations
|}
Cauchy's second law of motion
According to the principle of conservation of angular momentum, equilibrium requires that the summation of moments with respect to an arbitrary point is zero, which leads to the conclusion that the stress tensor is symmetric, thus having only six independent stress components, instead of the original nine:
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of symmetry of the stress tensor
|-
| Summing moments about point O (Figure 4) the resultant moment is zero as the body is in equilibrium. Thus,
where is the position vector and is expressed as
Knowing that and using Gauss's divergence theorem to change from a surface integral to a volume integral, we have
The second integral is zero as it contains the equilibrium equations. This leaves the first integral, where , therefore
For an arbitrary volume V, we then have
which is satisfied at every point within the body. Expanding this equation we have
, , and
or in general
This proves that the stress tensor is symmetric
|}
However, in the presence of couple-stresses, i.e. moments per unit volume, the stress tensor is non-symmetric. This also is the case when the Knudsen number is close to one, , or the continuum is a non-Newtonian fluid, which can lead to rotationally non-invariant fluids, such as polymers.
Principal stresses and stress invariants
At every point in a stressed body there are at least three planes, called principal planes, with normal vectors , called principal directions, where the corresponding stress vector is perpendicular to the plane, i.e., parallel or in the same direction as the normal vector , and where there are no normal shear stresses . The three stresses normal to these principal planes are called principal stresses.
The components of the stress tensor depend on the orientation of the coordinate system at the point under consideration. However, the stress tensor itself is a physical quantity and as such, it is independent of the coordinate system chosen to represent it. There are certain invariants associated with every tensor which are also independent of the coordinate system. For example, a vector is a simple tensor of rank one. In three dimensions, it has three components. The value of these components will depend on the coordinate system chosen to represent the vector, but the magnitude of the vector is a physical quantity (a scalar) and is independent of the Cartesian coordinate system chosen to represent the vector (so long as the coordinate system is orthonormal). Similarly, every second rank tensor (such as the stress and the strain tensors) has three independent invariant quantities associated with it. One set of such invariants are the principal stresses of the stress tensor, which are just the eigenvalues of the stress tensor. Their direction vectors are the principal directions or eigenvectors.
A stress vector parallel to the normal unit vector is given by:
where is a constant of proportionality, and in this particular case corresponds to the magnitudes of the normal stress vectors or principal stresses.
Knowing that and , we have
This is a homogeneous system (i.e. with zero right-hand side) of three linear equations in which the components of the direction vector are the unknowns. To obtain a nontrivial (non-zero) solution, the determinant of the coefficient matrix must be equal to zero, i.e. the system is singular. Thus,
Expanding the determinant leads to the characteristic equation
where
The characteristic equation has three real roots, i.e. none of them are imaginary, owing to the symmetry of the stress tensor. These roots are the eigenvalues of the stress tensor and are the principal stresses, and they are unique for a given stress tensor. Therefore, from the characteristic equation, the coefficients , and , called the first, second, and third stress invariants, respectively, always have the same value regardless of the coordinate system's orientation.
For each eigenvalue, there is a non-trivial solution for in the equation . These solutions are the principal directions or eigenvectors defining the plane where the principal stresses act. The principal stresses and principal directions characterize the stress at a point and are independent of the orientation.
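As a brief numerical illustration of these statements (the stress values below are arbitrary example numbers), the principal stresses and directions can be computed as the eigenvalues and eigenvectors of the stress matrix, and the invariants can be checked to be unchanged under a rotation of the coordinate system:

```python
# Principal stresses/directions as eigenvalues/eigenvectors of the stress tensor,
# and a check that the invariants are unchanged by a coordinate rotation.
# The numerical values are arbitrary example data (units of MPa).
import numpy as np

sigma = np.array([[ 50.0,  30.0,  0.0],
                  [ 30.0, -20.0,  0.0],
                  [  0.0,   0.0, 10.0]])   # symmetric Cauchy stress tensor

principal_stresses, principal_dirs = np.linalg.eigh(sigma)

def invariants(s):
    I1 = np.trace(s)
    I2 = 0.5 * (np.trace(s) ** 2 - np.trace(s @ s))
    I3 = np.linalg.det(s)
    return I1, I2, I3

# Rotate the coordinate system by 30 degrees about the x3-axis: sigma' = A sigma A^T.
theta = np.deg2rad(30.0)
A = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])
sigma_rot = A @ sigma @ A.T

print("principal stresses:", np.sort(principal_stresses)[::-1])
print("invariants (original):", invariants(sigma))
print("invariants (rotated): ", invariants(sigma_rot))
```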
A coordinate system with axes oriented to the principal directions implies that the normal stresses are the principal stresses and the stress tensor is represented by a diagonal matrix:
The principal stresses can be combined to form the stress invariants, , , and . The first and third invariants are the trace and determinant of the stress tensor, respectively. Thus,
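In terms of the principal stresses, the standard expressions for the three invariants are:

```latex
I_1 = \sigma_1 + \sigma_2 + \sigma_3 ,
\qquad
I_2 = \sigma_1 \sigma_2 + \sigma_2 \sigma_3 + \sigma_3 \sigma_1 ,
\qquad
I_3 = \sigma_1 \sigma_2 \sigma_3 .
```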
Because of its simplicity, the principal coordinate system is often useful when considering the state of the elastic medium at a particular point. Principal stresses are often expressed in the following equation for evaluating stresses in the x and y directions or axial and bending stresses on a part. The principal normal stresses can then be used to calculate the von Mises stress and ultimately the safety factor and margin of safety.
Taking just the part of the equation under the square root, with the plus and minus signs, gives the maximum and minimum in-plane shear stress. This is shown as:
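For the two-dimensional (plane) stress state referred to here, the standard expressions are:

```latex
\sigma_{1,2} = \frac{\sigma_x + \sigma_y}{2}
\pm \sqrt{\left( \frac{\sigma_x - \sigma_y}{2} \right)^{2} + \tau_{xy}^{2}} ,
\qquad
\tau_{\max,\min} = \pm \sqrt{\left( \frac{\sigma_x - \sigma_y}{2} \right)^{2} + \tau_{xy}^{2}} .
```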
Maximum and minimum shear stresses
The maximum shear stress or maximum principal shear stress is equal to one-half the difference between the largest and smallest principal stresses, and acts on the plane that bisects the angle between the directions of the largest and smallest principal stresses, i.e. the plane of the maximum shear stress is oriented 45° from the principal stress planes. The maximum shear stress is expressed as
Assuming then
When the stress tensor is non-zero, the normal stress component acting on the plane of maximum shear stress is non-zero, and it is equal to
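With the principal stresses ordered so that σ1 ≥ σ2 ≥ σ3, the standard results are:

```latex
\tau_{\max} = \tfrac{1}{2} \left( \sigma_1 - \sigma_3 \right) ,
\qquad
\sigma_\mathrm{n} = \tfrac{1}{2} \left( \sigma_1 + \sigma_3 \right) .
```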
{| class="toccolours collapsible collapsed" width="60%" style="text-align:left"
!Derivation of the maximum and minimum shear stresses
|-
|The normal stress can be written in terms of principal stresses as
Knowing that , the shear stress in terms of principal stresses components is expressed as
The maximum shear stress at a point in a continuum body is determined by maximizing subject to the condition that
This is a constrained maximization problem, which can be solved using the Lagrangian multiplier technique to convert the problem into an unconstrained optimization problem. Thus, the stationary values (maximum and minimum values) of occur where the gradient of is parallel to the gradient of .
The Lagrangian function for this problem can be written as
where is the Lagrangian multiplier (which is different from the symbol used earlier to denote the eigenvalues).
The extreme values of these functions are
thence
These three equations together with the condition may be solved for and
By multiplying the first three equations by and , respectively, and knowing that we obtain
Adding these three equations we get
this result can be substituted into each of the first three equations to obtain
Doing the same for the other two equations we have
A first approach to solve these last three equations is to consider the trivial solution . However, this option does not fulfill the constraint .
Considering the solution where and , it is determined from the condition that , then from the original equation for it is seen that .
The other two possible values for can be obtained similarly by assuming
and
and
Thus, one set of solutions for these four equations is:
These correspond to minimum values for and verify that there are no shear stresses on planes normal to the principal directions of stress, as shown previously.
A second set of solutions is obtained by assuming and . Thus we have
To find the values for and we first add these two equations
Knowing that for
and
we have
and solving for we have
Then solving for we have
and
The other two possible values for can be obtained similarly by assuming
and
and
Therefore, the second set of solutions for , representing a maximum for is
Therefore, assuming , the maximum shear stress is expressed by
and it can be stated as being equal to one-half the difference between the largest and smallest principal stresses, acting on the plane that bisects the angle between the directions of the largest and smallest principal stresses.
|}
Stress deviator tensor
The stress tensor can be expressed as the sum of two other stress tensors:
a mean hydrostatic stress tensor or volumetric stress tensor or mean normal stress tensor, , which tends to change the volume of the stressed body; and
a deviatoric component called the stress deviator tensor, , which tends to distort it.
So
where is the mean stress given by
Pressure is generally defined as negative one-third of the trace of the stress tensor, minus any contribution from the divergence of the velocity, i.e.
where is a proportionality constant (i.e. the first of the Lamé parameters), is the divergence operator, is the k-th Cartesian coordinate, is the flow velocity and is the k-th Cartesian component of .
The deviatoric stress tensor can be obtained by subtracting the hydrostatic stress tensor from the Cauchy stress tensor:
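In index notation, the standard form of this decomposition is:

```latex
s_{ij} = \sigma_{ij} - \frac{\sigma_{kk}}{3}\, \delta_{ij} ,
\qquad \text{with mean stress} \quad
\pi = \tfrac{1}{3}\, \sigma_{kk} = \tfrac{1}{3} I_1 .
```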
Invariants of the stress deviator tensor
As it is a second order tensor, the stress deviator tensor also has a set of invariants, which can be obtained using the same procedure used to calculate the invariants of the stress tensor. It can be shown that the principal directions of the stress deviator tensor are the same as the principal directions of the stress tensor . Thus, the characteristic equation is
where , and are the first, second, and third deviatoric stress invariants, respectively. Their values are the same (invariant) regardless of the orientation of the coordinate system chosen. These deviatoric stress invariants can be expressed as a function of the components of or its principal values , , and , or alternatively, as a function of or its principal values , , and . Thus,
Because its trace is zero, the stress deviator tensor describes a state of pure shear.
A quantity called the equivalent stress or von Mises stress is commonly used in solid mechanics. The equivalent stress is defined as
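A standard form of the von Mises (equivalent) stress, written in terms of the second deviatoric invariant and the principal stresses, is:

```latex
\sigma_\mathrm{vM} = \sqrt{3 J_2}
= \sqrt{ \tfrac{1}{2} \left[ \left( \sigma_1 - \sigma_2 \right)^{2}
+ \left( \sigma_2 - \sigma_3 \right)^{2}
+ \left( \sigma_3 - \sigma_1 \right)^{2} \right] } .
```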
Octahedral stresses
Considering the principal directions as the coordinate axes, a plane whose normal vector makes equal angles with each of the principal axes (i.e. having direction cosines equal to ) is called an octahedral plane. There are a total of eight octahedral planes (Figure 6). The normal and shear components of the stress tensor on these planes are called the octahedral normal stress and octahedral shear stress , respectively. The octahedral plane passing through the origin is known as the π-plane (this π is not to be confused with the mean stress, denoted by π in the section above). On the π-plane, .
Knowing that the stress tensor of point O (Figure 6) in the principal axes is
the stress vector on an octahedral plane is then given by:
The normal component of the stress vector at point O associated with the octahedral plane is
which is the mean normal stress or hydrostatic stress. This value is the same in all eight octahedral planes.
The shear stress on the octahedral plane is then
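Standard expressions for the octahedral normal and shear stresses in terms of the principal stresses are:

```latex
\sigma_\mathrm{oct} = \tfrac{1}{3} \left( \sigma_1 + \sigma_2 + \sigma_3 \right) = \tfrac{1}{3} I_1 ,
\qquad
\tau_\mathrm{oct} = \tfrac{1}{3} \sqrt{ \left( \sigma_1 - \sigma_2 \right)^{2}
+ \left( \sigma_2 - \sigma_3 \right)^{2}
+ \left( \sigma_3 - \sigma_1 \right)^{2} }
= \sqrt{ \tfrac{2}{3} J_2 } .
```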
See also
Cauchy momentum equation
Critical plane analysis
Stress–energy tensor
Notes
References
Tensor physical quantities
Solid mechanics
Continuum mechanics
Structural analysis | 0.764568 | 0.997137 | 0.76238 |
Gifted education | Gifted education (also known as gifted and talented education (GATE), talented and gifted programs (TAG), or G&T education) is a form of education used for children who have been identified as gifted or talented.
The main approaches to gifted education are enrichment and acceleration. An enrichment program teaches additional, deeper material, but keeps the student progressing through the curriculum at the same rate as other students. For example, after the gifted students have completed the normal work in the curriculum, an enrichment program might provide them with additional information about a subject. An acceleration program advances the student through the standard curriculum faster than normal. This is normally done by having the students skip one to two grades.
Being identified as gifted and talented usually means scoring in the top percentiles on IQ tests. The percentage of students selected varies, with 10% or fewer generally being selected for gifted education programs. However, a child with distinctly gifted abilities would be expected to score in the top one percent of students.
Forms
Attempts to provide gifted education can be classified in several ways. Most gifted students benefit from a combination of approaches at different times.
Acceleration
People are advanced to a higher-level class covering material more suited to their abilities and preparedness. This may take the form of skipping grades or completing the normal curriculum in a shorter-than-normal period of time ("telescoping"). Subject acceleration (also called partial acceleration) is a flexible approach that can advance a student in one subject, such as mathematics or language, without changing other studies, such as history or science. This type of acceleration is usually based upon achievement testing, rather than IQ.
Some colleges offer early entrance programs that give gifted younger students the opportunity to attend college early. In the U.S., many community colleges allow advanced students to enroll with the consent of school officials and the pupil's parents.
Acceleration presents gifted children with academic material from established curricula that is commensurate with their ability and preparedness, and for this reason is a low-cost option from the perspective of the school. This may result in a small number of children taking classes targeted at older children. For the majority of gifted students, acceleration is beneficial both academically and socially. Whole grade skipping is considered rapid acceleration. Some advocates have argued that the disadvantages of being retained in a standard mixed-ability classroom are substantially worse than any shortcomings of acceleration. For example, psychologist Miraca Gross reports: "the majority of these children [retained in a typical classroom] are socially rejected [by their peers with typical academic talents], isolated, and deeply unhappy. Children of IQ 180+ who are retained in the regular classroom are even more seriously at risk and experience severe emotional distress." These accelerated children should be placed together in one class if possible. Research suggests that acceleration might have an impact long after students graduate from high school. For example, one study shows that high-IQ individuals who experienced full-grade acceleration earned higher incomes as adults.
Cluster grouping
Cluster grouping is the gathering of four to six gifted and talented and/or high achieving students in a single classroom for the entire school day. Cluster teachers are specially trained in differentiating for gifted learners. Clusters are typically used in upper elementary grades. Within a cluster group, instruction may include enrichment and extensions, higher-order thinking skills, pretesting and differentiation, compacting, an accelerated pace, and more complexity in content.
Colloquium
Like acceleration, colloquium provides advanced material for high school students. In colloquium, students take Advanced Placement (AP) courses. However, colloquium is different from AP classes because students are usually given more projects than students in AP classes. Students in colloquium also generally study topics more in depth and sometimes in a different way than students enrolled in AP classes do. Colloquium is a form that takes place in a traditional public school. In colloquium, subjects are grouped together. Subjects are taught at different times of the day; however, usually what is being taught in one subject will connect with another subject. For example, if the students are learning about colonial America in History, then they might also be analyzing text from The Scarlet Letter in English. Some schools may only have colloquium in certain subjects. In schools where colloquium is only offered in English and History, colloquium students usually take Advanced Placement courses in math and science and vice versa.
Compacting
In compacting, the regular school material is compacted by pretesting the student to establish which skills and content have already been mastered. Pretests can be presented on a daily basis (pupils doing the most difficult items on a worksheet first and skipping the rest if they are performed correctly), or before a week or longer unit of instructional time. When a student demonstrates an appropriate level of proficiency, further repetitive practice can be safely skipped, thus reducing boredom and freeing up time for the student to work on more challenging material.
Enrichment
On the primary school level, students spend all class time with their peers, but receive extra material to challenge them. Enrichment may be as simple as a modified assignment provided by the regular classroom teacher, or it might include formal programs such as Odyssey of the Mind, Destination Imagination or academic competitions such as Brain Bowl, Future Problem Solving, Science Olympiad, National History Day, science fairs, or spelling bees. Programmes of enrichment activities may also be organised outside the school day (e.g. the ASCEND project in secondary science education). This work is done in addition to, and not instead of, any regular school work assigned. Critics of this approach argue that it requires gifted students to do more work instead of the same amount at an advanced level. On the secondary school level sometimes an option is to take more courses such as English, Spanish, Latin, philosophy, or science or to engage in extracurricular activities. Some perceive there to be a necessary choice between enrichment and acceleration, as if the two were mutually exclusive alternatives. However, other researchers see the two as complements to each other.
Full-time separate classes or schools
Some gifted students are educated in either a separate class or a separate school. These classes and schools are sometimes called "congregated gifted programs" or "dedicated gifted programs."
Some independent schools have a primary mission to serve the needs of the academically gifted. Such schools are relatively scarce and often difficult for families to locate. One resource for locating gifted schools in the United States can be found on the National Association for Gifted Children's resource directory accessible through their home page. Such schools often need to work to guard their mission from occasional charges of elitism, support the professional growth and training of their staff, write curriculum units that are specifically designed to meet the social, emotional, and academic talents of their students, and educate their parent population at all ages.
Some gifted and talented classes offer self-directed or individualized studies, where the students lead a class themselves and decide on their own tasks, tests, and all other assignments. These separate classes or schools tend to be more expensive than regular classes, due to smaller class sizes and lower student-to-teacher ratios. Not-for-profit (non-profit) schools often can offer lower costs than for-profit schools. Either way, they are in high demand and parents often have to pay part of the costs.
Hobby
Activities such as reading, creative writing, sport, computer games, chess, music, dance, foreign languages, and art give an extra intellectual challenge outside of school hours.
Homeschooling
An umbrella term encompassing a variety of educational activities conducted at home, including those for gifted children: part-time schooling; school at home; classes, groups, mentors and tutors; and unschooling. In many US states, the population of gifted students who are being homeschooled is rising quite rapidly, as school districts responding to budgetary issues and standards-based policies are cutting what limited gifted education programs remain in existence, and families seek educational opportunities that are tailored to each child's unique needs.
Pull-out
Gifted students are pulled out of a heterogeneous classroom to spend a portion of their time in a gifted class. These programs vary widely, from carefully designed half-day academic programs to a single hour each week of educational challenges. Generally, these programs are ineffective at promoting academic advancement unless the material covered contains extensions and enrichment to the core curriculum. The majority of pull-out programs include an assortment of critical thinking drills, creative exercises, and subjects typically not introduced in standard curricula. Much of the material introduced in gifted pull-out programs deals with the study of logic, and its application to fields ranging from philosophy to mathematics. Students are encouraged to apply these empirical reasoning skills to every aspect of their education both in and outside of class.
Self-pacing
Self-pacing methods, such as the Montessori Method, use flexible grouping practices to allow children to advance at their own pace. Self-pacing can be beneficial for all children and is not targeted specifically at those identified as gifted or talented, but it can allow children to learn at a highly accelerated rate. Directed Studies are usually based on self-pacing.
Summer enrichment
These offer a variety of courses that mainly take place in the summer. Summer schools are popular in the United States. Entrance fees are required for such programs, and programs typically focus on one subject, or class, for the duration of the camp.
Several examples of this type of program are:
Center for Talented Youth
CTYI
GERI: Gifted Education Resource Institute, Purdue University
Johns Hopkins University
Center for Talent Development, Northwestern University
Within the United States, in addition to programs designed by the state, some counties also choose to form their own Talented and Gifted Programs. Sometimes this means that an individual county will form its own TAG program; sometimes several counties will come together if not enough gifted students are present in a single county. Generally, a TAG program focuses on a specific age group, particularly the local TAG programs. This could mean elementary age, high school age, or by years such as ages 9 through 14.
These classes are generally organized so that students have the opportunity to choose several courses they wish to participate in. Courses offered often vary between subjects, but are not typically strictly academically related to that subject. For example, a TAG course that could be offered in history could be the students learning about a certain event and then acting it out in a performance to be presented to parents on the last night of the program. These courses are designed to challenge the students to think in new ways and not merely to be lectured as they are in school.
Identifying gifted children
The term "Gifted Assessment" is typically applied to a process of using norm-referenced psychometric tests administered by a qualified psychologist or psychometrist with the goal of identifying children whose intellectual functioning is significantly advanced as compared to the appropriate reference group (i.e., individuals of their age, gender, and country). The cut-off score for differentiating this group is usually determined by district school boards and can differ slightly from area to area, however, the majority defines this group as students scoring in the top 2 percentiles on one of the accepted tests of intellectual (cognitive) functioning or IQ. Some school boards also require a child to demonstrate advanced academic standing on individualized achievement tests and/or through their classroom performance. Identifying gifted children is often difficult but is very important because typical school teachers are not qualified to educate a gifted student. This can lead to a situation where a gifted child is bored, underachieves and misbehaves in class.
Individual IQ testing is usually the optimal method to identify giftedness among children. However it does not distinguish well among those found to be gifted. Therefore, examiners prefer using a variety of tests to first identify giftedness and then further differentiate. This is often done by using individual IQ tests and then group or individual achievement tests. There is no standard consensus on which tests to use, as each test is better suited for a certain role.
The two most popular tests for identifying giftedness in the school-age population are the WISC-IV and the SB5. The WIAT-III is considered the most popular academic achievement test to determine a child's aggregate learned knowledge.
Although a newer WISC version, the WISC-V, was developed in late 2014, the WISC-IV is still the most commonplace test. It has been translated into several languages including Spanish, Portuguese, Norwegian, Swedish, French, German, Dutch, Japanese, Chinese, Korean, and Italian. The WISC-IV assesses a child's cognitive abilities, with respect to age group. Coupled with results from other tests, the WISC accurately depicts a child's developmental and psychological needs for the future.
The SB5 is an intelligence test that determines cognitive abilities and can be administered to persons in virtually any age group. It assesses a series of intelligence indicators including fluid reasoning, general knowledge, quantitative reasoning, spatial processing, and working memory. The SB5 makes use of both verbal and nonverbal testing.
The WIAT-III cannot assess all components of learned knowledge, but does give an understanding of a child's ability to acquire skills and knowledge through formal education. This test measures aspects of the learning process that take place in a traditional school setting in reading, writing, math, and oral language. Although the WIAT-III tests a wide range of material, it is designed primarily to assess children's learning before adolescence.
Versions of these tests exist for each age group. However, it is recommended to begin assessment as early as possible, with approximately eight years of age being the optimal time to test. Testing allows identification of students' specific needs and helps in planning their education early.
Out-of-group achievement testing (such as taking the SAT or ACT early) can also help to identify these students early on (see SMPY) and is implemented by various talent search programs in use by education programs. Out-of-group testing can also help to differentiate children who have scored in the highest percentiles in a single IQ test.
Testing alone cannot accurately identify every gifted child. Teacher and parent nominations are essential additions to the objective information provided by grades and scores. Parents are encouraged to keep portfolios of their children's work, and documentation of their early signs of gifted behavior.
Studies of giftedness
The development of early intelligence tests by Alfred Binet led to the Stanford-Binet IQ test developed by Lewis Terman. Terman began long-term studies of gifted children with a view to checking if the popular view "early ripe, early rot" was true. The Terman Genetic Studies of Genius longitudinal study has been described by successor researchers who conducted the study after Terman's death and also by an independent researcher who had full access to the study files.
Modern studies by James and Kulik conclude that gifted students benefit least from working in a mixed-level class, and benefit most from learning with other similarly advanced students in accelerated or enriched classes.
Definition of giftedness
Educational authorities differ on the definition of giftedness: even when using the same IQ test to define giftedness, they may disagree on what gifted means. One authority may include the top two percent of the population, another the top five percent, and the reference population may be within a state, district, or school. Within a single school district, there can be substantial differences in the distribution of measured IQ. The IQ for the top percentile at a high-performing school may be quite different from that at a lower-performing school.
Peter Marshall obtained his doctorate in 1995, for research carried out in this field in the years from 1986. At the time, he was the first Research Director of the Mensa Foundation for Gifted Children. His work challenged the difficult childhood hypothesis, concluding that gifted children, by and large, do not have any more difficult childhoods than mainstream children and, in fact, that where they do, their giftedness probably helps them cope better than mainstream children and provided the material for his subsequent book Educating a Gifted Child.
In Identifying Gifted Children: A Practical Guide, Susan K. Johnsen (2004) explains that gifted children all exhibit the potential for high performance in the areas included in the United States federal definition of gifted and talented students:
The National Association for Gifted Children in the United States defines giftedness as:
This definition has been adopted in part or completely by the majority of the states in the United States. Most have some definition similar to that used in the State of Texas, whose definition states:
The major characteristics of these definitions are (a) the diversity of areas in which performance may be exhibited (e.g., intellectual, creative, artistic, leadership, academic), (b) the comparison with other groups (e.g., those in general education classrooms or of the same age, experience, or environment), and (c) the use of terms that imply a need for development of the gift (e.g., capability and potential).
Reliance on IQ
In her book, Identifying Gifted Children: A Practical Guide, Susan K. Johnsen (2004) writes that schools should use a variety of measures of students' capability and potential when identifying gifted children. These measures may include portfolios of student work, classroom observations, achievement measures, and intelligence scores. Most educational professionals accept that no single measure can be used in isolation to accurately identify every gifted child.
Even if the notion of IQ is generally useful for identifying academically talented students who would benefit from further services, the question of the cutoff point for giftedness is still important. As noted above, different authorities often define giftedness differently.
History
Classical era to Renaissance
Gifted and talented education dates back thousands of years. Plato (c. 427–c. 347 BCE) advocated providing specialized education for intellectually gifted young men and women. In China's Tang dynasty (618–907 CE), child prodigies were summoned to the imperial court for specialized education. Throughout the Renaissance, those who exhibited creative talent in art, architecture, and literature were supported by both the government and private patronage.
Francis Galton
Francis Galton conducted one of the earliest Western studies of human intellectual abilities. Between 1888 and 1894, Galton tested more than 7,500 individuals to measure their natural intellectual abilities. He found that if a parent deviates from the norm, so will the child, but to a lesser extent than the parent. This was one of the earliest observed examples of regression toward the mean. Galton believed that individuals could be improved through interventions in heredity, a movement he named eugenics. He categorized individuals as gifted, capable, average, or degenerate, and he recommended breeding between the first two categories, and forced abstinence for the latter two. His term for the most intelligent and talented people was "eminent". After studying England's most prominent families, Galton concluded that one's eminence was directly related to the individual's direct line of heredity.
Lewis Terman
At Stanford University in 1916, Lewis Terman adapted Alfred Binet's Binet-Simon intelligence test into the Stanford-Binet test, and introduced intelligence quotient (IQ) scoring for the test. According to Terman, the IQ was one's mental age compared to one's chronological age, based on the mental age norms he compiled after studying a sample of children. He defined intelligence as "the ability to carry on abstract thinking". During World War I Terman was a commissioned officer of the United States Army, and collaborated with other psychologists in developing intelligence tests for new recruits to the armed forces. For the first time, intelligence testing was given to a wide population of drafted soldiers.
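Terman's ratio IQ can be summarized as mental age divided by chronological age, multiplied by 100. The following minimal sketch illustrates the arithmetic only; the helper function and the ages used are invented for illustration and are not drawn from Terman's data.

```python
# Ratio IQ as used with the early Stanford-Binet:
#   IQ = (mental age / chronological age) * 100
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return 100.0 * mental_age / chronological_age

print(ratio_iq(10, 8))  # an 8-year-old performing at a 10-year-old level -> 125.0
```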
After the war, Terman undertook an extensive longitudinal study of 643 children in California who scored at IQ 140 or above, the Genetic Studies of Genius, continuing to evaluate them throughout their lives. Subjects of these case studies were called "Termites", and the studies followed up with the children in 1921, and again in 1930, 1947, and, after Terman's death, in 1959. Terman's studies have to date been the most extensive on high-functioning children, and are still quoted in psychological literature today. Terman claimed to have disproven common misconceptions, such as that highly intelligent children were prone to ill physical and mental health, that their intelligence burned out early in their lives, or that they either achieved greatly or underachieved.
Leta Hollingworth
A professional colleague of Terman's, Leta Hollingworth was the first in the United States to study how best to serve students who showed evidence of high performance on tests. Although recognizing Terman's and Galton's beliefs that heredity played a vital role in intelligence, Hollingworth gave similar credit to home environment and school structure. Hollingworth worked to dispel the pervasive belief that "bright children take care of themselves" and emphasized the importance of early identification, daily contact, and grouping gifted children with others with similar abilities. Hollingworth performed an 18-year-long study of 50 children in New York City who scored 155 or above on the Stanford-Binet, and studied smaller groups of children who scored above 180. She also ran a school in New York City for bright students that employed a curriculum of student-led exploration, as opposed to a teacher providing students with a more advanced curriculum they would encounter later in life.
Cold War
One unforeseen result of the launch of Sputnik by the Soviet Union was the immediate emphasis on education for bright students in the United States, and this settled the question of whether the federal government should get involved in public education at all. The National Defense Education Act (NDEA) was passed by Congress in 1958 with $1 billion US to bolster science, math, and technology in public education. The National Defense Education Act would lead to other achievements, such as preparing the ground for the moon landing and the implementation of Advanced Placement (A.P.) coursework. Educators immediately pushed to identify gifted students and serve them in schools. Students chosen for gifted services were given intelligence tests with a strict cutoff, usually at 130, which meant that students who scored below 130 were not identified.
Marland Report
The impact of the NDEA was evident in schools for years after, but a study on how effectively education was meeting the needs of gifted students was initiated by the United States Department of Education in 1969. The Marland Report, completed in 1972, for the first time presented a general definition of giftedness, and urged districts to adopt it. The report also allowed students to show high functioning on talents and skills not measurable by an intelligence test. The Marland Report defined gifted as
The report's definition continues to be the basis of the definition of giftedness in most districts and states.
A Nation at Risk
In 1983, the result of an 18-month-long study of secondary students was published as A Nation at Risk, and claimed that students in the United States were no longer receiving superior education, and in fact, could not compete with students from other developed countries in many academic exercises. One of the recommendations the book made was to increase services to gifted education programs, citing curriculum enrichment or acceleration specifically. The US federal government was also urged to create standards for the identification and servicing of gifted students.
Jacob Javits Gifted and Talented Students Education Act
The Jacob Javits Gifted and Talented Students Education Act was passed in 1988 as part of the Elementary and Secondary Education Act (ESEA). Instead of funding district-level gifted education programs, the Javits Act has three primary components: the research of effective methods of testing, identification, and programming, which is performed at the National Research Center on the Gifted and Talented; the awarding of grants to colleges, states, and districts that focus on underrepresented populations of gifted students; and grants awarded to states and districts for program implementation.
Annual funding for the grants must be passed by the US Congress, and totaled $9.6 million US in 2007, but the money is not guaranteed. While he was president, George W. Bush proposed eliminating the funding in every year of his term, but members of Congress overrode the president to make sure the grant money was distributed.
No Child Left Behind
The most recent US federal education initiative was signed into law in 2002. The goal of No Child Left Behind (NCLB) is to bring the proficiency of all students to grade level, but critics note it does not address the needs of gifted students who perform above grade level. The act imposes punishments on schools, administrators, and teachers when students do not meet the required achievement levels, but does not set any achievement standards for high-functioning students, forcing schools and teachers to spend their time with low-achieving students. An article in The Washington Post declared, "The unmistakable message to teachers – and to students – is that it makes no difference whether a child barely meets the proficiency standard or far exceeds it." Gifted services have been recently eroding as a result of the new legislation, according to a 2006 article in The New York Times.
A Nation Deceived
In 2004, the John Templeton Foundation sponsored a report titled A Nation Deceived: How Schools Hold Back America's Brightest Students, highlighting the disparity between the research on acceleration (which generally supports it, both from an academic and a psychological point of view), and the educational practices in the US that are often contrary to the conclusions of that research. The Institute for Research and Policy on Acceleration (IRPA) was established in 2006 at The Connie Belin and Jacqueline N. Blank International Center for Gifted Education and Talent Development at the University of Iowa College of Education through the support of the John Templeton Foundation following the publication of this report.
Global implementation
Australia
Public gifted education in Australia varies significantly from state to state. New South Wales has 95 primary schools with opportunity classes catering to students in year 5 and 6. New South Wales also has 17 fully selective secondary schools and 25 partially selective secondary schools. Western Australia has selective programs in 17 high schools, including Perth Modern School, a fully selective school. Queensland has three Queensland Academies catering to students in years 10, 11 and 12. South Australia has programs in three public high schools catering to students in years 8, 9 and 10, including Glenunga International High School. The Victorian Government commissioned a parliamentary inquiry into the education of gifted and talented children in 2012. One recommendation from the inquiry was for the Victorian Government to list the schools with programs, but the government has not implemented this recommendation. Some private schools have developed programs for gifted children.
Brazil
The Centre for Talent and Potential Development (CEDET) is a special education center created by Zenita Guenther in Lavras, MG, Brazil, in 1993. CEDET is run by the Lavras School System with technical and civil responsibility delegated to the Association of Parents and Friends for Supporting Talent (ASPAT). Its main goal is to cultivate the proper physical and social environment for complementing and supplementing educational support to the gifted and talented student. At present, there are 512 gifted students age 7 to 17 enrolled at CEDET, around 5% of Lavras Basic School population. The students come from thirteen municipal schools, eight state schools and two private schools, plus a group of students from nearby communities brought in by their families.
Canada
In Alberta, the Calgary Board of Education (CBE) has various elementary, middle and high schools offering the GATE Program, standing for Gifted and Talented Education, for grades 4–12, or divisions 2–4. The program is for students who, through an IQ test, ranked in the very superior range, falling into the gifted or genius classifications. For each of the three divisions, there are two schools offering GATE, one for the north side of the city (CBE areas I, II and III) and one for the south side (CBE areas IV and V). For Division 2, or grades 4–6, it is available at Hillhurst Elementary School for the north and Nellie McClung Elementary School for the south. For Division 3, or grades 7–9, it is available at Queen Elizabeth High School for the north and John Ware Junior High School for the south. For Division 4, or grades 10–12, Queen Elizabeth High School, a joint junior high–senior high, offers it for the north and Henry Wise Wood Senior High School offers it for the south.
GATE classes go more in-depth and cover some curriculum for the following grade level, with tougher assignments and a faster learning pace. Students benefit from being around other students like them. These students attend the school alongside regular students and those in other programs (e.g. International Baccalaureate and Advanced Placement).
In the 2014–2015 school year, students from grades 4–7 in the south will be attending Louis Riel Junior High School, already home to a science program, and students in the regular program there will be moved to Nellie McClung and John Ware. Students at John Ware will be phased out: eighth grade GATE will end in June 2015, and ninth grade GATE will end in 2016, while GATE will be expanding to grade 9 at Louis Riel by September 2016. Prior to John Ware, the GATE program was housed at Elboya. A large number of teachers from Nellie McClung and John Ware will be moving to the new location, which was picked to deal with student population issues and to concentrate resources. Notable alumni of the CBE GATE Program include the 36th mayor of Calgary, Naheed Nenshi, from Queen Elizabeth High School.
Westmount Charter School in Calgary is a K–12 charter school specifically dedicated to gifted education.
In British Columbia, the Vancouver Board of Education's gifted program is called Multi-Age Cluster Class or MACC. This is a full-time program for highly gifted elementary students from grades 4 to 7. Through project-based learning, students are challenged to use higher order thinking skills. Another focus of the program is autonomous learning; students are encouraged to self-monitor, self-reflect and seek out enrichment opportunities. Entrance to the program is initiated through referral followed by a review by a screening committee. IQ tests are used but not exclusively. Students are also assessed by performance, cognitive ability tests, and motivation. There are four MACCs in Vancouver: grade 4/5 and grade 6/7 at Sir William Osler Elementary, grade 5/6/7 at Tecumseh Elementary, and a French-immersion grade 5/6/7 at Kerrisdale Elementary.
On a smaller scale, in Ontario, the Peel District School Board operates its Regional Enhanced Program at The Woodlands School, Lorne Park Secondary School, Glenforest Secondary School, Heart Lake Secondary School and Humberview Secondary School to provide students an opportunity to develop and explore skills in a particular area of interest. Students identified as gifted (which the PDSB classifies as "enhanced") may choose to attend the nearest of these high schools instead of their assigned home high school. In the Regional Enhanced Program, enhanced students take core courses (primarily, but not limited to, English, mathematics, and the sciences) in an environment surrounded by fellow enhanced peers. The classes often contain modified assignments that encourage students to be creative.
Hong Kong
Definition of giftedness
The Education Commission Report No. 4 issued in 1990 recommended a policy on gifted education for schools in Hong Kong and suggested that a broad definition of giftedness using multiple criteria should be adopted.
Gifted children generally have exceptional achievement or potential in one or more of the following domains:
a high level of measured intelligence;
specific academic aptitude in a subject area;
creative thinking;
superior talent in visual and performing arts;
natural leadership of peers; and
psychomotor ability – outstanding performance or ingenuity in athletics, mechanical skills or other areas requiring gross or fine motor coordination.
The multi-dimensional aspect of intelligence has been promoted by Professor Howard Gardner from the Harvard Graduate School of Education in his theory of multiple intelligences. In his introduction to the tenth anniversary edition of his classic work Frames of Mind, he says:
In the heyday of the psychometric and behaviorist eras, it was generally believed that intelligence was a single entity that was inherited; and that human beings – initially a blank slate – could be trained to learn anything, provided that it was presented in an appropriate way. Nowadays an increasing number of researchers believe precisely the opposite; that there exists a multitude of intelligences, quite independent of each other; that each intelligence has its own strengths and constraints; that the mind is far from unencumbered at birth; and that it is unexpectedly difficult to teach things that go against early 'naive' theories or that challenge the natural lines of force within an intelligence and its matching domains. (Gardner 1993: xxiii)
Howard Gardner initially formulated a list of seven intelligences, but later added an eighth, that are intrinsic to the human mind: linguistic, logical/mathematical, visual/spatial, musical, bodily kinesthetic, intrapersonal, interpersonal, and naturalist intelligences.
It has become widely accepted at both local and international scales to adopt a broad definition of giftedness using multiple criteria to formulate gifted education policy.
Mission and principles
The mission of gifted education is to systematically and strategically explore and develop the potential of gifted students. Gifted learners are to be provided with opportunities to receive education at appropriate levels in a flexible teaching and learning environment.
The guiding principles for gifted education in Hong Kong are:
Nurturing multiple intelligences as a requirement of basic education for all students and an essential part of the mission for all schools
The needs of gifted children are best met within their own schools though it is recognized that opportunities to learn with similarly gifted students are important. Schools have an obligation to provide stimulating and challenging learning opportunities for their students
The identification of gifted students should recognize the breadth of multiple intelligences
Schools should ensure that the social and emotional, as well as the intellectual, needs of gifted children are recognized and met.
Framework
Based on these guiding principles, a three-tier gifted education framework was adopted in 2000. Levels 1 & 2 are recognised as being school-based whilst Level 3 is the responsibility of the HKAGE. The intention is that Level 1 serves the entire school population, irrespective of ability, that Level 2 deals with between 2–10% of the ability group, and that Level 3 caters for the top 2% of students.
Level 1:
A. To immerse the core elements advocated in gifted education, i.e. higher-order thinking skills, creativity and personal-social competence, in the curriculum for ALL students;
B. To differentiate teaching through appropriate grouping of students to meet the different needs of the groups with enrichment and extension of curriculum across ALL subjects in regular classrooms.
Level 2:
C. To conduct pull-out programmes of generic nature outside the regular classroom to allow systematic training for a homogeneous group of students (e.g. Creativity training, leadership training, etc.);
D. To conduct pull-out programme in specific areas (e.g. Maths, Arts, etc.) outside the regular classroom to allow systematic training for students with outstanding performance in specific domains.
Level 3:
E. Tertiary institutions and other educational organizations / bodies, such as the Hong Kong Academy for Gifted Education and other universities in Hong Kong to provide a wide and increasing range of programmes for gifted students
India
In India, Jnana Prabodhini Prashala, started in 1968, is probably the first school for gifted education. Its motto is "motivating intelligence for social change." The school, located in central Pune, admits 80 students each year after thorough testing, which includes two written papers and an interview. The psychology department of Jnana Prabodhini has worked on J. P. Guilford's model of intelligence.
Iran
The National Organization for Development of Exceptional Talents (NODET), also known as SAMPAD, runs national middle and high schools in Iran developed specifically for exceptionally talented students. NODET was established in 1976 (as NIOGATE) and re-established in 1987.
Admission to NODET schools is selective and based on a comprehensive nationwide entrance examination procedure.
Every year thousands of students apply to enter the schools, from which less than 5% are chosen for the 99 middle schools and 98 high schools within the country. All applicants must have a minimum GPA of 19 (out of 20) for attending the entrance exam. In 2006, 87,081 boys and 83,596 girls from 56 cities applied, and 6,888 students were accepted for the 2007 middle school classes. The admission process is much more selective in big cities like Tehran, Isfahan, Mashhad and Karaj in which less than 150 students are accepted after two exams and interviews, out of over 50,000 applicants.
The top NODET (and Iranian) schools are Allameh Helli High School and Shahid Madani High School (in Tabriz), Farzanegan High School located in Tehran, Shahid Ejei High School located in Isfahan, Shahid Hashemi Nejad High School located in Mashhad and Shahid Soltani School located in Karaj. Courses taught in NODET schools are college-level in fields such as biology, chemistry, mathematics, physics and English. The best teachers from the ministry of education are chosen mainly by the school's principal and faculty to teach at NODET schools. Schools mainly have only two majors (normal schools have three majors), math/physics and experimental sciences (like math/physics but with biology as the primary course). Even though social sciences are taught, there is much less emphasis on these subjects due to the lack of interest by both students and the organization.
Norway
Norway has no centre for gifted or talented children or youth. However, there is the privately run Barratt Due Institute of Music which offers musical kindergarten, evening school and college for highly talented young musicians. There is also the public secondary school for talents in ballet at Ruseløkka school in Oslo, which admits the top 15 dancers. In athletics, the privately run Norwegian Elite Sports Gymnasium (NTG) offers secondary school for talents in five locations in Norway. This account might not be complete.
Republic of Ireland
The Centre for the Talented Youth of Ireland has run in Dublin City University since 1992.
South Korea
Following the Gifted Education Promotion Law in the year 2000, the Ministry of Education, Science, and Technology (MEST) founded the National Research Center for Gifted and Talented Education (NRCGTE) in 2002 to ensure effective implementation of gifted education research, development, and policy. The center is managed by the Korean Educational Development Institute (KEDI). Presently twenty-five universities conduct gifted and talented education research in some form; for example, Seoul National University operates a Science-gifted Education Center and KAIST operates the Global Institute For Talented Education (GIFTED); other bodies include the Korean Society for the Gifted and Talented and the Korean Society for the Gifted.
Education for the scientifically gifted in Korea can be traced back to the 1983 government founding of Gyeonggi Science High School. Following three later additions (Korea Science Academy of KAIST, Seoul Science High School and Daegu Science High School), approximately 1,500, or 1 in 1,300 (0.08 percent) of high school students are currently enrolled among its four gifted academies. By 2008, about 50,000, or 1 in 140 (0.7 percent) of elementary and middle school students participated in education for the gifted. In 2005, a program was undertaken to identify and educate gifted children of socioeconomically underprivileged people. Since then, more than 1,800 students have enrolled in the program.
Gradually, the focus has expanded over time to cover informatics, arts, physical education, creative writing, humanities, and social sciences, leading to the 2008 creation of the government-funded Korean National Institute for the Gifted Arts. To meet the need for trained professional educators, teachers undergo basic training (60 hours), advanced training (120 hours), and overseas training (60 hours) to acquire the skills necessary to teach gifted youth.
Singapore
In Singapore, the Gifted Education Programme (GEP) was introduced in 1984 and is offered in the upper primary years (Primary 4–6, ages 10–12). Pupils undergo rigorous testing in Primary 3 (age 9) for admission into the GEP for Primary 4 to 6. About 1% of the year's cohort are admitted into the programme. The GEP is offered at selected schools, meaning that these pupils attend school alongside their peers in the mainstream curriculum but attend separate classes for certain subjects. As of the 2016 academic year, there are nine primary schools which offer the GEP.
Slovakia
The School for Gifted Children in Bratislava was established in 1998. It offers education known as APROGEN—Alternative Program for Gifted Education.
Turkey
The UYEP Research and Practice Center offers enriched programs for gifted students at Anadolu University. The center was founded by Ugur Sak in 2007. ANABILIM Schools have special classrooms for gifted and talented students. These schools apply the differentiated curriculum in the sciences, mathematics, language arts, social studies, and the arts for K8 gifted and talented students and enriched science and project-based learning in high school. There are over 200 science and art centers operated by the Ministry of Education that offer special education for gifted and talented students throughout the country. The Ministry uses the Anadolu Sak Intelligence Scale (ASIS) and the Wechsler Scales to select students for these centers. Four universities offer graduate programs in gifted education.
United Kingdom
In England, schools are expected to identify 5–10% of students who are gifted and/or talented in relation to the rest of the cohort in that school—an approach that is pragmatic (concerned with ensuring schools put in place some provision for their most able learners) rather than principled (in terms of how to best understand giftedness). The term gifted applies to traditional academic subjects, and talented is used in relation to high levels of attainment in the creative arts and sports. The National Academy for Gifted and Talented Youth ran from 2002 to 2007 at the University of Warwick. Warwick University decided not to reapply for the contract to run NAGTY in 2007, instead introducing its own programme, the International Gateway for Gifted Youth in 2008. In January 2010, the government announced that NAGTY was to be scrapped the following month.
United States
In the United States, each state department of education determines whether the needs of gifted students will be addressed as a mandatory function of public education. If so, the state determines the definition of which students will be identified and receive services, but may or may not determine how they shall receive services. If a state does not consider gifted education mandatory, individual districts may, so the definition of giftedness varies from state to state and from district to district.
In contrast with special education, gifted education is not regulated on a federal level, although recommendations by the US Department of Education are offered. As such, funding for services is not consistent from state to state, and although students may be identified, the extent to which they receive services can vary widely depending upon a state or district's budget.
Although schools with higher enrollment of minority or low-income students are just as likely to offer gifted programs as other schools, differing enrollment rates across racial and ethnic groups have raised concerns about equity in gifted education in the U.S.
Gifted education programs are also offered at various private schools. For example, the Mirman School caters to children with an IQ of 138 and above and Prep for Prep is focused on students of color.
Commonly used terms
Source: National Association for Gifted Children—Frequently Used Terms in Gifted Education
Affective curriculum: A curriculum that is designed to teach gifted students about emotions, self-esteem, and social skills. This can be valuable for all students, especially those who have been grouped with much older students, or who have been rejected by their same-age, but academically typical, peers.
Differentiation: modification of a gifted student's curriculum to accommodate their specific needs. This may include changing the content or ability level of the material.
Heterogeneous grouping: a strategy that groups students of varied ability, preparedness, or accomplishment in a single classroom environment. Usually this terminology is applied to groupings of students in a particular grade, especially in elementary school. For example, students in fifth grade would be heterogeneously grouped in math if they were randomly assigned to classes instead of being grouped by demonstrated subject mastery. Heterogeneous grouping is sometimes claimed to provide a more effective instructional environment for less prepared students.
Homogeneous grouping: a strategy that groups students by specific ability, preparedness, or interest within a subject area. Usually this terminology is applied to groupings of students in a particular grade, especially in elementary school. For example, students in fifth grade would be homogeneously grouped in math if they were assigned to classes based on demonstrated subject mastery rather than being randomly assigned. Homogeneous grouping can provide more effective instruction for the most prepared students.
Individualized Education Program (IEP): a written document that addresses a student's specific individual needs. It may specify accommodations, materials, or classroom instruction. IEPs are often created for students with disabilities, who are required by law to have an IEP when appropriate. Most states are not required to have IEPs for students who are only identified as gifted. Some students may be intellectually gifted in addition to having learning and/or attentional disabilities, and may have an IEP that includes, for instance, enrichment activities as a means of alleviating boredom or frustration, or as a reward for on-task behavior. In order to warrant such an IEP, a student needs to be diagnosed with a separate emotional or learning disability that is not simply the result of being unchallenged in a typical classroom. These are also known as Individual Program Plans, or IPPs.
Justification
Researchers and practitioners in gifted education contend that, if education were to follow the medical maxim of "first, do no harm," then no further justification would be required for providing resources for gifted education as they believe gifted children to be at-risk. The notion that gifted children are "at-risk" was publicly declared in the Marland Report in 1972:
Three decades later, a similar statement was made by researchers in the field:
Controversies
Controversies concerning gifted education are varied and often highly politicized. They are as basic as agreeing upon the appropriateness of the term gifted or the definition of giftedness. For example, does giftedness refer to performance or potential (such as inherent intelligence)? Many students do not exhibit both at the same time.
Measures of general intelligence also remain controversial. Early IQ tests were notorious for producing higher IQ scores for privileged races and classes and lower scores for disadvantaged subgroups. Although IQ tests have changed substantially over the past half century, and many objections to the early tests have been addressed by 'culture-neutral' tests (such as the Raven test), IQ testing remains controversial. Regardless of the tests used to identify children for gifted programs, many school districts in the United States still have disproportionately more White and Asian American students enrolled in their gifted programs, while Hispanic and African American students are usually underrepresented. However, research shows that this may not be a fault of the tests, but rather a result of the achievement gap in the United States.
Some schools and districts only accept IQ tests as evidence of giftedness. This brings scrutiny to the fact that many affluent families can afford to consult with an educational psychologist to test their children, whereas families with a limited income cannot afford the test and must depend on district resources.
Class and ethnicity
Gifted programs are often seen as being elitist in places where the majority of students receiving gifted services are from a privileged background. Identifying and serving gifted children from poverty presents unique challenges, ranging from emotional issues arising from a family's economic insecurity, to gaps in pre-school cognitive development due to the family's lack of education and time.
In New York City, experience has shown that basing admission to gifted and talented programs on tests of any sort can result in selection of substantially more middle-class and white or Asian students and development of more programs in schools that such students attend.
Emotional aspects
While giftedness is seen as an academic advantage, psychologically it can pose other challenges for the gifted individual. A person who is intellectually advanced may or may not be advanced in other areas. Each individual student needs to be evaluated for physical, social, and emotional skills without the traditional prejudices which prescribe either "compensatory" weaknesses or "matching" advancement in these areas.
It is a common misconception that gifted students are universally gifted in all areas of academics, and these misconceptions can have a variety of negative emotional effects on a gifted child. Unrealistically high expectations of academic success are often placed on gifted students by both parents and teachers. This pressure can cause gifted students to experience high levels of anxiety, to become perfectionists, and to develop a fear of failure. Gifted students come to define themselves and their identity through their giftedness, which can be problematic as their entire self-concept can be shaken when they do not live up to the unrealistically high expectations of others.
A person with significant academic talents often finds it difficult to fit in with schoolmates. These pressures often wane during adulthood, but they can leave a significant negative impact on emotional development.
Social pressures can cause children to "play down" their intelligence in an effort to blend in with other students. "Playing down" is a strategy often used by students with clinical depression and is seen somewhat more frequently in socially acute adolescents. This behavior is usually discouraged by educators when they recognize it. Unfortunately, the very educators who want these children to challenge themselves and to embrace their gifts and talents are often the same people who are forced to discourage them in a mixed-ability classroom, through mechanisms like refusing to call on the talented student in class so that typical students have an opportunity to participate.
Students who are young, enthusiastic or aggressive are more likely to attract attention and to disrupt the class by working ahead, giving the correct answers all the time, asking for new assignments, or finding creative ways to entertain themselves while the rest of the class finishes an assignment. This behavior can be mistaken for ADHD.
Many parents of gifted find that it is the social-emotional aspect of their children's lives that needs support. Schools and talent development programs often focus on academic enrichment rather than providing time for gifted children to have the social interaction with true peers that is required for healthy development. National organizations such as Supporting Emotional Needs of the Gifted (SENG) as well as local organizations, have emerged in an effort to meet these needs.
It can also happen that some unidentified gifted students will get bored in regular class, daydream and lose track of where the class is in a lecture, and the teacher becomes convinced that the student is slow and struggling with the material.
Finally, gifted and talented students are statistically somewhat more likely to be diagnosed with a mental disorder such as bipolar disorder and to become addicted to drugs or alcohol. Gifted and talented students also have a higher chance of having a co-occurring learning disability. Gifted students with learning disabilities are often called twice exceptional. These students can require special attention in school.
Gender
Another area of controversy has been the marginalization of gifted females. Studies have attributed this to self-efficacy, acculturation and biological differences in aptitude between boys and girls for advanced mathematics.
Test preparation
In the United States, particularly in New York City where qualifying children as young as four are enrolled in enriched kindergarten classes offered by the public schools, a test preparation industry has grown up which closely monitors the nature of tests given to prospective students of gifted and talented programs. This can result in admission of significant numbers of students into programs who lack superior natural intellectual talent and exclusion of naturally talented students who did not participate in test preparation or lacked the resources to do so.
It is virtually impossible to train a child for a WISC test or other gifted test. Some websites are known for publishing test questions and answers, although using these is considered illegal since the material is highly confidential. It would also be disastrous if a non-gifted student were placed in a gifted program. Reviewing actual test questions can confuse children and stifle their natural thinking process; however, reviewing similar-style questions is a possibility.
Private gifted assessment is usually expensive and educators recommend that parents take advantage of online screening tests to give a preliminary indication of potential giftedness. Another way to screen for giftedness before requesting a psychological assessment is to do a curriculum-based assessment. Curriculum-based assessment is a form of achievement testing that focuses specifically on what the child has been exposed to in their academic career. It can be done through school or a private educational center. Although this can determine if a child's performance in school potentially signifies giftedness, there are complications. For example, if a child changes school districts or country of residence, the different terminology of curriculum could hold that child back. Secondly, discrepancies between school districts, along with public and private education, create a very wide range of potential knowledge bases.
Scholarly journals
Journal of Advanced Academics
Gifted Child Quarterly
Gifted Education International
Gifted and Talented International
High Ability Studies
Journal for the Education of the Gifted
Roeper Review
See also
List of gifted and talented programmes
Academic elitism
Special education
Rationale for gifted programs
Selective schools
Discrimination of excellence
Notes
Further reading
The latest research about gifted education can be found in the academic journals that specialize in gifted education: Gifted Child Quarterly, Journal of Advanced Academics, Journal for the Education of the Gifted, Roeper Review.
Assouline, S., and A. Lupkowski-Shoplik (2005). Developing Math Talent: A Guide for Educating Gifted And Advanced Learners in Math. Waco, TX: Prufrock Press .
Broecher, J. (2005). Hochintelligente kreativ begaben. LIT-Verlag Muenster, Hamburg 2005 (Application of the High/Scope Approach and Renzulli's Enrichment Triad Model to a German Summer Camp for the Gifted)
Davidson, Jan and Bob, with Vanderkam, Laura (2004). Genius Denied: How to Stop Wasting Our Brightest Young Minds. New York, NY: Simon and Schuster.
Davis, G., and S. Rimm (1989). Education of the Gifted and Talented (2nd edn). Englewood Cliffs, NJ: Prentice Hall.
Hansen, J., and S. Hoover (1994). Talent Development: Theories and Practice. Dubuque, IA: Kendall Hunt.
Johnsen, S. (1999, November/ December). "The top 10 events in gifted education". Gifted Child Today, 22(6), 7.
Frank, Maurice (2013). "High Learning Potential". In Lib Ed, a UK-based (online, formerly paper) magazine opposed to authoritarian schooling.
Newland, T. (1976). The Gifted in Historical Perspective. Englewood Cliffs, NJ: Prentice Hall.
Piirto, J. (1999). Talented Adults and Children: Their Development and Education (3rd edn). Waco, TX: Prufrock Press.
Rogers, Karen B. (2002). Re-forming Gifted Education: How Parents and Teachers Can Match the Program to the Child. Scottsdale, AZ: Great Potential Press.
Winebrenner, Susan. (2001). Teaching Gifted Kids in the Regular Classroom. Minneapolis, MN: Free Spirit Publishing.
U.S. Department of Education, Office of Educational Research and Improvement. (1993). National Excellence: A case for developing America's talent. Washington, DC: U.S. Government Printing Office.
External links
Hoagies' Gifted Education Page
The Relationship of Grouping Practices to the Education of the Gifted and Talented Learner.
Myths About Gifted Students
"Raising an Accidental Prodigy from The Wall Street Journal on choices parents of gifted children make about their education
Alternative education
School terminology
Rigour
Rigour (British English) or rigor (American English; see spelling differences) describes a condition of stiffness or strictness. These constraints may be environmentally imposed, such as "the rigours of famine"; logically imposed, such as mathematical proofs which must maintain consistent answers; or socially imposed, such as the process of defining ethics and law.
Etymology
"Rigour" comes to English through old French (13th c., Modern French rigueur) meaning "stiffness", which itself is based on the Latin rigorem (nominative rigor) "numbness, stiffness, hardness, firmness; roughness, rudeness", from the verb rigere "to be stiff". The noun was frequently used to describe a condition of strictness or stiffness, which arises from a situation or constraint either chosen or experienced passively. For example, the title of the book Theologia Moralis Inter Rigorem et Laxitatem Medi roughly translates as "mediating theological morality between rigour and laxness". The book details, for the clergy, situations in which they are obligated to follow church law exactly, and in which situations they can be more forgiving yet still considered moral. Rigor mortis translates directly as the stiffness (rigor) of death (mortis), again describing a condition which arises from a certain constraint (death).
Intellectualism
Intellectual rigour is a process of thought which is consistent, does not contain self-contradiction, and takes into account the entire scope of available knowledge on the topic. It actively avoids logical fallacy. Furthermore, it requires a sceptical assessment of the available knowledge. If a topic or case is dealt with in a rigorous way, it typically means that it is dealt with in a comprehensive, thorough and complete way, leaving no room for inconsistencies.
Scholarly method describes the different approaches or methods which may be taken to apply intellectual rigour on an institutional level to ensure the quality of information published. An example of intellectual rigour assisted by a methodical approach is the scientific method, in which a person will produce a hypothesis based on what they believe to be true, then construct experiments in order to prove that hypothesis wrong. This method, when followed correctly, helps to prevent against circular reasoning and other fallacies which frequently plague conclusions within academia. Other disciplines, such as philosophy and mathematics, employ their own structures to ensure intellectual rigour. Each method requires close attention to criteria for logical consistency, as well as to all relevant evidence and possible differences of interpretation. At an institutional level, peer review is used to validate intellectual rigour.
Honesty
Intellectual rigour is a subset of intellectual honesty—a practice of thought in which one's convictions are kept in proportion to valid evidence. Intellectual honesty is an unbiased approach to the acquisition, analysis, and transmission of ideas. A person is being intellectually honest when he or she, knowing the truth, states that truth, regardless of outside social/environmental pressures. It is possible to doubt whether complete intellectual honesty exists—on the grounds that no one can entirely master his or her own presuppositions—without doubting that certain kinds of intellectual rigour are potentially available. The distinction certainly matters greatly in debate, if one wishes to say that an argument is flawed in its premises.
Politics and law
The setting for intellectual rigour does tend to assume a principled position from which to advance or argue. An opportunistic tendency to use any argument at hand is not very rigorous, although very common in politics, for example. Arguing one way one day, and another later, can be defended by casuistry, i.e. by saying the cases are different.
In the legal context, for practical purposes, the facts of cases do always differ. Case law can therefore be at odds with a principled approach; and intellectual rigour can seem to be defeated. This defines a judge's problem with uncodified law. Codified law poses a different problem, of interpretation and adaptation of definite principles without losing the point; here applying the letter of the law, with all due rigour, may on occasion seem to undermine the principled approach.
Mathematics
Mathematical rigour can apply to methods of mathematical proof and to methods of mathematical practice (thus relating to other interpretations of rigour).
Mathematical proof
Mathematical rigour is often cited as a kind of gold standard for mathematical proof. Its history traces back to Greek mathematics, especially to Euclid's Elements.
Until the 19th century, Euclid's Elements was seen as extremely rigorous and profound, but in the late 19th century, Hilbert (among others) realized that the work left certain assumptions implicit—assumptions that could not be proved from Euclid's Axioms (e.g. two circles can intersect in a point, some point is within an angle, and figures can be superimposed on each other). This was contrary to the idea of rigorous proof where all assumptions need to be stated and nothing can be left implicit. New foundations were developed using the axiomatic method to address this gap in rigour found in the Elements (e.g., Hilbert's axioms, Birkhoff's axioms, Tarski's axioms).
During the 19th century, the term "rigorous" began to be used to describe increasing levels of abstraction when dealing with calculus which eventually became known as mathematical analysis. The works of Cauchy added rigour to the older works of Euler and Gauss. The works of Riemann added rigour to the works of Cauchy. The works of Weierstrass added rigour to the works of Riemann, eventually culminating in the arithmetization of analysis. Starting in the 1870s, the term gradually came to be associated with Cantorian set theory.
Mathematical rigour can be modelled as amenability to algorithmic proof checking. Indeed, with the aid of computers, it is possible to check some proofs mechanically. Formal rigour is the introduction of high degrees of completeness by means of a formal language where such proofs can be codified using set theories such as ZFC (see automated theorem proving).
Published mathematical arguments have to conform to a standard of rigour, but are written in a mixture of symbolic and natural language. In this sense, written mathematical discourse is a prototype of formal proof. Often, a written proof is accepted as rigorous although it might not be formalised as yet. The reason often cited by mathematicians for writing informally is that completely formal proofs tend to be longer and more unwieldy, thereby obscuring the line of argument. An argument that appears obvious to human intuition may in fact require fairly long formal derivations from the axioms. A particularly well-known example is how in Principia Mathematica, Whitehead and Russell have to expend a number of lines of rather opaque effort in order to establish that, indeed, it is sensical to say: "1+1=2". In short, comprehensibility is favoured over formality in written discourse.
Still, advocates of automated theorem provers may argue that the formalisation of proof does improve the mathematical rigour by disclosing gaps or flaws in informal written discourse. When the correctness of a proof is disputed, formalisation is a way to settle such a dispute as it helps to reduce misinterpretations or ambiguity.
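As an illustration of what such machine-checked formality looks like in a modern proof assistant, the statement Whitehead and Russell laboured over can be stated and closed in a single line. The following is a minimal sketch in Lean 4 and is not the historical Principia derivation; the equality holds by definitional computation on the natural numbers.

```lean
-- 1 + 1 = 2 over the natural numbers, accepted by the Lean 4 kernel.
-- `rfl` succeeds because both sides reduce to the same numeral.
example : 1 + 1 = 2 := rfl
```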
Physics
The role of mathematical rigour in relation to physics is twofold:
First, there is the general question, sometimes called Wigner's Puzzle, "how it is that mathematics, quite generally, is applicable to nature?" Some scientists believe that its record of successful application to nature justifies the study of mathematical physics.
Second, there is the question regarding the role and status of mathematically rigorous results and relations. This question is particularly vexing in relation to quantum field theory, where computations often produce infinite values for which a variety of non-rigorous work-arounds have been devised.
Both aspects of mathematical rigour in physics have attracted considerable attention in the philosophy of science.
Education
Rigour in the classroom is a hotly debated topic amongst educators. Even the semantic meaning of the word is contested.
Generally speaking, classroom rigour consists of multi-faceted, challenging instruction and correct placement of the student. Students excelling in formal operational thought tend to excel in classes for gifted students. Students who have not reached that final stage of cognitive development, according to developmental psychologist Jean Piaget, can build upon those skills with the help of a properly trained teacher.
Rigour in the classroom is commonly called "rigorous instruction". It is instruction that requires students to construct meaning for themselves, impose structure on information, integrate individual skills into processes, operate within but at the outer edge of their abilities, and apply what they learn in more than one context and to unpredictable situations.
See also
Intellectual honesty
Intellectual dishonesty
Pedant
Scientific method
Self-deception
Sophistry
Cognitive rigor
References
Philosophical logic
Transport theorem
The transport theorem (or transport equation, rate of change transport theorem or basic kinematic equation or Bour's formula, named after Edmond Bour) is a vector equation that relates the time derivative of a Euclidean vector as evaluated in a non-rotating coordinate system to its time derivative in a rotating reference frame. It has important applications in classical mechanics and analytical dynamics and diverse fields of engineering. A Euclidean vector represents a certain magnitude and direction in space that is independent of the coordinate system in which it is measured. However, when taking a time derivative of such a vector one actually takes the difference between two vectors measured at two different times t and t+dt. In a rotating coordinate system, the coordinate axes can have different directions at these two times, such that even a constant vector can have a non-zero time derivative. As a consequence, the time derivative of a vector measured in a rotating coordinate system can be different from the time derivative of the same vector in a non-rotating reference system. For example, the velocity vector of an airplane as evaluated using a coordinate system that is fixed to the earth (a rotating reference system) is different from its velocity as evaluated using a coordinate system that is fixed in space. The transport theorem provides a way to relate time derivatives of vectors between a rotating and a non-rotating coordinate system; it is derived and explained in more detail in the article on rotating reference frames and can be written as:
$$\frac{\mathrm{d}\mathbf{f}}{\mathrm{d}t} = \left[\frac{\mathrm{d}\mathbf{f}}{\mathrm{d}t}\right]_r + \boldsymbol{\Omega} \times \mathbf{f}$$
Here f is the vector whose time derivative is evaluated in both the non-rotating and the rotating coordinate systems. The subscript r designates its time derivative in the rotating coordinate system, and the vector Ω is the angular velocity of the rotating coordinate system.
The Transport Theorem is particularly useful for relating velocities and acceleration vectors between rotating and non-rotating coordinate systems.
One reference states: "Despite of its importance in classical mechanics and its ubiquitous application in engineering, there is no universally-accepted name for the Euler derivative transformation formula [...] Several terminology are used: kinematic theorem, transport theorem, and transport equation. These terms, although terminologically correct, are more prevalent in the subject of fluid mechanics to refer to entirely different physics concepts." An example of such a different physics concept is Reynolds transport theorem.
Derivation
Let $\mathbf{e}_i$ ($i = 1, 2, 3$) be the basis vectors of the rotating frame $r$, as seen from the non-rotating reference frame, and denote the components of a vector $\mathbf{f}$ in $r$ by just $f_i$.
Let
$$R = \begin{pmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \end{pmatrix}$$
so that this coordinate transformation is generated, in time, according to $\dot{R} = \hat{\Omega}\,R$.
Such a generator differential equation is important for trajectories in Lie group theory.
Applying the product rule with implicit summation convention,
$$\frac{\mathrm{d}\mathbf{f}}{\mathrm{d}t} = \frac{\mathrm{d}}{\mathrm{d}t}\left(f_i \mathbf{e}_i\right) = \frac{\mathrm{d}f_i}{\mathrm{d}t}\,\mathbf{e}_i + f_i\,\hat{\Omega}\,\mathbf{e}_i = \left[\frac{\mathrm{d}\mathbf{f}}{\mathrm{d}t}\right]_r + \hat{\Omega}\,\mathbf{f}$$
For the rotation groups $\mathrm{SO}(n)$, one has $\hat{\Omega} = \dot{R}R^{-1} \in \mathfrak{so}(n)$, the Lie algebra of skew-symmetric matrices.
In three dimensions, $n = 3$, the generator $\hat{\Omega}$ then equals the cross product operation from the left, a skew-symmetric linear map $\hat{\Omega}\mathbf{u} = \boldsymbol{\Omega} \times \mathbf{u}$ for any vector $\mathbf{u}$. As a matrix, it is also related to the vector $\boldsymbol{\Omega} = (\Omega_1, \Omega_2, \Omega_3)$ as seen from the non-rotating frame via
$$\hat{\Omega} = \begin{pmatrix} 0 & -\Omega_3 & \Omega_2 \\ \Omega_3 & 0 & -\Omega_1 \\ -\Omega_2 & \Omega_1 & 0 \end{pmatrix}.$$
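The theorem can also be checked numerically. The sketch below is illustrative only: the angular velocity, the test vector, and the choice of a frame rotating about the z-axis are assumptions made for the example, not values from the text. It holds a vector constant in the rotating frame, so the theorem predicts that its inertial-frame time derivative equals Ω × f.

```python
import numpy as np

omega = np.array([0.0, 0.0, 0.7])   # angular velocity of the rotating frame (rad/s)
f_r = np.array([1.0, 2.0, 0.5])     # components held constant in the rotating frame

def rotation(t):
    """Rotation matrix mapping rotating-frame components to the inertial frame."""
    c, s = np.cos(omega[2] * t), np.sin(omega[2] * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

t, dt = 1.3, 1e-6
f_inertial = rotation(t) @ f_r

# Left-hand side: finite-difference time derivative in the non-rotating frame
lhs = (rotation(t + dt) @ f_r - rotation(t - dt) @ f_r) / (2 * dt)

# Right-hand side: (df/dt)_r = 0 here, so only the Omega x f term remains
rhs = np.cross(omega, f_inertial)

print(np.allclose(lhs, rhs))  # True
```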
References
Mathematical theorems
Oberth effect
In astronautics, a powered flyby, or Oberth maneuver, is a maneuver in which a spacecraft falls into a gravitational well and then uses its engines to further accelerate as it is falling, thereby achieving additional speed. The resulting maneuver is a more efficient way to gain kinetic energy than applying the same impulse outside of a gravitational well. The gain in efficiency is explained by the Oberth effect, wherein the use of a reaction engine at higher speeds generates a greater change in mechanical energy than its use at lower speeds. In practical terms, this means that the most energy-efficient method for a spacecraft to burn its fuel is at the lowest possible orbital periapsis, when its orbital velocity (and so, its kinetic energy) is greatest. In some cases, it is even worth spending fuel on slowing the spacecraft into a gravity well to take advantage of the efficiencies of the Oberth effect. The maneuver and effect are named after the person who first described them in 1927, Hermann Oberth, a Transylvanian Saxon physicist and a founder of modern rocketry.
Because the vehicle remains near periapsis only for a short time, for the Oberth maneuver to be most effective the vehicle must be able to generate as much impulse as possible in the shortest possible time. As a result the Oberth maneuver is much more useful for high-thrust rocket engines like liquid-propellant rockets, and less useful for low-thrust reaction engines such as ion drives, which take a long time to gain speed. Low thrust rockets can use the Oberth effect by splitting a long departure burn into several short burns near the periapsis. The Oberth effect also can be used to understand the behavior of multi-stage rockets: the upper stage can generate much more usable kinetic energy than the total chemical energy of the propellants it carries.
In terms of the energies involved, the Oberth effect is more effective at higher speeds because at high speed the propellant has significant kinetic energy in addition to its chemical potential energy. At higher speed the vehicle is able to employ the greater change (reduction) in kinetic energy of the propellant (as it is exhausted backward and hence at reduced speed and hence reduced kinetic energy) to generate a greater increase in kinetic energy of the vehicle.
Explanation in terms of work and kinetic energy
Because kinetic energy equals mv²/2, a given change in velocity imparts a greater increase in kinetic energy at a high velocity than it would at a low velocity. For example, considering a 2 kg rocket gaining 1 m/s from a burn:
at 1 m/s, the rocket starts with ½ · 2 · 1² = 1 J of kinetic energy. Adding 1 m/s increases the kinetic energy to ½ · 2 · 2² = 4 J, for a gain of 3 J;
at 10 m/s, the rocket starts with ½ · 2 · 10² = 100 J of kinetic energy. Adding 1 m/s increases the kinetic energy to ½ · 2 · 11² = 121 J, for a gain of 21 J.
This greater change in kinetic energy can then carry the rocket higher in the gravity well than if the propellant were burned at a lower speed.
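The arithmetic above is easy to reproduce. The short sketch below (plain Python; the helper name is ours) recomputes the kinetic-energy gains for the hypothetical 2 kg rocket.

```python
def kinetic_energy(mass, speed):
    """Kinetic energy in joules for mass in kg and speed in m/s."""
    return 0.5 * mass * speed ** 2

mass, dv = 2.0, 1.0                  # 2 kg rocket gaining 1 m/s, as in the example
for v0 in (1.0, 10.0):
    gain = kinetic_energy(mass, v0 + dv) - kinetic_energy(mass, v0)
    print(f"burn at {v0:>4} m/s: gain = {gain:.0f} J")
# burn at  1.0 m/s: gain = 3 J
# burn at 10.0 m/s: gain = 21 J
```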
Description in terms of work
The thrust produced by a rocket engine is independent of the rocket’s velocity relative to the surrounding atmosphere. A rocket acting on a fixed object, as in a static firing, does no useful work on the rocket; the rocket's chemical energy is progressively converted to kinetic energy of the exhaust, plus heat. But when the rocket moves, its thrust acts through the distance it moves. Force multiplied by displacement is the definition of mechanical work. The greater the velocity of the rocket and payload during the burn the greater is the displacement and the work done, and the greater the increase in kinetic energy of the rocket and its payload. As the velocity of the rocket increases, progressively more of the available kinetic energy goes to the rocket and its payload, and less to the exhaust.
This is shown as follows. The mechanical work done on the rocket is defined as the dot product of the force of the engine's thrust and the displacement it travels during the burn:
$W = \int \mathbf{F} \cdot \mathrm{d}\mathbf{s}.$
If the burn is made in the prograde direction, the work results in a change in kinetic energy
$\Delta E_k = \int F \, \mathrm{d}s.$
Differentiating with respect to time, we obtain
$\frac{\mathrm{d}E_k}{\mathrm{d}t} = F \, \frac{\mathrm{d}s}{\mathrm{d}t},$
or
$\frac{\mathrm{d}E_k}{\mathrm{d}t} = F v,$
where $v$ is the velocity. Dividing by the instantaneous mass $m$ to express this in terms of specific energy, we get
$\frac{\mathrm{d}\varepsilon_k}{\mathrm{d}t} = \frac{F v}{m} = \mathbf{v} \cdot \mathbf{a},$
where $\mathbf{a}$ is the acceleration vector.
Thus it can be readily seen that the rate of gain of specific energy of every part of the rocket is proportional to speed and, given this, the equation can be integrated (numerically or otherwise) to calculate the overall increase in specific energy of the rocket.
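As an illustration of that statement, the specific-energy equation can be integrated for a finite burn. The sketch below assumes a constant-acceleration prograde burn in field-free space (gravity ignored); the acceleration, burn time and starting speeds are illustrative values, not figures from the article.

```python
def specific_energy_gain(v0, accel, burn_time, steps=100_000):
    """Integrate d(epsilon)/dt = a * v for a constant-acceleration prograde burn."""
    dt = burn_time / steps
    v, eps = v0, 0.0
    for _ in range(steps):
        eps += accel * v * dt        # rate of specific-energy gain is proportional to speed
        v += accel * dt
    return eps

a, tau = 20.0, 100.0                 # 20 m/s^2 for 100 s, i.e. a delta-v of 2 km/s
for v0 in (1_000.0, 10_000.0):       # the same burn starting at 1 km/s vs 10 km/s
    print(f"start at {v0/1e3:.0f} km/s: gain = {specific_energy_gain(v0, a, tau)/1e6:.1f} MJ/kg")
# start at 1 km/s:  gain ~ 4 MJ/kg
# start at 10 km/s: gain ~ 22 MJ/kg
```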
Impulsive burn
Integrating the above energy equation is often unnecessary if the burn duration is short. Short burns of chemical rocket engines close to periapsis or elsewhere are usually mathematically modeled as impulsive burns, where the force of the engine dominates any other forces that might change the vehicle's energy over the burn.
For example, as a vehicle falls toward periapsis in any orbit (closed or escape orbits) the velocity relative to the central body increases. Briefly burning the engine (an "impulsive burn") prograde at periapsis increases the velocity by the same increment as at any other time. However, since the vehicle's kinetic energy is related to the square of its velocity, this increase in velocity has a non-linear effect on the vehicle's kinetic energy, leaving it with higher energy than if the burn were achieved at any other time.
Oberth calculation for a parabolic orbit
If an impulsive burn of Δv is performed at periapsis in a parabolic orbit, then the velocity at periapsis before the burn is equal to the escape velocity (Vesc), and the specific kinetic energy after the burn is
$\varepsilon_k = \tfrac{1}{2}V^{2} = \tfrac{1}{2}\left(V_\text{esc} + \Delta v\right)^{2} = \tfrac{1}{2}V_\text{esc}^{2} + \Delta v\,V_\text{esc} + \tfrac{1}{2}\Delta v^{2},$
where $V = V_\text{esc} + \Delta v$.
When the vehicle leaves the gravity field, the loss of specific kinetic energy is
$\tfrac{1}{2}V_\text{esc}^{2},$
so it retains the energy
$\Delta v\,V_\text{esc} + \tfrac{1}{2}\Delta v^{2},$
which is larger than the energy from a burn outside the gravitational field ($\tfrac{1}{2}\Delta v^{2}$) by
$\Delta v\,V_\text{esc}.$
When the vehicle has left the gravity well, it is traveling at a speed
$V = \sqrt{\Delta v^{2} + 2\,\Delta v\,V_\text{esc}} = \Delta v\,\sqrt{1 + \frac{2\,V_\text{esc}}{\Delta v}}.$
For the case where the added impulse Δv is small compared to escape velocity, the 1 can be ignored, and the effective Δv of the impulsive burn can be seen to be multiplied by a factor of simply
$\sqrt{\frac{2\,V_\text{esc}}{\Delta v}},$
and one gets
$\Delta v_\text{eff} \approx \Delta v\,\sqrt{\frac{2\,V_\text{esc}}{\Delta v}} = \sqrt{2\,\Delta v\,V_\text{esc}}.$
Similar effects happen in closed and hyperbolic orbits.
Parabolic example
If the vehicle travels at velocity v at the start of a burn that changes the velocity by Δv, then the change in specific orbital energy (SOE) due to the new orbit is
$\Delta\varepsilon = v\,\Delta v + \tfrac{1}{2}\,\Delta v^{2}.$
Once the spacecraft is far from the planet again, the SOE is entirely kinetic, since gravitational potential energy approaches zero. Therefore, the larger the v at the time of the burn, the greater the final kinetic energy, and the higher the final velocity.
The effect becomes more pronounced the closer to the central body, or more generally, the deeper in the gravitational field potential in which the burn occurs, since the velocity is higher there.
So if a spacecraft is on a parabolic flyby of Jupiter with a periapsis velocity of 50 km/s and performs a 5 km/s burn, it turns out that the final velocity change at great distance is 22.9 km/s, giving a multiplication of the burn by 4.58 times.
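These figures can be checked with the relation derived in the previous section; the function name is ours, and the inputs are the 50 km/s periapsis (escape) speed and 5 km/s burn quoted in the text.

```python
import math

def hyperbolic_excess_speed(v_esc_kms, dv_kms):
    """Speed far from the body after an impulsive prograde burn of dv at the
    periapsis of a parabolic orbit, where the pre-burn speed equals v_esc."""
    return math.sqrt(dv_kms ** 2 + 2.0 * dv_kms * v_esc_kms)

v_esc, dv = 50.0, 5.0                          # km/s, from the Jupiter example
v_inf = hyperbolic_excess_speed(v_esc, dv)
print(f"{v_inf:.1f} km/s, multiplication factor {v_inf / dv:.2f}")
# 22.9 km/s, multiplication factor 4.58
```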
Paradox
It may seem that the rocket is getting energy for free, which would violate conservation of energy. However, any gain to the rocket's kinetic energy is balanced by a relative decrease in the kinetic energy the exhaust is left with (the kinetic energy of the exhaust may still increase, but it does not increase as much). Contrast this to the situation of static firing, where the speed of the engine is fixed at zero. This means that its kinetic energy does not increase at all, and all the chemical energy released by the fuel is converted to the exhaust's kinetic energy (and heat).
At very high speeds the mechanical power imparted to the rocket can exceed the total power liberated in the combustion of the propellant; this may also seem to violate conservation of energy. But the propellants in a fast-moving rocket carry energy not only chemically, but also in their own kinetic energy, which at speeds above a few kilometres per second exceed the chemical component. When these propellants are burned, some of this kinetic energy is transferred to the rocket along with the chemical energy released by burning.
The Oberth effect can therefore partly make up for the extremely low efficiency early in the rocket's flight, when it is moving only slowly. Most of the work done by a rocket early in flight is "invested" in the kinetic energy of the propellant not yet burned, part of which it will release later when it is burned.
See also
Bi-elliptic transfer
Gravity assist
Propulsive efficiency
References
External links
Oberth effect
Explanation of the effect by Geoffrey Landis.
Rocket propulsion, classical relativity, and the Oberth effect
Animation (MP4) of the Oberth effect in orbit from the Blanco and Mungan paper cited above.
Aerospace engineering
Rocketry
Astrodynamics
Uniformitarianism | Uniformitarianism, also known as the Doctrine of Uniformity or the Uniformitarian Principle, is the assumption that the same natural laws and processes that operate in our present-day scientific observations have always operated in the universe in the past and apply everywhere in the universe. It refers to invariance in the metaphysical principles underpinning science, such as the constancy of cause and effect throughout space-time, but has also been used to describe spatiotemporal invariance of physical laws. Though an unprovable postulate that cannot be verified using the scientific method, some consider that uniformitarianism should be a required first principle in scientific research. Other scientists disagree and consider that nature is not absolutely uniform, even though it does exhibit certain regularities.
In geology, uniformitarianism has included the gradualistic concept that "the present is the key to the past" and that geological events occur at the same rate now as they have always done, though many modern geologists no longer hold to a strict gradualism. Coined by William Whewell, uniformitarianism was originally proposed in contrast to catastrophism by British naturalists in the late 18th century, starting with the work of the geologist James Hutton in his many books including Theory of the Earth. Hutton's work was later refined by scientist John Playfair and popularised by geologist Charles Lyell's Principles of Geology in 1830. Today, Earth's history is considered to have been a slow, gradual process, punctuated by occasional natural catastrophic events.
History
18th century
Abraham Gottlob Werner (1749–1817) proposed Neptunism, where strata represented deposits from shrinking seas precipitated onto primordial rocks such as granite. In 1785 James Hutton proposed an opposing, self-maintaining infinite cycle based on natural history and not on the Biblical account.
Hutton then sought evidence to support his idea that there must have been repeated cycles, each involving deposition on the seabed, uplift with tilting and erosion, and then moving undersea again for further layers to be deposited. At Glen Tilt in the Cairngorm mountains he found granite penetrating metamorphic schists, in a way which indicated to him that the presumed primordial rock had been molten after the strata had formed. He had read about angular unconformities as interpreted by Neptunists, and found an unconformity at Jedburgh where layers of greywacke in the lower layers of the cliff face have been tilted almost vertically before being eroded to form a level plane, under horizontal layers of Old Red Sandstone. In the spring of 1788 he took a boat trip along the Berwickshire coast with John Playfair and the geologist Sir James Hall, and found a dramatic unconformity showing the same sequence at Siccar Point. Playfair later recalled that "the mind seemed to grow giddy by looking so far into the abyss of time", and Hutton concluded a 1788 paper he presented at the Royal Society of Edinburgh, later rewritten as a book, with the phrase "we find no vestige of a beginning, no prospect of an end".
Both Playfair and Hall wrote their own books on the theory, and for decades robust debate continued between Hutton's supporters and the Neptunists. Georges Cuvier's paleontological work in the 1790s, which established the reality of extinction, explained this by local catastrophes, after which other fixed species repopulated the affected areas. In Britain, geologists adapted this idea into "diluvial theory" which proposed repeated worldwide annihilation and creation of new fixed species adapted to a changed environment, initially identifying the most recent catastrophe as the biblical flood.
19th century
From 1830 to 1833 Charles Lyell's multi-volume Principles of Geology was published. The work's subtitle was "An attempt to explain the former changes of the Earth's surface by reference to causes now in operation". He drew his explanations from field studies conducted directly before he went to work on the founding geology text, and developed Hutton's idea that the earth was shaped entirely by slow-moving forces still in operation today, acting over a very long period of time. The terms uniformitarianism for this idea, and catastrophism for the opposing viewpoint, were coined by William Whewell in a review of Lyell's book. Principles of Geology was the most influential geological work in the middle of the 19th century.
Systems of inorganic earth history
Geoscientists support diverse systems of Earth history, the nature of which rests on a certain mixture of views about the process, control, rate, and state which are preferred. Because geologists and geomorphologists tend to adopt opposite views over process, rate, and state in the inorganic world, there are eight different systems of beliefs in the development of the terrestrial sphere. All geoscientists stand by the principle of uniformity of law. Most, but not all, are directed by the principle of simplicity. All make definite assertions about the quality of rate and state in the inorganic realm.
Lyell
Lyell's uniformitarianism is a family of four related propositions, not a single idea:
Uniformity of law – the laws of nature are constant across time and space.
Uniformity of methodology – the appropriate hypotheses for explaining the geological past are those with analogy today.
Uniformity of kind – past and present causes are all of the same kind, have the same energy, and produce the same effects.
Uniformity of degree – geological circumstances have remained the same over time.
None of these connotations requires another, and they are not all equally inferred by uniformitarians.
Gould explained Lyell's propositions in Time's Arrow, Time's Cycle (1987), stating that Lyell conflated two different types of propositions: a pair of methodological assumptions with a pair of substantive hypotheses. The four together make up Lyell's uniformitarianism.
Methodological assumptions
The two methodological assumptions below are accepted to be true by the majority of scientists and geologists. Gould claims that these philosophical propositions must be assumed before you can proceed as a scientist doing science. "You cannot go to a rocky outcrop and observe either the constancy of nature's laws or the working of unknown processes. It works the other way around." You first assume these propositions and "then you go to the outcrop."
Uniformity of law across time and space: Natural laws are constant across space and time.
The axiom of uniformity of law is necessary in order for scientists to extrapolate (by inductive inference) into the unobservable past. In the observable present, erroneous beliefs can be proven wrong and inductively corrected by other observations (Popper's principle of falsifiability); past processes, however, are not observable by their very nature. The constancy of natural laws must therefore be assumed in the study of the past; else we cannot meaningfully study it.
Uniformity of process across time and space: Natural processes are constant across time and space.
Though similar to uniformity of law, this second a priori assumption, shared by the vast majority of scientists, deals with geological causes, not physicochemical laws. The past is to be explained by processes acting currently in time and space rather than by inventing extra esoteric or unknown processes without good reason, otherwise known as parsimony or Occam's razor. As one source puts it: "Strict uniformitarianism may often be a guarantee against pseudo-scientific phantasies and loose conjectures, but it makes one easily forget that the principle of uniformity is not a law, not a rule established after comparison of facts, but a methodological principle, preceding the observation of facts ... It is the logical principle of parsimony of causes and of the economy of scientific notions. By explaining past changes by analogy with present phenomena, a limit is set to conjecture, for there is only one way in which two things are equal, but there is an infinity of ways in which they could be supposed different."
Substantive hypotheses
The substantive hypotheses were controversial and, in some cases, accepted by few. These hypotheses are judged true or false on empirical grounds through scientific observation and repeated experimental data. This is in contrast with the previous two philosophical assumptions that come before one can do science and so cannot be tested or falsified by science.
Uniformity of rate across time and space: Change is typically slow, steady, and gradual.
Uniformity of rate (or gradualism) is what most people (including geologists) think of when they hear the word "uniformitarianism", confusing this hypothesis with the entire definition. As late as 1990, Lemon, in his textbook of stratigraphy, affirmed that "The uniformitarian view of earth history held that all geologic processes proceed continuously and at a very slow pace."
Gould explained Hutton's view of uniformity of rate; mountain ranges or grand canyons are built by the accumulation of nearly insensible changes added up through vast time. Some major events such as floods, earthquakes, and eruptions, do occur. But these catastrophes are strictly local. They neither occurred in the past nor shall happen in the future, at any greater frequency or extent than they display at present. In particular, the whole earth is never convulsed at once.
Uniformity of state across time and space: Change is evenly distributed throughout space and time.
The uniformity of state hypothesis implies that throughout the history of our earth there is no progress in any inexorable direction. The planet has almost always looked and behaved as it does now. Change is continuous but leads nowhere. The earth is in balance: a dynamic steady state.
20th century
Stephen Jay Gould's first scientific paper, "Is uniformitarianism necessary?" (1965), reduced these four assumptions to two. He dismissed the first principle, which asserted spatial and temporal invariance of natural laws, as no longer an issue of debate. He rejected the third (uniformity of rate) as an unjustified limitation on scientific inquiry, as it constrains past geologic rates and conditions to those of the present. So, Lyell's uniformitarianism was deemed unnecessary.
Uniformitarianism was proposed in contrast to catastrophism, which states that the distant past "consisted of epochs of paroxysmal and catastrophic action interposed between periods of comparative tranquility". Especially in the late 19th and early 20th centuries, most geologists took this interpretation to mean that catastrophic events are not important in geologic time; one example of this is the debate over the formation of the Channeled Scablands by the catastrophic Missoula glacial outburst floods. An important result of this debate and others was the re-clarification that, while the same principles operate in geologic time, catastrophic events that are infrequent on human time-scales can have important consequences in geologic history.
Derek Ager has noted that "geologists do not deny uniformitarianism in its true sense, that is to say, of interpreting the past by means of the processes that are seen going on at the present day, so long as we remember that the periodic catastrophe is one of those processes. Those periodic catastrophes make more showing in the stratigraphical record than we have hitherto assumed."
Modern geologists do not apply uniformitarianism in the same way as Lyell. They question if rates of processes were uniform through time and only those values measured during the history of geology are to be accepted. The present may not be a long enough key to penetrating the deep lock of the past. Geologic processes may have been active at different rates in the past that humans have not observed. "By force of popularity, uniformity of rate has persisted to our present day. For more than a century, Lyell's rhetoric conflating axiom with hypotheses has descended in unmodified form. Many geologists have been stifled by the belief that proper methodology includes an a priori commitment to gradual change, and by a preference for explaining large-scale phenomena as the concatenation of innumerable tiny changes."
The current consensus is that Earth's history is a slow, gradual process punctuated by occasional natural catastrophic events that have affected Earth and its inhabitants. In practice it is reduced from Lyell's conflation, or blending, to simply the two philosophical assumptions. This is also known as the principle of geological actualism, which states that all past geological action was like all present geological action. The principle of actualism is the cornerstone of paleoecology.
Social sciences
Uniformitarianism has also been applied in historical linguistics, where it is considered a foundational principle of the field. Linguist Donald Ringe gives the following definition:
The principle is known in linguistics, after William Labov and associates, as the Uniformitarian Principle or Uniformitarian Hypothesis.
See also
Conservation law
Noether's theorem
Law of universal gravitation
Astronomical spectroscopy
Cosmological principle
History of paleontology
Paradigm shift
Physical constant
Physical cosmology
Scientific consensus
Time-variation of fundamental constants
Notes
References
Web
External links
Uniformitarianism at Physical Geography
Have physical constants changed with time?
Metatheory of science
Evolution
Geological history of Earth
History of Earth science
Epistemology of science
Hamiltonian Monte Carlo | The Hamiltonian Monte Carlo algorithm (originally known as hybrid Monte Carlo) is a Markov chain Monte Carlo method for obtaining a sequence of random samples whose distribution converges to a target probability distribution that is difficult to sample directly. This sequence can be used to estimate integrals of the target distribution, such as expected values and moments.
Hamiltonian Monte Carlo corresponds to an instance of the Metropolis–Hastings algorithm, with a Hamiltonian dynamics evolution simulated using a time-reversible and volume-preserving numerical integrator (typically the leapfrog integrator) to propose a move to a new point in the state space. Compared to using a Gaussian random walk proposal distribution in the Metropolis–Hastings algorithm, Hamiltonian Monte Carlo reduces the correlation between successive sampled states by proposing moves to distant states which maintain a high probability of acceptance due to the approximate energy conserving properties of the simulated Hamiltonian dynamic when using a symplectic integrator. The reduced correlation means fewer Markov chain samples are needed to approximate integrals with respect to the target probability distribution for a given Monte Carlo error.
The algorithm was originally proposed by Simon Duane, Anthony Kennedy, Brian Pendleton and Duncan Roweth in 1987 for calculations in lattice quantum chromodynamics. In 1996, Radford M. Neal showed how the method could be used for a broader class of statistical problems, in particular artificial neural networks. However, the burden of having to provide gradients of the Bayesian network delayed the wider adoption of the algorithm in statistics and other quantitative disciplines, until in the mid-2010s the developers of Stan implemented HMC in combination with automatic differentiation.
Algorithm
Suppose the target distribution to sample is $\pi(\mathbf{x})$ for $\mathbf{x} \in \mathbb{R}^{d}$, and a chain of samples $\mathbf{x}_{0}, \mathbf{x}_{1}, \mathbf{x}_{2}, \ldots$ is required.
Hamilton's equations are
$\frac{\mathrm{d}x_i}{\mathrm{d}t} = \frac{\partial H}{\partial p_i} \qquad \text{and} \qquad \frac{\mathrm{d}p_i}{\mathrm{d}t} = -\frac{\partial H}{\partial x_i},$
where $x_i$ and $p_i$ are the $i$th component of the position and momentum vector respectively and $H$ is the Hamiltonian. Let $M$ be a mass matrix which is symmetric and positive definite, then the Hamiltonian is
$H(\mathbf{x}, \mathbf{p}) = U(\mathbf{x}) + \tfrac{1}{2}\,\mathbf{p}^{\mathsf{T}} M^{-1} \mathbf{p},$
where $U(\mathbf{x})$ is the potential energy. The potential energy for a target $\pi(\mathbf{x})$ is given as
$U(\mathbf{x}) = -\ln \pi(\mathbf{x}),$
which comes from the Boltzmann factor. Note that the Hamiltonian is dimensionless in this formulation because the exponential probability weight has to be well defined. For example, in simulations at finite temperature $T$ the factor $k_\text{B}T$ (with the Boltzmann constant $k_\text{B}$) is directly absorbed into $U$ and $M$.
The algorithm requires a positive integer for the number of leapfrog steps $L$ and a positive number for the step size $\Delta t$. Suppose the chain is at $X_n = \mathbf{x}_n$. Let $\mathbf{x}(0) = \mathbf{x}_n$. First, a random Gaussian momentum $\mathbf{p}(0)$ is drawn from $\mathcal{N}(\mathbf{0}, M)$. Next, the particle will run under Hamiltonian dynamics for time $L\,\Delta t$; this is done by solving Hamilton's equations numerically using the leapfrog algorithm. The position and momentum vectors after time $\Delta t$ using the leapfrog algorithm are:
$\mathbf{p}\!\left(t + \tfrac{\Delta t}{2}\right) = \mathbf{p}(t) - \tfrac{\Delta t}{2}\,\nabla U\big(\mathbf{x}(t)\big),$
$\mathbf{x}(t + \Delta t) = \mathbf{x}(t) + \Delta t\, M^{-1}\, \mathbf{p}\!\left(t + \tfrac{\Delta t}{2}\right),$
$\mathbf{p}(t + \Delta t) = \mathbf{p}\!\left(t + \tfrac{\Delta t}{2}\right) - \tfrac{\Delta t}{2}\,\nabla U\big(\mathbf{x}(t + \Delta t)\big).$
These equations are to be applied to $\mathbf{x}(0)$ and $\mathbf{p}(0)$ $L$ times to obtain $\mathbf{x}(L\,\Delta t)$ and $\mathbf{p}(L\,\Delta t)$.
The leapfrog algorithm is an approximate solution to the motion of non-interacting classical particles. If exact, the solution will never change the initial randomly-generated energy distribution, as energy is conserved for each particle in the presence of a classical potential energy field. In order to reach a thermodynamic equilibrium distribution, particles must have some sort of interaction with, for example, a surrounding heat bath, so that the entire system can take on different energies with probabilities according to the Boltzmann distribution.
One way to move the system towards a thermodynamic equilibrium distribution is to change the state of the particles using the Metropolis–Hastings algorithm. So first, one applies the leapfrog step, then a Metropolis-Hastings step.
The transition from $X_n$ to $X_{n+1}$ is
$X_{n+1} = \begin{cases} \mathbf{x}(L\,\Delta t) & \text{with probability } \alpha \\ X_n & \text{otherwise,} \end{cases}$
where
$\alpha = \min\Big(1,\ \exp\big[H\big(X_n, \mathbf{p}(0)\big) - H\big(\mathbf{x}(L\,\Delta t), \mathbf{p}(L\,\Delta t)\big)\big]\Big).$
A full update consists of first randomly sampling the momenta (independently of the previous iterations), then integrating the equations of motion (e.g. with leapfrog), and finally obtaining the new configuration from the Metropolis-Hastings accept/reject step. This updating mechanism is repeated to obtain .
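For concreteness, here is a minimal, self-contained sketch of one HMC update as described above, assuming an identity mass matrix; the function names (log_prob, grad_log_prob), the step size and the number of leapfrog steps are illustrative choices, not part of any particular package.

```python
import numpy as np

def hmc_step(x0, log_prob, grad_log_prob, step_size=0.1, n_steps=20,
             rng=np.random.default_rng()):
    """One HMC update with identity mass matrix: U(x) = -log_prob(x),
    leapfrog integration, then a Metropolis-Hastings accept/reject."""
    x = np.array(x0, dtype=float)
    p = rng.standard_normal(x.shape)                   # momentum ~ N(0, I)
    current_H = -log_prob(x) + 0.5 * p @ p             # H = U + K

    p = p + 0.5 * step_size * grad_log_prob(x)         # half step in momentum
    for _ in range(n_steps - 1):
        x = x + step_size * p                          # full step in position
        p = p + step_size * grad_log_prob(x)           # full step in momentum
    x = x + step_size * p
    p = p + 0.5 * step_size * grad_log_prob(x)         # final half step

    proposed_H = -log_prob(x) + 0.5 * p @ p
    if rng.random() < np.exp(current_H - proposed_H):  # Metropolis-Hastings acceptance
        return x
    return np.array(x0, dtype=float)                   # rejected: stay at the old point

# Example: sample a 2-D standard normal target.
log_prob = lambda x: -0.5 * x @ x
grad_log_prob = lambda x: -x
x, samples = np.zeros(2), []
for _ in range(5000):
    x = hmc_step(x, log_prob, grad_log_prob)
    samples.append(x)
print(np.mean(samples, axis=0), np.var(samples, axis=0))  # roughly [0 0] and [1 1]
```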
No U-Turn Sampler
The No U-Turn Sampler (NUTS) is an extension of Hamiltonian Monte Carlo that controls the number of leapfrog steps $L$ automatically. Tuning $L$ is critical. For example, in the one-dimensional case of a Gaussian target the potential is quadratic, $U(x) \propto x^{2}$, which corresponds to the potential of a simple harmonic oscillator. For $L\,\Delta t$ too large, the particle will oscillate and thus waste computational time. For $L\,\Delta t$ too small, the particle will behave like a random walk.
Loosely, NUTS runs the Hamiltonian dynamics both forwards and backwards in time randomly until a U-Turn condition is satisfied. When that happens, a random point from the path is chosen for the MCMC sample and the process is repeated from that new point.
In detail, a binary tree is constructed to trace the path of the leapfrog steps. To produce an MCMC sample, an iterative procedure is conducted. A slice variable $u$ is sampled. Let $\mathbf{x}^{+}$ and $\mathbf{p}^{+}$ be the position and momentum of the forward particle respectively. Similarly, let $\mathbf{x}^{-}$ and $\mathbf{p}^{-}$ be those of the backward particle. In each iteration, the binary tree selects at random uniformly whether to move the forward particle forwards in time or the backward particle backwards in time. Also, for each iteration the number of leapfrog steps increases by a factor of 2. For example, in the first iteration the forward particle moves forwards in time using 1 leapfrog step. In the next iteration, the backward particle moves backwards in time using 2 leapfrog steps.
The iterative procedure continues until the U-turn condition is met, that is
$\left(\mathbf{x}^{+} - \mathbf{x}^{-}\right) \cdot \mathbf{p}^{-} < 0 \qquad \text{or} \qquad \left(\mathbf{x}^{+} - \mathbf{x}^{-}\right) \cdot \mathbf{p}^{+} < 0,$
or when the Hamiltonian becomes inaccurate,
$H\!\left(\mathbf{x}^{+}, \mathbf{p}^{+}\right) + \ln u > \Delta_{\max}$
or
$H\!\left(\mathbf{x}^{-}, \mathbf{p}^{-}\right) + \ln u > \Delta_{\max},$
where, for example, $\Delta_{\max} = 1000$.
Once the U-turn condition is met, the next MCMC sample, $\mathbf{x}_{n+1}$, is obtained by sampling uniformly from the leapfrog path traced out by the binary tree among the points which satisfy
$u \le \exp\!\big(-H(\mathbf{x}, \mathbf{p})\big).$
This is usually satisfied if the remaining HMC parameters are sensible.
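A minimal sketch of the U-turn check alone, using the endpoint notation above; the function name and the toy endpoint values are illustrative, and a complete NUTS implementation additionally builds the binary tree and tracks the slice variable.

```python
import numpy as np

def u_turn(x_plus, x_minus, p_plus, p_minus):
    """Stop doubling the trajectory once further integration would start
    bringing the two endpoints of the path closer together."""
    span = x_plus - x_minus
    return (span @ p_minus) < 0 or (span @ p_plus) < 0

# Endpoints still moving apart: no U-turn yet.
print(u_turn(np.array([2.0, 0.0]), np.array([-2.0, 0.0]),
             np.array([1.0, 0.0]), np.array([1.0, 0.0])))    # False
# Forward endpoint now heading back toward the other end: U-turn detected.
print(u_turn(np.array([2.0, 0.0]), np.array([-2.0, 0.0]),
             np.array([-1.0, 0.0]), np.array([1.0, 0.0])))   # True
```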
See also
Dynamic Monte Carlo method
Software for Monte Carlo molecular modeling
Stan, a probabilistic programming language implementing HMC.
PyMC, a probabilistic programming language implementing HMC.
Metropolis-adjusted Langevin algorithm
References
Further reading
External links
Hamiltonian Monte Carlo from scratch
Optimization and Monte Carlo Methods
Monte Carlo methods
Markov chain Monte Carlo
Time-of-flight mass spectrometry | Time-of-flight mass spectrometry (TOFMS) is a method of mass spectrometry in which an ion's mass-to-charge ratio is determined by a time of flight measurement. Ions are accelerated by an electric field of known strength. This acceleration results in an ion having the same kinetic energy as any other ion that has the same charge. The velocity of the ion depends on the mass-to-charge ratio (heavier ions of the same charge reach lower speeds, although ions with higher charge will also increase in velocity). The time that it subsequently takes for the ion to reach a detector at a known distance is measured. This time will depend on the velocity of the ion, and therefore is a measure of its mass-to-charge ratio. From this ratio and known experimental parameters, one can identify the ion.
Theory
The potential energy of a charged particle in an electric field is related to the charge of the particle and to the strength of the electric field:
$E_\text{p} = qU,$
where Ep is potential energy, q is the charge of the particle, and U is the electric potential difference (also known as voltage).
When the charged particle is accelerated into a time-of-flight tube (TOF tube or flight tube) by the voltage U, its potential energy is converted to kinetic energy. The kinetic energy of any mass is:
$E_\text{k} = \tfrac{1}{2}mv^{2}.$
In effect, the potential energy is converted to kinetic energy, meaning that the two expressions are equal:
$qU = \tfrac{1}{2}mv^{2}.$
The velocity of the charged particle after acceleration will not change since it moves in a field-free time-of-flight tube. The velocity of the particle can be determined in a time-of-flight tube since the length of the path (d) of the flight of the ion is known and the time of the flight of the ion (t) can be measured using a transient digitizer or time to digital converter.
Thus,
$v = \frac{d}{t},$
and we substitute this value of $v$ into the energy relation:
$qU = \frac{1}{2}m\left(\frac{d}{t}\right)^{2}.$
Rearranging so that the flight time is expressed by everything else:
$t^{2} = \frac{d^{2}}{2U}\,\frac{m}{q}.$
Taking the square root yields the time,
$t = \frac{d}{\sqrt{2U}}\,\sqrt{\frac{m}{q}}.$
These factors for the time of flight have been grouped purposely. The first factor, $d/\sqrt{2U}$, contains constants that in principle do not change when a set of ions is analyzed in a single pulse of acceleration. The flight time can thus be given as:
$t = k\,\sqrt{\frac{m}{q}},$
where $k$ is a proportionality constant representing factors related to the instrument settings and characteristics.
This relation reveals more clearly that the time of flight of the ion varies with the square root of its mass-to-charge ratio (m/q).
Consider a real-world example of a MALDI time-of-flight mass spectrometer instrument which is used to produce a mass spectrum of the tryptic peptides of a protein. Suppose the mass of one tryptic peptide is 1000 daltons (Da). The kind of ionization of peptides produced by MALDI is typically +1 ions, so q = e in both cases. Suppose the instrument is set to accelerate the ions in a U = 15,000 volts (15 kilovolt or 15 kV) potential. And suppose the length of the flight tube is 1.5 meters (typical). All the factors necessary to calculate the time of flight of the ions are now known; it is evaluated first for the ion of mass 1000 Da:
$t = 1.5\ \text{m} \times \sqrt{\frac{1000 \times 1.660 \times 10^{-27}\ \text{kg}}{2 \times 1.602 \times 10^{-19}\ \text{C} \times 15\,000\ \text{V}}}.$
Note that the mass had to be converted from daltons (Da) to kilograms (kg) to make it possible to evaluate the equation in the proper units. The final value should be in seconds:
$t \approx 2.79 \times 10^{-5}\ \text{s},$
which is about 28 microseconds. If there were a singly charged tryptic peptide ion of 4000 Da mass, four times the 1000 Da mass, it would take twice the time, or about 56 microseconds, to traverse the flight tube, since time is proportional to the square root of the mass-to-charge ratio.
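The same flight times can be reproduced in a few lines; the constants below are standard CODATA values and the function simply evaluates $t = d\sqrt{m/(2qU)}$ from the derivation above.

```python
import math

DA_TO_KG = 1.66053906660e-27       # kilograms per dalton
E_CHARGE = 1.602176634e-19         # elementary charge, coulombs

def flight_time(mass_da, charge_e, voltage_v, length_m):
    """Field-free drift time t = d * sqrt(m / (2 q U))."""
    m = mass_da * DA_TO_KG
    q = charge_e * E_CHARGE
    return length_m * math.sqrt(m / (2.0 * q * voltage_v))

for mass in (1000, 4000):          # singly charged peptides, 15 kV, 1.5 m tube
    t = flight_time(mass, 1, 15_000, 1.5)
    print(f"{mass} Da: {t * 1e6:.1f} microseconds")
# 1000 Da: ~27.9 us; 4000 Da: ~55.8 us
```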
Delayed extraction
Mass resolution can be improved in an axial MALDI-TOF mass spectrometer, where ion production takes place in vacuum, by allowing the initial burst of ions and neutrals produced by the laser pulse to equilibrate and letting the ions travel some distance perpendicular to the sample plate before they are accelerated into the flight tube. The ion equilibration in the plasma plume produced during desorption/ionization takes place within approximately 100 ns or less; after that, most ions, irrespective of their mass, start moving from the surface with some average velocity. To compensate for the spread of this average velocity and to improve mass resolution, it was proposed to delay the extraction of ions from the ion source toward the flight tube by a few hundred nanoseconds to a few microseconds with respect to the start of the short (typically a few nanoseconds) laser pulse. This technique is referred to as "time-lag focusing" for ionization of atoms or molecules by resonance enhanced multiphoton ionization or by electron impact ionization in a rarefied gas, and "delayed extraction" for ions produced generally by laser desorption/ionization of molecules adsorbed on flat surfaces or microcrystals placed on a conductive flat surface.
Delayed extraction generally refers to the operation mode of vacuum ion sources when the onset of the electric field responsible for acceleration (extraction) of the ions into the flight tube is delayed by some short time (200–500 ns) with respect to the ionization (or desorption/ionization) event. This differs from a case of constant extraction field where the ions are accelerated instantaneously upon being formed. Delayed extraction is used with MALDI or laser desorption/ionization (LDI) ion sources where the ions to be analyzed are produced in an expanding plume moving from the sample plate with a high speed (400–1000 m/s). Since the thickness of the ion packets arriving at the detector is important to mass resolution, on first inspection it can appear counter-intuitive to allow the ion plume to further expand before extraction. Delayed extraction is more of a compensation for the initial momentum of the ions: it provides the same arrival times at the detector for ions with the same mass-to-charge ratios but with different initial velocities.
In delayed extraction of ions produced in vacuum, the ions that have lower momentum in the direction of extraction start to be accelerated at higher potential due to being further from the extraction plate when the extraction field is turned on. Conversely, those ions with greater forward momentum start to be accelerated at lower potential since they are closer to the extraction plate. At the exit from the acceleration region, the slower ions at the back of the plume will be accelerated to greater velocity than the initially faster ions at the front of the plume. So after delayed extraction, a group of ions that leaves the ion source earlier has lower velocity in the direction of the acceleration compared to some other group of ions that leaves the ion source later but with greater velocity. When ion source parameters are properly adjusted, the faster group of ions catches up to the slower one at some distance from the ion source, so the detector plate placed at this distance detects simultaneous arrival of these groups of ions. In its way, the delayed application of the acceleration field acts as a one-dimensional time-of-flight focusing element.
Reflectron TOF
The kinetic energy distribution in the direction of ion flight can be corrected by using a reflectron. The reflectron uses a constant electrostatic field to reflect the ion beam toward the detector. The more energetic ions penetrate deeper into the reflectron, and take a slightly longer path to the detector. Less energetic ions of the same mass-to-charge ratio penetrate a shorter distance into the reflectron and, correspondingly, take a shorter path to the detector. The flat surface of the ion detector (typically a microchannel plate, MCP) is placed at the plane where ions of same m/z but with different energies arrive at the same time counted with respect to the onset of the extraction pulse in the ion source. A point of simultaneous arrival of ions of the same mass-to-charge ratio but with different energies is often referred as time-of-flight focus.
An additional advantage to the re-TOF arrangement is that twice the flight path is achieved in a given length of the TOF instrument.
Ion gating
A Bradbury–Nielsen shutter is a type of ion gate used in TOF mass spectrometers and in ion mobility spectrometers, as well as Hadamard transform TOF mass spectrometers. The Bradbury–Nielsen shutter is ideal for fast timed ion selector (TIS)—a device used for isolating ions over narrow mass range in tandem (TOF/TOF) MALDI mass spectrometers.
Orthogonal acceleration time-of-flight
Continuous ion sources (most commonly electrospray ionization, ESI) are generally interfaced to the TOF mass analyzer by "orthogonal extraction" in which ions introduced into the TOF mass analyzer are accelerated along the axis perpendicular to their initial direction of motion. Orthogonal acceleration combined with collisional ion cooling allows separating the ion production in the ion source and mass analysis. In this technique, very high resolution can be achieved for ions produced in MALDI or ESI sources.
Before entering the orthogonal acceleration region or the pulser, the ions produced in continuous (ESI) or pulsed (MALDI) sources are focused (cooled) into a beam of 1–2 mm diameter by collisions with a residual gas in RF multipole guides. A system of electrostatic lenses mounted in high-vacuum region before the pulser makes the beam parallel to minimize its divergence in the direction of acceleration. The combination of ion collisional cooling and orthogonal acceleration TOF has provided significant increase in resolution of modern TOF MS from few hundred to several tens of thousand without compromising the sensitivity.
Hadamard transform time-of-flight mass spectrometry
Hadamard transform time-of flight mass spectrometry (HT-TOFMS) is a mode of mass analysis used to significantly increase the signal-to-noise ratio of a conventional TOFMS. Whereas traditional TOFMS analyzes one packet of ions at a time, waiting for the ions to reach the detector before introducing another ion packet, HT-TOFMS can simultaneously analyze several ion packets traveling in the flight tube. The ions packets are encoded by rapidly modulating the transmission of the ion beam, so that lighter (and thus faster) ions from all initially-released packets of mass from a beam get ahead of heavier (and thus slower) ions. This process creates an overlap of many time-of-flight distributions convoluted in form of signals. The Hadamard transform algorithm is then used to carry out the deconvolution process which helps to produce a faster mass spectral storage rate than traditional TOFMS and other comparable mass separation instruments.
Tandem time-of-flight
Tandem time-of-flight (TOF/TOF) is a tandem mass spectrometry method where two time-of-flight mass spectrometers are used consecutively. To record full spectrum of precursor (parent) ions TOF/TOF operates in MS mode. In this mode, the energy of the pulse laser is chosen slightly above the onset of MALDI for specific matrix in use to ensure the compromise between an ion yield for all the parent ions and reduced fragmentation of the same ions. When operating in a tandem (MS/MS) mode, the laser energy is increased considerably above MALDI threshold. The first TOF mass spectrometer (basically, a flight tube which ends up with the timed ion selector) isolates precursor ions of choice using a velocity filter, typically, of a Bradbury–Nielsen type, and the second TOF-MS (that includes the post accelerator, flight tube, ion mirror, and the ion detector) analyzes the fragment ions. Fragment ions in MALDI TOF/TOF result from decay of precursor ions vibrationally excited above their dissociation level in MALDI source (post source decay ). Additional ion fragmentation implemented in a high-energy collision cell may be added to the system to increase dissociation rate of vibrationally excited precursor ions. Some designs include precursor signal quenchers as a part of second TOF-MS to reduce the instant current load on the ion detector.
Quadrupole time-of-flight
Quadrupole time-of-flight mass spectrometry (QToF-MS) has a similar configuration to a tandem mass spectrometer with a mass-resolving quadrupole and collision cell hexapole, but instead of a second mass-resolving quadrupole, a time-of-flight mass analyzer is used. Both quadrupoles can operate in RF mode only to allow all ions to pass through to the mass analyzer with minimal fragmentation. To increase spectral detail, the system takes advantage of collision-induced dissociation. Once the ions reach the flight tube, the ion pulser sends them upwards towards the reflectron and back down into the detector. Since the ion pulser transfers the same kinetic energy to all molecules, the flight time is dictated by the mass of the analyte.
QToF is capable of measuring mass to the 4th decimal place and is frequently used for pharmaceutical and toxicological analysis as a screening method for drug analogues. Identification is done by collection of the mass spectrum and comparison to tandem mass spectrum libraries.
Detectors
A time-of-flight mass spectrometer (TOFMS) consists of a mass analyzer and a detector. An ion source (either pulsed or continuous) is used for lab-related TOF experiments, but not needed for TOF analyzers used in space, where the sun or planetary ionospheres provide the ions. The TOF mass analyzer can be a linear flight tube or a reflectron. The ion detector typically consists of microchannel plate detector or a fast secondary emission multiplier (SEM) where first converter plate (dynode) is flat. The electrical signal from the detector is recorded by means of a time to digital converter (TDC) or a fast analog-to-digital converter (ADC). TDC is mostly used in combination with orthogonal-acceleration (oa)TOF instruments.
Time-to-digital converters register the arrival of a single ion at discrete time "bins"; a combination of threshold triggering and constant fraction discriminator (CFD) discriminates between electronic noise and ion arrival events. CFD converts nanosecond-long Gaussian-shaped electrical pulses of different amplitudes generated on the MCP's anode into common-shape pulses (e.g., pulses compatible with TTL/ESL logic circuitry) sent to TDC. Using CFD provides a time point correspondent to a position of peak maximum independent of variation in the peak amplitude caused by variation of the MCP or SEM gain. Fast CFDs of advanced designs have the dead times equal to or less than two single-hit response times of the ion detector (single-hit response time for MCP with 2-5 micron wide channels can be somewhere between 0.2 ns and 0.8 ns, depending on the channel angle) thus preventing repetitive triggering from the same pulse. Double-hit resolution (dead time) of modern multi-hit TDC can be as low as 3-5 nanosecond.
The TDC is a counting detector – it can be extremely fast (down to a few picoseconds of resolution), but its dynamic range is limited due to its inability to properly count the events when more than one ion hits the detector simultaneously (i.e., within the TDC dead time). The outcome of limited dynamic range is that the number of ions (events) recorded in one mass spectrum is smaller than the real number. The problem of limited dynamic range can be alleviated using a multichannel detector design: an array of mini-anodes attached to a common MCP stack and multiple CFD/TDC channels, where each CFD/TDC records signals from an individual mini-anode. To obtain peaks with statistically acceptable intensities, ion counting is accompanied by summing of hundreds of individual mass spectra (so-called histogramming). To reach a very high counting rate (limited only by the duration of an individual TOF spectrum, which can be as long as a few milliseconds in multipath TOF setups), a very high repetition rate of ion extractions into the TOF tube is used. Commercial orthogonal acceleration TOF mass analyzers typically operate at 5–20 kHz repetition rates. In combined mass spectra obtained by summing a large number of individual ion detection events, each peak is a histogram obtained by adding up counts in each individual bin. Because the recording of an individual ion arrival with a TDC yields only a single time point, the TDC eliminates the fraction of the peak width determined by the limited response time of both the MCP detector and the preamplifier. This propagates into better mass resolution.
Modern ultra-fast 10 GSample/sec analog-to-digital converters digitize the pulsed ion current from the MCP detector at discrete time intervals (100 picoseconds). Modern 8-bit or 10-bit 10 GHz ADC has much higher dynamic range than the TDC, which allows its usage in MALDI-TOF instruments with its high peak currents. To record fast analog signals from MCP detectors one is required to carefully match the impedance of the detector anode with the input circuitry of the ADC (preamplifier) to minimize the "ringing" effect. Mass resolution in mass spectra recorded with ultra-fast ADC can be improved by using small-pore (2-5 micron) MCP detectors with shorter response times.
Applications
Matrix-assisted laser desorption ionization (MALDI) is a pulsed ionization technique that is readily compatible with TOF MS.
Atom probe tomography also takes advantage of TOF mass spectrometry.
Photoelectron photoion coincidence spectroscopy uses soft photoionization for ion internal energy selection and TOF mass spectrometry for mass analysis.
Secondary ion mass spectrometry commonly utilizes TOF mass spectrometers to allow parallel detection of different ions with a high mass resolving power.
Stefan Rutzinger proposed using TOF mass spectrometry with a cryogenic detector for the spectrometry of heavy biomolecules.
History of the field
An early time-of-flight mass spectrometer, named the Velocitron, was reported by A. E. Cameron and D. F. Eggers Jr, working at the Y-12 National Security Complex, in 1948. The idea had been proposed two years earlier, in 1946, by W. E. Stephens of the University of Pennsylvania in a Friday afternoon session of a meeting, at the Massachusetts Institute of Technology, of the American Physical Society.
References
Bibliography
External links
IFR/JIC TOF MS Tutorial
Jordan TOF Products TOF Mass Spectrometer Tutorial
University of Bristol TOF-MS Tutorial
Kore Technology – Introduction to Time-of-Flight Mass Spectrometry
Mass spectrometry
Orbital speed | In gravitationally bound systems, the orbital speed of an astronomical body or object (e.g. planet, moon, artificial satellite, spacecraft, or star) is the speed at which it orbits around either the barycenter (the combined center of mass) or, if one body is much more massive than the other bodies of the system combined, its speed relative to the center of mass of the most massive body.
The term can be used to refer to either the mean orbital speed (i.e. the average speed over an entire orbit) or its instantaneous speed at a particular point in its orbit. The maximum (instantaneous) orbital speed occurs at periapsis (perigee, perihelion, etc.), while the minimum speed for objects in closed orbits occurs at apoapsis (apogee, aphelion, etc.). In ideal two-body systems, objects in open orbits continue to slow down forever as their distance to the barycenter increases.
When a system approximates a two-body system, instantaneous orbital speed at a given point of the orbit can be computed from its distance to the central body and the object's specific orbital energy, sometimes called "total energy". Specific orbital energy is constant and independent of position.
Radial trajectories
In the following, it is assumed that the system is a two-body system and the orbiting object has a negligible mass compared to the larger (central) object. In real-world orbital mechanics, it is the system's barycenter, not the larger object, which is at the focus.
Specific orbital energy, or total energy, is equal to Ek − Ep (the difference between kinetic energy and potential energy). The sign of the result may be positive, zero, or negative and the sign tells us something about the type of orbit:
If the specific orbital energy is positive the orbit is unbound, or open, and will follow a hyperbola with the larger body the focus of the hyperbola. Objects in open orbits do not return; once past periapsis their distance from the focus increases without bound. See radial hyperbolic trajectory
If the total energy is zero, (Ek = Ep): the orbit is a parabola with focus at the other body. See radial parabolic trajectory. Parabolic orbits are also open.
If the total energy is negative (Ek − Ep < 0): The orbit is bound, or closed. The motion will be on an ellipse with one focus at the other body. See radial elliptic trajectory, free-fall time. Planets have bound orbits around the Sun.
Transverse orbital speed
The transverse orbital speed is inversely proportional to the distance to the central body because of the law of conservation of angular momentum, or equivalently, Kepler's second law. This states that as a body moves around its orbit during a fixed amount of time, the line from the barycenter to the body sweeps a constant area of the orbital plane, regardless of which part of its orbit the body traces during that period of time.
This law implies that the body moves slower near its apoapsis than near its periapsis, because at the smaller distance along the arc it needs to move faster to cover the same area.
Mean orbital speed
For orbits with small eccentricity, the length of the orbit is close to that of a circular one, and the mean orbital speed can be approximated either from observations of the orbital period and the semimajor axis of its orbit, or from knowledge of the masses of the two bodies and the semimajor axis:
$v \approx \frac{2\pi a}{T} \approx \sqrt{\frac{\mu}{a}},$
where $v$ is the orbital velocity, $a$ is the length of the semimajor axis, $T$ is the orbital period, and $\mu = GM$ is the standard gravitational parameter. This is an approximation that only holds true when the orbiting body is of considerably lesser mass than the central one, and eccentricity is close to zero.
When one of the bodies is not of considerably lesser mass see: Gravitational two-body problem
So, when one of the masses is almost negligible compared to the other mass, as is the case for Earth and Sun, one can approximate the orbit velocity as:
$v_o \approx \sqrt{\frac{GM}{r}}$
or:
$v_o \approx \frac{v_e}{\sqrt{2}},$
where $M$ is the (greater) mass around which this negligible mass or body is orbiting, and $v_e$ is the escape velocity at a distance from the center of the primary body equal to the radius of the orbit.
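As a quick numerical illustration of these approximations (using standard values for the Sun's gravitational parameter and Earth's semi-major axis, which are assumptions here rather than figures from the text):

```python
import math

MU_SUN = 1.32712440018e20          # standard gravitational parameter of the Sun, m^3/s^2
AU = 1.495978707e11                # Earth's semi-major axis, m

v_mean = math.sqrt(MU_SUN / AU)             # circular-orbit approximation v ~ sqrt(mu/a)
v_escape = math.sqrt(2.0 * MU_SUN / AU)     # escape speed at the same distance
print(f"mean orbital speed ~ {v_mean / 1e3:.2f} km/s")      # ~29.78 km/s
print(f"escape speed at 1 AU ~ {v_escape / 1e3:.2f} km/s")  # ~42.1 km/s, i.e. v_mean ~ v_escape/sqrt(2)
```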
For an object in an eccentric orbit orbiting a much larger body, the length of the orbit decreases with orbital eccentricity $e$, and is an ellipse. This can be used to obtain a more accurate estimate of the average orbital speed:
$v_o = \frac{2\pi a}{T}\left[1 - \frac{1}{4}e^{2} - \frac{3}{64}e^{4} - \cdots\right].$
The mean orbital speed decreases with eccentricity.
Instantaneous orbital speed
For the instantaneous orbital speed of a body at any given point in its trajectory, both the mean distance and the instantaneous distance are taken into account:
$v = \sqrt{\mu\left(\frac{2}{r} - \frac{1}{a}\right)},$
where $\mu$ is the standard gravitational parameter of the orbited body, $r$ is the distance at which the speed is to be calculated, and $a$ is the length of the semi-major axis of the elliptical orbit. This expression is called the vis-viva equation.
For the Earth at perihelion ($r \approx 1.471 \times 10^{11}\ \text{m}$), the value is:
$v \approx \sqrt{1.327 \times 10^{20}\ \text{m}^{3}\,\text{s}^{-2} \left(\frac{2}{1.471 \times 10^{11}\ \text{m}} - \frac{1}{1.496 \times 10^{11}\ \text{m}}\right)} \approx 30.29\ \text{km/s},$
which is slightly faster than Earth's average orbital speed of about 29.78 km/s, as expected from Kepler's 2nd Law.
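A short check of this number with the vis-viva equation, again using standard values for the Sun's gravitational parameter and Earth's orbital elements (assumed here, not quoted in the text):

```python
import math

MU_SUN = 1.32712440018e20          # m^3/s^2
A_EARTH = 1.495978707e11           # semi-major axis, m
E_EARTH = 0.0167                   # orbital eccentricity

def vis_viva(r, a, mu=MU_SUN):
    """Instantaneous orbital speed v = sqrt(mu * (2/r - 1/a))."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

r_peri = A_EARTH * (1.0 - E_EARTH)
r_apo = A_EARTH * (1.0 + E_EARTH)
print(f"perihelion: {vis_viva(r_peri, A_EARTH) / 1e3:.2f} km/s")  # ~30.29
print(f"aphelion:   {vis_viva(r_apo, A_EARTH) / 1e3:.2f} km/s")   # ~29.29
```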
Planets
The closer an object is to the Sun the faster it needs to move to maintain the orbit. Objects move fastest at perihelion (closest approach to the Sun) and slowest at aphelion (furthest distance from the Sun). Since planets in the Solar System are in nearly circular orbits their individual orbital velocities do not vary much. Being closest to the Sun and having the most eccentric orbit, Mercury's orbital speed varies from about 59 km/s at perihelion to 39 km/s at aphelion.
Halley's Comet on an eccentric orbit that reaches beyond Neptune will be moving 54.6 km/s when from the Sun, 41.5 km/s when 1 AU from the Sun (passing Earth's orbit), and roughly 1 km/s at aphelion from the Sun. Objects passing Earth's orbit going faster than 42.1 km/s have achieved escape velocity and will be ejected from the Solar System if not slowed down by a gravitational interaction with a planet.
See also
Escape velocity
Delta-v budget
Hohmann transfer orbit
Bi-elliptic transfer
References
Orbits
Le Chatelier's principle | In chemistry, Le Chatelier's principle (pronounced or ), also called Chatelier's principle, Braun–Le Chatelier principle, Le Chatelier–Braun principle or the equilibrium law, is a principle used to predict the effect of a change in conditions on chemical equilibrium.
The principle is named after French chemist Henry Louis Le Chatelier, who enunciated the principle in 1884 by extending the reasoning from the Van 't Hoff relation of how temperature variations change the equilibrium to variations of pressure and of what is now called chemical potential, and it is sometimes also credited to Karl Ferdinand Braun, who discovered it independently in 1887. It can be defined as:
In scenarios outside thermodynamic equilibrium, there can arise phenomena in contradiction to an over-general statement of Le Chatelier's principle.
Le Chatelier's principle is sometimes alluded to in discussions of topics other than thermodynamics.
Thermodynamic statement
Le Chatelier–Braun principle analyzes the qualitative behaviour of a thermodynamic system when a particular one of its externally controlled state variables, say changes by an amount the 'driving change', causing a change the 'response of prime interest', in its conjugate state variable all other externally controlled state variables remaining constant. The response illustrates 'moderation' in ways evident in two related thermodynamic equilibria. Obviously, one of has to be intensive, the other extensive. Also as a necessary part of the scenario, there is some particular auxiliary 'moderating' state variable , with its conjugate state variable For this to be of interest, the 'moderating' variable must undergo a change or in some part of the experimental protocol; this can be either by imposition of a change , or with the holding of constant, written For the principle to hold with full generality, must be extensive or intensive accordingly as is so. Obviously, to give this scenario physical meaning, the 'driving' variable and the 'moderating' variable must be subject to separate independent experimental controls and measurements.
Explicit statement
The principle can be stated in two ways, formally different, but substantially equivalent, and, in a sense, mutually 'reciprocal'. The two ways illustrate the Maxwell relations, and the stability of thermodynamic equilibrium according to the second law of thermodynamics, evident as the spread of energy amongst the state variables of the system in response to an imposed change.
The two ways of statement share an 'index' experimental protocol (denoted that may be described as 'changed driver, moderation permitted'. Along with the driver change it imposes a constant with and allows the uncontrolled 'moderating' variable response along with the 'index' response of interest
The two ways of statement differ in their respective compared protocols. One way posits a 'changed driver, no moderation' protocol (denoted The other way posits a 'fixed driver, imposed moderation' protocol (denoted )
'Driving' variable forced to change, 'moderating' variable allowed to respond; compared with 'driving' variable forced to change, 'moderating' variable forced not to change
This way compares with to compare the effects of the imposed the change with and without moderation. The protocol prevents 'moderation' by enforcing that through an adjustment and it observes the 'no-moderation' response Provided that the observed response is indeed that then the principle states that .
In other words, change in the 'moderating' state variable moderates the effect of the driving change in on the responding conjugate variable
'Driving' variable forced to change, 'moderating' variable allowed to respond; compared with 'driving' variable forced not to change, 'moderating' variable forced to change
This way also uses two experimental protocols, and , to compare the index effect with the effect of 'moderation' alone. The 'index' protocol is executed first; the response of prime interest, is observed, and the response of the 'moderating' variable is also measured. With that knowledge, then the 'fixed driver, moderation imposed' protocol enforces that with the driving variable held fixed; the protocol also, through an adjustment imposes a change (learnt from the just previous measurement) in the 'moderating' variable, and measures the change Provided that the 'moderated' response is indeed that then the principle states that the signs of and are opposite.
Again, in other words, change in the 'moderating' state variable opposes the effect of the driving change in on the responding conjugate variable
Other statements
The duration of adjustment depends on the strength of the negative feedback to the initial shock. The principle is typically used to describe closed negative-feedback systems, but applies, in general, to thermodynamically closed and isolated systems in nature, since the second law of thermodynamics ensures that the disequilibrium caused by an instantaneous shock is eventually followed by a new equilibrium.
While well rooted in chemical equilibrium, Le Chatelier's principle can also be used in describing mechanical systems in that a system put under stress will respond in such a way as to reduce or minimize that stress. Moreover, the response will generally be via the mechanism that most easily relieves that stress. Shear pins and other such sacrificial devices are design elements that protect systems against stress applied in undesired manners to relieve it so as to prevent more extensive damage to the entire system, a practical engineering application of Le Chatelier's principle.
Chemistry
Effect of change in concentration
Changing the concentration of a chemical will shift the equilibrium to the side that would counter that change in concentration. The chemical system will attempt to partly oppose the change imposed on the original state of equilibrium. In turn, the rate of reaction, extent, and yield of products will be altered corresponding to the impact on the system.
This can be illustrated by the equilibrium of carbon monoxide and hydrogen gas, reacting to form methanol.
CO + 2 H2 ⇌ CH3OH
Suppose we were to increase the concentration of CO in the system. Using Le Chatelier's principle, we can predict that the concentration of methanol will increase, decreasing the total change in CO. If we are to add a species to the overall reaction, the reaction will favor the side opposing the addition of the species. Likewise, the subtraction of a species would cause the reaction to "fill the gap" and favor the side where the species was reduced. This observation is supported by the collision theory. As the concentration of CO is increased, the frequency of successful collisions of that reactant would increase also, allowing for an increase in forward reaction, and generation of the product. Even if the desired product is not thermodynamically favored, the end-product can be obtained if it is continuously removed from the solution.
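The direction of the shift can be checked numerically by comparing the reaction quotient Q = [CH3OH]/([CO][H2]²) with the equilibrium constant K. In the sketch below (the K value and concentrations are hypothetical, chosen only for illustration), adding CO makes Q fall below K, so the forward reaction toward methanol is favoured until Q returns to K.

```c
#include <stdio.h>

/* Reaction quotient for CO + 2 H2 <=> CH3OH, using molar concentrations. */
static double reaction_quotient(double co, double h2, double ch3oh) {
    return ch3oh / (co * h2 * h2);
}

int main(void) {
    const double K = 10.0;                      /* hypothetical equilibrium constant   */
    double co = 0.50, h2 = 1.00, ch3oh = 5.0;   /* an equilibrium mixture, so Q = K    */

    printf("Q before disturbance = %.2f (K = %.2f)\n",
           reaction_quotient(co, h2, ch3oh), K);

    co += 0.50;  /* disturb the equilibrium by adding CO */
    double q = reaction_quotient(co, h2, ch3oh);
    printf("Q after adding CO    = %.2f\n", q);
    printf("Q < K, so the forward reaction (toward CH3OH) proceeds until Q = K again.\n");
    return 0;
}
```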
The effect of a change in concentration is often exploited synthetically for condensation reactions (i.e., reactions that extrude water) that are equilibrium processes (e.g., formation of an ester from carboxylic acid and alcohol or an imine from an amine and aldehyde). This can be achieved by physically sequestering water, by adding desiccants like anhydrous magnesium sulfate or molecular sieves, or by continuous removal of water by distillation, often facilitated by a Dean-Stark apparatus.
Effect of change in temperature
The effect of changing the temperature in the equilibrium can be made clear by 1) incorporating heat as either a reactant or a product, and 2) assuming that an increase in temperature increases the heat content of a system. When the reaction is exothermic (ΔH is negative and energy is released), heat is included as a product, and when the reaction is endothermic (ΔH is positive and energy is consumed), heat is included as a reactant. Hence, whether increasing or decreasing the temperature would favor the forward or the reverse reaction can be determined by applying the same principle as with concentration changes.
Take, for example, the reversible reaction of nitrogen gas with hydrogen gas to form ammonia:
N2(g) + 3 H2(g) ⇌ 2 NH3(g) ΔH = −92 kJ mol−1
Because this reaction is exothermic, it produces heat:
N2(g) + 3 H2(g) ⇌ 2 NH3(g) + heat
If the temperature were increased, the heat content of the system would increase, so the system would consume some of that heat by shifting the equilibrium to the left, thereby producing less ammonia. More ammonia would be produced if the reaction were run at a lower temperature, but a lower temperature also lowers the rate of the process, so, in practice (the Haber process) the temperature is set at a compromise value that allows ammonia to be made at a reasonable rate with an equilibrium concentration that is not too unfavorable.
In exothermic reactions, an increase in temperature decreases the equilibrium constant, K, whereas in endothermic reactions, an increase in temperature increases K.
Le Chatelier's principle applied to changes in concentration or pressure can be understood by giving K a constant value. The effect of temperature on equilibria, however, involves a change in the equilibrium constant. The dependence of K on temperature is determined by the sign of ΔH. The theoretical basis of this dependence is given by the Van 't Hoff equation.
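A short numerical sketch of this temperature dependence uses the integrated Van 't Hoff equation ln(K₂/K₁) = −(ΔH/R)(1/T₂ − 1/T₁), assuming ΔH is roughly constant over the range and taking the −92 kJ mol−1 value quoted above; the reference temperature and K₁ are arbitrary choices for illustration only. For this exothermic reaction, K falls as the temperature rises. Compile with the C math library (e.g. cc vanthoff.c -lm).

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double R  = 8.314;     /* gas constant, J mol^-1 K^-1                  */
    const double dH = -92.0e3;   /* reaction enthalpy, J mol^-1 (exothermic)     */
    const double T1 = 500.0;     /* arbitrary reference temperature, K           */
    const double K1 = 1.0;       /* arbitrary reference equilibrium constant     */

    /* Integrated Van 't Hoff equation: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1) */
    for (double T2 = 400.0; T2 <= 700.0; T2 += 100.0) {
        double K2 = K1 * exp(-(dH / R) * (1.0 / T2 - 1.0 / T1));
        printf("T = %5.0f K   K = %10.3g\n", T2, K2);
    }
    /* For an exothermic reaction (dH < 0), K decreases as T increases,
       matching the prediction that heating shifts the equilibrium away
       from the heat-producing (product) side.                           */
    return 0;
}
```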
Effect of change in pressure
The equilibrium concentrations of the products and reactants do not directly depend on the total pressure of the system. They may depend on the partial pressure of the products and reactants, but if the number of moles of gaseous reactants is equal to the number of moles of gaseous products, pressure has no effect on equilibrium.
Changing total pressure by adding an inert gas at constant volume does not affect the equilibrium concentrations (see Effect of adding an inert gas below).
Changing total pressure by changing the volume of the system changes the partial pressures of the products and reactants and can affect the equilibrium concentrations (see §Effect of change in volume below).
Effect of change in volume
Changing the volume of the system changes the partial pressures of the products and reactants and can affect the equilibrium concentrations. With a pressure increase due to a decrease in volume, the side of the equilibrium with fewer moles is more favorable and with a pressure decrease due to an increase in volume, the side with more moles is more favorable. There is no effect on a reaction where the number of moles of gas is the same on each side of the chemical equation.
Considering the reaction of nitrogen gas with hydrogen gas to form ammonia:
N2(g) + 3 H2(g) ⇌ 2 NH3(g) ΔH = −92 kJ mol−1
Note the number of moles of gas on the left-hand side and the number of moles of gas on the right-hand side. When the volume of the system is changed, the partial pressures of the gases change. If we were to decrease pressure by increasing volume, the equilibrium of the above reaction will shift to the left, because the reactant side has a greater number of moles than does the product side. The system tries to counteract the decrease in partial pressure of gas molecules by shifting to the side that exerts greater pressure. Similarly, if we were to increase pressure by decreasing volume, the equilibrium shifts to the right, counteracting the pressure increase by shifting to the side with fewer moles of gas that exert less pressure. If the volume is increased because there are more moles of gas on the reactant side, this change is more significant in the denominator of the equilibrium constant expression, causing a shift in equilibrium.
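The shift can again be followed through the reaction quotient Q = [NH3]²/([N2][H2]³). Halving the volume doubles every concentration, so the numerator grows by a factor of 4 while the denominator grows by a factor of 16; Q drops below K and the equilibrium moves toward the side with fewer moles of gas. The concentrations below are hypothetical, chosen only for illustration.

```c
#include <stdio.h>

/* Q for N2 + 3 H2 <=> 2 NH3 in terms of molar concentrations. */
static double q_ammonia(double n2, double h2, double nh3) {
    return (nh3 * nh3) / (n2 * h2 * h2 * h2);
}

int main(void) {
    double n2 = 1.0, h2 = 3.0, nh3 = 2.0;    /* hypothetical equilibrium mixture */
    double K  = q_ammonia(n2, h2, nh3);       /* at equilibrium, Q equals K       */

    printf("Q at equilibrium          = %.4f\n", K);

    /* Halve the volume: every concentration doubles instantly. */
    double q_compressed = q_ammonia(2.0 * n2, 2.0 * h2, 2.0 * nh3);
    printf("Q right after compression = %.4f  (Q < K: shift to the right)\n",
           q_compressed);
    return 0;
}
```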
Effect of adding an inert gas
An inert gas (or noble gas), such as helium, is one that does not react with other elements or compounds. Adding an inert gas into a gas-phase equilibrium at constant volume does not result in a shift. This is because the addition of a non-reactive gas does not change the equilibrium equation, as the inert gas appears on both sides of the chemical reaction equation. For example, if A and B react to form C and D, but X does not participate in the reaction:
a A + b B + x X ⇌ c C + d D + x X.
While it is true that the total pressure of the system increases, the total pressure does not have any effect on the equilibrium constant; rather, it is a change in partial pressures that will cause a shift in the equilibrium. If, however, the volume is allowed to increase in the process, the partial pressures of all gases would be decreased, resulting in a shift towards the side with the greater number of moles of gas. The shift will never occur on the side with fewer moles of gas. It is also known as Le Chatelier's postulate.
Effect of a catalyst
A catalyst increases the rate of a reaction without being consumed in the reaction. The use of a catalyst does not affect the position and composition of the equilibrium of a reaction, because both the forward and backward reactions are sped up by the same factor.
For example, consider the Haber process for the synthesis of ammonia (NH3):
N2 + 3 H2 ⇌ 2 NH3
In the above reaction, iron (Fe) and molybdenum (Mo) will function as catalysts if present. They will accelerate any reactions, but they do not affect the state of the equilibrium.
General statements
Thermodynamic equilibrium processes
Le Chatelier's principle refers to states of thermodynamic equilibrium. The latter are stable against perturbations that satisfy certain criteria; this is essential to the definition of thermodynamic equilibrium.
Stated another way: changes in the temperature, pressure, volume, or concentration of a system will result in predictable and opposing changes in the system in order to achieve a new equilibrium state.
For this, a state of thermodynamic equilibrium is most conveniently described through a fundamental relation that specifies a cardinal function of state, of the energy kind, or of the entropy kind, as a function of state variables chosen to fit the thermodynamic operations through which a perturbation is to be applied.
In theory and, nearly, in some practical scenarios, a body can be in a stationary state with zero macroscopic flows and rates of chemical reaction (for example, when no suitable catalyst is present), yet not in thermodynamic equilibrium, because it is metastable or unstable; then Le Chatelier's principle does not necessarily apply.
Non-equilibrium processes
A simple body or a complex thermodynamic system can also be in a stationary state with non-zero rates of flow and chemical reaction; sometimes the word "equilibrium" is used in reference to such a state, though by definition it is not a thermodynamic equilibrium state. Sometimes, it is proposed to consider Le Chatelier's principle for such states. For this exercise, rates of flow and of chemical reaction must be considered. Such rates are not supplied by equilibrium thermodynamics. For such states, there are no simple statements that echo Le Chatelier's principle. Prigogine and Defay demonstrate that such a scenario may exhibit moderation, or may exhibit a measured amount of anti-moderation, though not a run-away anti-moderation that goes to completion. The example analysed by Prigogine and Defay is the Haber process.
This situation is clarified by considering two basic methods of analysis of a process. One is the classical approach of Gibbs, the other uses the near- or local- equilibrium approach of De Donder. The Gibbs approach requires thermodynamic equilibrium. The Gibbs approach is reliable within its proper scope, thermodynamic equilibrium, though of course it does not cover non-equilibrium scenarios. The De Donder approach can cover equilibrium scenarios, but also covers non-equilibrium scenarios in which there is only local thermodynamic equilibrium, and not thermodynamic equilibrium proper. The De Donder approach allows state variables called extents of reaction to be independent variables, though in the Gibbs approach, such variables are not independent. Thermodynamic non-equilibrium scenarios can contradict an over-general statement of Le Chatelier's Principle.
Related system concepts
It is common to treat the principle as a more general observation about systems.
The concept of systemic maintenance of a stable steady state despite perturbations has a variety of names, and has been studied in a variety of contexts, chiefly in the natural sciences. In chemistry, the principle is used to manipulate the outcomes of reversible reactions, often to increase their yield. In pharmacology, the binding of ligands to receptors may shift the equilibrium according to Le Chatelier's principle, thereby explaining the diverse phenomena of receptor activation and desensitization. In biology, the concept of homeostasis is different from Le Chatelier's principle, in that homoeostasis is generally maintained by processes of active character, as distinct from the passive or dissipative character of the processes described by Le Chatelier's principle in thermodynamics. In economics, even further from thermodynamics, allusion to the principle is sometimes regarded as helping explain the price equilibrium of efficient economic systems. In some dynamic systems, the end-state cannot be determined from the shock or perturbation.
Economics
In economics, a similar concept also named after Le Chatelier was introduced by American economist Paul Samuelson in 1947. There the generalized Le Chatelier principle is for a maximum condition of economic equilibrium: Where all unknowns of a function are independently variable, auxiliary constraints—"just-binding" in leaving initial equilibrium unchanged—reduce the response to a parameter change. Thus, factor-demand and commodity-supply elasticities are hypothesized to be lower in the short run than in the long run because of the fixed-cost constraint in the short run.
Since the change of the value of an objective function in a neighbourhood of the maximum position is described by the envelope theorem, Le Chatelier's principle can be shown to be a corollary thereof.
See also
Homeostasis
Common-ion effect
Response reactions
References
Bibliography of cited sources
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York.
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York.
Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London.
Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, translated by D.H. Everett, Longmans, Green & Co, London.
External links
YouTube video of Le Chatelier's principle and pressure
Electric Fence | For the physical barrier, see electric fence.
Electric Fence (or eFence) is a memory debugger written by Bruce Perens. It consists of a library which programmers can link into their code to override the C standard library memory management functions. eFence triggers a program crash when the memory error occurs, so a debugger can be used to inspect the code that caused the error.
Electric Fence is intended to find two common types of programming bugs:
Overrunning the end (or beginning) of a dynamically allocated buffer
Using a dynamically allocated buffer after returning it to the heap
In both cases, Electric Fence causes the errant program to abort immediately via a segmentation fault. Normally, these two errors would cause heap corruption, which would manifest itself only much later, usually in unrelated ways. Thus, Electric Fence helps programmers find the precise location of memory programming errors.
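A minimal example of the first kind of bug is sketched below (the file name and build line are illustrative; exact usage depends on the platform). Compiled normally, the one-byte overrun may pass unnoticed or corrupt the heap silently; linked against Electric Fence, for example with gcc demo.c -lefence, the write lands on the inaccessible guard page placed just after the allocation and the program aborts immediately with a segmentation fault at the offending line.

```c
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf = malloc(16);
    if (buf == NULL)
        return 1;

    /* Writes 16 characters plus a terminating '\0': one byte past the
       end of the buffer.  Under Electric Fence this write lands on the
       guard page directly after the allocation, so the program is
       stopped here by SIGSEGV instead of silently corrupting the heap. */
    strcpy(buf, "0123456789abcdef");

    free(buf);
    return 0;
}
```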
Electric Fence allocates at least two pages (often 8KB) for every allocated buffer. In some modes of operation, it does not deallocate freed buffers. Thus, Electric Fence vastly increases the memory requirements of programs being debugged. This leads to the recommendation that programmers should apply Electric Fence to smaller programs when possible, and should never leave Electric Fence linked against production code.
Electric Fence is free software licensed under the GNU General Public License.
See also
Dmalloc
External links
Electric Fence 2.2.4 source code from Ubuntu
DUMA – a fork of Electric Fence which also works for Windows
eFence-2.2.2 – rpm of electric fence 2.2.2 source
Fresnel integral | The Fresnel integrals and are two transcendental functions named after Augustin-Jean Fresnel that are used in optics and are closely related to the error function. They arise in the description of near-field Fresnel diffraction phenomena and are defined through the following integral representations:
The parametric curve (C(t), S(t)) is the Euler spiral or clothoid, a curve whose curvature varies linearly with arclength.
The term Fresnel integral may also refer to the complex definite integral
∫₀^∞ e^(±i a x²) dx = ½ √(π/a) e^(±iπ/4),
where a is real and positive, which can be evaluated by closing a contour in the complex plane and applying Cauchy's integral theorem.
Definition
The Fresnel integrals admit the following power series expansions that converge for all x:
S(x) = Σ_{n=0}^∞ (−1)ⁿ x^(4n+3) / ((2n+1)! (4n+3)),
C(x) = Σ_{n=0}^∞ (−1)ⁿ x^(4n+1) / ((2n)! (4n+1)).
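A minimal numerical sketch (not from the source; the function name, loop bounds and tolerance are my own choices) sums these series term by term. This converges quickly for moderate |x| but is unsuitable for large arguments, where the asymptotic forms given below should be used instead. Compile with the C math library (e.g. cc fresnel.c -lm).

```c
#include <stdio.h>
#include <math.h>

/* Fresnel integrals S(x) and C(x) for the integrands sin(t^2), cos(t^2),
   summed from their power series.  Suitable only for moderate |x|.       */
static void fresnel_series(double x, double *s, double *c) {
    double x2 = x * x;
    double term_s = x * x2;   /* n = 0 term of the sin series before /(4n+3): x^3/1! */
    double term_c = x;        /* n = 0 term of the cos series before /(4n+1): x^1/0! */
    double sum_s = term_s / 3.0, sum_c = term_c;
    for (int n = 1; n < 100; ++n) {
        term_s *= -x2 * x2 / ((2.0 * n) * (2.0 * n + 1.0));
        term_c *= -x2 * x2 / ((2.0 * n - 1.0) * (2.0 * n));
        sum_s += term_s / (4.0 * n + 3.0);
        sum_c += term_c / (4.0 * n + 1.0);
        if (fabs(term_s) < 1e-16 && fabs(term_c) < 1e-16)
            break;
    }
    *s = sum_s;
    *c = sum_c;
}

int main(void) {
    for (double x = 0.5; x <= 3.0; x += 0.5) {
        double s, c;
        fresnel_series(x, &s, &c);
        printf("x = %.1f   S(x) = %+.6f   C(x) = %+.6f\n", x, s, c);
    }
    printf("limit as x -> infinity: %.6f\n", 0.5 * sqrt(acos(-1.0) / 2.0));
    return 0;
}
```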
Some widely used tables use π t²/2 instead of t² for the argument of the integrals defining S(x) and C(x). This changes their limits at infinity from ½√(π/2) to ½, and the arc length for the first spiral turn from √(2π) to 2 (at t = 2). These alternative functions are usually known as normalized Fresnel integrals.
Euler spiral
The Euler spiral, also known as a Cornu spiral or clothoid, is the curve generated by a parametric plot of against . The Euler spiral was first studied in the mid 18th century by Leonhard Euler in the context of Euler–Bernoulli beam theory. A century later, Marie Alfred Cornu constructed the same spiral as a nomogram for diffraction computations.
From the definitions of the Fresnel integrals, the infinitesimals dx and dy are thus:
dx = C′(t) dt = cos(t²) dt,   dy = S′(t) dt = sin(t²) dt.
Thus the length of the spiral measured from the origin can be expressed as
L = ∫₀^{t₀} √(dx² + dy²) = ∫₀^{t₀} dt = t₀.
That is, the parameter t is the curve length measured from the origin (0, 0), and the Euler spiral has infinite length. The vector (cos(t²), sin(t²)) also expresses the unit tangent vector along the spiral, giving the tangent angle θ = t². Since t is the curve length, the curvature κ can be expressed as
κ = dθ/dt = 2t.
Thus the rate of change of curvature with respect to the curve length is
dκ/dt = 2.
An Euler spiral has the property that its curvature at any point is proportional to the distance along the spiral, measured from the origin. This property makes it useful as a transition curve in highway and railway engineering: if a vehicle follows the spiral at unit speed, the parameter t in the above derivatives also represents the time. Consequently, a vehicle following the spiral at constant speed will have a constant rate of angular acceleration.
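Because the parameter is the arc length and the tangent direction is (cos t², sin t²), the spiral can be traced by a simple numerical integration; the step size and range below are arbitrary choices for illustration. Compile with the C math library (e.g. cc clothoid.c -lm).

```c
#include <stdio.h>
#include <math.h>

/* Trace the Euler spiral (C(t), S(t)) by integrating the unit tangent
   (cos t^2, sin t^2) with a simple midpoint rule over arc length t.    */
int main(void) {
    const double dt = 1e-4;            /* integration step (arc length) */
    double x = 0.0, y = 0.0;
    double next_print = 0.5;

    for (double t = 0.0; t < 5.0; t += dt) {
        double tm = t + 0.5 * dt;      /* midpoint of the step */
        x += cos(tm * tm) * dt;
        y += sin(tm * tm) * dt;
        if (t + dt >= next_print) {
            /* curvature = 2 t, proportional to distance along the curve */
            printf("t = %.1f   x = %+.5f   y = %+.5f   curvature = %.1f\n",
                   t + dt, x, y, 2.0 * (t + dt));
            next_print += 0.5;
        }
    }
    return 0;
}
```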
Sections from Euler spirals are commonly incorporated into the shape of rollercoaster loops to make what are known as clothoid loops.
Properties
S(x) and C(x) are odd functions of x,
which can be readily seen from the fact that their power series expansions have only odd-degree terms, or alternatively because they are antiderivatives of even functions that also are zero at the origin.
Asymptotics of the Fresnel integrals as x → ∞ are given by the formulas:
S(x) = ½√(π/2) − cos(x²)/(2x) + O(x⁻³),   C(x) = ½√(π/2) + sin(x²)/(2x) + O(x⁻³).
Using the power series expansions above, the Fresnel integrals can be extended to the domain of complex numbers, where they become entire functions of the complex variable .
The Fresnel integrals can be expressed using the error function as follows:
or
Limits as approaches infinity
The integrals defining S(x) and C(x) cannot be evaluated in the closed form in terms of elementary functions, except in special cases. The limits of these functions as x goes to infinity are known:
lim_{x→∞} C(x) = lim_{x→∞} S(x) = ½√(π/2) = √(2π)/4.
This can be derived with any one of several methods. One of them uses a contour integral of the function e^(−z²) around the boundary of the sector-shaped region in the complex plane formed by the positive x-axis, the bisector of the first quadrant y = x with x ≥ 0, and a circular arc of radius R centered at the origin.
As R goes to infinity, the integral along the circular arc tends to zero,
|∫_arc e^(−z²) dz| ≤ (π/(4R)) (1 − e^(−R²)) → 0,
where polar coordinates z = R e^(it) were used and Jordan's inequality was utilised for the second inequality. The integral along the real axis tends to the half Gaussian integral
∫₀^∞ e^(−t²) dt = √π / 2.
Note too that because the integrand is an entire function on the complex plane, its integral along the whole contour is zero. Overall, we must have
∫_L e^(−z²) dz = ∫₀^∞ e^(−t²) dt,
where L denotes the bisector of the first quadrant, as in the diagram. To evaluate the left hand side, parametrize the bisector as
z = t e^(iπ/4),
where t ranges from 0 to +∞. Note that the square of this expression is just z² = it². Therefore, substitution gives the left hand side as
∫₀^∞ e^(−it²) e^(iπ/4) dt.
Using Euler's formula to take real and imaginary parts of e^(−it²) gives this as
(cos(π/4) + i sin(π/4)) ∫₀^∞ (cos(t²) − i sin(t²)) dt = √π/2 + 0i,
where we have written 0i to emphasize that the original Gaussian integral's value is completely real with zero imaginary part. Letting
I_C = ∫₀^∞ cos(t²) dt,   I_S = ∫₀^∞ sin(t²) dt,
and then equating real and imaginary parts produces the following system of two equations in the two unknowns I_C and I_S:
(I_C + I_S)/√2 = √π/2,   (I_C − I_S)/√2 = 0.
Solving this for I_C and I_S gives the desired result.
Generalization
The integral
is a confluent hypergeometric function and also an incomplete gamma function
which reduces to Fresnel integrals if real or imaginary parts are taken:
The leading term in the asymptotic expansion is
and therefore
For , the imaginary part of this equation in particular is
with the left-hand side converging for and the right-hand side being its analytical extension to the whole plane less where lie the poles of .
The Kummer transformation of the confluent hypergeometric function is
with
Numerical approximation
For computation to arbitrary precision, the power series is suitable for small argument. For large argument, asymptotic expansions converge faster. Continued fraction methods may also be used.
For computation to particular target precision, other approximations have been developed. Cody developed a set of efficient approximations based on rational functions that give relative errors down to . A FORTRAN implementation of the Cody approximation that includes the values of the coefficients needed for implementation in other languages was published by van Snyder. Boersma developed an approximation with error less than .
Applications
The Fresnel integrals were originally used in the calculation of the electromagnetic field intensity in an environment where light bends around opaque objects. More recently, they have been used in the design of highways and railways, specifically their curvature transition zones, see track transition curve. Other applications are rollercoasters or calculating the transitions on a velodrome track to allow rapid entry to the bends and gradual exit.
Gallery
See also
Böhmer integral
Fresnel zone
Track transition curve
Euler spiral
Zone plate
Dirichlet integral
Notes
References
(Uses π t²/2 instead of t² for the argument.)
External links
Cephes, free/open-source C++/C code to compute Fresnel integrals among other special functions. Used in SciPy and ALGLIB.
Faddeeva Package, free/open-source C++/C code to compute complex error functions (from which the Fresnel integrals can be obtained), with wrappers for Matlab, Python, and other languages.
Stochastic electrodynamics | Stochastic electrodynamics (SED) is extends classical electrodynamics (CED) of theoretical physics by adding the hypothesis of a classical Lorentz invariant radiation field having statistical properties similar to that of the electromagnetic zero-point field (ZPF) of quantum electrodynamics (QED).
Key ingredients
Stochastic electrodynamics combines two conventional classical ideas – electromagnetism derived from point charges obeying Maxwell's equations and particle motion driven by Lorentz forces – with one unconventional hypothesis: the classical field has radiation even at T=0. This zero-point radiation is inferred from observations of the (macroscopic) Casimir effect forces at low temperatures. As temperature approaches zero, experimental measurements of the force between two uncharged, conducting plates in a vacuum do not go to zero as classical electrodynamics would predict. Taking this result as evidence of classical zero-point radiation leads to the stochastic electrodynamics model.
Brief history
Stochastic electrodynamics is a term for a collection of research efforts of many different styles based on the ansatz that there exists a Lorentz invariant random electromagnetic radiation. The basic ideas have been around for a long time, but Marshall (1963) and Brafford seem to have originated the more concentrated efforts that started in the 1960s. Thereafter Timothy Boyer, Luis de la Peña and Ana María Cetto were perhaps the most prolific contributors in the 1970s and beyond.
Others have made contributions, alterations, and proposals concentrating on applying SED to problems in QED. A separate thread has been the investigation of an earlier proposal by Walther Nernst attempting to use the SED notion of a classical ZPF to explain inertial mass as due to a vacuum reaction.
In 2010, Cavalleri et al. introduced SEDS ('pure' SED, as they call it, plus spin) as a fundamental improvement that they claim potentially overcomes all the known drawbacks of SED. They also claim SEDS resolves four observed effects that are so far unexplained by QED, i.e., 1) the physical origin of the ZPF and its natural upper cutoff; 2) an anomaly in experimental studies of the neutrino rest mass; 3) the origin and quantitative treatment of 1/f noise; and 4) the high-energy tail (~ 10²¹ eV) of cosmic rays. Two double-slit electron diffraction experiments are proposed to discriminate between QM and SEDS.
In 2013, Auñon et al. showed that Casimir and Van der Waals interactions are a particular case of stochastic forces from electromagnetic sources when the broad Planck's spectrum is chosen, and the wavefields are non-correlated. Addressing fluctuating partially coherent light emitters with a tailored spectral energy distribution in the optical range, this establishes the link between stochastic electrodynamics and coherence theory; henceforth putting forward a way to optically create and control both such zero-point fields as well as Lifshitz forces of thermal fluctuations. In addition, this opens the path to build many more stochastic forces on employing narrow-band light sources for bodies with frequency-dependent responses.
Scope of SED
SED has been used in attempts to provide a classical explanation for effects previously considered to require quantum mechanics (here restricted to the Schrödinger equation and the Dirac equation and QED) for their explanation. It has also motivated a classical ZPF-based underpinning for gravity and inertia. There is no universal agreement on the successes and failures of SED, either in its congruence with standard theories of quantum mechanics, QED, and gravity or in its compliance with observation. The following SED-based explanations are relatively uncontroversial and are free of criticism at the time of writing:
The Van der Waals force
Diamagnetism
The Unruh effect
The following SED-based calculations and SED-related claims are more controversial, and some have been subject to published criticism:
The ground state of the harmonic oscillator
The ground state of the hydrogen atom
De Broglie waves
Inertia
Gravitation
See also
References
Fourth, fifth, and sixth derivatives of position | In physics, the fourth, fifth and sixth derivatives of position are defined as derivatives of the position vector with respect to time – with the first, second, and third derivatives being velocity, acceleration, and jerk, respectively. The higher-order derivatives are less common than the first three; thus their names are not as standardized, though the concept of a minimum snap trajectory has been used in robotics and is implemented in MATLAB.
The fourth derivative is referred to as snap, leading the fifth and sixth derivatives to be "sometimes somewhat facetiously" called crackle and pop, inspired by the Rice Krispies mascots Snap, Crackle, and Pop. The fourth derivative is also called jounce.
Fourth derivative (snap/jounce)
Snap, or jounce, is the fourth derivative of the position vector with respect to time, or the rate of change of the jerk with respect to time. Equivalently, it is the second derivative of acceleration or the third derivative of velocity,
and is defined by any of the following equivalent expressions:
s = dj/dt = d²a/dt² = d³v/dt³ = d⁴r/dt⁴.
In civil engineering, the design of railway tracks and roads involves the minimization of snap, particularly around bends with different radii of curvature. When snap is constant, the jerk changes linearly, allowing for a smooth increase in radial acceleration, and when, as is preferred, the snap is zero, the change in radial acceleration is linear. The minimization or elimination of snap is commonly done using a mathematical clothoid function. Minimizing snap improves the performance of machine tools and roller coasters.
The following equations are used for constant snap:
j = j₀ + s·t
a = a₀ + j₀·t + s·t²/2
v = v₀ + a₀·t + j₀·t²/2 + s·t³/6
x = x₀ + v₀·t + a₀·t²/2 + j₀·t³/6 + s·t⁴/24
where
s is constant snap,
j₀ is initial jerk,
j is final jerk,
a₀ is initial acceleration,
a is final acceleration,
v₀ is initial velocity,
v is final velocity,
x₀ is initial position,
x is final position,
t is time between initial and final states.
The notation s for snap (used by Visser) is not to be confused with the displacement vector commonly denoted similarly.
The dimensions of snap are distance per fourth power of time (LT−4). The corresponding SI unit is metre per second to the fourth power, m/s4, m⋅s−4.
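With constant snap the state is simply a fourth-order Taylor polynomial in time, so the equations above can be evaluated directly; the initial values in the sketch below are arbitrary illustrative numbers.

```c
#include <stdio.h>

/* Kinematics with constant snap s: each quantity is the time integral
   (Taylor polynomial) of the one above it.                             */
int main(void) {
    const double s  = 2.0;    /* constant snap, m/s^4 */
    const double j0 = 0.0;    /* initial jerk, m/s^3  */
    const double a0 = 1.0;    /* initial acceleration */
    const double v0 = 0.0;    /* initial velocity     */
    const double x0 = 0.0;    /* initial position     */

    for (double t = 0.0; t <= 2.0; t += 0.5) {
        double j = j0 + s * t;
        double a = a0 + j0 * t + s * t * t / 2.0;
        double v = v0 + a0 * t + j0 * t * t / 2.0 + s * t * t * t / 6.0;
        double x = x0 + v0 * t + a0 * t * t / 2.0
                 + j0 * t * t * t / 6.0 + s * t * t * t * t / 24.0;
        printf("t = %.1f  x = %7.4f  v = %7.4f  a = %7.4f  j = %7.4f\n",
               t, x, v, a, j);
    }
    return 0;
}
```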
The fifth derivative of the position vector with respect to time is sometimes referred to as crackle. It is the rate of change of snap with respect to time. Crackle is defined by any of the following equivalent expressions:
c = ds/dt = d²j/dt² = d³a/dt³ = d⁴v/dt⁴ = d⁵r/dt⁵.
The following equations are used for constant crackle:
s = s₀ + c·t
j = j₀ + s₀·t + c·t²/2
a = a₀ + j₀·t + s₀·t²/2 + c·t³/6
v = v₀ + a₀·t + j₀·t²/2 + s₀·t³/6 + c·t⁴/24
x = x₀ + v₀·t + a₀·t²/2 + j₀·t³/6 + s₀·t⁴/24 + c·t⁵/120
where
c : constant crackle,
s₀ : initial snap,
s : final snap,
j₀ : initial jerk,
j : final jerk,
a₀ : initial acceleration,
a : final acceleration,
v₀ : initial velocity,
v : final velocity,
x₀ : initial position,
x : final position,
t : time between initial and final states.
The dimensions of crackle are LT−5. The corresponding SI unit is m/s5.
The sixth derivative of the position vector with respect to time is sometimes referred to as pop. It is the rate of change of crackle with respect to time. Pop is defined by any of the following equivalent expressions:
p = dc/dt = d²s/dt² = d³j/dt³ = d⁴a/dt⁴ = d⁵v/dt⁵ = d⁶r/dt⁶.
The following equations are used for constant pop:
c = c₀ + p·t
s = s₀ + c₀·t + p·t²/2
j = j₀ + s₀·t + c₀·t²/2 + p·t³/6
a = a₀ + j₀·t + s₀·t²/2 + c₀·t³/6 + p·t⁴/24
v = v₀ + a₀·t + j₀·t²/2 + s₀·t³/6 + c₀·t⁴/24 + p·t⁵/120
x = x₀ + v₀·t + a₀·t²/2 + j₀·t³/6 + s₀·t⁴/24 + c₀·t⁵/120 + p·t⁶/720
where
p : constant pop,
c₀ : initial crackle,
c : final crackle,
s₀ : initial snap,
s : final snap,
j₀ : initial jerk,
j : final jerk,
a₀ : initial acceleration,
a : final acceleration,
v₀ : initial velocity,
v : final velocity,
x₀ : initial position,
x : final position,
t : time between initial and final states.
The dimensions of pop are LT−6. The corresponding SI unit is m/s6.
References
External links
Pascal's law | Pascal's law (also Pascal's principle or the principle of transmission of fluid-pressure) is a principle in fluid mechanics given by Blaise Pascal that states that a pressure change at any point in a confined incompressible fluid is transmitted throughout the fluid such that the same change occurs everywhere. The law was established by French mathematician Blaise Pascal in 1653 and published in 1663.
Definition
Pascal's principle is defined as follows: a pressure change at any point in a confined incompressible fluid is transmitted throughout the fluid such that the same change occurs everywhere.
Fluid column with gravity
For a fluid column in a uniform gravity (e.g. in a hydraulic press), this principle can be stated mathematically as:
Δp = ρ g Δh,
where Δp is the hydrostatic pressure difference between two elevations within the fluid column, ρ is the fluid density, g is the acceleration due to gravity, and Δh is the difference in elevation between the two points within the fluid column.
The intuitive explanation of this formula is that the change in pressure between two elevations is due to the weight of the fluid between the elevations. Alternatively, the result can be interpreted as a pressure change caused by the change of potential energy per unit volume of the liquid due to the existence of the gravitational field. Note that the variation with height does not depend on any additional pressures. Therefore, Pascal's law can be interpreted as saying that any change in pressure applied at any given point of the fluid is transmitted undiminished throughout the fluid.
The formula is a specific case of Navier–Stokes equations without inertia and viscosity terms.
Applications
If a U-tube is filled with water and pistons are placed at each end, pressure exerted by the left piston will be transmitted throughout the liquid and against the bottom of the right piston (the pistons are simply "plugs" that can slide freely but snugly inside the tube). The pressure that the left piston exerts against the water will be exactly equal to the pressure the water exerts against the right piston. By using p = F/A we get F_left/A_left = F_right/A_right. Suppose the tube on the right side is made 50 times wider, so that the right piston has 50 times the area of the left one. If a 1 N load is placed on the left piston, an additional pressure due to the weight of the load is transmitted throughout the liquid and up against the right piston. This additional pressure on the right piston will cause an upward force which is 50 times bigger than the force on the left piston. The difference between force and pressure is important: the additional pressure is exerted against the entire area of the larger piston. Since there is 50 times the area, 50 times as much force is exerted on the larger piston. Thus, the larger piston will support a 50 N load - fifty times the load on the smaller piston.
Forces can be multiplied using such a device. One newton input produces 50 newtons output. By further increasing the area of the larger piston (or reducing the area of the smaller piston), forces can be multiplied, in principle, by any amount. Pascal's principle underlies the operation of the hydraulic press. The hydraulic press does not violate energy conservation, because a decrease in distance moved compensates for the increase in force. When the small piston is moved downward 100 centimeters, the large piston will be raised only one-fiftieth of this, or 2 centimeters. The input force multiplied by the distance moved by the smaller piston is equal to the output force multiplied by the distance moved by the larger piston; this is one more example of a simple machine operating on the same principle as a mechanical lever.
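The numbers in this example follow directly from equal pressure on the two pistons, p = F/A, together with conservation of the displaced volume; a minimal sketch (units and values taken from the passage above):

```c
#include <stdio.h>

/* Pascal's principle applied to a hydraulic press: the pressure applied
   at the small piston appears undiminished at the large piston.          */
int main(void) {
    const double area_small = 1.0;    /* area of the left piston (arbitrary units) */
    const double area_large = 50.0;   /* right piston has 50 times the area        */
    const double force_in   = 1.0;    /* 1 N load on the small piston              */
    const double stroke_in  = 1.0;    /* small piston moves down 100 cm = 1 m      */

    double pressure   = force_in / area_small;                 /* p = F/A          */
    double force_out  = pressure * area_large;                 /* same p, larger A */
    double stroke_out = stroke_in * area_small / area_large;   /* volume conserved */

    printf("output force        = %.1f N\n", force_out);
    printf("output displacement = %.3f m\n", stroke_out);
    printf("work in  = %.3f J, work out = %.3f J\n",
           force_in * stroke_in, force_out * stroke_out);
    return 0;
}
```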
A typical application of Pascal's principle for gases and liquids is the automobile lift seen in many service stations (the hydraulic jack). Increased air pressure produced by an air compressor is transmitted through the air to the surface of oil in an underground reservoir. The oil, in turn, transmits the pressure to a piston, which lifts the automobile. The relatively low pressure that exerts the lifting force against the piston is about the same as the air pressure in automobile tires. Hydraulics is employed by modern devices ranging from very small to enormous. For example, there are hydraulic pistons in almost all construction machines where heavy loads are involved.
Other applications:
Force amplification in the braking system of most motor vehicles.
Used in artesian wells, water towers, and dams.
Scuba divers must understand this principle. Starting from normal atmospheric pressure, about 100 kilopascals, the pressure increases by about 100 kPa for each 10 m increase in depth.
Usually Pascal's rule is applied to confined space (static flow), but due to the continuous flow process, Pascal's principle can be applied to the lift oil mechanism (which can be represented as a U tube with pistons on either end).
Pascal's barrel
Pascal's barrel is the name of a hydrostatics experiment allegedly performed by Blaise Pascal in 1646. In the experiment, Pascal supposedly inserted a long vertical tube into an (otherwise sealed) barrel filled with water. When water was poured into the vertical tube, the increase in hydrostatic pressure caused the barrel to burst.
The experiment is mentioned nowhere in Pascal's preserved works and it may be apocryphal, attributed to him by 19th-century French authors, among whom the experiment is known as crève-tonneau (approx.: "barrel-buster");
nevertheless the experiment remains associated with Pascal in many elementary physics textbooks.
See also
Pascal's contributions to the physical sciences
References
Canonical transformation | In Hamiltonian mechanics, a canonical transformation is a change of canonical coordinates that preserves the form of Hamilton's equations. This is sometimes known as form invariance. Although Hamilton's equations are preserved, it need not preserve the explicit form of the Hamiltonian itself. Canonical transformations are useful in their own right, and also form the basis for the Hamilton–Jacobi equations (a useful method for calculating conserved quantities) and Liouville's theorem (itself the basis for classical statistical mechanics).
Since Lagrangian mechanics is based on generalized coordinates, transformations of the coordinates q → Q do not affect the form of Lagrange's equations and, hence, do not affect the form of Hamilton's equations if the momentum is simultaneously changed by a Legendre transformation into
P_i = ∂L/∂Q̇_i,
where {(P_i, Q_i)} are the new co-ordinates, grouped in canonical conjugate pairs of momenta P_i and corresponding positions Q_i, for i = 1, 2, …, N, with N being the number of degrees of freedom in both co-ordinate systems.
Therefore, coordinate transformations (also called point transformations) are a type of canonical transformation. However, the class of canonical transformations is much broader, since the old generalized coordinates, momenta and even time may be combined to form the new generalized coordinates and momenta. Canonical transformations that do not include the time explicitly are called restricted canonical transformations (many textbooks consider only this type).
Modern mathematical descriptions of canonical transformations are considered under the broader topic of symplectomorphism which covers the subject with advanced mathematical prerequisites such as cotangent bundles, exterior derivatives and symplectic manifolds.
Notation
Boldface variables such as q represent a list of N generalized coordinates that need not transform like a vector under rotation, and similarly p represents the corresponding generalized momentum, e.g.,
q ≡ (q₁, q₂, …, q_N),   p ≡ (p₁, p₂, …, p_N).
A dot over a variable or list signifies the time derivative, e.g.,
q̇ ≡ dq/dt,
and the equalities are read to be satisfied for all coordinates, for example:
ṗ = −∂H/∂q   means   ṗ_i = −∂H/∂q_i for every i.
The dot product notation between two lists of the same number of coordinates is a shorthand for the sum of the products of corresponding components, e.g.,
p · q ≡ Σ_{k=1}^{N} p_k q_k.
The dot product (also known as an "inner product") maps the two coordinate lists into one variable representing a single numerical value. The coordinates after transformation are similarly labelled with Q for transformed generalized coordinates and P for transformed generalized momentum.
Conditions for restricted canonical transformation
Restricted canonical transformations are coordinate transformations where the transformed coordinates Q and P do not have explicit time dependence, i.e., Q = Q(q, p) and P = P(q, p). The functional form of Hamilton's equations is
q̇ = ∂H/∂p,   ṗ = −∂H/∂q.
In general, a transformation (q, p) → (Q, P) does not preserve the form of Hamilton's equations, but in the absence of time dependence in the transformation, some simplifications are possible. Following the formal definition for a canonical transformation, it can be shown that for this type of transformation, the new Hamiltonian (sometimes called the Kamiltonian) can be expressed as:
K(Q, P, t) = H(q(Q, P), p(Q, P), t) + ∂G/∂t,
where it differs by a partial time derivative of a function G known as a generator, which reduces to being only a function of time for restricted canonical transformations.
In addition to leaving the form of the Hamiltonian unchanged, this also permits the use of the unchanged Hamiltonian in Hamilton's equations of motion, due to the above form, as:
Q̇ = ∂K/∂P = ∂H/∂P,   Ṗ = −∂K/∂Q = −∂H/∂Q.
Although canonical transformations refer to a more general set of transformations of phase space, corresponding with less permissive transformations of the Hamiltonian, restricted transformations provide simpler conditions to obtain results that can be further generalized. All of the following conditions, with the exception of the bilinear invariance condition, can be generalized for canonical transformations, including time dependence.
Indirect conditions
Since restricted transformations have no explicit time dependence (by definition), the time derivative of a new generalized coordinate is
where is the Poisson bracket.
Similarly for the identity for the conjugate momentum, Pm using the form of the "Kamiltonian" it follows that:
Due to the form of the Hamiltonian equations of motion,
if the transformation is canonical, the two derived results must be equal, resulting in the equations:
The analogous argument for the generalized momenta Pm leads to two other sets of equations:
These are the indirect conditions to check whether a given transformation is canonical.
Symplectic condition
Sometimes the Hamiltonian relations are represented as:
η̇ = J ∇_η H,
where
J = [ [0, I_N], [−I_N, 0] ]
is the 2N × 2N matrix built from the N × N identity and zero blocks, and η = (q₁, …, q_N, p₁, …, p_N)ᵀ. Similarly, let ε = (Q₁, …, Q_N, P₁, …, P_N)ᵀ.
From the relation of partial derivatives, converting the relation in terms of partial derivatives with new variables gives where . Similarly for ,
Due to form of the Hamiltonian equations for ,
where ∇_ε K = ∇_ε H can be used due to the form of the Kamiltonian. Equating the two equations gives the symplectic condition as:
M J Mᵀ = J,
where M ≡ ∂ε/∂η is the Jacobian matrix of the transformation.
The left hand side of the above is called the Poisson matrix of ε, given by M J Mᵀ. Similarly, a Lagrange matrix of η can be constructed as Mᵀ J M. It can be shown that the symplectic condition is also equivalent to Mᵀ J M = J by using the property J⁻¹ = −J. The set of all matrices M which satisfy symplectic conditions form a symplectic group. The symplectic conditions are equivalent with indirect conditions as they both lead to the equation ε̇ = J ∇_ε H, which is used in both of the derivations.
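For a single degree of freedom the condition M J Mᵀ = J can be checked directly. The sketch below (illustrative only; the helper function is my own) does so for the coordinate-momentum swap Q = p, P = −q, the transformation produced by the simple type 1 generating function discussed later.

```c
#include <stdio.h>
#include <math.h>

/* Check the symplectic condition M J M^T = J for one degree of freedom,
   using the coordinate-momentum swap Q = p, P = -q.                      */
static void matmul2(const double a[2][2], const double b[2][2], double out[2][2]) {
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            out[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j];
}

int main(void) {
    /* Rows of M are d(Q,P)/d(q,p) for the map (q,p) -> (Q,P) = (p,-q). */
    const double M[2][2]  = {{ 0.0, 1.0}, {-1.0, 0.0}};
    const double Mt[2][2] = {{ 0.0,-1.0}, { 1.0, 0.0}};   /* transpose of M    */
    const double J[2][2]  = {{ 0.0, 1.0}, {-1.0, 0.0}};   /* symplectic matrix */

    double MJ[2][2], MJMt[2][2];
    matmul2(M, J, MJ);
    matmul2(MJ, Mt, MJMt);

    double deviation = 0.0;
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            deviation += fabs(MJMt[i][j] - J[i][j]);

    printf("sum |M J M^T - J| = %g  (zero means the map is canonical)\n", deviation);
    return 0;
}
```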
Invariance of Poisson Bracket
The Poisson bracket which is defined as:can be represented in matrix form as:Hence using partial derivative relations and symplectic condition gives:
The symplectic condition can also be recovered by taking and which shows that . Thus these conditions are equivalent to symplectic conditions. Furthermore, it can be seen that , which is also the result of explicitly calculating the matrix element by expanding it.
Invariance of Lagrange Bracket
The Lagrange bracket which is defined as:
can be represented in matrix form as:
Using similar derivation, gives:
The symplectic condition can also be recovered by taking and which shows that . Thus these conditions are equivalent to symplectic conditions. Furthermore, it can be seen that , which is also the result of explicitly calculating the matrix element by expanding it.
Bilinear invariance conditions
These set of conditions only apply to restricted canonical transformations or canonical transformations that are independent of time variable.
Consider arbitrary variations of two kinds, in a single pair of generalized coordinate and the corresponding momentum:
The area of the infinitesimal parallelogram is given by:
It follows from the symplectic condition that the infinitesimal area is conserved under canonical transformation:
Note that the new coordinates need not be completely oriented in one coordinate momentum plane.
Hence, the condition is more generally stated as an invariance of the form under canonical transformation, expanded as:If the above is obeyed for any arbitrary variations, it would be only possible if the indirect conditions are met.
The form of the equation, is also known as a symplectic product of the vectors and and the bilinear invariance condition can be stated as a local conservation of the symplectic product.
Liouville's theorem
The indirect conditions allow us to prove Liouville's theorem, which states that the volume in phase space is conserved under canonical transformations, i.e.,
By calculus, the latter integral must equal the former times the determinant of Jacobian Where
Exploiting the "division" property of Jacobians yields
Eliminating the repeated variables gives
Application of the indirect conditions above yields |∂(Q, P)/∂(q, p)| = 1, which proves the theorem.
Generating function approach
To guarantee a valid transformation between and , we may resort to a direct generating function approach. Both sets of variables must obey Hamilton's principle. That is the action integral over the Lagrangians and , obtained from the respective Hamiltonian via an "inverse" Legendre transformation, must be stationary in both cases (so that one can use the Euler–Lagrange equations to arrive at Hamiltonian equations of motion of the designated form; as it is shown for example here):
One way for both variational integral equalities to be satisfied is to have
Lagrangians are not unique: one can always multiply by a constant λ and add a total time derivative dG/dt and yield the same equations of motion (as discussed on Wikibooks). In general, the scaling factor λ is set equal to one; canonical transformations for which λ ≠ 1 are called extended canonical transformations. The total time derivative dG/dt is kept, otherwise the problem would be rendered trivial and there would be not much freedom for the new canonical variables to differ from the old ones.
Here is a generating function of one old canonical coordinate ( or ), one new canonical coordinate ( or ) and (possibly) the time . Thus, there are four basic types of generating functions (although mixtures of these four types can exist), depending on the choice of variables. As will be shown below, the generating function will define a transformation from old to new canonical coordinates, and any such transformation is guaranteed to be canonical.
The various generating functions and its properties tabulated below is discussed in detail:
Type 1 generating function
The type 1 generating function depends only on the old and new generalized coordinates
To derive the implicit transformation, we expand the defining equation above
Since the new and old coordinates are each independent, the following equations must hold
These equations define the transformation as follows: The first set of equations
define relations between the new generalized coordinates and the old canonical coordinates . Ideally, one can invert these relations to obtain formulae for each as a function of the old canonical coordinates. Substitution of these formulae for the coordinates into the second set of equations
yields analogous formulae for the new generalized momenta in terms of the old canonical coordinates . We then invert both sets of formulae to obtain the old canonical coordinates as functions of the new canonical coordinates . Substitution of the inverted formulae into the final equation
yields a formula for as a function of the new canonical coordinates .
In practice, this procedure is easier than it sounds, because the generating function is usually simple. For example, let
G₁ ≡ q · Q.
This results in swapping the generalized coordinates for the momenta and vice versa:
p = ∂G₁/∂q = Q   and   P = −∂G₁/∂Q = −q,
i.e., Q = p and P = −q. This example illustrates how independent the coordinates and momenta are in the Hamiltonian formulation; they are equivalent variables.
Type 2 generating function
The type 2 generating function depends only on the old generalized coordinates and the new generalized momenta
where the terms represent a Legendre transformation to change the right-hand side of the equation below. To derive the implicit transformation, we expand the defining equation above
Since the old coordinates and new momenta are each independent, the following equations must hold
These equations define the transformation as follows: The first set of equations
define relations between the new generalized momenta and the old canonical coordinates . Ideally, one can invert these relations to obtain formulae for each as a function of the old canonical coordinates. Substitution of these formulae for the coordinates into the second set of equations
yields analogous formulae for the new generalized coordinates in terms of the old canonical coordinates . We then invert both sets of formulae to obtain the old canonical coordinates as functions of the new canonical coordinates . Substitution of the inverted formulae into the final equation
yields a formula for as a function of the new canonical coordinates .
In practice, this procedure is easier than it sounds, because the generating function is usually simple. For example, let
G₂ ≡ g(q; t) · P,
where g is a set of N functions. This results in a point transformation of the generalized coordinates
Q = g(q; t).
Type 3 generating function
The type 3 generating function depends only on the old generalized momenta and the new generalized coordinates
where the terms represent a Legendre transformation to change the left-hand side of the equation below. To derive the implicit transformation, we expand the defining equation above
Since the new and old coordinates are each independent, the following equations must hold
These equations define the transformation as follows: The first set of equations
define relations between the new generalized coordinates and the old canonical coordinates . Ideally, one can invert these relations to obtain formulae for each as a function of the old canonical coordinates. Substitution of these formulae for the coordinates into the second set of equations
yields analogous formulae for the new generalized momenta in terms of the old canonical coordinates . We then invert both sets of formulae to obtain the old canonical coordinates as functions of the new canonical coordinates . Substitution of the inverted formulae into the final equation yields a formula for as a function of the new canonical coordinates .
In practice, this procedure is easier than it sounds, because the generating function is usually simple.
Type 4 generating function
The type 4 generating function depends only on the old and new generalized momenta
where the terms represent a Legendre transformation to change both sides of the equation below. To derive the implicit transformation, we expand the defining equation above
Since the new and old coordinates are each independent, the following equations must hold
These equations define the transformation as follows: The first set of equations
define relations between the new generalized momenta and the old canonical coordinates . Ideally, one can invert these relations to obtain formulae for each as a function of the old canonical coordinates. Substitution of these formulae for the coordinates into the second set of equations
yields analogous formulae for the new generalized coordinates in terms of the old canonical coordinates . We then invert both sets of formulae to obtain the old canonical coordinates as functions of the new canonical coordinates . Substitution of the inverted formulae into the final equation
yields a formula for as a function of the new canonical coordinates .
Restrictions on generating functions
For example, using generating function of second kind: and , the first set of equations consisting of variables , and has to be inverted to get . This process is possible when the matrix defined by is non-singular.
Hence, restrictions are placed on generating functions to have the matrices: , , and , being non-singular.
Limitations of generating functions
Since is non-singular, it implies that is also non-singular. Since the matrix is inverse of , the transformations of type 2 generating functions always have a non-singular matrix. Similarly, it can be stated that type 1 and type 4 generating functions always have a non-singular matrix whereas type 2 and type 3 generating functions always have a non-singular matrix. Hence, the canonical transformations resulting from these generating functions are not completely general.
In other words, since and are each independent functions, it follows that to have generating function of the form and or and , the corresponding Jacobian matrices and are restricted to be non singular, ensuring that the generating function is a function of independent variables. However, as a feature of canonical transformations, it is always possible to choose such independent functions from sets or , to form a generating function representation of canonical transformations, including the time variable. Hence, it can be proved that every finite canonical transformation can be given as a closed but implicit form that is a variant of the given four simple forms.
Canonical transformation conditions
Canonical transformation relations
From: , calculate :
Since the left hand side is which is independent of dynamics of the particles, equating coefficients of and to zero, canonical transformation rules are obtained. This step is equivalent to equating the left hand side as .
Similarly:
Similarly the canonical transformation rules are obtained by equating the left hand side as .
The above two relations can be combined in matrix form as: (which will also retain same form for extended canonical transformation) where the result , has been used. The canonical transformation relations are hence said to be equivalent to in this context.
The canonical transformation relations can now be restated to include time dependence: Since and , if and do not explicitly depend on time, can be taken. The analysis of restricted canonical transformations is hence consistent with this generalization.
Symplectic Condition
Applying transformation of co-ordinates formula for , in Hamiltonian's equations gives:
Similarly for :or:Where the last terms of each equation cancel due to condition from canonical transformations. Hence leaving the symplectic relation: which is also equivalent with the condition . It follows from the above two equations that the symplectic condition implies the equation , from which the indirect conditions can be recovered. Thus, symplectic conditions and indirect conditions can be said to be equivalent in the context of using generating functions.
Invariance of Poisson and Lagrange Bracket
Since and where the symplectic condition is used in the last equalities. Using , the equalities and are obtained which imply the invariance of Poisson and Lagrange brackets.
Extended Canonical Transformation
Canonical transformation relations
By solving for:with various forms of generating function, the relation between K and H goes as instead, which also applies for case.
All results presented below can also be obtained by replacing , and from known solutions, since it retains the form of Hamilton's equations. The extended canonical transformations are hence said to be the result of a canonical transformation and a trivial canonical transformation which has (for the given example, which satisfies the condition).
Using same steps previously used in previous generalization, with in the general case, and retaining the equation , extended canonical transformation partial differential relations are obtained as:
Symplectic condition
Following the same steps to derive the symplectic conditions, as: and
where using instead gives:The second part of each equation cancel. Hence the condition for extended canonical transformation instead becomes: .
Poisson and Lagrange Brackets
The Poisson brackets are changed as follows:whereas, the Lagrange brackets are changed as:
Hence, the Poisson bracket scales by the inverse of whereas the Lagrange bracket scales by a factor of .
Infinitesimal canonical transformation
Consider the canonical transformation that depends on a continuous parameter , as follows:
For infinitesimal values of , the corresponding transformations are called as infinitesimal canonical transformations which are also known as differential canonical transformations.
Consider the following generating function:
Since for , has the resulting canonical transformation, and , this type of generating function can be used for infinitesimal canonical transformation by restricting to an infinitesimal value. From the conditions of generators of second type:Since , changing the variables of the function to and neglecting terms of higher order of , gives:Infinitesimal canonical transformations can also be derived using the matrix form of the symplectic condition.
Active canonical transformations
In the passive view of transformations, the coordinate system is changed without the physical system changing, whereas in the active view of transformation, the coordinate system is retained and the physical system is said to undergo transformations. Thus, using the relations from infinitesimal canonical transformations, the change in the system states under active view of the canonical transformation is said to be:
or as in matrix form.
For any function , it changes under active view of the transformation according to:
Considering the change of Hamiltonians in the active view, i.e., for a fixed point,where are mapped to the point, by the infinitesimal canonical transformation, and similar change of variables for to is considered up-to first order of . Hence, if the Hamiltonian is invariant for infinitesimal canonical transformations, its generator is a constant of motion.
Examples of ICT
Time evolution
Taking the generator to be the Hamiltonian, G = H, and the parameter to be an infinitesimal time interval dt, the resulting changes are δq = (∂H/∂p) dt and δp = −(∂H/∂q) dt. Thus the continuous application of such a transformation maps the coordinates q(t), p(t) to q(t + dt), p(t + dt). Hence if the Hamiltonian is time translation invariant, i.e. does not have explicit time dependence, its value is conserved for the motion.
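Because each time step of the motion is itself a canonical transformation, integrators built from exact canonical maps preserve the phase-space structure. The sketch below applies the symplectic Euler step (a standard method, not taken from the text) to a harmonic oscillator with unit mass and frequency and arbitrary initial conditions; the energy stays bounded near its initial value over many periods instead of drifting.

```c
#include <stdio.h>
#include <math.h>

/* Treat time evolution as a sequence of canonical transformations:
   advance a harmonic oscillator (H = p^2/2 + q^2/2) with the
   symplectic Euler map, which is exactly canonical for each step.   */
int main(void) {
    const double dt = 0.05;
    double q = 1.0, p = 0.0;                 /* arbitrary initial state */
    double e0 = 0.5 * (p * p + q * q);

    for (int n = 1; n <= 20000; ++n) {       /* roughly 160 oscillation periods */
        p -= q * dt;                         /* kick:  p -> p - (dH/dq) dt      */
        q += p * dt;                         /* drift: q -> q + (dH/dp) dt      */
        if (n % 5000 == 0) {
            double e = 0.5 * (p * p + q * q);
            printf("t = %7.1f   H = %.6f   (initial H = %.6f)\n",
                   n * dt, e, e0);
        }
    }
    return 0;
}
```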
Translation
Taking , and . Hence, the canonical momentum generates a shift in the corresponding generalized coordinate and if the Hamiltonian is invariant of translation, the momentum is a constant of motion.
Rotation
Consider an orthogonal system for an N-particle system:
Choosing the generator to be: and the infinitesimal value of , then the change in the coordinates is given for x by:
and similarly for y:
whereas the z component of all particles is unchanged: .
These transformations correspond to rotation about the z axis by angle in its first order approximation. Hence, repeated application of the infinitesimal canonical transformation generates a rotation of system of particles about the z axis. If the Hamiltonian is invariant under rotation about the z axis, the generator, the component of angular momentum along the axis of rotation, is an invariant of motion.
Motion as canonical transformation
Motion itself (or, equivalently, a shift in the time origin) is a canonical transformation. If Q(t) ≡ q(t + τ) and P(t) ≡ p(t + τ), then Hamilton's principle is automatically satisfied, since a valid trajectory should always satisfy Hamilton's principle, regardless of the endpoints.
Examples
The translation where are two constant vectors is a canonical transformation. Indeed, the Jacobian matrix is the identity, which is symplectic: .
Setting and , the transformation where is a rotation matrix of order 2 is canonical. Keeping in mind that special orthogonal matrices obey R^T R = I, it is easy to see that the Jacobian is symplectic. However, this example only works in dimension 2: SO(2) is the only special orthogonal group in which every matrix is symplectic. Note that the rotation here acts on and not on and independently, so these are not the same as a physical rotation of an orthogonal spatial coordinate system.
The transformation , where is an arbitrary function of , is canonical. The Jacobian matrix is indeed given by , which is symplectic.
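To make the symplectic test concrete, the short Python sketch below (an added illustration using NumPy, not part of the original text) checks the condition M J Mᵀ = J for the two-dimensional rotation example and for a transformation that is not canonical:

import numpy as np

# Symplectic structure matrix J for one degree of freedom (q, p)
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def is_symplectic(M, tol=1e-12):
    # A transformation is canonical iff its Jacobian M satisfies M J M^T = J
    return np.allclose(M @ J @ M.T, J, atol=tol)

theta = 0.7  # arbitrary rotation angle in the (q, p) plane
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(is_symplectic(R))                    # True: rotations of 2D phase space are canonical
print(is_symplectic(np.diag([2.0, 1.0])))  # False: rescaling q alone is not canonical

Any 2x2 matrix with unit determinant passes this test, which is why every rotation of the (q, p) plane is canonical in one degree of freedom.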
Modern mathematical description
In mathematical terms, canonical coordinates are any coordinates on the phase space (cotangent bundle) of the system that allow the canonical one-form to be written as
up to a total differential (exact form). The change of variable between one set of canonical coordinates and another is a canonical transformation. The index of the generalized coordinates is written here as a superscript, not as a subscript as done above. The superscript conveys the contravariant transformation properties of the generalized coordinates, and does not mean that the coordinate is being raised to a power. Further details may be found at the symplectomorphism article.
History
The first major application of the canonical transformation was in 1846, by Charles Delaunay, in the study of the Earth-Moon-Sun system. This work resulted in the publication of a pair of large volumes as Mémoires by the French Academy of Sciences, in 1860 and 1867.
See also
Symplectomorphism
Hamilton–Jacobi equation
Liouville's theorem (Hamiltonian)
Mathieu transformation
Linear canonical transformation
Notes
References
Hamiltonian mechanics
Transforms
Wind-turbine aerodynamics
The primary application of wind turbines is to generate energy using the wind. Hence, the aerodynamics is a very important aspect of wind turbines. Like most machines, wind turbines come in many different types, all of them based on different energy extraction concepts.
Though the details of the aerodynamics depend very much on the topology, some fundamental concepts apply to all turbines. Every topology has a maximum power for a given flow, and some topologies are better than others. The method used to extract power has a strong influence on this. In general, all turbines may be classified as either lift-based or drag-based, the former being more efficient. The difference between these groups is the aerodynamic force that is used to extract the energy.
The most common topology is the horizontal-axis wind turbine. It is a lift-based wind turbine with very good performance. Accordingly, it is a popular choice for commercial applications and much research has been applied to this turbine. Despite being a popular lift-based alternative in the latter part of the 20th century, the Darrieus wind turbine is rarely used today. The Savonius wind turbine is the most common drag type turbine. Despite its low efficiency, it remains in use because of its robustness and simplicity to build and maintain.
General aerodynamic considerations
The governing equation for power extraction is:
P = F · v
where P is the power, F is the force vector, and v is the velocity of the moving wind turbine part.
The force F is generated by the wind's interaction with the blade. The magnitude and distribution of this force is the primary focus of wind-turbine aerodynamics. The most familiar type of aerodynamic force is drag. The direction of the drag force is parallel to the relative wind. Typically, the wind turbine parts are moving, altering the flow around the part. An example of relative wind is the wind one would feel cycling on a calm day.
To extract power, the turbine part must move in the direction of the net force. In the drag force case, the relative wind speed decreases subsequently, and so does the drag force. The relative wind aspect dramatically limits the maximum power that can be extracted by a drag-based wind turbine. Lift-based wind turbines typically have lifting surfaces moving perpendicular to the flow. Here, the relative wind does not decrease; rather, it increases with rotor speed. Thus, the maximum power limits of these machines are much higher than those of drag-based machines.
Characteristic parameters
Wind turbines come in a variety of sizes. Once in operation, a wind turbine experiences a wide range of conditions. This variability complicates the comparison of different types of turbines. To deal with this, nondimensionalization is applied to various qualities. Nondimensionalization allows one to make comparisons between different turbines, without having to consider the effect of things like size and wind conditions from the comparison. One of the qualities of nondimensionalization is that though geometrically similar turbines will produce the same non-dimensional results, other factors (difference in scale, wind properties) cause them to produce very different dimensional properties.
Power Coefficient
The coefficient of power is the most important variable in wind-turbine aerodynamics. The Buckingham π theorem can be applied to show that the non-dimensional variable for power is given by the equation below. This equation is similar to efficiency, so values between 0 and 1 are typical. However, this is not exactly the same as efficiency, and thus in practice some turbines can exhibit greater than unity power coefficients. In these circumstances, one cannot conclude that the first law of thermodynamics is violated, because this is not an efficiency term by the strict definition of efficiency.
C_P = P / (½ ρ A V³)
where C_P is the coefficient of power, ρ is the air density, A is the area of the wind turbine, and V is the wind speed.
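As a brief worked example (an added sketch; the rotor size, wind speed, power output and air density are assumed values, and the function simply evaluates the definition above):

import math

def power_coefficient(power_w, air_density, rotor_area, wind_speed):
    # C_P = P / (0.5 * rho * A * V^3), the fraction of the wind's kinetic power captured
    wind_power = 0.5 * air_density * rotor_area * wind_speed ** 3
    return power_w / wind_power

rho = 1.225              # kg/m^3, air at sea level
radius = 40.0            # m, assumed rotor radius
area = math.pi * radius ** 2
V = 10.0                 # m/s, assumed wind speed
P = 1.5e6                # W, assumed power output

cp = power_coefficient(P, rho, area, V)
print(f"C_P = {cp:.3f} (Betz limit is 16/27 = {16/27:.3f})")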
Thrust coefficient
The thrust coefficient is another important dimensionless number in wind turbine aerodynamics.
Speed ratio
The power equation shows two important dependencies. The first is the speed (U) of the machine. The speed at the tip of the blade is usually used for this purpose, and is written as the product of the blade radius r and the rotational speed of the rotor: U = ωr, where ω is the rotational velocity in radians/second. This variable is nondimensionalized by the wind speed, to obtain the speed ratio:
λ = ωr / V
Lift and drag
The force vector is not straightforward; as stated earlier, there are two types of aerodynamic forces, lift and drag. Accordingly, there are two non-dimensional parameters. However, both variables are non-dimensionalized in a similar way. The formula for lift is given below, and the formula for drag is given after it:
L = ½ ρ W² A C_L
D = ½ ρ W² A C_D
where C_L is the lift coefficient, C_D is the drag coefficient, W is the relative wind as experienced by the wind turbine blade, and A is the area. Note that A may not be the same area used in the non-dimensionalization of power.
Relative speed
The aerodynamic forces depend on W; this is the relative speed, obtained by subtracting the velocity of the moving turbine part from the wind velocity. Note that this is vector subtraction.
Drag- versus lift-based machines
All wind turbines extract energy from the wind through aerodynamic forces. There are two important aerodynamic forces: drag and lift. Drag applies a force on the body in the direction of the relative flow, while lift applies a force perpendicular to the relative flow. Many machine topologies can be classified by the primary force used to extract the energy. For example, a Savonius wind turbine is a drag-based machine, while a Darrieus wind turbine and conventional horizontal-axis wind turbines are lift-based machines. Drag-based machines are conceptually simple, yet suffer from poor efficiency. Efficiency in this analysis is based on the power extracted vs. the plan-form area. Considering that the wind is free, but the blade materials are not, a plan-form-based definition of efficiency is more appropriate.
The analysis is focused on comparing the maximum power extraction modes and nothing else. Accordingly, several idealizations are made to simplify the analysis, further considerations are required to apply this analysis to real turbines. For example, in this comparison the effects of axial momentum theory are ignored. Axial momentum theory demonstrates how the wind turbine imparts an influence on the wind which in-turn decelerates the flow and limits the maximum power. For more details see Betz's law. Since this effect is the same for both lift and drag-based machines it can be ignored for comparison purposes. The topology of the machine can introduce additional losses, for example trailing vorticity in horizontal axis machines degrades the performance at the tip. Typically these losses are minor and can be ignored in this analysis (for example tip loss effects can be reduced by using high aspect-ratio blades).
Maximum power of a drag-based wind turbine
The power equation is the starting point in this derivation: the drag equation defines the force, and the relative-speed relation gives the relative wind. These substitutions give the following formula for power:
P = ½ ρ C_D A (V − U)² U
The drag and power coefficients are then applied to express this in non-dimensional form:
C_P = C_D (1 − U/V)² (U/V)
It can be shown through calculus that this expression achieves a maximum at U/V = 1/3. By inspection one can see that it takes larger values for U/V > 1; in these circumstances, however, the scalar product in the power equation makes the result negative (the relative wind reverses direction), so no power is actually extracted. Thus, one can conclude that the maximum power is given by:
C_P,max = (4/27) C_D
Experimentally it has been determined that a large C_D is 1.2, thus the maximum C_P is approximately 0.1778.
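This optimum can be checked numerically. The sketch below is an added illustration that scans the non-dimensional form C_P = C_D (1 − U/V)² (U/V) used above and recovers the maximum of 4 C_D / 27 at U/V = 1/3:

import numpy as np

def cp_drag(speed_ratio, cd=1.2):
    # Non-dimensional power of an ideal drag device: C_P = C_D * (1 - U/V)^2 * (U/V)
    return cd * (1.0 - speed_ratio) ** 2 * speed_ratio

ratios = np.linspace(0.0, 1.0, 100001)
cp = cp_drag(ratios)
i = np.argmax(cp)
print(f"optimum U/V ~ {ratios[i]:.4f}, C_P,max ~ {cp[i]:.4f}")  # ~0.3333 and ~0.1778
print(f"analytic 4*Cd/27 = {4 * 1.2 / 27:.4f}")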
Maximum power of a lift-based wind turbine
The derivation for the maximum power of a lift-based machine is similar, with some modifications. First we must recognize that drag is always present, and thus cannot be ignored. It will be shown that neglecting drag leads to a final solution of infinite power. This result is clearly invalid, hence we will proceed with drag. As before, the lift, drag and relative-speed equations will be used, along with the power equation, to define the expression for power below.
Similarly, this is non-dimensionalized with the coefficient definitions above. However, in this derivation the ratio of drag to lift is also used:
Solving for the optimal speed ratio is complicated by the dependency on this ratio and by the fact that the optimal speed ratio is a solution to a cubic polynomial. Numerical methods can then be applied to determine this solution, and the corresponding C_P, for a range of ratios. Some sample solutions are given in the table below.
Experiments have shown that it is not unreasonable to achieve a drag ratio of about 0.01 at a lift coefficient of 0.6. This would give a C_P of about 889. This is substantially better than the best drag-based machine, and explains why lift-based machines are superior.
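As a numerical cross-check (an added sketch; the non-dimensional form C_P = C_L λ √(1 + λ²) (1 − (C_D/C_L) λ), with λ the speed ratio, is assumed here as the planform-area-based expression discussed in this section), maximizing over λ for a drag ratio of 0.01 and a lift coefficient of 0.6 gives a C_P of roughly 889:

import numpy as np

def cp_lift(tsr, cl=0.6, drag_ratio=0.01):
    # Assumed planform-area-based power coefficient of an ideal translating lift device
    return cl * tsr * np.sqrt(1.0 + tsr ** 2) * (1.0 - drag_ratio * tsr)

tsr = np.linspace(0.0, 100.0, 1_000_001)
cp = cp_lift(tsr)
i = np.argmax(cp)
print(f"optimum speed ratio ~ {tsr[i]:.1f}, C_P ~ {cp[i]:.0f}")  # roughly 66.7 and 889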
In the analysis given here, there is an inconsistency compared to typical wind turbine non-dimensionalization. As stated in the preceding section, the A (area) in the non-dimensionalization is not always the same as the A in the force equations and. Typically for the A is the area swept by the rotor blade in its motion. For and A is the area of the turbine wing section. For drag based machines, these two areas are almost identical so there is little difference. To make the lift based results comparable to the drag results, the area of the wing section was used to non-dimensionalize power. The results here could be interpreted as power per unit of material. Given that the material represents the cost (wind is free), this is a better variable for comparison.
If one were to apply conventional non-dimensionalization, more information on the motion of the blade would be required. However the discussion on horizontal-axis wind turbines will show that the maximum there is 16/27. Thus, even by conventional non-dimensional analysis lift based machines are superior to drag based machines.
There are several idealizations to the analysis. In any lift-based machine (aircraft included) with finite wings, there is a wake that affects the incoming flow and creates induced drag. This phenomenon exists in wind turbines and was neglected in this analysis. Including induced drag requires information specific to the topology. In these cases it is expected that both the optimal speed-ratio and the optimal would be less. The analysis focused on the aerodynamic potential but neglected structural aspects. In reality most optimal wind-turbine design becomes a compromise between optimal aerodynamic design and optimal structural design.
Horizontal-axis wind turbine
The aerodynamics of a horizontal-axis wind turbine are not straightforward. The air flow at the blades is not the same as the airflow further away from the turbine. The very nature of the way in which energy is extracted from the air also causes air to be deflected by the turbine. In addition, the aerodynamics of a wind turbine at the rotor surface exhibit phenomena rarely seen in other aerodynamic fields.
Axial momentum and the Lanchester–Betz–Joukowsky limit
Energy in fluid is contained in four different forms: gravitational potential energy, thermodynamic pressure, kinetic energy from the velocity and finally thermal energy. Gravitational and thermal energy have a negligible effect on the energy extraction process. From a macroscopic point of view, the air flow around the wind turbine is at atmospheric pressure. If pressure is constant then only kinetic energy is extracted. However up close near the rotor itself the air velocity is constant as it passes through the rotor plane. This is because of conservation of mass: the air that passes through the rotor cannot slow down because it needs to stay out of the way of the air behind it. So at the rotor the energy is extracted by a pressure drop. The air directly behind the wind turbine is at sub-atmospheric pressure; the air in front is at greater than atmospheric pressure. It is this high pressure in front of the wind turbine that deflects some of the upstream air around the turbine.
Frederick W. Lanchester was the first to study this phenomenon in application to ship propellers; five years later Nikolai Yegorovich Zhukovsky and Albert Betz independently arrived at the same results. It is believed that each researcher was not aware of the others' work because of World War I and the Bolshevik Revolution. Formally, this limit should thus be referred to as the Lanchester–Betz–Joukowsky limit. In general Albert Betz is credited with this accomplishment because he published his work in a journal that had wide circulation, while the other two published it in the publication associated with their respective institutions. Thus it is widely known as simply the Betz Limit.
This limit is derived by looking at the axial momentum of the air passing through the wind turbine. As stated above, some of the air is deflected away from the turbine. This causes the air passing through the rotor plane to have a smaller velocity than the free stream velocity. The ratio of this reduction to that of the air velocity far away from the wind turbine is called the axial induction factor. It is defined as
where a is the axial induction factor, U1 is the wind speed far away upstream from the rotor, and U2 is the wind speed at the rotor.
The first step in deriving the Betz limit is to apply the principle of conservation of linear (axial) momentum. As stated above, the effect of the wind turbine is to attenuate the flow. A location downstream of the turbine sees a lower wind speed than a location upstream of the turbine. This would violate the conservation of momentum if the wind turbine were not applying a thrust force on the flow. This thrust force manifests itself through the pressure drop across the rotor. The front operates at high pressure while the back operates at low pressure. The pressure difference from the front to back causes the thrust force. The momentum lost in the turbine is balanced by the thrust force.
Another equation is needed to relate the pressure difference to the velocity of the flow near the turbine. Here, the Bernoulli equation is used between the field flow and the flow near the wind turbine. There is one limitation to the Bernoulli equation: the equation cannot be applied to fluid passing through the wind turbine. Instead, conservation of mass is used to relate the incoming air to the outlet air. Betz used these equations and solved for the velocities of the flow in the far wake and near the wind turbine in terms of the far-field flow and the axial induction factor. The velocities are given below as:
U2 = U1 (1 − a)
U4 = U1 (1 − 2a)
U4 is introduced here as the wind velocity in the far wake. This is important because the power extracted from the turbine is defined by the following equation. However, the Betz limit is given in terms of the coefficient of power C_P. The coefficient of power is similar to efficiency but not the same. The formula for the coefficient of power is given beneath the formula for power:
P = ½ ρ A U2 (U1² − U4²)
C_P = P / (½ ρ A U1³)
Betz was able to develop an expression for C_P in terms of the induction factor. This is done by substituting the velocity relations into the power expression, and substituting the power into the coefficient-of-power definition. The relationship Betz developed is given below:
C_P = 4a (1 − a)²
The Betz limit is defined by the maximum value that can be given by the above formula. This is found by taking the derivative with respect to the axial induction factor, setting it to zero and solving for the axial induction factor. Betz was able to show that the optimum axial induction factor is one third. The optimum axial induction factor was then used to find the maximum coefficient of power. This maximum coefficient is the Betz limit. Betz was able to show that the maximum coefficient of power of a wind turbine is 16/27. Airflow operating at higher thrust will cause the axial induction factor to rise above the optimum value. Higher thrust causes more air to be deflected away from the turbine. When the axial induction factor falls below the optimum value, the wind turbine is not extracting all the energy it can. This reduces pressure around the turbine and allows more air to pass through it, but not enough to account for the lack of energy being extracted.
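As a numerical cross-check of C_P = 4a(1 − a)² (the code itself is an added sketch, not part of the source), maximizing over the axial induction factor recovers a = 1/3 and the Betz limit of 16/27 ≈ 0.593:

import numpy as np

def cp_betz(a):
    # Power coefficient of an ideal actuator disk as a function of axial induction factor a
    return 4.0 * a * (1.0 - a) ** 2

a = np.linspace(0.0, 0.5, 500001)
cp = cp_betz(a)
i = np.argmax(cp)
print(f"optimum a ~ {a[i]:.4f}, C_P,max ~ {cp[i]:.4f}")  # ~0.3333 and ~0.5926 = 16/27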
The derivation of the Betz limit shows a simple analysis of wind turbine aerodynamics. In reality there is a lot more. A more rigorous analysis would include wake rotation, the effect of variable geometry, the important effect of airfoils on the flow, etc. Within airfoils alone, the wind turbine aerodynamicist has to consider the effects of surface roughness, dynamic stall tip losses, and solidity, among other problems.
Angular momentum and wake rotation
The wind turbine described by Betz does not actually exist. It is merely an idealized wind turbine described as an actuator disk. It's a disk in space where fluid energy is simply extracted from the air. In the Betz turbine the energy extraction manifests itself through thrust. The equivalent turbine described by Betz would be a horizontal propeller type operating at infinite tip speed ratios and no losses. The tip speed ratio is the ratio of the speed of the tip relative to the free stream flow. Actual turbines try to run very high L/D airfoils at high tip speed ratios to attempt to approximate this, but there are still additional losses in the wake because of these limitations.
One key difference between actual turbines and the actuator disk, is that energy is extracted through torque. The wind imparts a torque on the wind turbine, thrust is a necessary by-product of torque. Newtonian physics dictates that for every action there is an equal and opposite reaction. If the wind imparts torque on the blades, then the blades must be imparting torque on the wind. This torque would then cause the flow to rotate. Thus the flow in the wake has two components: axial and tangential. This tangential flow is referred to as a wake rotation.
Torque is necessary for energy extraction. However wake rotation is considered a loss. Accelerating the flow in the tangential direction increases the absolute velocity. This in turn increases the amount of kinetic energy in the near wake. This rotational energy is not dissipated in any form that would allow for a greater pressure drop (Energy extraction). Thus any rotational energy in the wake is energy that is lost and unavailable.
This loss is minimized by allowing the rotor to rotate very quickly. To the observer it may seem like the rotor is not moving fast; however, it is common for the tips to be moving through the air at 8-10 times the speed of the free stream. Newtonian mechanics defines power as torque multiplied by the rotational speed. The same amount of power can be extracted by allowing the rotor to rotate faster and produce less torque. Less torque means that there is less wake rotation. Less wake rotation means there is more energy available to extract. However, very high tip speeds also increase the drag on the blades, decreasing power production. Balancing these factors is what leads to most modern horizontal-axis wind turbines running at a tip speed ratio around 9. In addition, wind turbines usually limit the tip speed to around 80-90 m/s due to leading edge erosion and high noise levels. At wind speeds above about 10 m/s (where a turbine running a tip speed ratio of 9 would reach 90 m/s tip speed), turbines usually do not continue to increase rotational speed for this reason, which slightly reduces efficiency.
Blade element and momentum theory
The simplest model for horizontal-axis wind turbine aerodynamics is blade element momentum theory. The theory is based on the assumption that the flow at a given annulus does not affect the flow at adjacent annuli. This allows the rotor blade to be analyzed in sections, where the resulting forces are summed over all sections to get the overall forces of the rotor. The theory uses both axial and angular momentum balances to determine the flow and the resulting forces at the blade.
The momentum equations for the far field flow dictate that the thrust and torque will induce a secondary flow in the approaching wind. This in turn affects the flow geometry at the blade. The blade itself is the source of these thrust and torque forces. The force response of the blades is governed by the geometry of the flow, or better known as the angle of attack. Refer to the Airfoil article for more information on how airfoils create lift and drag forces at various angles of attack. This interplay between the far field momentum balances and the local blade forces requires one to solve the momentum equations and the airfoil equations simultaneously. Typically computers and numerical methods are employed to solve these models.
There is a lot of variation between different versions of blade element momentum theory. First, one can consider the effect of wake rotation or not. Second, one can go further and consider the pressure drop induced in wake rotation. Third, the tangential induction factors can be solved with a momentum equation, an energy balance or an orthogonal geometric constraint; the latter is a result of the Biot–Savart law in vortex methods. These all lead to different sets of equations that need to be solved. The simplest and most widely used equations are those that consider wake rotation with the momentum equation but ignore the pressure drop from wake rotation. Those equations are given below, where a is the axial component of the induced flow, a' is the tangential component of the induced flow, σ is the solidity of the rotor, and φ is the local inflow angle:
a / (1 − a) = σ C_n / (4 sin²φ)
a' / (1 + a') = σ C_t / (4 sinφ cosφ)
C_n and C_t are the coefficient of normal force and the coefficient of tangential force respectively. Both coefficients are defined from the resulting lift and drag coefficients of the airfoil:
C_n = C_L cosφ + C_D sinφ
C_t = C_L sinφ − C_D cosφ
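To illustrate how these coupled relations are typically solved, the following Python sketch runs a simple fixed-point iteration for a single annulus, using the closed-form rearrangements of the two ratio equations above. It is an added illustration, not code from the source: the linear thin-airfoil lift model, the constant drag coefficient, and all numerical values are assumptions, and real BEM codes add tip loss, stall models, and convergence checks.

import math

def bem_annulus(tsr_local, sigma, lift_slope=2 * math.pi, cd0=0.01,
                twist=math.radians(2.0), n_iter=200):
    # Very simplified BEM iteration for one annulus (no tip loss, no stall model)
    a, a_prime = 0.3, 0.0
    for _ in range(n_iter):
        # Inflow angle from the axial and tangential induced velocities
        phi = math.atan2(1.0 - a, tsr_local * (1.0 + a_prime))
        alpha = phi - twist                  # angle of attack (assumed small)
        cl = lift_slope * alpha              # linear thin-airfoil lift, illustrative only
        cd = cd0
        cn = cl * math.cos(phi) + cd * math.sin(phi)   # normal (thrust-wise) coefficient
        ct = cl * math.sin(phi) - cd * math.cos(phi)   # tangential (torque-wise) coefficient
        a = 1.0 / (4.0 * math.sin(phi) ** 2 / (sigma * cn) + 1.0)
        a_prime = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1.0)
    return a, a_prime, phi

a, a_prime, phi = bem_annulus(tsr_local=6.0, sigma=0.05)
print(f"a = {a:.3f}, a' = {a_prime:.4f}, inflow angle = {math.degrees(phi):.1f} deg")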
Corrections to blade element momentum theory
Blade element momentum theory alone fails to represent accurately the true physics of real wind turbines. Two major shortcomings are the effects of a discrete number of blades and far field effects when the turbine is heavily loaded. Secondary shortcomings originate from having to deal with transient effects like dynamic stall, rotational effects like the Coriolis force and centrifugal pumping, and geometric effects that arise from coned and yawed rotors. The current state of the art in blade element momentum theory uses corrections to deal with these major shortcomings. These corrections are discussed below. There is as yet no accepted treatment for the secondary shortcomings. These areas remain a highly active area of research in wind turbine aerodynamics.
The effect of the discrete number of blades is dealt with by applying the Prandtl tip loss factor. The most common form of this factor is given below, where B is the number of blades, R is the outer radius and r is the local radius:
F = (2/π) arccos( exp( −B (R − r) / (2 r sinφ) ) )
The definition of F is based on actuator disk models and is not directly applicable to blade element momentum theory. However, the most common application multiplies the induced velocity term by F in the momentum equations. As in the momentum equation, there are many variations for applying F: some argue that the mass flow should be corrected in either the axial equation, or in both the axial and tangential equations. Others have suggested a second tip loss term to account for the reduced blade forces at the tip. Shown below are the above momentum equations with the most common application of F:
a / (1 − a) = σ C_n / (4 F sin²φ)
a' / (1 + a') = σ C_t / (4 F sinφ cosφ)
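A minimal implementation of the Prandtl factor written out above might look as follows (an added sketch; the blade count, radii and inflow angle are assumed example values):

import math

def prandtl_tip_loss(B, r, R, phi):
    # Prandtl tip-loss factor F; B blades, local radius r, tip radius R, inflow angle phi (rad)
    f = B * (R - r) / (2.0 * r * math.sin(phi))
    # Clamp the acos argument to guard against round-off very close to the tip
    return (2.0 / math.pi) * math.acos(min(1.0, math.exp(-f)))

# Example: 3 blades, 40 m rotor, inflow angle of 7 degrees
for r in (20.0, 35.0, 39.5):
    print(r, round(prandtl_tip_loss(3, r, 40.0, math.radians(7.0)), 3))

The factor approaches 1 over the inner part of the blade and drops toward 0 near the tip, which is what reduces the predicted loading there.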
The typical momentum theory is effective only for axial induction factors up to 0.4 (thrust coefficient of 0.96). Beyond this point the wake collapses and turbulent mixing occurs. This state is highly transient and largely unpredictable by theoretical means. Accordingly, several empirical relations have been developed. As is usually the case, there are several versions; however, a simple one that is commonly used is a linear curve fit, given below, with . The turbulent wake function given excludes the tip loss function; however, the tip loss is applied simply by multiplying the resulting axial induction by the tip loss function.
when
The terms C_T and C_t represent different quantities. The first one is the thrust coefficient of the rotor, which is the one that should be corrected for high rotor loading (i.e., for high values of a), while the second one is the tangential aerodynamic coefficient of an individual blade element, which is given by the aerodynamic lift and drag coefficients.
A "Unified momentum model for rotor aerodynamics across operating regimes" which claims to extend validity also for 0.5 < a < 1 was published recently
https://doi.org/10.1038/s41467-024-50756-5 .
Aerodynamic modeling
Blade element momentum theory is widely used due to its simplicity and overall accuracy, but its originating assumptions limit its use when the rotor disk is yawed, or when other non-axisymmetric effects (like the rotor wake) influence the flow. Limited success at improving predictive accuracy has been made using computational fluid dynamics (CFD) solvers based on Reynolds-averaged Navier–Stokes equations and other similar three-dimensional models such as free vortex methods. These are very computationally intensive simulations to perform for several reasons. First, the solver must accurately model the far-field flow conditions, which can extend several rotor diameters up- and down-stream and include atmospheric boundary layer turbulence, while at the same time resolving the small-scale boundary-layer flow conditions at the blades' surface (necessary to capture blade stall). In addition, many CFD solvers have difficulty meshing parts that move and deform, such as the rotor blades. Finally, there are many dynamic flow phenomena that are not easily modelled by Reynolds-averaged Navier–Stokes equations, such as dynamic stall and tower shadow. Due to the computational complexity, it is not currently practical to use these advanced methods for wind turbine design, though research continues in these and other areas related to helicopter and wind turbine aerodynamics.
Free vortex models and Lagrangian particle vortex methods are both active areas of research that seek to increase modelling accuracy by accounting for more of the three-dimensional and unsteady flow effects than either blade element momentum theory or Reynolds-averaged Navier–Stokes equations. Free vortex models are similar to lifting line theory in that they assume that the wind turbine rotor is shedding either a continuous vortex filament from the blade tips (and often the root), or a continuous vortex sheet from the blades' trailing edges. Lagrangian particle vortex methods can use a variety of methods to introduce vorticity into the wake. Biot–Savart summation is used to determine the induced flow field of these wake vortices' circulations, allowing for better approximations of the local flow over the rotor blades. These methods have largely confirmed much of the applicability of blade element momentum theory and shed insight into the structure of wind turbine wakes. Free vortex models have limitations due to their origin in potential flow theory, such as not explicitly modeling viscous behavior (without semi-empirical core models), though the Lagrangian particle vortex method is a fully viscous method. Lagrangian particle vortex methods are more computationally intensive than either free vortex models or Reynolds-averaged Navier–Stokes equations, and free vortex models still rely on blade element theory for the blade forces.
See also
Blade solidity
Wind turbine design
References
Sources
Hansen, M.O.L. Aerodynamics of Wind Turbines, 3rd ed., Routledge, 2015
Schmitz, S. Aerodynamics of Wind Turbines: A Physical Basis for Analysis and Design, Wiley, 2019
Schaffarczyk, A.P. Introduction to Wind Turbine Aerodynamics, 3rd ed., SpringerNature, 2024
Aerodynamics
Wind turbines
Kolmogorov microscales
In fluid dynamics, Kolmogorov microscales are the smallest scales in turbulent flow. At the Kolmogorov scale, viscosity dominates and the turbulence kinetic energy is dissipated into thermal energy. They are defined by
Kolmogorov length scale: η = (ν³ / ε)^(1/4)
Kolmogorov time scale: τ_η = (ν / ε)^(1/2)
Kolmogorov velocity scale: u_η = (ε ν)^(1/4)
where
is the average rate of dissipation of turbulence kinetic energy per unit mass, and
is the kinematic viscosity of the fluid.
Typical values of the Kolmogorov length scale, for atmospheric motion in which the large eddies have length scales on the order of kilometers, range from 0.1 to 10 millimeters; for smaller flows such as in laboratory systems, may be much smaller.
In 1941, Andrey Kolmogorov introduced the hypothesis that the smallest scales of turbulence are universal (similar for every turbulent flow) and that they depend only on ε and ν. The definitions of the Kolmogorov microscales can be obtained using this idea and dimensional analysis. Since the dimension of kinematic viscosity is length²/time, and the dimension of the energy dissipation rate per unit mass is length²/time³, the only combination that has the dimension of time is τ_η = (ν/ε)^(1/2), which is the Kolmogorov time scale. Similarly, the Kolmogorov length scale η = (ν³/ε)^(1/4) is the only combination of ν and ε that has the dimension of length.
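As a quick numerical illustration (the code and the example values of ν and ε are additions, not from the source), the three microscales follow directly from these expressions:

def kolmogorov_scales(nu, epsilon):
    # Return (length, time, velocity) Kolmogorov microscales from nu [m^2/s] and epsilon [m^2/s^3]
    eta = (nu ** 3 / epsilon) ** 0.25   # length scale
    tau = (nu / epsilon) ** 0.5         # time scale
    u = (nu * epsilon) ** 0.25          # velocity scale
    return eta, tau, u

# Air at room temperature, with an assumed dissipation rate
nu = 1.5e-5        # m^2/s
epsilon = 1.0e-2   # m^2/s^3 (assumed)
eta, tau, u = kolmogorov_scales(nu, epsilon)
print(f"eta = {eta*1e3:.2f} mm, tau = {tau*1e3:.1f} ms, u = {u*1e3:.1f} mm/s")
# The Reynolds number built on these scales is unity by construction:
print(u * eta / nu)

With these assumed inputs the length scale comes out at a little under a millimeter, consistent with the range quoted above for atmospheric flows.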
Alternatively, the definition of the Kolmogorov time scale can be obtained from the inverse of the mean square strain rate tensor, τ_η = (2 ⟨S_ij S_ij⟩)^(−1/2), which also gives τ_η = (ν/ε)^(1/2) when using the definition of the energy dissipation rate per unit mass, ε = 2ν ⟨S_ij S_ij⟩. Then the Kolmogorov length scale can be obtained as the scale at which the Reynolds number is equal to 1, Re = u_η η / ν = 1.
Kolmogorov's 1941 theory is a mean field theory since it assumes that the relevant dynamical parameter is the mean energy dissipation rate. In fluid turbulence, the energy dissipation rate fluctuates in space and time, so it is possible to think of the microscales as quantities that also vary in space and time. However, standard practice is to use mean field values since they represent the typical values of the smallest scales in a given flow. In 1961, Kolmogorov published a refined version of the similarity hypotheses that accounts for the log-normal distribution of the dissipation rate.
See also
Taylor microscale
Integral length scale
Batchelor scale
References
Turbulence
Orbital elements
Orbital elements are the parameters required to uniquely identify a specific orbit. In celestial mechanics these elements are considered in two-body systems using a Kepler orbit. There are many different ways to mathematically describe the same orbit, but certain schemes, each consisting of a set of six parameters, are commonly used in astronomy and orbital mechanics.
A real orbit and its elements change over time due to gravitational perturbations by other objects and the effects of general relativity. A Kepler orbit is an idealized, mathematical approximation of the orbit at a particular time.
Keplerian elements
The traditional orbital elements are the six Keplerian elements, after Johannes Kepler and his laws of planetary motion.
When viewed from an inertial frame, two orbiting bodies trace out distinct trajectories. Each of these trajectories has its focus at the common center of mass. When viewed from a non-inertial frame centered on one of the bodies, only the trajectory of the opposite body is apparent; Keplerian elements describe these non-inertial trajectories. An orbit has two sets of Keplerian elements depending on which body is used as the point of reference. The reference body (usually the most massive) is called the primary, the other body is called the secondary. The primary does not necessarily possess more mass than the secondary, and even when the bodies are of equal mass, the orbital elements depend on the choice of the primary.
Two elements define the shape and size of the ellipse:
Eccentricity — shape of the ellipse, describing how much it is elongated compared to a circle (not marked in diagram).
Semi-major axis — half the distance between the apoapsis and periapsis. The portion of the semi-major axis extending from the primary at one focus to the periapsis is shown as a purple line in the diagram; the rest (from the primary/focus to the center of the orbit ellipse) is below the reference plane and not shown.
Two elements define the orientation of the orbital plane in which the ellipse is embedded:
Inclination — vertical tilt of the ellipse with respect to the reference plane, measured at the ascending node (where the orbit passes upward through the reference plane, the green angle in the diagram). Tilt angle is measured perpendicular to line of intersection between orbital plane and reference plane. Any three distinct points on an ellipse will define the ellipse orbital plane. The plane and the ellipse are both two-dimensional objects defined in three-dimensional space.
Longitude of the ascending node — horizontally orients the ascending node of the ellipse (where the orbit passes from south to north through the reference plane, symbolized by ) with respect to the reference frame's vernal point (symbolized by ♈︎). This is measured in the reference plane, and is shown as the green angle in the diagram.
The remaining two elements are as follows:
Argument of periapsis defines the orientation of the ellipse in the orbital plane, as an angle measured from the ascending node to the periapsis (the closest point the satellite body comes to the primary body around which it orbits), the purple angle in the diagram.
True anomaly (ν, θ, or f) at epoch defines the position of the orbiting body along the ellipse at a specific time (the "epoch"), expressed as an angle from the periapsis.
The mean anomaly is a mathematically convenient fictitious "angle" which does not correspond to a real geometric angle, but rather varies linearly with time, one whole orbital period being represented by an "angle" of 2π radians. It can be converted into the true anomaly, which does represent the real geometric angle in the plane of the ellipse, between periapsis (closest approach to the central body) and the position of the orbiting body at any given time. Thus, the true anomaly is shown as the red angle in the diagram, and the mean anomaly is not shown.
The angles of inclination, longitude of the ascending node, and argument of periapsis can also be described as the Euler angles defining the orientation of the orbit relative to the reference coordinate system.
Note that non-elliptic trajectories also exist, but are not closed, and are thus not orbits. If the eccentricity is greater than one, the trajectory is a hyperbola. If the eccentricity is equal to one, the trajectory is a parabola. Regardless of eccentricity, the orbit degenerates to a radial trajectory if the angular momentum equals zero.
Required parameters
Given an inertial frame of reference and an arbitrary epoch (a specified point in time), exactly six parameters are necessary to unambiguously define an arbitrary and unperturbed orbit.
This is because the problem contains six degrees of freedom. These correspond to the three spatial dimensions which define position (, , in a Cartesian coordinate system), plus the velocity in each of these dimensions. These can be described as orbital state vectors, but this is often an inconvenient way to represent an orbit, which is why Keplerian elements are commonly used instead.
Sometimes the epoch is considered a "seventh" orbital parameter, rather than part of the reference frame.
If the epoch is defined to be at the moment when one of the elements is zero, the number of unspecified elements is reduced to five. (The sixth parameter is still necessary to define the orbit; it is merely numerically set to zero by convention or "moved" into the definition of the epoch with respect to real-world clock time.)
Alternative parametrizations
Keplerian elements can be obtained from orbital state vectors (a three-dimensional vector for the position and another for the velocity) by manual transformations or with computer software.
Other orbital parameters can be computed from the Keplerian elements such as the period, apoapsis, and periapsis. (When orbiting the Earth, the last two terms are known as the apogee and perigee.) It is common to specify the period instead of the semi-major axis a in Keplerian element sets, as each can be computed from the other provided the standard gravitational parameter, μ, is given for the central body.
Instead of the mean anomaly at epoch, the mean anomaly M, the mean longitude, the true anomaly ν, or (rarely) the eccentric anomaly E might be used.
Using, for example, the "mean anomaly" instead of "mean anomaly at epoch" means that time must be specified as a seventh orbital element. Sometimes it is assumed that mean anomaly is zero at the epoch (by choosing the appropriate definition of the epoch), leaving only the five other orbital elements to be specified.
Different sets of elements are used for various astronomical bodies. The eccentricity, e, and either the semi-major axis, a, or the distance of periapsis, q, are used to specify the shape and size of an orbit. The longitude of the ascending node, Ω, the inclination, i, and the argument of periapsis, ω, or the longitude of periapsis, ϖ, specify the orientation of the orbit in its plane. Either the longitude at epoch, L0, the mean anomaly at epoch, M0, or the time of perihelion passage, T0, are used to specify a known point in the orbit. The choices made depend on whether the vernal equinox or the node is used as the primary reference. The semi-major axis is known if the mean motion and the gravitational mass are known.
It is also quite common to see either the mean anomaly or the mean longitude expressed directly, without either or as intermediary steps, as a polynomial function with respect to time. This method of expression will consolidate the mean motion into the polynomial as one of the coefficients. The appearance will be that or are expressed in a more complicated manner, but we will appear to need one fewer orbital element.
Mean motion can also be obscured behind citations of the orbital period .
Euler angle transformations
The angles Ω, i, ω are the Euler angles (corresponding to α, β, γ in the notation used in that article) characterizing the orientation of the coordinate system
where:
Î, Ĵ is in the equatorial plane of the central body. Î is in the direction of the vernal equinox. Ĵ is perpendicular to Î and with Î defines the reference plane. K̂ is perpendicular to the reference plane. Orbital elements of bodies (planets, comets, asteroids, ...) in the Solar System usually use the ecliptic as that plane.
x̂, ŷ are in the orbital plane, with x̂ in the direction to the pericenter (periapsis). ẑ is perpendicular to the plane of the orbit. ŷ is mutually perpendicular to x̂ and ẑ.
Then, the transformation from the , , coordinate frame to the , , frame with the Euler angles , , is:
where
The inverse transformation, which computes the 3 coordinates in the I-J-K system given the 3 (or 2) coordinates in the x-y-z system, is represented by the inverse matrix. According to the rules of matrix algebra, the inverse matrix of the product of the 3 rotation matrices is obtained by inverting the order of the three matrices and switching the signs of the three Euler angles.
That is,
where
The transformation from , , to Euler angles , , is:
where arg(x, y) signifies the polar argument, which can be computed with the standard function atan2(y, x) available in many programming languages.
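As an illustration of the rotation sequence described above, the sketch below (an addition to this article, using NumPy; the angles and the position vector are assumed example values) composes the three rotations to map a vector from the orbital-plane frame to the reference frame:

import numpy as np

def rot_z(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def orbital_to_reference(node, inclination, arg_periapsis):
    # Rotation matrix from the orbital-plane (x, y, z) frame to the reference (I, J, K) frame,
    # using the Z-X-Z sequence with angles Omega, i, omega
    return rot_z(node) @ rot_x(inclination) @ rot_z(arg_periapsis)

# Example with assumed angles
R = orbital_to_reference(np.radians(30.0), np.radians(10.0), np.radians(45.0))
r_orbital = np.array([7000.0, 0.0, 0.0])   # km, a point along the periapsis direction
print(R @ r_orbital)                        # the same point expressed in the reference frame
# The inverse transformation is simply the transpose, since R is orthogonal
print(R.T @ (R @ r_orbital))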
Orbit prediction
Under ideal conditions of a perfectly spherical central body, zero perturbations and negligible relativistic effects, all orbital elements except the mean anomaly are constants. The mean anomaly changes linearly with time, scaled by the mean motion, n = √(μ / a³),
where μ is the standard gravitational parameter. Hence if at any instant t0 the orbital parameters are (e0, a0, i0, Ω0, ω0, M0), then the elements at time t0 + δt are given by (e0, a0, i0, Ω0, ω0, M0 + n δt).
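A minimal propagation sketch under these ideal conditions (added here for illustration; the orbit and time step are assumed example values, and Earth's gravitational parameter is used) advances only the mean anomaly:

import math

MU_EARTH = 398600.4418  # km^3/s^2, standard gravitational parameter of Earth

def propagate_mean_anomaly(M0, a, dt, mu=MU_EARTH):
    # Advance the mean anomaly by n*dt, with mean motion n = sqrt(mu / a^3);
    # all other Keplerian elements stay constant in the unperturbed case
    n = math.sqrt(mu / a ** 3)              # rad/s
    return (M0 + n * dt) % (2.0 * math.pi)

# Example: a low Earth orbit with a = 7000 km, propagated for 45 minutes
M = propagate_mean_anomaly(M0=0.0, a=7000.0, dt=45 * 60)
print(math.degrees(M))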
Perturbations and elemental variance
Unperturbed, two-body, Newtonian orbits are always conic sections, so the Keplerian elements define an ellipse, parabola, or hyperbola. Real orbits have perturbations, so a given set of Keplerian elements accurately describes an orbit only at the epoch. Evolution of the orbital elements takes place due to the gravitational pull of bodies other than the primary, the nonsphericity of the primary, atmospheric drag, relativistic effects, radiation pressure, electromagnetic forces, and so on.
Keplerian elements can often be used to produce useful predictions at times near the epoch. Alternatively, real trajectories can be modeled as a sequence of Keplerian orbits that osculate ("kiss" or touch) the real trajectory. They can also be described by the so-called planetary equations, differential equations which come in different forms developed by Lagrange, Gauss, Delaunay, Poincaré, or Hill.
Two-line elements
Keplerian elements parameters can be encoded as text in a number of formats. The most common of them is the NASA / NORAD "two-line elements" (TLE) format, originally designed for use with 80 column punched cards, but still in use because it is the most common format, and 80-character ASCII records can be handled efficiently by modern databases.
Depending on the application and object orbit, the data derived from TLEs older than 30 days can become unreliable. Orbital positions can be calculated from TLEs through simplified perturbation models (SGP4 / SDP4 / SGP8 / SDP8).
Example of a two-line element:
1 27651U 03004A 07083.49636287 .00000119 00000-0 30706-4 0 2692
2 27651 039.9951 132.2059 0025931 073.4582 286.9047 14.81909376225249
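For illustration, a few of the elements can be read from the fixed-width columns of line 2. The following sketch is an added, simplified parser intended only for examples like the one above; real TLE handling (checksums, epoch decoding, the implied exponents on line 1) is more involved:

def parse_tle_line2(line2):
    # Extract a few Keplerian elements from TLE line 2 (fixed-width columns)
    return {
        "inclination_deg": float(line2[8:16]),
        "raan_deg": float(line2[17:25]),
        "eccentricity": float("0." + line2[26:33].strip()),  # decimal point is implied
        "arg_perigee_deg": float(line2[34:42]),
        "mean_anomaly_deg": float(line2[43:51]),
        "mean_motion_rev_per_day": float(line2[52:63]),
    }

line2 = "2 27651 039.9951 132.2059 0025931 073.4582 286.9047 14.81909376225249"
print(parse_tle_line2(line2))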
Delaunay variables
The Delaunay orbital elements were introduced by Charles-Eugène Delaunay during his study of the motion of the Moon. Commonly called Delaunay variables, they are a set of canonical variables, which are action-angle coordinates. The angles are simple sums of some of the Keplerian angles:
the mean longitude: M + ω + Ω,
the longitude of periapsis: ω + Ω, and
the longitude of the ascending node: Ω,
along with their respective conjugate momenta. The momenta are the action variables and are more elaborate combinations of the Keplerian elements a, e, and i.
Delaunay variables are used to simplify perturbative calculations in celestial mechanics, for example while investigating the Kozai–Lidov oscillations in hierarchical triple systems. The advantage of the Delaunay variables is that they remain well defined and non-singular (except for , which can be tolerated) when and / or are very small: When the test particle's orbit is very nearly circular, or very nearly "flat".
See also
Apparent longitude
Asteroid family, asteroids that share similar proper orbital elements
Beta angle
Ephemeris
Geopotential model
Orbital inclination
Orbital state vectors
Proper orbital elements
Osculating orbit
References
External links
– a serious treatment of orbital elements
– also furnishes orbital elements for a large number of solar system objects
– access to VEC2TLE software
– orbital elements of the major planets
Orbits
Bioenergetic systems
Bioenergetic systems are metabolic processes that relate to the flow of energy in living organisms. Those processes convert energy into adenosine triphosphate (ATP), which is the form suitable for muscular activity. There are two main forms of synthesis of ATP: aerobic, which uses oxygen from the bloodstream, and anaerobic, which does not. Bioenergetics is the field of biology that studies bioenergetic systems.
Overview
The process that converts the chemical energy of food into ATP (which can release energy) is not dependent on oxygen availability. During exercise, the supply and demand of oxygen available to muscle cells are affected by duration and intensity and by the individual's cardiorespiratory fitness level. They are also affected by the type of activity; for instance, during isometric activity the contracted muscles restrict blood flow (leaving oxygen and blood-borne fuels unable to be delivered to muscle cells adequately for oxidative phosphorylation). Three systems can be selectively recruited, depending on the amount of oxygen available, as part of the cellular respiration process to generate ATP for the muscles. They are the ATP–CP system, the anaerobic system and the aerobic system.
Adenosine triphosphate
ATP is the only type of usable form of chemical energy for musculoskeletal activity. It is stored in most cells, particularly in muscle cells. Other forms of chemical energy, such as those available from oxygen and food, must be transformed into ATP before they can be utilized by the muscle cells.
Coupled reactions
Since energy is released when ATP is broken down, energy is required to rebuild or resynthesize it. The building blocks of ATP synthesis are the by-products of its breakdown; adenosine diphosphate (ADP) and inorganic phosphate (Pi). The energy for ATP resynthesis comes from three different series of chemical reactions that take place within the body. Two of the three depend upon the food eaten, whereas the other depends upon a chemical compound called phosphocreatine. The energy released from any of these three series of reactions is utilized in reactions that resynthesize ATP. The separate reactions are functionally linked in such a way that the energy released by one is used by the other.
Three processes can synthesize ATP:
ATP–CP system (phosphagen system) – At maximum intensity, this system is used for up to 10–15 seconds. The ATP–CP system neither uses oxygen nor produces lactic acid if oxygen is unavailable and is thus called alactic anaerobic. This is the primary system behind very short, powerful movements like a golf swing, a 100 m sprint or powerlifting.
Anaerobic system – This system predominates in supplying energy for intense exercise lasting less than two minutes. It is also known as the glycolytic system. An example of an activity of the intensity and duration that this system works under would be a 400 m sprint.
Aerobic system – This is the long-duration energy system. After five minutes of exercise, the O2 system is dominant. In a 1 km run, this system is already providing approximately half the energy; in a marathon run it provides 98% or more. Around mile 20 of a marathon, runners typically "hit the wall", having depleted their glycogen reserves; they then attain a "second wind", which is entirely aerobic metabolism, primarily of free fatty acids.
Aerobic and anaerobic systems usually work concurrently. When describing activity, it is not a question of which energy system is working, but which predominates.
Anaerobic and aerobic metabolism
The term metabolism refers to the various series of chemical reactions that take place within the body. Aerobic refers to the presence of oxygen, whereas anaerobic means with a series of chemical reactions that does not require the presence of oxygen. The ATP-CP series and the lactic acid series are anaerobic, whereas the oxygen series is aerobic.
Anaerobic metabolism
ATP–CP: the phosphagen system
Creatine phosphate (CP), like ATP, is stored in muscle cells. When it is broken down, a considerable amount of energy is released. The energy released is coupled to the energy requirement necessary for the resynthesis of ATP.
The total muscular stores of both ATP and CP are small. Thus, the amount of energy obtainable through this system is limited. The phosphagen stored in the working muscles is typically exhausted in seconds of vigorous activity. However, the usefulness of the ATP-CP system lies in the rapid availability of energy rather than quantity. This is important with respect to the kinds of physical activities that humans are capable of performing.
The phosphagen system (ATP-PCr) occurs in the cytosol (a gel-like substance) of the sarcoplasm of skeletal muscle, and in the myocyte's cytosolic compartment of the cytoplasm of cardiac and smooth muscle.
During muscle contraction:
H2O + ATP → H+ + ADP + Pi (Mg2+ assisted, utilization of ATP for muscle contraction by ATPase)
H+ + ADP + CP → ATP + Creatine (Mg2+ assisted, catalyzed by creatine kinase, ATP is used again in the above reaction for continued muscle contraction)
2 ADP → ATP + AMP (catalyzed by adenylate kinase/myokinase when CP is depleted, ATP is again used for muscle contraction)
Muscle at rest:
ATP + Creatine → H+ + ADP + CP (Mg2+ assisted, catalyzed by creatine kinase)
ADP + Pi → ATP (during anaerobic glycolysis and oxidative phosphorylation)
When the phosphagen system has been depleted of phosphocreatine (creatine phosphate), the resulting AMP produced from the adenylate kinase (myokinase) reaction is primarily regulated by the purine nucleotide cycle.
Anaerobic glycolysis
This system is known as anaerobic glycolysis. "Glycolysis" refers to the breakdown of sugar. In this system, the breakdown of sugar supplies the necessary energy from which ATP is manufactured. When sugar is metabolized anaerobically, it is only partially broken down and one of the byproducts is lactic acid. This process creates enough energy to couple with the energy requirements to resynthesize ATP.
When H+ ions accumulate in the muscles causing the blood pH level to reach low levels, temporary muscle fatigue results. Another limitation of the lactic acid system that relates to its anaerobic quality is that only a few moles of ATP can be resynthesized from the breakdown of sugar. This system cannot be relied on for extended periods of time.
The lactic acid system, like the ATP-CP system, is important primarily because it provides a rapid supply of ATP energy. For example, exercises that are performed at maximum rates for between 1 and 3 minutes depend heavily upon the lactic acid system. In activities such as running 1500 meters or a mile, the lactic acid system is used predominantly for the "kick" at the end of the race.
Aerobic metabolism
Aerobic glycolysis
Glycolysis – The first stage is known as glycolysis, which produces 2 ATP molecules, 2 reduced molecules of nicotinamide adenine dinucleotide (NADH) and 2 pyruvate molecules that move on to the next stage – the Krebs cycle. Glycolysis takes place in the cytoplasm of normal body cells, or the sarcoplasm of muscle cells.
The Krebs cycle – This is the second stage, and the products of this stage of the aerobic system are a net production of one ATP, one carbon dioxide molecule, three reduced NAD+ molecules, and one reduced flavin adenine dinucleotide (FAD) molecule. (The molecules of NAD+ and FAD mentioned here are electron carriers, and if they are reduced, they have had one or two H+ ions and two electrons added to them.) The metabolites are for each turn of the Krebs cycle. The Krebs cycle turns twice for each six-carbon molecule of glucose that passes through the aerobic system – as two three-carbon pyruvate molecules enter the Krebs cycle. Before pyruvate enters the Krebs cycle it must be converted to acetyl coenzyme A. During this link reaction, for each molecule of pyruvate converted to acetyl coenzyme A, a NAD+ is also reduced. This stage of the aerobic system takes place in the matrix of the cells' mitochondria.
Oxidative phosphorylation – The last stage of the aerobic system produces the largest yield of ATP – a total of 34 ATP molecules. It is called oxidative phosphorylation because oxygen is the final acceptor of electrons and hydrogen ions (hence oxidative) and an extra phosphate is added to ADP to form ATP (hence phosphorylation).
This stage of the aerobic system occurs on the cristae (infoldings of the membrane of the mitochondria). The reaction of each NADH in this electron transport chain provides enough energy for 3 molecules of ATP, while reaction of FADH2 yields 2 molecules of ATP. This means that 10 total NADH molecules allow the regeneration of 30 ATP, and 2 FADH2 molecules allow for 4 ATP molecules to be regenerated (in total 34 ATP from oxidative phosphorylation, plus 4 from the previous two stages, producing a total of 38 ATP in the aerobic system). NADH and FADH2 are oxidized to allow the NAD+ and FAD to be reused in the aerobic system, while electrons and hydrogen ions are accepted by oxygen to produce water, a harmless byproduct.
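The ATP bookkeeping above can be reproduced with a short calculation (an added sketch using the classical yields of 3 ATP per NADH and 2 ATP per FADH2 quoted in this section; newer estimates of about 2.5 and 1.5 ATP per carrier give the lower totals of roughly 30-32 mentioned later in this article):

# Classical (textbook) ATP accounting for one glucose molecule in the aerobic system
NADH_TOTAL = 10           # 2 from glycolysis + 2 from the link reaction + 6 from two Krebs turns
FADH2_TOTAL = 2           # from two turns of the Krebs cycle
ATP_PER_NADH = 3
ATP_PER_FADH2 = 2
SUBSTRATE_LEVEL_ATP = 4   # 2 from glycolysis + 2 from the Krebs cycle

oxidative_atp = NADH_TOTAL * ATP_PER_NADH + FADH2_TOTAL * ATP_PER_FADH2
total_atp = oxidative_atp + SUBSTRATE_LEVEL_ATP
print(oxidative_atp, total_atp)   # 34 and 38, matching the figures quoted above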
Fatty acid oxidation
Triglycerides stored in adipose tissue and in other tissues, such as muscle and liver, release fatty acids and glycerol in a process known as lipolysis. Fatty acids are slower than glucose to convert into acetyl-CoA, as they first have to go through beta oxidation. It takes about 10 minutes for fatty acids to sufficiently produce ATP. Fatty acids are the primary fuel source at rest and in low to moderate intensity exercise. Though slower than glucose, their yield is much higher. One molecule of glucose produces through aerobic glycolysis a net of 30-32 ATP, whereas a fatty acid can produce through beta oxidation a net of approximately 100 ATP, depending on the type of fatty acid. For example, palmitic acid can produce a net of 106 ATP.
Amino acid degradation
Normally, amino acids do not provide the bulk of fuel substrates. However, in times of glycolytic or ATP crisis, amino acids can convert into pyruvate, acetyl-CoA, and citric acid cycle intermediates. This is useful during strenuous exercise or starvation as it provides faster ATP than fatty acids; however, it comes at the expense of risking protein catabolism (such as the breakdown of muscle tissue) to maintain the free amino acid pool.
Purine nucleotide cycle
The purine nucleotide cycle is used in times of glycolytic or ATP crisis, such as strenuous exercise or starvation. It produces fumarate, a citric acid cycle intermediate, which enters the mitochondrion through the malate-aspartate shuttle, and from there produces ATP by oxidative phosphorylation.
Ketolysis
During starvation or while consuming a low-carb/ketogenic diet, the liver produces ketones. Ketones are needed as fatty acids cannot pass the blood-brain barrier, blood glucose levels are low and glycogen reserves depleted. Ketones also convert to acetyl-CoA faster than fatty acids. After the ketones convert to acetyl-CoA in a process known as ketolysis, it enters the citric acid cycle to produce ATP by oxidative phosphorylation.
The longer that the person's glycogen reserves have been depleted, the higher the blood concentration of ketones, typically due to starvation or a low carb diet (βHB 3 - 5 mM). Prolonged high-intensity aerobic exercise, such as running 20 miles, where individuals "hit the wall" can create post-exercise ketosis; however, the level of ketones produced are smaller (βHB 0.3 - 2 mM).
Ethanol metabolism
Ethanol (alcohol) is converted first into acetaldehyde and then into acetate, consuming NAD+ at each step. The acetate is then converted into acetyl-CoA. When alcohol is consumed in small quantities, the NADH/NAD+ ratio remains in balance enough for the acetyl-CoA to be used by the Krebs cycle for oxidative phosphorylation. However, even moderate amounts of alcohol (1-2 drinks) result in more NADH than NAD+, which inhibits oxidative phosphorylation.
When the NADH/NAD+ ratio is disrupted (far more NADH than NAD+), this is called pseudohypoxia. The Krebs cycle needs NAD+ as well as oxygen, for oxidative phosphorylation. Without sufficient NAD+, the impaired aerobic metabolism mimics hypoxia (insufficient oxygen), resulting in excessive use of anaerobic glycolysis and a disrupted pyruvate/lactate ratio (low pyruvate, high lactate). The conversion of pyruvate into lactate produces NAD+, but only enough to maintain anaerobic glycolysis. In chronic excessive alcohol consumption (alcoholism), the microsomal ethanol oxidizing system (MEOS) is used in addition to alcohol dehydrogenase.
See also
Hitting the wall (muscle fatigue due to glycogen depletion)
Second wind (increased ATP synthesis primarily from free fatty acids)
References
Further reading
Exercise Physiology for Health, Fitness and Performance. Sharon Plowman and Denise Smith. Lippincott Williams & Wilkins; Third edition (2010).
Ch. 38. Hormonal Regulation of Energy Metabolism. Berne and Levy Physiology, 6th ed (2008)
The effects of increasing exercise intensity on muscle fuel utilisation in humans. Van Loon et al. Journal of Physiology (2001)
(OTEP) Open Textbook of Exercise Physiology. Edited by Brian R. MacIntosh (2023)
ATP metabolism
Lightning
Lightning is a natural phenomenon formed by electrostatic discharges through the atmosphere between two electrically charged regions, either both in the atmosphere or one in the atmosphere and one on the ground, temporarily neutralizing these in a near-instantaneous release of an average of between 200 megajoules and 7 gigajoules of energy, depending on the type. This discharge may produce a wide range of electromagnetic radiation, from heat created by the rapid movement of electrons, to brilliant flashes of visible light in the form of black-body radiation. Lightning causes thunder, a sound from the shock wave which develops as gases in the vicinity of the discharge experience a sudden increase in pressure. Lightning occurs commonly during thunderstorms as well as other types of energetic weather systems, but volcanic lightning can also occur during volcanic eruptions. Lightning is an atmospheric electrical phenomenon and contributes to the global atmospheric electrical circuit.
The three main kinds of lightning are distinguished by where they occur: either inside a single thundercloud (intra-cloud), between two clouds (cloud-to-cloud), or between a cloud and the ground (cloud-to-ground), in which case it is referred to as a lightning strike. Many other observational variants are recognized, including "heat lightning", which can be seen from a great distance but not heard; dry lightning, which can cause forest fires; and ball lightning, which is rarely observed scientifically.
Humans have deified lightning for millennia. Idiomatic expressions derived from lightning, such as the English expression "bolt from the blue", are common across languages. People have always been fascinated by the sight of lightning. The fear of lightning is called astraphobia.
The first known photograph of lightning is from 1847, by Thomas Martin Easterly. The first surviving photograph is from 1882, by William Nicholson Jennings, a photographer who spent half his life capturing pictures of lightning and proving its diversity.
There is growing evidence that lightning activity is increased by particulate emissions (a form of air pollution). However, lightning may also improve air quality and clean greenhouse gases such as methane from the atmosphere, while creating nitrogen oxide and ozone at the same time. Lightning is also the major cause of wildfire, and wildfire can contribute to climate change as well. More studies are warranted to clarify their relationship.
Electrification
The details of the charging process are still being studied by scientists, but there is general agreement on some of the basic concepts of thunderstorm electrification. Electrification can be by the triboelectric effect leading to electron or ion transfer between colliding bodies. Uncharged, colliding water-drops can become charged because of charge transfer between them (as aqueous ions) in an electric field as would exist in a thunder cloud. The main charging area in a thunderstorm occurs in the central part of the storm where air is moving upward rapidly (updraft) and temperatures range from ; see Figure 1. In that area, the combination of temperature and rapid upward air movement produces a mixture of super-cooled cloud droplets (small water droplets below freezing), small ice crystals, and graupel (soft hail). The updraft carries the super-cooled cloud droplets and very small ice crystals upward.
At the same time, the graupel, which is considerably larger and denser, tends to fall or be suspended in the rising air.
The differences in the movement of the precipitation cause collisions to occur. When the rising ice crystals collide with graupel, the ice crystals become positively charged and the graupel becomes negatively charged; see Figure 2. The updraft carries the positively charged ice crystals upward toward the top of the storm cloud. The larger and denser graupel is either suspended in the middle of the thunderstorm cloud or falls toward the lower part of the storm.
The result is that the upper part of the thunderstorm cloud becomes positively charged while the middle to lower part of the thunderstorm cloud becomes negatively charged.
The upward motions within the storm and winds at higher levels in the atmosphere tend to cause the small ice crystals (and positive charge) in the upper part of the thunderstorm cloud to spread out horizontally some distance from the thunderstorm cloud base. This part of the thunderstorm cloud is called the anvil. While this is the main charging process for the thunderstorm cloud, some of these charges can be redistributed by air movements within the storm (updrafts and downdrafts). In addition, there is a small but important positive charge buildup near the bottom of the thunderstorm cloud due to the precipitation and warmer temperatures.
The induced separation of charge in pure liquid water has been known since the 1840s as has the electrification of pure liquid water by the triboelectric effect.
William Thomson (Lord Kelvin) demonstrated that charge separation in water occurs in the usual electric fields at the Earth's surface and developed a continuous electric field measuring device using that knowledge.
The physical separation of charge into different regions using liquid water was demonstrated by Kelvin with the Kelvin water dropper. The most likely charge-carrying species were considered to be the aqueous hydrogen ion and the aqueous hydroxide ion.
The electrical charging of solid water ice has also been considered. The charged species were again considered to be the hydrogen ion and the hydroxide ion.
An electron is not stable in liquid water concerning a hydroxide ion plus dissolved hydrogen for the time scales involved in thunderstorms.
The charge carrier in lightning is mainly electrons in a plasma. The process of going from charge as ions (positive hydrogen ion and negative hydroxide ion) associated with liquid water or solid water to charge as electrons associated with lightning must involve some form of electro-chemistry, that is, the oxidation and/or the reduction of chemical species. As hydroxide functions as a base and carbon dioxide is an acidic gas, it is possible that charged water clouds in which the negative charge is in the form of the aqueous hydroxide ion, interact with atmospheric carbon dioxide to form aqueous carbonate ions and aqueous hydrogen carbonate ions.
General considerations
The typical cloud-to-ground lightning flash culminates in the formation of an electrically conducting plasma channel through the air in excess of tall, from within the cloud to the ground's surface. The actual discharge is the final stage of a very complex process. At its peak, a typical thunderstorm produces three or more strikes to the Earth per minute.
Lightning primarily occurs when warm air is mixed with colder air masses, resulting in atmospheric disturbances necessary for polarizing the atmosphere.
Lightning can also occur during dust storms, forest fires, tornadoes, volcanic eruptions, and even in the cold of winter, where the lightning is known as thundersnow. Hurricanes typically generate some lightning, mainly in the rainbands as much as from the center.
Distribution, frequency and extent
Lightning is not distributed evenly around Earth. The lightning frequency is approximately 44 (± 5) times per second, or nearly 1.4 billion flashes per year, and the median duration is 0.52 seconds, made up of a number of much shorter flashes (strokes) of around 60 to 70 microseconds.
Many factors affect the frequency, distribution, strength and physical properties of a typical lightning flash in a particular region of the world. These factors include ground elevation, latitude, prevailing wind currents, relative humidity, and proximity to warm and cold bodies of water. To a certain degree, the proportions of intra-cloud, cloud-to-cloud, and cloud-to-ground lightning may also vary by season in middle latitudes.
Because human beings are terrestrial and most of their possessions are on the Earth where lightning can damage or destroy them, cloud-to-ground (CG) lightning is the most studied and best understood of the three types, even though in-cloud (IC) and cloud-to-cloud (CC) are more common types of lightning. Lightning's relative unpredictability limits a complete explanation of how or why it occurs, even after hundreds of years of scientific investigation.
About 70% of lightning occurs over land in the tropics where atmospheric convection is the greatest.
This occurs from both the mixture of warmer and colder air masses, as well as differences in moisture concentrations, and it generally happens at the boundaries between them. The flow of warm ocean currents past drier land masses, such as the Gulf Stream, partially explains the elevated frequency of lightning in the Southeast United States. Because large bodies of water lack the topographic variation that would result in atmospheric mixing, lightning is notably less frequent over the world's oceans than over land. The North and South Poles are limited in their coverage of thunderstorms and therefore result in areas with the least lightning.
In general, CG lightning flashes account for only 25% of all total lightning flashes worldwide. Since the base of a thunderstorm is usually negatively charged, this is where most CG lightning originates. This region is typically at the elevation where freezing occurs within the cloud. Freezing, combined with collisions between ice and water, appears to be a critical part of the initial charge development and separation process. During wind-driven collisions, ice crystals tend to develop a positive charge, while a heavier, slushy mixture of ice and water (called graupel) develops a negative charge. Updrafts within a storm cloud separate the lighter ice crystals from the heavier graupel, causing the top region of the cloud to accumulate a positive space charge while the lower level accumulates a negative space charge.
Because the concentrated charge within the cloud must exceed the insulating properties of air, and this increases proportionally to the distance between the cloud and the ground, the proportion of CG strikes (versus CC or IC discharges) becomes greater when the cloud is closer to the ground. In the tropics, where the freezing level is generally higher in the atmosphere, only 10% of lightning flashes are CG. At the latitude of Norway (around 60° North latitude), where the freezing elevation is lower, 50% of lightning is CG.
Lightning is usually produced by cumulonimbus clouds, which have bases that are typically above the ground and tops up to in height.
The place on Earth where lightning occurs most often is over Lake Maracaibo, where the Catatumbo lightning phenomenon produces 250 bolts of lightning a day. This activity occurs on average 297 days a year. The area with the second-highest lightning density is near the village of Kifuka in the mountains of the eastern Democratic Republic of the Congo, where the elevation is around . On average, this region receives . Other lightning hotspots include Singapore and Lightning Alley in Central Florida.
According to the World Meteorological Organization, on April 29, 2020, a bolt 768 km (477.2 mi) long was observed in the southern U.S.—sixty km (37 mi) longer than the previous distance record (southern Brazil, October 31, 2018). A single flash in Uruguay and northern Argentina on June 18, 2020, lasted for 17.1 seconds—0.37 seconds longer than the previous record (March 4, 2019, also in northern Argentina).
Necessary conditions
In order for an electrostatic discharge to occur, two preconditions are necessary: first, a sufficiently high potential difference between two regions of space must exist, and second, a high-resistance medium must obstruct the free, unimpeded equalization of the opposite charges. The atmosphere provides the electrical insulation, or barrier, that prevents free equalization between charged regions of opposite polarity.
It is well understood that during a thunderstorm there is charge separation and aggregation in certain regions of the cloud; however, the exact processes by which this occurs are not fully understood.
Electrical field generation
As a thundercloud moves over the surface of the Earth, an equal electric charge, but of opposite polarity, is induced on the Earth's surface underneath the cloud. The induced positive surface charge, when measured against a fixed point, will be small as the thundercloud approaches, increasing as the center of the storm arrives and dropping as the thundercloud passes. The referential value of the induced surface charge could be roughly represented as a bell curve.
The oppositely charged regions create an electric field within the air between them. This electric field varies in relation to the strength of the surface charge on the base of the thundercloud – the greater the accumulated charge, the higher the electrical field.
Flashes and strikes
The best-studied and understood form of lightning is cloud to ground (CG) lightning. Although more common, intra-cloud (IC) and cloud-to-cloud (CC) flashes are very difficult to study given there are no "physical" points to monitor inside the clouds. Also, given the very low probability of lightning striking the same point repeatedly and consistently, scientific inquiry is difficult even in areas of high CG frequency.
Lightning leaders
In a process not well understood, a bidirectional channel of ionized air, called a "leader", is initiated between oppositely-charged regions in a thundercloud. Leaders are electrically conductive channels of ionized gas that propagate through, or are otherwise attracted to, regions with a charge opposite of that of the leader tip. The negative end of the bidirectional leader fills a positive charge region, also called a well, inside the cloud while the positive end fills a negative charge well. Leaders often split, forming branches in a tree-like pattern. In addition, negative and some positive leaders travel in a discontinuous fashion, in a process called "stepping". The resulting jerky movement of the leaders can be readily observed in slow-motion videos of lightning flashes.
It is possible for one end of the leader to fill the oppositely-charged well entirely while the other end is still active. When this happens, the leader end which filled the well may propagate outside of the thundercloud and result in either a cloud-to-air flash or a cloud-to-ground flash. In a typical cloud-to-ground flash, a bidirectional leader initiates between the main negative and lower positive charge regions in a thundercloud. The weaker positive charge region is filled quickly by the negative leader which then propagates toward the inductively-charged ground.
The positively and negatively charged leaders proceed in opposite directions, positive upwards within the cloud and negative towards the earth. Both ionic channels proceed, in their respective directions, in a number of successive spurts. Each leader "pools" ions at the leading tips, shooting out one or more new leaders, momentarily pooling again to concentrate charged ions, then shooting out another leader. The negative leader continues to propagate and split as it heads downward, often speeding up as it gets closer to the Earth's surface.
About 90% of ionic channel lengths between "pools" are approximately in length. The establishment of the ionic channel takes a comparatively long amount of time (hundreds of milliseconds) in comparison to the resulting discharge, which occurs within a few dozen microseconds. The electric current needed to establish the channel, measured in the tens or hundreds of amperes, is dwarfed by subsequent currents during the actual discharge.
Initiation of the lightning leader is not well understood. The electric field strength within the thundercloud is not typically large enough to initiate this process by itself. Many hypotheses have been proposed. One hypothesis postulates that showers of relativistic electrons are created by cosmic rays and are then accelerated to higher velocities via a process called runaway breakdown. As these relativistic electrons collide and ionize neutral air molecules, they initiate leader formation. Another hypothesis involves locally enhanced electric fields being formed near elongated water droplets or ice crystals. Percolation theory, especially for the case of biased percolation, describes random connectivity phenomena, which produce an evolution of connected structures similar to that of lightning strikes. A streamer avalanche model has recently been favored by observational data taken by LOFAR during storms.
Upward streamers
When a stepped leader approaches the ground, the presence of opposite charges on the ground enhances the strength of the electric field. The electric field is strongest on grounded objects whose tops are closest to the base of the thundercloud, such as trees and tall buildings. If the electric field is strong enough, a positively charged ionic channel, called a positive or upward streamer, can develop from these points. This was first theorized by Heinz Kasemir.
As negatively charged leaders approach, increasing the localized electric field strength, grounded objects already experiencing corona discharge will exceed a threshold and form upward streamers.
Attachment
Once a downward leader connects to an available upward leader, a process referred to as attachment, a low-resistance path is formed and discharge may occur. Photographs have been taken in which unattached streamers are clearly visible. The unattached downward leaders are also visible in branched lightning, none of which are connected to the earth, although it may appear they are. High-speed videos can show the attachment process in progress.
Discharge
Return stroke
Once a conductive channel bridges the air gap between the negative charge excess in the cloud and the positive surface charge excess below, there is a large drop in resistance across the lightning channel. Electrons accelerate rapidly as a result in a zone beginning at the point of attachment, which expands across the entire leader network at up to one third of the speed of light. This is the "return stroke" and it is the most luminous and noticeable part of the lightning discharge.
A large electric charge flows along the plasma channel, from the cloud to the ground, neutralising the positive ground charge as electrons flow away from the strike point to the surrounding area. This huge surge of current creates large radial voltage differences along the surface of the ground. Called step potentials, they are responsible for more injuries and deaths in groups of people or of other animals than the strike itself. Electricity takes every path available to it.
Such step potentials will often cause current to flow through one leg and out another, electrocuting an unlucky human or animal standing near the point where the lightning strikes.
The electric current of the return stroke averages 30 kiloamperes for a typical negative CG flash, often referred to as "negative CG" lightning. In some cases, a ground-to-cloud (GC) lightning flash may originate from a positively charged region on the ground below a storm. These discharges normally originate from the tops of very tall structures, such as communications antennas. The rate at which the return stroke current travels has been found to be around 100,000 km/s (one-third of the speed of light).
The massive flow of electric current occurring during the return stroke combined with the rate at which it occurs (measured in microseconds) rapidly superheats the completed leader channel, forming a highly electrically conductive plasma channel. The core temperature of the plasma during the return stroke may exceed , causing it to radiate with a brilliant, blue-white color. Once the electric current stops flowing, the channel cools and dissipates over tens or hundreds of milliseconds, often disappearing as fragmented patches of glowing gas. The nearly instantaneous heating during the return stroke causes the air to expand explosively, producing a powerful shock wave which is heard as thunder.
Re-strike
High-speed videos (examined frame-by-frame) show that most negative CG lightning flashes are made up of 3 or 4 individual strokes, though there may be as many as 30.
Each re-strike is separated by a relatively large amount of time, typically 40 to 50 milliseconds, as other charged regions in the cloud are discharged in subsequent strokes. Re-strikes often cause a noticeable "strobe light" effect.
To understand why multiple return strokes utilize the same lightning channel, one needs to understand the behavior of positive leaders, which a typical ground flash effectively becomes following the negative leader's connection with the ground. Positive leaders decay more rapidly than negative leaders do. For reasons not well understood, bidirectional leaders tend to initiate on the tips of the decayed positive leaders in which the negative end attempts to re-ionize the leader network. These leaders, also called recoil leaders, usually decay shortly after their formation. When they do manage to make contact with a conductive portion of the main leader network, a return stroke-like process occurs and a dart leader travels across all or a portion of the length of the original leader. The dart leaders making connections with the ground are what cause a majority of subsequent return strokes.
Each successive stroke is preceded by intermediate dart leader strokes that have a faster rise time but lower amplitude than the initial return stroke. Each subsequent stroke usually re-uses the discharge channel taken by the previous one, but the channel may be offset from its previous position as wind displaces the hot channel.
Since recoil and dart leader processes do not occur on negative leaders, subsequent return strokes very seldom utilize the same channel on positive ground flashes which are explained later in the article.
Transient currents during flash
The electric current within a typical negative CG lightning discharge rises very quickly to its peak value in 1–10 microseconds, then decays more slowly over 50–200 microseconds. The transient nature of the current within a lightning flash results in several phenomena that need to be addressed in the effective protection of ground-based structures. Rapidly changing currents tend to travel on the surface of a conductor, in what is called the skin effect, unlike direct currents, which "flow-through" the entire conductor like water through a hose. Hence, conductors used in the protection of facilities tend to be multi-stranded, with small wires woven together. This increases the total bundle surface area in inverse proportion to the individual strand radius, for a fixed total cross-sectional area.
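The surface-area claim can be checked numerically (a sketch; the total cross-sectional area and strand counts are arbitrary example values):

```python
import math

# For a fixed total cross-sectional area A split into n equal strands, the strand
# radius is r = sqrt(A / (n * pi)) and the total surface perimeter is n * 2 * pi * r,
# which scales as 1/r (equivalently, as the square root of the strand count).
A = 50e-6  # total cross-sectional area in square metres (example value)
for n in (1, 7, 19, 37):
    r = math.sqrt(A / (n * math.pi))
    perimeter = n * 2 * math.pi * r
    print(f"{n:2d} strands: r = {r * 1e3:.2f} mm, total perimeter = {perimeter * 1e3:.1f} mm")
```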
The rapidly changing currents also create electromagnetic pulses (EMPs) that radiate outward from the ionic channel. This is a characteristic of all electrical discharges. The radiated pulses rapidly weaken as their distance from the origin increases. However, if they pass over conductive elements such as power lines, communication lines, or metallic pipes, they may induce a current which travels outward to its termination. The surge current is inversely related to the surge impedance: the higher in impedance, the lower the current. This is the surge that, more often than not, results in the destruction of delicate electronics, electrical appliances, or electric motors. Devices known as surge protectors (SPD) or transient voltage surge suppressors (TVSS) attached in parallel with these lines can detect the lightning flash's transient irregular current, and, through alteration of its physical properties, route the spike to an attached earthing ground, thereby protecting the equipment from damage.
Types
Three primary types of lightning are defined by the "starting" and "ending" points of a flash channel.
Intra-cloud (IC) or in-cloud lightning occurs within a single thundercloud unit.
Cloud-to-cloud (CC) or inter-cloud lightning starts and ends between two different "functional" thundercloud units.
Cloud-to-ground (CG) lightning primarily originates in the thundercloud and terminates on an Earth surface, but may also occur in the reverse direction, that is, ground to cloud.
There are variations of each type, such as "positive" versus "negative" CG flashes, that have different physical characteristics common to each which can be measured. Different common names used to describe a particular lightning event may be attributed to the same or to different events.
Cloud to ground (CG)
Cloud-to-ground (CG) lightning is a lightning discharge between a thundercloud and the ground. It is initiated by a stepped leader moving down from the cloud, which is met by a streamer moving up from the ground.
CG is the least common, but best understood of all types of lightning. It is easier to study scientifically because it terminates on a physical object, namely the ground, and lends itself to being measured by instruments on the ground. Of the three primary types of lightning, it poses the greatest threat to life and property, since it terminates on the ground or "strikes".
The overall discharge, termed a flash, is composed of a number of processes such as preliminary breakdown, stepped leaders, connecting leaders, return strokes, dart leaders, and subsequent return strokes. The conductivity of the electrical ground, be it soil, fresh water, or salt water, may affect the lightning discharge rate and thus visible characteristics.
Positive and negative lightning
Cloud-to-ground (CG) lightning is either positive or negative, as defined by the direction of the conventional electric current between cloud and ground. Most CG lightning is negative, meaning that a negative charge is transferred to ground and electrons travel downward along the lightning channel (conventionally the current flows from the ground to the cloud). The reverse happens in a positive CG flash, where electrons travel upward along the lightning channel and a positive charge is transferred to the ground (conventionally the current flows from the cloud to the ground). Positive lightning is less common than negative lightning and on average makes up less than 5% of all lightning strikes.
There are six different mechanisms theorized to result in the formation of positive lightning.
Vertical wind shear displacing the upper positive charge region of a thundercloud, exposing it to the ground below.
The loss of lower charge regions in the dissipating stage of a thunderstorm, leaving the primary positive charge region.
A complex arrangement of charge regions in a thundercloud, effectively resulting in an inverted charge structure in which the main negative charge region is above the main positive charge region instead of beneath it.
An unusually large lower positive charge region in the thundercloud.
Cutoff of an extended negative leader from its origin which creates a new bidirectional leader in which the positive end strikes the ground, commonly seen in anvil-crawler spider flashes.
The initiation of a downward positive branch from an IC lightning flash.
Contrary to popular belief, positive lightning flashes do not necessarily originate from the anvil or the upper positive charge region and strike a rain-free area outside of the thunderstorm. This belief is based on the outdated idea that lightning leaders are unipolar and originate from their respective charge region.
Positive lightning strikes tend to be much more intense than their negative counterparts. An average bolt of negative lightning carries an electric current of 30,000 amperes (30 kA), and transfers 15 C (coulombs) of electric charge and 1 gigajoule of energy. Large bolts of positive lightning can carry up to 120 kA and 350 C. The average positive ground flash has roughly double the peak current of a typical negative flash, and can produce peak currents up to 400 kA and charges of several hundred coulombs. Furthermore, positive ground flashes with high peak currents are commonly followed by long continuing currents, a correlation not seen in negative ground flashes.
As a result of their greater power, positive lightning strikes are considerably more dangerous than negative strikes. Positive lightning produces both higher peak currents and longer continuing currents, making them capable of heating surfaces to much higher levels which increases the likelihood of a fire being ignited. The long distances positive lightning can propagate through clear air explains why they are known as "bolts from the blue", giving no warning to observers.
Despite the popular misconception that these are positive lightning strikes due to them seemingly originating from the positive charge region, observations have shown that these are in fact negative flashes. They begin as IC flashes within the cloud, the negative leader then exits the cloud from the positive charge region before propagating through clear air and striking the ground some distance away.
Positive lightning has also been shown to trigger the occurrence of upward lightning flashes from the tops of tall structures and is largely responsible for the initiation of sprites several tens of kilometers above ground level. Positive lightning tends to occur more frequently in winter storms, as with thundersnow, during intense tornadoes and in the dissipation stage of a thunderstorm. Huge quantities of extremely low frequency (ELF) and very low frequency (VLF) radio waves are also generated.
Cloud to cloud (CC) and intra-cloud (IC)
Lightning discharges may occur between areas of cloud without contacting the ground. When it occurs between two separate clouds, it is known as cloud-to-cloud (CC) or inter-cloud lightning; when it occurs between areas of differing electric potential within a single cloud, it is known as intra-cloud (IC) lightning. IC lightning is the most frequently occurring type.
IC lightning most commonly occurs between the upper anvil portion and lower reaches of a given thunderstorm. This lightning can sometimes be observed at great distances at night as so-called "sheet lightning". In such instances, the observer may see only a flash of light without hearing any thunder.
Another term used for cloud–cloud or cloud–cloud–ground lightning is "Anvil Crawler", due to the habit of charge, typically originating beneath or within the anvil and scrambling through the upper cloud layers of a thunderstorm, often generating dramatic multiple branch strokes. These are usually seen as a thunderstorm passes over the observer or begins to decay. The most vivid crawler behavior occurs in well developed thunderstorms that feature extensive rear anvil shearing.
Effects
Lightning strike
Effects on objects
Objects struck by lightning experience heat and magnetic forces of great magnitude. The heat created by lightning currents travelling through a tree may vaporize its sap, causing a steam explosion that bursts the trunk. As lightning travels through sandy soil, the soil surrounding the plasma channel may melt, forming tubular structures called fulgurites.
Effects on buildings and vehicles
Buildings or tall structures hit by lightning may be damaged as the lightning seeks unimpeded paths to the ground. By safely conducting a lightning strike to the ground, a lightning protection system, usually incorporating at least one lightning rod, can greatly reduce the probability of severe property damage.
Aircraft are highly susceptible to being struck due to their metallic fuselages, but lightning strikes are generally not dangerous to them. Due to the conductive properties of aluminium alloy, the fuselage acts as a Faraday cage. Present day aircraft are built to be safe from a lightning strike and passengers will generally not even know that it has happened.
Effects on animals
Although 90 percent of people struck by lightning survive, animals, including humans, struck by lightning may suffer severe injury due to internal organ and nervous system damage.
Other effects
Lightning serves an important role in the nitrogen cycle by oxidizing diatomic nitrogen in the air into nitrates which are deposited by rain and can fertilize the growth of plants and other organisms.
Thunder
Because the electrostatic discharge of terrestrial lightning superheats the air to plasma temperatures along the length of the discharge channel in a short duration, kinetic theory dictates gaseous molecules undergo a rapid increase in pressure and thus expand outward from the lightning creating a shock wave audible as thunder. Since the sound waves propagate not from a single point source but along the length of the lightning's path, the sound origin's varying distances from the observer can generate a rolling or rumbling effect. Perception of the sonic characteristics is further complicated by factors such as the irregular and possibly branching geometry of the lightning channel, by acoustic echoing from terrain, and by the usually multiple-stroke characteristic of the lightning strike.
Light travels at about 300,000 km/s, and sound travels through air at about 343 m/s. An observer can approximate the distance to the strike by timing the interval between the visible lightning and the audible thunder it generates. A lightning flash preceding its thunder by one second would be at a distance of approximately 343 m; a delay of three seconds would indicate a distance of about 1 km (3 × 343 m), and a delay of five seconds a distance of approximately 1.7 km (5 × 343 m). Consequently, a lightning strike observed at a very close distance will be accompanied by a sudden clap of thunder, with almost no perceptible time lapse, possibly accompanied by the smell of ozone (O3).
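This flash-to-bang estimate is simple enough to express directly (a sketch; it assumes a sound speed of 343 m/s, i.e. dry air near 20 °C, and neglects the light travel time):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees Celsius

def strike_distance_m(delay_seconds: float) -> float:
    """Approximate distance to a lightning strike from the flash-to-thunder delay."""
    # Light arrives essentially instantly, so the delay is dominated by sound travel time.
    return SPEED_OF_SOUND * delay_seconds

for delay in (1, 3, 5):
    print(f"{delay} s delay -> about {strike_distance_m(delay):.0f} m")
# 1 s -> 343 m, 3 s -> 1029 m, 5 s -> 1715 m
```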
Lightning at a sufficient distance may be seen and not heard; there is data that a lightning storm can be seen at over whereas the thunder travels about . Anecdotally, there are many examples of people saying 'the storm was directly overhead or all-around and yet there was no thunder'. Since thunderclouds can be up to 20 km high, lightning occurring high up in the cloud may appear close but is actually too far away to produce noticeable thunder.
Radio
Lightning discharges generate radio-frequency pulses which can be received thousands of kilometres from their source as radio atmospheric signals and whistlers.
High-energy radiation
The production of X-rays by a bolt of lightning was predicted as early as 1925 by C.T.R. Wilson, but no evidence was found until 2001/2002, when researchers at the New Mexico Institute of Mining and Technology detected X-ray emissions from an induced lightning strike along a grounded wire trailed behind a rocket shot into a storm cloud. In the same year University of Florida and Florida Tech researchers used an array of electric field and X-ray detectors at a lightning research facility in North Florida to confirm that natural lightning makes X-rays in large quantities during the propagation of stepped leaders. The cause of the X-ray emissions is still a matter for research, as the temperature of lightning is too low to account for the X-rays observed.
A number of observations by space-based telescopes have revealed even higher energy gamma ray emissions, the so-called terrestrial gamma-ray flashes (TGFs). These observations pose a challenge to current theories of lightning, especially with the recent discovery of the clear signatures of antimatter produced in lightning. Recent research has shown that secondary species, produced by these TGFs, such as electrons, positrons, neutrons or protons, can gain energies of up to several tens of MeV.
Ozone and nitrogen oxides
The very high temperatures generated by lightning lead to significant local increases in ozone and oxides of nitrogen. Each lightning flash in temperate and sub-tropical areas produces 7 kg of NOx on average. In the troposphere, the effect of lightning can increase NOx by 90% and ozone by 30%.
Volcanic
Volcanic activity produces lightning-friendly conditions in multiple ways. The enormous quantity of pulverized material and gases explosively ejected into the atmosphere creates a dense plume of particles. The ash density and constant motion within the volcanic plume produces charge by frictional interactions (triboelectrification), resulting in very powerful and very frequent flashes as the cloud attempts to neutralize itself. Due to the extensive solid material (ash) content, unlike the water rich charge generating zones of a normal thundercloud, it is often called a dirty thunderstorm.
Powerful and frequent flashes have been witnessed in the volcanic plume as far back as the eruption of Mount Vesuvius in AD 79 by Pliny The Younger.
Likewise, vapors and ash originating from vents on the volcano's flanks may produce more localized and smaller flashes upwards of 2.9 km long.
Small, short duration sparks, recently documented near newly extruded magma, attest to the material being highly charged prior to even entering the atmosphere.
If the volcanic ash plume rises to freezing temperatures, ice particles form and collide with ash particles to cause electrification. Lightning can be detected in any explosion but the causation of additional electrification from ice particles in ash can lead to a stronger electrical field and a higher rate of detectable lightning. Lightning is also used as a volcano monitoring tool for detecting hazardous eruptions.
Fire lightning
Intense forest fires, such as those seen in the 2019–20 Australian bushfire season, can create their own weather systems that can produce lightning and other weather phenomena. Intense heat from a fire causes air to rapidly rise within the smoke plume, causing the formation of pyrocumulonimbus clouds. Cooler air is drawn in by this turbulent, rising air, helping to cool the plume. The rising plume is further cooled by the lower atmospheric pressure at high altitude, allowing the moisture in it to condense into cloud. Pyrocumulonimbus clouds form in an unstable atmosphere. These weather systems can produce dry lightning, fire tornadoes, intense winds, and dirty hail.
Extraterrestrial
Lightning has been observed within the atmospheres of other planets, such as Jupiter, Saturn, and probably Uranus and Neptune. Lightning on Jupiter is far more energetic than on Earth, despite seeming to be generated via the same mechanism. Recently, a new type of lightning was detected on Jupiter, thought to originate from "mushballs" including ammonia. On Saturn lightning, initially referred to as "Saturn Electrostatic Discharge", was discovered by the Voyager 1 mission.
Lightning on Venus has been a controversial subject after decades of study. During the Soviet Venera and U.S. Pioneer missions of the 1970s and 1980s, signals suggesting lightning may be present in the upper atmosphere were detected. The short Cassini–Huygens mission fly-by of Venus in 1999 detected no signs of lightning, but radio pulses recorded by the spacecraft Venus Express (which began orbiting Venus in April 2006) may originate from lightning on Venus.
Human-related phenomena
Airplane contrails have also been observed to influence lightning to a small degree. The water vapor-dense contrails of airplanes may provide a lower resistance pathway through the atmosphere having some influence upon the establishment of an ionic pathway for a lightning flash to follow.
Rocket exhaust plumes provided a pathway for lightning when it was witnessed striking the Apollo 12 rocket shortly after takeoff.
Thermonuclear explosions, by providing extra material for electrical conduction and a very turbulent localized atmosphere, have been seen triggering lightning flashes within the mushroom cloud. In addition, intense gamma radiation from large nuclear explosions may develop intensely charged regions in the surrounding air through Compton scattering. The intensely charged space charge regions create multiple clear-air lightning discharges shortly after the device detonates.
Scientific study
The science of lightning is called fulminology.
Properties
Lightning causes thunder, a sound from the shock wave which develops as gases in the vicinity of the discharge heat suddenly to very high temperatures. It is often heard a few seconds after the lightning itself. Thunder is heard as a rolling, gradually dissipating rumble because the sound from different portions of a long stroke arrives at slightly different times.
When the local electric field exceeds the dielectric strength of damp air (about 3 MV/m), electrical discharge results in a strike, often followed by commensurate discharges branching from the same path. Mechanisms that cause the charges to build up to lightning are still a matter of scientific investigation. A 2016 study confirmed dielectric breakdown is involved. Lightning may be caused by the circulation of warm moisture-filled air through electric fields. Ice or water particles then accumulate charge as in a Van de Graaff generator.
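For a sense of scale, the breakdown field quoted above implies very large potential differences across even modest gaps (a rough sketch; the gap lengths are illustrative, and real discharges initiate at lower average fields because of local field enhancement):

```python
BREAKDOWN_FIELD = 3e6  # V/m, approximate dielectric strength of damp air

# Potential difference needed to break down a uniform air gap of the given length.
for gap_m in (1.0, 100.0, 1000.0):
    megavolts = BREAKDOWN_FIELD * gap_m / 1e6
    print(f"{gap_m:6.0f} m gap -> about {megavolts:7.0f} MV")
```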
Researchers at the University of Florida found that the final one-dimensional speeds of 10 flashes observed were between 1.0 × 10^5 and 1.4 × 10^6 m/s, with an average of 4.4 × 10^5 m/s.
Detection and monitoring
The earliest detector invented to warn of the approach of a thunderstorm was the lightning bell. Benjamin Franklin installed one such device in his house. The detector was based on an electrostatic device called the 'electric chimes' invented by Andrew Gordon in 1742.
Lightning discharges generate a wide range of electromagnetic radiations, including radio-frequency pulses. The times at which a pulse from a given lightning discharge arrives at several receivers can be used to locate the source of the discharge with a precision on the order of metres. The United States federal government has constructed a nationwide grid of such lightning detectors, allowing lightning discharges to be tracked in real time throughout the continental U.S.
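The time-of-arrival idea behind such networks can be sketched in a few lines: given the receiver positions and the times at which the same pulse arrived, the source location and emission time are found by least squares (the station layout and timings below are invented for illustration, not taken from any real network):

```python
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8  # propagation speed of the radio pulse in m/s

# Hypothetical receiver coordinates (metres) and simulated arrival times (seconds).
stations = np.array([[0.0, 0.0], [50e3, 0.0], [0.0, 60e3], [40e3, 70e3]])
true_source, true_t0 = np.array([22e3, 31e3]), 1.0e-3
arrivals = true_t0 + np.linalg.norm(stations - true_source, axis=1) / C

def residuals(params):
    x, y, t0 = params
    predicted = t0 + np.linalg.norm(stations - np.array([x, y]), axis=1) / C
    return predicted - arrivals

fit = least_squares(residuals, x0=[10e3, 10e3, 0.0])
print(fit.x)  # recovers roughly [22000, 31000, 0.001]
```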
In addition, Blitzortung (a private global detection system that consists of over 500 detection stations owned and operated by hobbyists/volunteers) provides near real-time lightning maps at .
The Earth-ionosphere waveguide traps electromagnetic VLF and ELF waves. Electromagnetic pulses transmitted by lightning strikes propagate within that waveguide. The waveguide is dispersive, which means that the group velocity of these pulses depends on frequency. The difference in the group time delay of a lightning pulse at adjacent frequencies is proportional to the distance between transmitter and receiver. Together with direction-finding methods, this allows lightning strikes to be located up to distances of 10,000 km from their origin. Moreover, the eigenfrequencies of the Earth-ionosphere waveguide, the Schumann resonances at about 7.5 Hz, are used to determine global thunderstorm activity.
In addition to ground-based lightning detection, several instruments aboard satellites have been constructed to observe lightning distribution. These include the Optical Transient Detector (OTD), aboard the OrbView-1 satellite launched on April 3, 1995, and the subsequent Lightning Imaging Sensor (LIS) aboard TRMM launched on November 28, 1997.
Starting in 2016, the National Oceanic and Atmospheric Administration launched Geostationary Operational Environmental Satellite–R Series (GOES-R) weather satellites outfitted with Geostationary Lightning Mapper (GLM) instruments which are near-infrared optical transient detectors that can detect the momentary changes in an optical scene, indicating the presence of lightning. The lightning detection data can be converted into a real-time map of lightning activity across the Western Hemisphere; this mapping technique has been implemented by the United States National Weather Service.
In 2022, EUMETSAT planned to launch the Lightning Imager (MTG-I LI) on board Meteosat Third Generation. This will complement NOAA's GLM. MTG-I LI will cover Europe and Africa and will include products on events, groups and flashes.
Artificially triggered
Rocket-triggered lightning can be "triggered" by launching specially designed rockets trailing spools of wire into thunderstorms. The wire unwinds as the rocket ascends, creating an elevated ground that can attract descending leaders. If a leader attaches, the wire provides a low-resistance pathway for a lightning flash to occur. The wire is vaporized by the return current flow, creating a straight lightning plasma channel in its place. This method allows for scientific research of lightning to occur under a more controlled and predictable manner.
The International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, Florida typically uses rocket triggered lightning in their research studies.
Laser-triggered
Since the 1970s, researchers have attempted to trigger lightning strikes by means of infrared or ultraviolet lasers, which create a channel of ionized gas through which the lightning would be conducted to ground. Such triggering of lightning is intended to protect rocket launching pads, electric power facilities, and other sensitive targets.
In New Mexico, U.S., scientists tested a new terawatt laser which provoked lightning. Scientists fired ultra-fast pulses from an extremely powerful laser thus sending several terawatts into the clouds to call down electrical discharges in storm clouds over the region. The laser beams sent from the laser make channels of ionized molecules known as filaments. Before the lightning strikes earth, the filaments lead electricity through the clouds, playing the role of lightning rods. Researchers generated filaments that lived a period too short to trigger a real lightning strike. Nevertheless, a boost in electrical activity within the clouds was registered. According to the French and German scientists who ran the experiment, the fast pulses sent from the laser will be able to provoke lightning strikes on demand. Statistical analysis showed that their laser pulses indeed enhanced the electrical activity in the thundercloud where it was aimed—in effect they generated small local discharges located at the position of the plasma channels.
Physical manifestations
Magnetism
The movement of electrical charges produces a magnetic field (see electromagnetism). The intense currents of a lightning discharge create a fleeting but very strong magnetic field. Where the lightning current path passes through rock, soil, or metal these materials can become permanently magnetized. This effect is known as lightning-induced remanent magnetism, or LIRM. These currents follow the least resistive path, often horizontally near the surface but sometimes vertically, where faults, ore bodies, or ground water offers a less resistive path. One theory suggests that lodestones, natural magnets encountered in ancient times, were created in this manner.
Lightning-induced magnetic anomalies can be mapped in the ground, and analysis of magnetized materials can confirm lightning was the source of the magnetization and provide an estimate of the peak current of the lightning discharge.
Research at the University of Innsbruck has calculated that magnetic fields generated by plasma may induce hallucinations in subjects located within of a severe lightning storm, an effect similar to that produced by transcranial magnetic stimulation (TMS).
Solar wind and cosmic rays
Some high-energy cosmic rays produced by supernovas, as well as solar particles from the solar wind, enter the atmosphere and electrify the air, which may create pathways for lightning bolts.
Lightning and climate change
Due to the low resolution of global climate models, accurately representing lightning in these climate models is difficult, largely due to their inability to simulate the convection and cloud ice fundamental to lightning formation. Research from the Future Climate for Africa programme demonstrates that using a convection-permitting model over Africa can more accurately capture convective thunderstorms and the distribution of ice particles. This research indicates climate change may increase the total amount of lightning only slightly: the total number of lightning days per year decreases, while more cloud ice and stronger convection leads to more lightning strikes occurring on days when lightning does occur.
A study from the University of Washington looked at lightning activity in the Arctic from 2010 to 2020. The ratio of Arctic summertime strokes to total global strokes was observed to be increasing with time, indicating that the region is becoming more influenced by lightning. The fraction of strokes above 65 degrees north was found to be increasing linearly with the NOAA global temperature anomaly, and grew by a factor of 3 as the anomaly increased from 0.65 to 0.95 °C.
Paleolightning
In culture and religion
Religion and mythology
In many cultures, lightning has been viewed as a sign or part of a deity or a deity in and of itself. These include the Greek god Zeus, the Aztec god Tlaloc, the Mayan God K, Slavic mythology's Perun, the Baltic Pērkons/Perkūnas, Thor in Norse mythology, Ukko in Finnish mythology, the Hindu god Indra, the Yoruba god Sango, Illapa in Inca mythology and the Shinto god Raijin. The ancient Etruscans produced guides to brontoscopic and fulgural divination of the future based on the omens supposedly displayed by thunder or lightning occurring on particular days of the year or in particular places. Such use of thunder and lightning in divination is also known as ceraunoscopy, a kind of aeromancy. In the traditional religion of the African Bantu tribes, lightning is a sign of the ire of the gods. Scriptures in Judaism, Islam and Christianity also ascribe supernatural importance to lightning. In Christianity, the Second Coming of Jesus is compared to lightning.
In popular culture
Although sometimes used figuratively, the idea that lightning never strikes the same place twice is a common myth. In fact, lightning can, and often does, strike the same place more than once. Lightning in a thunderstorm is more likely to strike objects and spots that are more prominent or conductive. For instance, lightning strikes the Empire State Building in New York City on average 23 times per year.
In French and Italian, the expression for "Love at first sight" is coup de foudre and colpo di fulmine, respectively, which literally translated means "lightning strike". Some European languages have a separate word for lightning which strikes the ground (as opposed to lightning in general); often it is a cognate of the English word "rays". The name of Australia's most celebrated thoroughbred horse, Phar Lap, derives from the shared Zhuang and Thai word for lightning.
Political and military culture
The bolt of lightning in heraldry is called a thunderbolt and is shown as a zigzag with non-pointed ends. This symbol usually represents power and speed.
Some political parties use lightning flashes as a symbol of power, such as the People's Action Party in Singapore, the British Union of Fascists during the 1930s, and the National States' Rights Party in the United States during the 1950s. The Schutzstaffel, the paramilitary wing of the Nazi Party, used the Sig rune in their logo which symbolizes lightning. The German word Blitzkrieg, which means "lightning war", was a major offensive strategy of the German army during World War II.
The lightning bolt is a common insignia for military communications units throughout the world. A lightning bolt is also the NATO symbol for a signal asset.
Data of injuries and deaths
The deadliest single direct lightning strike occurred when 21 people died as they huddled for safety in a hut that was hit (1975, Rhodesia).
The deadliest single indirect lightning strike was the 1994 Dronka lightning strike in Egypt, in which 469 people died when lightning struck a set of oil tanks, causing burning oil to flood the town.
In the United States an average of 23 people died from lightning per year from 2012 to 2021.
See also
Apollo 12 – A Saturn V rocket that was struck by lightning shortly after liftoff.
Harvesting lightning energy
Keraunography
Keraunomedicine – medical study of lightning casualties
Lichtenberg figure
Lightning injury
Lightning-prediction system
Roy Sullivan – recognized by Guinness World Records as having been struck by lightning more recorded times than any other person
St. Elmo's fire
Upper-atmospheric lightning
Vela satellites – satellites which could record lightning superbolts
References
Citations
Sources
Further reading
External links
World Wide Lightning Location Network
Feynman's lecture on lightning
Articles containing video clips
Atmospheric electricity
Electric arcs
Electrical breakdown
Electrical phenomena
Terrestrial plasmas
Space plasmas
Storm
Weather hazards
Hazards of outdoor recreation
Venturi effect
The Venturi effect is the reduction in fluid pressure that results when a moving fluid speeds up as it flows through a constricted section (or choke) of a pipe. The Venturi effect is named after its discoverer, the 18th-century Italian physicist Giovanni Battista Venturi.
The effect has various engineering applications, as the reduction in pressure inside the constriction can be used both for measuring the fluid flow and for moving other fluids (e.g. in a vacuum ejector).
Background
In inviscid fluid dynamics, an incompressible fluid's velocity must increase as it passes through a constriction in accord with the principle of mass continuity, while its static pressure must decrease in accord with the principle of conservation of mechanical energy (Bernoulli's principle) or according to the Euler equations. Thus, any gain in kinetic energy a fluid may attain by its increased velocity through a constriction is balanced by a drop in pressure because of its loss in potential energy.
By measuring pressure, the flow rate can be determined, as in various flow measurement devices such as Venturi meters, Venturi nozzles and orifice plates.
Referring to the adjacent diagram, using Bernoulli's equation in the special case of steady, incompressible, inviscid flows (such as the flow of water or other liquid, or low-speed flow of gas) along a streamline, the theoretical pressure drop at the constriction is given by
p1 - p2 = (ρ/2)(v2² - v1²),
where ρ is the density of the fluid, v1 is the (slower) fluid velocity where the pipe is wider, and v2 is the (faster) fluid velocity where the pipe is narrower (as seen in the figure).
Choked flow
The limiting case of the Venturi effect is when a fluid reaches the state of choked flow, where the fluid velocity approaches the local speed of sound. When a fluid system is in a state of choked flow, a further decrease in the downstream pressure environment will not lead to an increase in velocity, unless the fluid is compressed.
The mass flow rate for a compressible fluid will increase with increased upstream pressure, which will increase the density of the fluid through the constriction (though the velocity will remain constant). This is the principle of operation of a de Laval nozzle. Increasing source temperature will also increase the local sonic velocity, thus allowing increased mass flow rate, but only if the nozzle area is also increased to compensate for the resulting decrease in density.
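The scaling described in this section can be illustrated with the standard ideal-gas relation for choked flow through a nozzle throat (a hedged sketch; the throat area, discharge coefficient, and gas properties are assumed example values, not taken from this article):

```python
import math

def choked_mass_flow(p0, T0, area, gamma=1.4, R=287.0, cd=1.0):
    """Ideal-gas choked mass flow rate in kg/s.
    p0, T0: upstream stagnation pressure (Pa) and temperature (K); area: throat area (m^2)."""
    return cd * area * p0 * math.sqrt(gamma / (R * T0)) * \
        (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

throat = 1e-4  # 1 cm^2 throat area (example)
print(choked_mass_flow(2e5, 300.0, throat))  # doubling upstream pressure doubles the flow...
print(choked_mass_flow(4e5, 300.0, throat))
print(choked_mass_flow(4e5, 600.0, throat))  # ...while a hotter source lowers it for a fixed throat
```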
Expansion of the section
The Bernoulli equation is invertible, and pressure should rise when a fluid slows down. Nevertheless, if there is an expansion of the tube section, turbulence will appear, and the theorem will not hold. In all experimental Venturi tubes, the pressure in the entrance is compared to the pressure in the middle section; the output section is never compared with them.
Experimental apparatus
Venturi tubes
The simplest apparatus is a tubular setup known as a Venturi tube or simply a Venturi (plural: "Venturis" or occasionally "Venturies"). Fluid flows through a length of pipe of varying diameter. To avoid undue aerodynamic drag, a Venturi tube typically has an entry cone of 30 degrees and an exit cone of 5 degrees.
Venturi tubes are often used in processes where permanent pressure loss is not tolerable and where maximum accuracy is needed in case of highly viscous liquids.
Orifice plate
Venturi tubes are more expensive to construct than simple orifice plates, and both function on the same basic principle. However, for any given differential pressure, orifice plates cause significantly more permanent energy loss.
Instrumentation and measurement
Both Venturi tubes and orifice plates are used in industrial applications and in scientific laboratories for measuring the flow rate of liquids.
Flow rate
A Venturi can be used to measure the volumetric flow rate, $Q$, using Bernoulli's principle.

Since $Q = v_1 A_1 = v_2 A_2$ and $p_1 - p_2 = \frac{\rho}{2}\left(v_2^2 - v_1^2\right)$,

then $Q = A_1 \sqrt{\frac{2}{\rho}\cdot\frac{p_1 - p_2}{\left(A_1/A_2\right)^2 - 1}} = A_2 \sqrt{\frac{2}{\rho}\cdot\frac{p_1 - p_2}{1 - \left(A_2/A_1\right)^2}}$

where $A_1$ and $A_2$ are the cross-sectional areas of the wide section and of the constriction, respectively.
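As a hedged illustration of the relation above, the following short Python sketch computes the ideal (loss-free) volumetric flow rate from a measured differential pressure; the pipe and throat diameters used here are arbitrary example values, not figures taken from the text.

```python
import math

def venturi_flow_rate(delta_p, rho, d_pipe, d_throat):
    """Ideal volumetric flow rate (m^3/s) through a Venturi from the measured
    pressure drop delta_p (Pa), fluid density rho (kg/m^3), and the pipe and
    throat diameters (m). Friction losses are neglected."""
    a1 = math.pi * d_pipe**2 / 4.0    # wide-section area
    a2 = math.pi * d_throat**2 / 4.0  # throat area
    return a2 * math.sqrt(2.0 * delta_p / (rho * (1.0 - (a2 / a1)**2)))

# Example: water (1000 kg/m^3), a 50 mm pipe necking to 25 mm, 2 kPa drop
print(venturi_flow_rate(2000.0, 1000.0, 0.050, 0.025))  # ~1.0e-3 m^3/s, i.e. ~1 L/s
```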
A Venturi can also be used to mix a liquid with a gas. If a pump forces the liquid through a tube connected to a system consisting of a Venturi to increase the liquid speed (the diameter decreases), a short piece of tube with a small hole in it, and last a Venturi that decreases speed (so the pipe gets wider again), the gas will be sucked in through the small hole because of changes in pressure. At the end of the system, a mixture of liquid and gas will appear. See aspirator and pressure head for discussion of this type of siphon.
Differential pressure
As fluid flows through a Venturi, the expansion and compression of the fluids cause the pressure inside the Venturi to change. This principle can be used in metrology for gauges calibrated for differential pressures. This type of pressure measurement may be more convenient, for example, to measure fuel or combustion pressures in jet or rocket engines.
The first large-scale Venturi meters to measure liquid flows were developed by Clemens Herschel, who used them to measure small and large flows of water and wastewater beginning at the end of the 19th century. While working for the Holyoke Water Power Company, Herschel developed the means for measuring these flows to determine the water power consumption of different mills on the Holyoke Canal System. He began developing the device in 1886, and two years later he described his invention of the Venturi meter to William Unwin in a letter dated June 5, 1888.
Compensation for temperature, pressure, and mass
Fundamentally, pressure-based meters measure kinetic energy density. Bernoulli's equation (used above) relates this to mass density and volumetric flow:
where constant terms are absorbed into k. Using the definitions of density, molar concentration, and molar mass, one can also derive mass flow or molar flow (i.e. standard volume flow):
However, measurements outside the design point must compensate for the effects of temperature, pressure, and molar mass on density and concentration. The ideal gas law is used to relate actual values to design values:
Substituting these two relations into the pressure-flow equations above yields the fully compensated flows:
Q, m, or n are easily isolated by dividing and taking the square root. Note that pressure-, temperature-, and mass-compensation is required for every flow, regardless of the end units or dimensions. Also we see the relations:
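One compact way to see the compensation described above, assuming a lumped meter constant $k$ that absorbs the geometric factors and assuming ideal-gas behaviour (both assumptions of this sketch rather than notation taken from a specific meter), is:

```latex
\Delta p = k\,\rho\,Q^{2} = \frac{k\,\dot{m}^{2}}{\rho} = \frac{k\,M\,\dot{n}^{2}}{C},
\qquad \rho = \frac{pM}{RT}, \qquad C = \frac{p}{RT}
\\[4pt]
Q = \sqrt{\frac{\Delta p\,RT}{k\,pM}}, \qquad
\dot{m} = \sqrt{\frac{\Delta p\,pM}{k\,RT}}, \qquad
\dot{n} = \sqrt{\frac{\Delta p\,p}{k\,MRT}}, \qquad
\dot{m} = M\dot{n} = \rho Q
```

Each compensated flow depends explicitly on the measured pressure and temperature as well as the differential pressure, which is the point of the compensation.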
Examples
The Venturi effect may be observed or used in the following:
Machines
During underway replenishment, the helmsman of each ship must constantly steer away from the other ship due to the Venturi effect; otherwise they will collide.
Cargo eductors on oil product and chemical ship tankers
Inspirators mix air and flammable gas in grills, gas stoves and Bunsen burners
Water aspirators produce a partial vacuum using the kinetic energy from the faucet water pressure
Steam siphons use the kinetic energy from the steam pressure to create a partial vacuum
Atomizers disperse perfume or spray paint (i.e. from a spray gun or airbrush)
Carburetors use the effect to suck gasoline into an engine's intake air stream
Cylinder heads in piston engines contain multiple Venturi-like areas, such as the valve seat and the port entrance, although these are a byproduct of the geometry rather than part of the design intent, and any Venturi effect there serves no specific function.
Wine aerators infuse air into wine as it is poured into a glass
Protein skimmers filter saltwater aquaria
Automated pool cleaners use pressure-side water flow to collect sediment and debris
Clarinets use a reverse taper to speed the air down the tube, enabling better tone, response and intonation
The leadpipe of a trombone, affecting the timbre
Industrial vacuum cleaners use compressed air
Venturi scrubbers are used to clean flue gas emissions
Injectors (also called ejectors) are used to add chlorine gas to water treatment chlorination systems
Steam injectors use the Venturi effect and the latent heat of evaporation to deliver feed water to a steam locomotive boiler.
Sandblasting nozzles accelerate an air and media mixture
Bilge water can be emptied from a moving boat through a small waste gate in the hull. The air pressure inside the moving boat is greater than the pressure of the water sliding by beneath.
A scuba diving regulator uses the Venturi effect to assist maintaining the flow of gas once it starts flowing
In recoilless rifles to decrease the recoil of firing
The diffuser on an automobile
Race cars utilising ground effect to increase downforce and thus become capable of higher cornering speeds
Foam proportioners used to induct fire fighting foam concentrate into fire protection systems
Trompe air compressors entrain air into a falling column of water
The bolts in some brands of paintball markers
Low-speed wind tunnels can be considered very large Venturis because they take advantage of the Venturi effect to increase velocity and decrease pressure to simulate expected flight conditions.
Architecture
The Hawa Mahal of Jaipur also utilizes the Venturi effect by allowing cool air to pass through, making the whole area more pleasant during the high temperatures of summer.
Large cities where wind is forced between buildings - the gap between the Twin Towers of the original World Trade Center was an extreme example of the phenomenon, which made the ground level plaza notoriously windswept. In fact, some gusts were so high that pedestrian travel had to be aided by ropes.
In the south of Iraq, near the modern town of Nasiriyah, a 4000-year-old flume structure has been discovered at the ancient site of Girsu. This construction by the ancient Sumerians forced the contents of a nineteen kilometre canal through a constriction to enable the side-channeling of water off to agricultural lands from a higher origin than would have been the case without the flume. A recent dig by archaeologists from the British museum confirmed the finding.
Nature
In windy mountain passes, resulting in erroneous pressure altimeter readings
The mistral wind in southern France increases in speed through the Rhone valley.
See also
Joule–Thomson effect
Venturi flume
Parshall flume
References
External links
3D animation of the Differential Pressure Flow Measuring Principle (Venturi meter)
Use of the Venturi effect for gas pumps to know when to turn off (video)
Fluid dynamics | 0.763616 | 0.998174 | 0.762222 |
Gustav Kirchhoff | Gustav Robert Kirchhoff (; 12 March 1824 – 17 October 1887) was a German physicist and mathematician who contributed to the fundamental understanding of electrical circuits, spectroscopy, and the emission of black-body radiation by heated objects.
He coined the term black-body radiation in 1860.
Several different sets of concepts are named "Kirchhoff's laws" after him, which include Kirchhoff's circuit laws, Kirchhoff's law of thermal radiation, and Kirchhoff's law of thermochemistry.
The Bunsen–Kirchhoff Award for spectroscopy is named after Kirchhoff and his colleague, Robert Bunsen.
Life and work
Gustav Kirchhoff was born on 12 March 1824 in Königsberg, Prussia, the son of Friedrich Kirchhoff, a lawyer, and Johanna Henriette Wittke. His family were Lutherans in the Evangelical Church of Prussia. He graduated from the Albertus University of Königsberg in 1847 where he attended the mathematico-physical seminar directed by Carl Gustav Jacob Jacobi, Franz Ernst Neumann and Friedrich Julius Richelot. In the same year, he moved to Berlin, where he stayed until he received a professorship at Breslau. Later, in 1857, he married Clara Richelot, the daughter of his mathematics professor Richelot. The couple had five children. Clara died in 1869. He married Luise Brömmel in 1872.
Kirchhoff formulated his circuit laws, which are now ubiquitous in electrical engineering, in 1845, while he was still a student. He completed this study as a seminar exercise; it later became his doctoral dissertation. He was called to the University of Heidelberg in 1854, where he collaborated in spectroscopic work with Robert Bunsen. In 1857, he calculated that an electric signal in a resistanceless wire travels along the wire at the speed of light. He proposed his law of thermal radiation in 1859, and gave a proof in 1861. Together Kirchhoff and Bunsen invented the spectroscope, which Kirchhoff used to pioneer the identification of the elements in the Sun, showing in 1859 that the Sun contains sodium. He and Bunsen discovered caesium and rubidium in 1861. At Heidelberg he ran a mathematico-physical seminar, modelled on Franz Ernst Neumann's, with the mathematician Leo Koenigsberger. Among those who attended this seminar were Arthur Schuster and Sofia Kovalevskaya.
He contributed greatly to the field of spectroscopy by formalizing three laws that describe the spectral composition of light emitted by incandescent objects, building substantially on the discoveries of David Alter and Anders Jonas Ångström. In 1862, he was awarded the Rumford Medal for his researches on the fixed lines of the solar spectrum, and on the inversion of the bright lines in the spectra of artificial light. In 1875 Kirchhoff accepted the first chair dedicated specifically to theoretical physics at Berlin.
He also contributed to optics, carefully solving the wave equation to provide a solid foundation for Huygens' principle (and correct it in the process).
In 1864, he was elected as a member of the American Philosophical Society.
In 1884, he became foreign member of the Royal Netherlands Academy of Arts and Sciences.
Kirchhoff died in 1887, and was buried in the St Matthäus Kirchhof Cemetery in Schöneberg, Berlin (just a few meters from the graves of the Brothers Grimm). Leopold Kronecker is buried in the same cemetery.
Kirchhoff's circuit laws
Kirchhoff's first law is that the algebraic sum of currents in a network of conductors meeting at a point (or node) is zero. The second law is that in a closed circuit, the directed sum of the voltages around the loop is zero.
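As a hedged illustration (the source voltage and resistor values below are invented for the example), Kirchhoff's two circuit laws reduce a linear resistive network to a system of linear equations; here the current law is written at each node, Ohm's law is substituted for each branch current, and the system is solved with NumPy.

```python
import numpy as np

# Example network (all values assumed for illustration): a 9 V source feeds
# node 1 through R1; R2 ties node 1 to node 2; R3 and R4 tie nodes 1 and 2
# to ground. Unknowns are the node voltages V1 and V2.
V_s, R1, R2, R3, R4 = 9.0, 100.0, 220.0, 330.0, 470.0

# Kirchhoff's current law at each node, with currents from Ohm's law:
#   node 1: (V1 - V_s)/R1 + V1/R3 + (V1 - V2)/R2 = 0
#   node 2: (V2 - V1)/R2 + V2/R4 = 0
A = np.array([[1/R1 + 1/R3 + 1/R2, -1/R2],
              [-1/R2,               1/R2 + 1/R4]])
b = np.array([V_s / R1, 0.0])

V1, V2 = np.linalg.solve(A, b)
print(f"V1 = {V1:.3f} V, V2 = {V2:.3f} V")
```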
Kirchhoff's three laws of spectroscopy
A solid, liquid, or dense gas excited to emit light will radiate at all wavelengths and thus produce a continuous spectrum.
A low-density gas excited to emit light will do so at specific wavelengths, and this produces an emission spectrum.
If light composing a continuous spectrum passes through a cool, low-density gas, the result will be an absorption spectrum.
Kirchhoff did not know about the existence of energy levels in atoms. The existence of discrete spectral lines was known since Fraunhofer discovered them in 1814. And that the lines formed a discrete mathematical pattern was described by Johann Balmer in 1885. Joseph Larmor explained the splitting of the spectral lines in a magnetic field known as the Zeeman Effect by the oscillation of electrons. But these discrete spectral lines were not explained as electron transitions until the Bohr model of the atom in 1913, which helped lead to quantum mechanics.
Kirchhoff's law of thermal radiation
It was Kirchhoff's law of thermal radiation in which he proposed an unknown universal law for radiation that led Max Planck to the discovery of the quantum of action leading to quantum mechanics.
Kirchhoff's law of thermochemistry
Kirchhoff showed in 1858 that, in thermochemistry, the variation of the heat of a chemical reaction with temperature is given by the difference in heat capacity between products and reactants:

$\frac{d\,\Delta H}{dT} = \Delta C_p$ .
Integration of this equation permits the evaluation of the heat of reaction at one temperature from measurements at another temperature.
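Carrying out that integration, and treating $\Delta C_p$ as approximately constant over the temperature interval (an assumption of this sketch, adequate for modest intervals), gives the usual working form:

```latex
\Delta H(T_2) \;=\; \Delta H(T_1) \;+\; \int_{T_1}^{T_2} \Delta C_p \, dT
\;\approx\; \Delta H(T_1) \;+\; \Delta C_p \,\left(T_2 - T_1\right)
```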
Kirchhoff's theorem in graph theory
Kirchhoff also worked in the mathematical field of graph theory, in which he proved Kirchhoff's matrix tree theorem.
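A small sketch of the matrix tree theorem in practice (the example graph is an arbitrary choice for illustration): the number of spanning trees of a connected graph equals any cofactor of its Laplacian matrix.

```python
import numpy as np

def spanning_tree_count(adjacency):
    """Kirchhoff's matrix tree theorem: the number of spanning trees of a
    connected undirected graph equals any cofactor of its Laplacian."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A   # Laplacian = degree matrix - adjacency
    reduced = L[1:, 1:]              # delete one row and the matching column
    return round(np.linalg.det(reduced))

# Complete graph on 4 vertices: Cayley's formula gives 4^(4-2) = 16 trees.
K4 = [[0, 1, 1, 1],
      [1, 0, 1, 1],
      [1, 1, 0, 1],
      [1, 1, 1, 0]]
print(spanning_tree_count(K4))  # -> 16
```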
Works
Vorlesungen über mathematische Physik. 4 vols., B. G. Teubner, Leipzig 1876–1894.
Vol. 1: Mechanik. 1. Auflage, B. G. Teubner, Leipzig 1876 (online).
Vol. 2: Mathematische Optik. B. G. Teubner, Leipzig 1891 (Herausgegeben von Kurt Hensel, online).
Vol. 3: Electricität und Magnetismus. B. G. Teubner, Leipzig 1891 (Herausgegeben von Max Planck, online).
Vol. 4: Theorie der Wärme. B. G. Teubner, Leipzig 1894, Herausgegeben von Max Planck
See also
Kirchhoff equations
Kirchhoff integral theorem
Kirchhoff matrix
Kirchhoff stress tensor
Kirchhoff transformation
Kirchhoff's diffraction formula
Kirchhoff's perfect black bodies
Kirchhoff's theorem
Kirchhoff–Helmholtz integral
Kirchhoff–Love plate theory
Piola–Kirchhoff stress
Saint Venant–Kirchhoff model
Stokes–Kirchhoff attenuation formula
Circuit rank
Computational aeroacoustics
Flame emission spectroscopy
Spectroscope
Kirchhoff Institute of Physics
List of German inventors and discoverers
Notes
References
Bibliography
HathiTrust full text. Partial English translation available in Magie, William Francis, A Source Book in Physics (1963). Cambridge: Harvard University Press. p. 354-360.
Kirchhoff, Gustav (1860). “IV. Ueber das Verhältniß zwischen dem Emissionsvermögen und dem Absorptionsvermögen der Körper für Wärme und Licht,” Annalen der Physik 185(2), 275–301. (coinage of term “blackbody”) [On the relationship between the emissivity and the absorptivity of bodies for heat and light]
Further reading
Klaus Hentschel: Gustav Robert Kirchhoff und seine Zusammenarbeit mit Robert Wilhelm Bunsen, in: Karl von Meyenn (Hrsg.) Die Grossen Physiker, Munich: Beck, vol. 1 (1997), pp. 416–430, 475–477, 532–534.
Klaus Hentschel: Mapping the Spectrum. Techniques of Visual Representation in Research and Teaching, Oxford: OUP, 2002.
Kirchhoff's 1857 paper on the speed of electrical signals in a wire
External links
Open Library
1824 births
1887 deaths
Optical physicists
19th-century German inventors
Discoverers of chemical elements
Scientists from Königsberg
Spectroscopists
German fluid dynamicists
University of Königsberg alumni
Academic staff of the University of Breslau
Academic staff of Heidelberg University
Academic staff of the Humboldt University of Berlin
Honorary Fellows of the Royal Society of Edinburgh
Foreign members of the Royal Society
Foreign associates of the National Academy of Sciences
Members of the Royal Netherlands Academy of Arts and Sciences
Recipients of the Pour le Mérite (civil class)
Physicists from the Kingdom of Prussia
19th-century German physicists
Rare earth scientists
Fellows of the Royal Society of Edinburgh
Recipients of the Matteucci Medal
Members of the Göttingen Academy of Sciences and Humanities
Recipients of the Cothenius Medal | 0.766555 | 0.994332 | 0.762211 |
Oxford Calculators | The Oxford Calculators were a group of 14th-century thinkers, almost all associated with Merton College, Oxford; for this reason they were dubbed "The Merton School". These men took a strikingly logical and mathematical approach to philosophical problems.
The key "calculators", writing in the second quarter of the 14th century, were Thomas Bradwardine, William Heytesbury, Richard Swineshead and John Dumbleton.
Using the slightly earlier works of Walter Burley, Gerard of Brussels, and Nicole Oresme, these individuals expanded upon the concepts of 'latitudes' and what real world applications they could apply them to.
Science
The advances these men made were initially purely mathematical but later became relevant to mechanics. Using Aristotelian logic and physics, they studied and attempted to quantify physical and observable characteristics such as heat, force, color, density, and light. Aristotle believed that only length and motion could be quantified, but they used his own philosophy to show this to be untrue by calculating quantities such as temperature and power. Although they attempted to quantify these observable characteristics, their interests lay more in the philosophical and logical aspects than in the natural world. They used numbers to disagree philosophically and to argue the reasoning of "why" something worked the way it did, not only "how" it functioned.
Historian David C. Lindberg and professor Michael H. Shank in their 2013 book, Cambridge History of Science, Volume 2: Medieval Science, wrote:
Lawrence M. Principe wrote:
Mean Speed Theorem
The Oxford Calculators distinguished kinematics from dynamics, emphasizing kinematics and investigating instantaneous velocity. Their understanding of geometry let them represent a body in motion by geometrical figures; in particular, they recognized that a right triangle has the same area as a rectangle of equal base whose height is half the triangle's. This insight, together with their development of Al-Battani's work on trigonometry, led to the formulation of the mean speed theorem (though it was later credited to Galileo), which is also known as "The Law of Falling Bodies". A basic statement of the mean speed theorem is: a body moving with constant speed travels the same distance as a uniformly accelerated body in the same period of time, provided the constant speed is half the sum of the accelerated body's initial and final velocities. Its earliest known mention is found in Heytesbury's Rules for Solving Sophisms: a body uniformly accelerated or decelerated for a given time covers the same distance as it would if it were to travel for the same time uniformly with the speed of the middle instant of its motion, which is defined as its mean speed. Relative motion, also referred to as local motion, is motion measured relative to another object, with the values for acceleration, velocity, and position depending on a predetermined reference point.
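In modern algebraic notation (not the Calculators' own geometric argument), the theorem reduces to a one-line identity for a body accelerating uniformly from $v_0$ to $v_f = v_0 + at$:

```latex
s \;=\; v_0 t + \tfrac{1}{2} a t^{2}
  \;=\; \frac{v_0 + (v_0 + a t)}{2}\, t
  \;=\; \frac{v_0 + v_f}{2}\, t \;=\; \bar{v}\, t
```

so the distance covered equals that of a body moving uniformly at the mean speed $\bar{v}$.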
The mathematical physicist and historian of science Clifford Truesdell, wrote:
Boethian Theory
In Tractatus de proportionibus (1328), Bradwardine extended the theory of proportions of Eudoxus to anticipate the concept of exponential growth, later developed by the Bernoullis and Euler, with compound interest as a special case. Arguments for the mean speed theorem (above) require the modern concept of limit, so Bradwardine had to use the arguments of his day. Mathematician and mathematical historian Carl Benjamin Boyer writes, "Bradwardine developed the Boethian theory of double or triple or, more generally, what we would call 'n-tuple' proportion".
Boyer also writes that "the works of Bradwardine had contained some fundamentals of trigonometry". Yet "Bradwardine and his Oxford colleagues did not quite make the breakthrough to modern science." The most essential missing tool was algebra.
Bradwardine's Rule
Lindberg and Shank also wrote:

The initial goal of Bradwardine's Rule was to come up with a single rule in a general form that would show the relationship between moving and resisting powers and speed, while at the same time precluding motion when the moving power is less than or equal to the resisting power. Before Bradwardine decided to use his own theory of compounded ratios in his own rule, he considered and rejected four other opinions on the relationship between powers, resistances, and speeds. He then went on to use his own rule of compounded ratios, which says that the ratio of speeds follows the ratios of motive to resistive powers. By applying medieval ratio theory to a controversial topic in Aristotle's Physics, Bradwardine was able to make a simple, definite, and sophisticated mathematical rule for the relationship between speeds, powers, and resistances. Bradwardine's Rule was quickly accepted in the fourteenth century, first among his contemporaries at Oxford, where Richard Swineshead and John Dumbleton used it for solving sophisms, the logical and physical puzzles that were just beginning to assume an important place in the undergraduate arts curriculum.
Latitude of Forms
The Latitude of Forms is a topic on which many of the Oxford Calculators published volumes. Developed by Nicole Oresme, a "latitude" is an abstract concept of the range within which forms may vary. Before latitudes were introduced into mechanics, they were used in the medical and philosophical fields. The medical authors Galen and Avicenna can be credited with the origin of the concept. “Galen says, for instance, that there is a latitude of health which is divided into three parts, each in turn having some latitude. First, there is the latitude of healthy bodies, second the latitude of neither health nor sickness, and third the latitude of sickness.” The Calculators attempted to measure and explain these changes in latitude concretely and mathematically. John Dumbleton discusses latitudes in Part II and Part III of his work, the Summa. He is critical of earlier philosophers in Part II, as he believes latitudes are measurable and quantifiable, and later, in Part III of the Summa, he attempts to use latitudes to measure local motion. Roger Swineshead defines five latitudes for local motion: first, the latitude of local motion; second, the latitude of velocity of local motion; third, the latitude of slowness of local motion; fourth, the latitude of the acquisition of the latitude of local motion; and fifth, the latitude of the loss of the latitude of local motion. Each of these latitudes is infinite and is comparable to the velocity, acceleration, and deceleration of the local motion of an object.
People
Thomas Bradwardine
Thomas Bradwardine was born in 1290 in Sussex, England. Educated at Balliol College, Oxford, he earned various degrees. He was a secular cleric, a scholar, a theologian, a mathematician, and a physicist. He became chancellor of the diocese of London and Dean of St Paul's, as well as chaplain and confessor to Edward III. During his time at Oxford, he authored many books including: De Geometria Speculativa (printed in Paris, 1530), De Arithmetica Practica (printed in Paris, 1502), and De Proportionibus Velocitatum in Motibus (printed in Paris in 1495). Bradwardine furthered the study of using mathematics to explain physical reality. Drawing on the work of Robert Grosseteste, Robert Kilwardby and Roger Bacon, his work was in direct opposition to that of William of Ockham.
Aristotle suggested that velocity was proportional to force and inversely proportional to resistance, doubling the force would double the velocity but doubling the resistance would halve the velocity (V ∝ F/R). Bradwardine objected saying that this is not observed because the velocity does not equal zero when the resistance exceeds the force. Instead, he proposed a new theory that, in modern terms, would be written as (V ∝ log F/R), which was widely accepted until the late sixteenth century.
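In modern notation (a reconstruction; Bradwardine reasoned with ratios of ratios rather than logarithms), the contrast between the Aristotelian law he rejected and his own rule can be sketched as:

```latex
\text{Aristotle:}\quad V \propto \frac{F}{R}
\qquad\qquad
\text{Bradwardine:}\quad \frac{V_2}{V_1} = n
\;\Longleftrightarrow\;
\frac{F_2}{R_2} = \left(\frac{F_1}{R_1}\right)^{n},
\quad\text{i.e.}\quad V \propto \log\frac{F}{R}
```

Under Bradwardine's rule, doubling the speed requires squaring the ratio of force to resistance, and the speed correctly goes to zero as $F/R$ approaches 1.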
William Heytesbury
William Heytesbury was a bursar at Merton until the late 1330s and he administered the college properties in Northumberland. Later in his life he was a chancellor of Oxford. He was the first to discover the mean-speed theorem, later "The Law of Falling Bodies". Unlike Bradwardine's theory, the theorem, also known as "The Merton Rule" is a probable truth.
His most noted work was Regulae Solvendi Sophismata (Rules for Solving Sophisms). Sophisma is a statement which one can argue to be both true and false. The resolution of these arguments and determination of the real state of affairs forces one to deal with logical matters such as the analysis of the meaning of the statement in question, and the application of logical rules to specific cases. An example would be the statement, "The compound H2O is both a solid and a liquid". When the temperature is low enough this statement is true. But it may be argued and proven false at a higher temperature. In his time, this work was logically advanced.
He was a second-generation calculator. He built on Richard Kilvington's Sophismata and Bradwardine's Insolubilia. Later, his work went on to influence Peter of Mantua and Paul of Venice.
Richard Swineshead
Richard Swineshead was also an English mathematician, logician, and natural philosopher. The sixteenth-century polymath Girolamo Cardano placed him in the top-ten intellects of all time, alongside Archimedes, Aristotle, and Euclid.
He became a member of the Oxford calculators in 1344. His main work was a series of treatises written in 1350. This work earned him the title of "The Calculator". His treatises were named Liber Calculationum, which means "Book of Calculations". His book dealt in exhaustive detail with quantitative physics and he had over fifty variations of Bradwardine's law.
John Dumbleton
John Dumbleton became a member of the calculators in 1338–39. After becoming a member, he left the calculators for a brief period of time to study theology in Paris in 1345–47, returning to his work with the group in 1347–48. One of his main pieces of work, Summa logicae et philosophiae naturalis, focused on explaining the natural world in a coherent and realistic manner, unlike some of his colleagues, whom he claimed were making light of serious endeavors. Dumbleton attempted many solutions to the latitude of things, most of which were refuted by Richard Swineshead in his Liber Calculationum.
See also
Jean Buridan
John Cantius
Gerard of Brussels
Henry of Langenstein
Scholasticism
Science in the Middle Ages
Domingo de Soto
Notes
References
Weisheipl, James A. (1959) "The Place of John Dumbleton in the Merton School"
Clagett, Marshall (1964) “Nicole Oresme and Medieval Scientific Thought.” Proceedings of the American Philosophical Society
Sylla, Edith D. (1973) "MEDIEVAL CONCEPTS OF THE LATITUDE OF FORMS: THE OXFORD CALCULATORS"
Sylla, Edith D. (1999) "Oxford Calculators", in The Cambridge Dictionary of Philosophy.
Gavroglu, Kostas; Renn, Jurgen (2007) "Positioning the History of Science".
Agutter, Paul S.; Wheatley, Denys N. (2008) "Thinking About Life"
Principe, Lawrence M. (2011) "The Scientific Revolution: A Very Short Introduction"
Further reading
Carl B. Boyer (1949), The History of Calculus and Its Conceptual Development, New York: Hafner, reprinted in 1959, New York: Dover.
John Longeway, (2003), "William Heytesbury", in The Stanford Encyclopedia of Philosophy. Accessed 2012 January 3.
Uta C. Merzbach and Carl B. Boyer (2011), A History of Mathematics", Third Edition, Hoboken, NJ: Wiley.
Edith Sylla (1982), "The Oxford Calculators", in Norman Kretzmann, Anthony Kenny, and Jan Pinborg, eds., The Cambridge History of Later Medieval Philosophy: From the Rediscovery of Aristotle to the Disintegration of Scholasticism, 1100–1600, New York: Cambridge.
History of physics
Merton College, Oxford
History of the University of Oxford
14th century in science
Scholasticism | 0.7889 | 0.966156 | 0.762201 |
Stokes flow | Stokes flow (named after George Gabriel Stokes), also named creeping flow or creeping motion, is a type of fluid flow where advective inertial forces are small compared with viscous forces. The Reynolds number is low, i.e. $\mathrm{Re} \ll 1$. This is a typical situation in flows where the fluid velocities are very slow, the viscosities are very large, or the length-scales of the flow are very small. Creeping flow was first studied to understand lubrication. In nature, this type of flow occurs in the swimming of microorganisms and sperm. In technology, it occurs in paint, MEMS devices, and in the flow of viscous polymers generally.
The equations of motion for Stokes flow, called the Stokes equations, are a linearization of the Navier–Stokes equations, and thus can be solved by a number of well-known methods for linear differential equations. The primary Green's function of Stokes flow is the Stokeslet, which is associated with a singular point force embedded in a Stokes flow. From its derivatives, other fundamental solutions can be obtained. The Stokeslet was first derived by Oseen in 1927, although it was not named as such until 1953 by Hancock. The closed-form fundamental solutions for the generalized unsteady Stokes and Oseen flows associated with arbitrary time-dependent translational and rotational motions have been derived for the Newtonian and micropolar fluids.
Stokes equations
The equation of motion for Stokes flow can be obtained by linearizing the steady state Navier–Stokes equations. The inertial forces are assumed to be negligible in comparison to the viscous forces, and eliminating the inertial terms of the momentum balance in the Navier–Stokes equations reduces it to the momentum balance in the Stokes equations:

$\boldsymbol{\nabla}\cdot\boldsymbol{\sigma} + \mathbf{f} = \mathbf{0}$

where $\boldsymbol{\sigma}$ is the stress (sum of viscous and pressure stresses), and $\mathbf{f}$ an applied body force. The full Stokes equations also include an equation for the conservation of mass, commonly written in the form:

$\frac{\partial \rho}{\partial t} + \boldsymbol{\nabla}\cdot(\rho\,\mathbf{u}) = 0$

where $\rho$ is the fluid density and $\mathbf{u}$ the fluid velocity. To obtain the equations of motion for incompressible flow, it is assumed that the density, $\rho$, is a constant.
Furthermore, occasionally one might consider the unsteady Stokes equations, in which the term $\rho\,\frac{\partial \mathbf{u}}{\partial t}$ is added to the left-hand side of the momentum balance equation.
Properties
The Stokes equations represent a considerable simplification of the full Navier–Stokes equations, especially in the incompressible Newtonian case. They are the leading-order simplification of the full Navier–Stokes equations, valid in the distinguished limit $\mathrm{Re} \to 0$.
Instantaneity
A Stokes flow has no dependence on time other than through time-dependent boundary conditions. This means that, given the boundary conditions of a Stokes flow, the flow can be found without knowledge of the flow at any other time.
Time-reversibility
An immediate consequence of instantaneity, time-reversibility means that a time-reversed Stokes flow solves the same equations as the original Stokes flow. This property can sometimes be used (in conjunction with linearity and symmetry in the boundary conditions) to derive results about a flow without solving it fully. Time reversibility means that it is difficult to mix two fluids using creeping flow.
While these properties are true for incompressible Newtonian Stokes flows, the non-linear and sometimes time-dependent nature of non-Newtonian fluids means that they do not hold in the more general case.
Stokes paradox
An interesting property of Stokes flow is known as the Stokes' paradox: that there can be no Stokes flow of a fluid around a disk in two dimensions; or, equivalently, the fact there is no non-trivial solution for the Stokes equations around an infinitely long cylinder.
Demonstration of time-reversibility
A Taylor–Couette system can create laminar flows in which concentric cylinders of fluid move past each other in an apparent spiral. A fluid such as corn syrup with high viscosity fills the gap between two cylinders, with colored regions of the fluid visible through the transparent outer cylinder.
The cylinders are rotated relative to one another at a low speed, which together with the high viscosity of the fluid and thinness of the gap gives a low Reynolds number, so that the apparent mixing of colors is actually laminar and can then be reversed to approximately the initial state. This creates a dramatic demonstration of seemingly mixing a fluid and then unmixing it by reversing the direction of the mixer.
Incompressible flow of Newtonian fluids
In the common case of an incompressible Newtonian fluid, the Stokes equations take the (vectorized) form:

$\mu \nabla^{2} \mathbf{u} - \boldsymbol{\nabla} p + \mathbf{f} = \mathbf{0}, \qquad \boldsymbol{\nabla}\cdot\mathbf{u} = 0$

where $\mathbf{u}$ is the velocity of the fluid, $\boldsymbol{\nabla} p$ is the gradient of the pressure, $\mu$ is the dynamic viscosity, and $\mathbf{f}$ an applied body force. The resulting equations are linear in velocity and pressure, and therefore can take advantage of a variety of linear differential equation solvers.
Cartesian coordinates
With the velocity vector expanded as $\mathbf{u} = (u, v, w)$ and similarly the body force vector $\mathbf{f} = (f_x, f_y, f_z)$, we may write the vector equation explicitly,

$\mu\left(\frac{\partial^{2} u}{\partial x^{2}} + \frac{\partial^{2} u}{\partial y^{2}} + \frac{\partial^{2} u}{\partial z^{2}}\right) - \frac{\partial p}{\partial x} + f_x = 0$

$\mu\left(\frac{\partial^{2} v}{\partial x^{2}} + \frac{\partial^{2} v}{\partial y^{2}} + \frac{\partial^{2} v}{\partial z^{2}}\right) - \frac{\partial p}{\partial y} + f_y = 0$

$\mu\left(\frac{\partial^{2} w}{\partial x^{2}} + \frac{\partial^{2} w}{\partial y^{2}} + \frac{\partial^{2} w}{\partial z^{2}}\right) - \frac{\partial p}{\partial z} + f_z = 0$

$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0$

We arrive at these equations by making the assumptions that the stress is Newtonian, $\boldsymbol{\sigma} = -p\,\mathbb{I} + \mu\left(\boldsymbol{\nabla}\mathbf{u} + \boldsymbol{\nabla}\mathbf{u}^{\mathsf{T}}\right)$, and that the density $\rho$ is a constant.
Methods of solution
By stream function
The equation for an incompressible Newtonian Stokes flow can be solved by the stream function method in planar or in 3-D axisymmetric cases
By Green's function: the Stokeslet
The linearity of the Stokes equations in the case of an incompressible Newtonian fluid means that a Green's function, $\mathbb{J}(\mathbf{r})$, exists. The Green's function is found by solving the Stokes equations with the forcing term replaced by a point force acting at the origin, and boundary conditions vanishing at infinity:

$\mu \nabla^{2} \mathbf{u} - \boldsymbol{\nabla} p = -\mathbf{F}\,\delta(\mathbf{r}), \qquad \boldsymbol{\nabla}\cdot\mathbf{u} = 0, \qquad |\mathbf{u}|,\, p \to 0 \quad \text{as} \quad r \to \infty$

where $\delta(\mathbf{r})$ is the Dirac delta function, and $\mathbf{F}\,\delta(\mathbf{r})$ represents a point force acting at the origin. The solution for the pressure p and velocity u with |u| and p vanishing at infinity is given by

$\mathbf{u}(\mathbf{r}) = \mathbf{F}\cdot\mathbb{J}(\mathbf{r}), \qquad p(\mathbf{r}) = \frac{\mathbf{F}\cdot\mathbf{r}}{4\pi r^{3}}$

where

$\mathbb{J}(\mathbf{r}) = \frac{1}{8\pi\mu}\left(\frac{\mathbb{I}}{r} + \frac{\mathbf{r}\mathbf{r}}{r^{3}}\right)$

is a second-rank tensor (or more accurately tensor field) known as the Oseen tensor (after Carl Wilhelm Oseen). Here, $\mathbf{r}\mathbf{r}$ is the dyadic (outer) product, the second-rank tensor such that $(\mathbf{r}\mathbf{r})\cdot\mathbf{a} = \mathbf{r}\,(\mathbf{r}\cdot\mathbf{a})$ for any vector $\mathbf{a}$.

The terms Stokeslet and point-force solution are used to describe $\mathbb{J}(\mathbf{r})$. Analogous to the point charge in electrostatics, the Stokeslet is force-free everywhere except at the origin, where it contains a force of strength $\mathbf{F}$.

For a continuous-force distribution (density) $\mathbf{f}(\mathbf{r})$ the solution (again vanishing at infinity) can then be constructed by superposition:

$\mathbf{u}(\mathbf{r}) = \int \mathbf{f}(\mathbf{r}')\cdot\mathbb{J}(\mathbf{r} - \mathbf{r}')\,\mathrm{d}\mathbf{r}', \qquad p(\mathbf{r}) = \int \frac{\mathbf{f}(\mathbf{r}')\cdot(\mathbf{r} - \mathbf{r}')}{4\pi\,|\mathbf{r} - \mathbf{r}'|^{3}}\,\mathrm{d}\mathbf{r}'$
This integral representation of the velocity can be viewed as a reduction in dimensionality: from the three-dimensional partial differential equation to a two-dimensional integral equation for unknown densities.
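For readers who want to experiment numerically, here is a minimal sketch of the Stokeslet velocity field using the Oseen tensor above; the force value, evaluation point, and viscosity are arbitrary example inputs chosen for illustration.

```python
import numpy as np

def stokeslet_velocity(r, force, mu):
    """Velocity (m/s) at position r (m) due to a point force (N) at the origin
    in a fluid of dynamic viscosity mu (Pa*s), using the Oseen tensor
    J = (I/|r| + r r^T / |r|^3) / (8*pi*mu)."""
    r = np.asarray(r, dtype=float)
    force = np.asarray(force, dtype=float)
    dist = np.linalg.norm(r)
    J = (np.eye(3) / dist + np.outer(r, r) / dist**3) / (8.0 * np.pi * mu)
    return J @ force

# Example: a 1 pN force along x, field evaluated 10 micrometres away in a
# water-like fluid (mu = 1e-3 Pa*s); the result is a few micrometres per second.
print(stokeslet_velocity([10e-6, 0.0, 0.0], [1e-12, 0.0, 0.0], 1e-3))
```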
By Papkovich–Neuber solution
The Papkovich–Neuber solution represents the velocity and pressure fields of an incompressible Newtonian Stokes flow in terms of two harmonic potentials.
By boundary element method
Certain problems, such as the evolution of the shape of a bubble in a Stokes flow, are conducive to numerical solution by the boundary element method. This technique can be applied to both 2- and 3-dimensional flows.
Some geometries
Hele-Shaw flow
Hele-Shaw flow is an example of a geometry for which inertia forces are negligible. It is defined by two parallel plates arranged very close together with the space between the plates occupied partly by fluid and partly by obstacles in the form of cylinders with generators normal to the plates.
Slender-body theory
Slender-body theory in Stokes flow is a simple approximate method of determining the irrotational flow field around bodies whose length is large compared with their width. The basis of the method is to choose a distribution of flow singularities along a line (since the body is slender) so that their irrotational flow in combination with a uniform stream approximately satisfies the zero normal velocity condition.
Spherical coordinates
Lamb's general solution arises from the fact that the pressure satisfies the Laplace equation, and can be expanded in a series of solid spherical harmonics in spherical coordinates. As a result, the solution to the Stokes equations can be written:
where and are solid spherical harmonics of order :
and the are the associated Legendre polynomials. The Lamb's solution can be used to describe the motion of fluid either inside or outside a sphere. For example, it can be used to describe the motion of fluid around a spherical particle with prescribed surface flow, a so-called squirmer, or to describe the flow inside a spherical drop of fluid. For interior flows, the terms with are dropped, while for exterior flows the terms with are dropped (often the convention is assumed for exterior flows to avoid indexing by negative numbers).
Theorems
Stokes solution and related Helmholtz theorem
The drag resistance to a moving sphere, also known as Stokes' solution, is here summarised. Given a sphere of radius $a$, travelling at velocity $U$, in a Stokes fluid with dynamic viscosity $\mu$, the drag force is given by:

$F_d = 6 \pi \mu a U$
The Stokes solution dissipates less energy than any other solenoidal vector field with the same boundary velocities: this is known as the Helmholtz minimum dissipation theorem.
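A common application of the drag formula above is the terminal settling velocity of a small sphere, where weight minus buoyancy balances the Stokes drag; the particle and fluid values in this sketch are assumed for illustration.

```python
import math

def stokes_terminal_velocity(radius, rho_particle, rho_fluid, mu, g=9.81):
    """Terminal settling speed (m/s) of a small sphere in creeping flow:
    (4/3)*pi*a^3*(rho_p - rho_f)*g = 6*pi*mu*a*v, so
    v = 2*(rho_p - rho_f)*g*a^2 / (9*mu).
    Only valid while the resulting Reynolds number stays well below 1."""
    return 2.0 * (rho_particle - rho_fluid) * g * radius**2 / (9.0 * mu)

# Example: a 10-micrometre-diameter quartz grain (2650 kg/m^3) settling in water
v = stokes_terminal_velocity(5e-6, 2650.0, 1000.0, 1e-3)
re = 1000.0 * v * 2 * 5e-6 / 1e-3   # Reynolds number check, Re = rho*v*d/mu
print(f"v = {v:.2e} m/s, Re = {re:.2e}")  # Re << 1, so Stokes flow applies
```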
Lorentz reciprocal theorem
The Lorentz reciprocal theorem states a relationship between two Stokes flows in the same region. Consider a fluid-filled region $V$ bounded by the surface $S$. Let the velocity fields $\mathbf{u}_1$ and $\mathbf{u}_2$ solve the Stokes equations in the domain $V$, each with corresponding stress fields $\boldsymbol{\sigma}_1$ and $\boldsymbol{\sigma}_2$. Then the following equality holds:

$\int_S \mathbf{u}_1 \cdot (\boldsymbol{\sigma}_2 \cdot \mathbf{n})\,\mathrm{d}S = \int_S \mathbf{u}_2 \cdot (\boldsymbol{\sigma}_1 \cdot \mathbf{n})\,\mathrm{d}S$

where $\mathbf{n}$ is the unit normal on the surface $S$. The Lorentz reciprocal theorem can be used to show that Stokes flow "transmits" unchanged the total force and torque from an inner closed surface to an outer enclosing surface. The Lorentz reciprocal theorem can also be used to relate the swimming speed of a microorganism, such as a cyanobacterium, to the surface velocity which is prescribed by deformations of the body shape via cilia or flagella.
The Lorentz reciprocal theorem has also been used in the context of elastohydrodynamic theory to derive the lift force exerted on a solid object moving tangent to the surface of an elastic interface at low Reynolds numbers.
Faxén's laws
Faxén's laws are direct relations that express the multipole moments in terms of the ambient flow and its derivatives. First developed by Hilding Faxén to calculate the force, $\mathbf{F}$, and torque, $\mathbf{T}$, on a sphere, they take the following form:

$\mathbf{F} = 6\pi\mu a\left[\left(1 + \frac{a^{2}}{6}\nabla^{2}\right)\mathbf{u}^{\infty}\right]_{\mathbf{x}=0} - 6\pi\mu a\,\mathbf{U}$

$\mathbf{T} = 8\pi\mu a^{3}\left(\boldsymbol{\Omega}^{\infty} - \boldsymbol{\omega}\right)$

where $\mu$ is the dynamic viscosity, $a$ is the particle radius, $\mathbf{u}^{\infty}$ is the ambient flow, $\mathbf{U}$ is the speed of the particle, $\boldsymbol{\Omega}^{\infty}$ is the angular velocity of the background flow, and $\boldsymbol{\omega}$ is the angular velocity of the particle.
Faxén's laws can be generalized to describe the moments of other shapes, such as ellipsoids, spheroids, and spherical drops.
See also
References
Ockendon, H. & Ockendon J. R. (1995) Viscous Flow, Cambridge University Press. .
External links
Video demonstration of time-reversibility of Stokes flow by UNM Physics and Astronomy
Fluid dynamics
Equations of fluid dynamics | 0.769828 | 0.990052 | 0.762169 |
Atmospheric convection | Atmospheric convection is the result of a parcel-environment instability (temperature difference layer) in the atmosphere. Different lapse rates within dry and moist air masses lead to instability. Mixing of air during the day expands the height of the planetary boundary layer, leading to increased winds, cumulus cloud development, and decreased surface dew points. Convection involving moist air masses leads to thunderstorm development, which is often responsible for severe weather throughout the world. Special threats from thunderstorms include hail, downbursts, and tornadoes.
Overview
There are a few general archetypes of atmospheric instability that are used to explain convection (or lack thereof); a necessary but insufficient condition for convection is that the environmental lapse rate (the rate of decrease of temperature with height) is steeper than the lapse rate experienced by a rising parcel of air.
When this condition is met, upward-displaced air parcels can become buoyant and thus experience a further upward force. Buoyant convection begins at the level of free convection (LFC), above which an air parcel may ascend through the free convective layer (FCL) with positive buoyancy. Its buoyancy turns negative at the equilibrium level (EL), but the parcel's vertical momentum may carry it to the maximum parcel level (MPL) where the negative buoyancy decelerates the parcel to a stop. Integrating the buoyancy force over the parcel's vertical displacement yields convective available potential energy (CAPE), the joules of energy available per kilogram of potentially buoyant air. CAPE is an upper limit for an ideal undiluted parcel, and the square root of twice the CAPE is sometimes called a thermodynamic speed limit for updrafts, based on the simple kinetic energy equation.
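As a quick illustration of that last relation (the CAPE value below is a made-up example), the idealized updraft speed limit follows directly from the kinetic-energy expression:

```python
import math

def max_updraft_speed(cape_j_per_kg):
    """Idealized 'thermodynamic speed limit' for an undiluted parcel:
    w_max = sqrt(2 * CAPE). Ignores entrainment, water loading, and drag,
    so real updrafts are slower."""
    return math.sqrt(2.0 * cape_j_per_kg)

# Example: CAPE of 2500 J/kg, typical of a strongly unstable warm-season airmass
print(f"{max_updraft_speed(2500.0):.1f} m/s")  # ~70.7 m/s
```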
However, such buoyant acceleration concepts give an oversimplified view of convection. Drag is an opposite force to counter buoyancy, so that parcel ascent occurs under a balance of forces, like the terminal velocity of a falling object. Buoyancy may be reduced by entrainment, which dilutes the parcel with environmental air.
Atmospheric convection is called "deep" when it extends from near the surface to above the 500 hPa level, generally stopping at the tropopause at around 200 hPa. Most atmospheric deep convection occurs in the tropics as the rising branch of the Hadley circulation and represents a strong local coupling between the surface and the upper troposphere which is largely absent in winter midlatitudes. Its counterpart in the ocean (deep convection downward in the water column) only occurs at a few locations.
Initiation
A thermal column (or thermal) is a vertical section of rising air in the lower altitudes of the Earth's atmosphere. Thermals are created by the uneven heating of the Earth's surface from solar radiation. The Sun warms the ground, which in turn warms the air directly above it. The warmer air expands, becoming less dense than the surrounding air mass, and creating a thermal low. The mass of lighter air rises, and as it does, it cools due to its expansion at lower high-altitude pressures. It stops rising when it has cooled to the same temperature as the surrounding air. Associated with a thermal is a downward flow surrounding the thermal column. The downward-moving exterior is caused by colder air being displaced at the top of the thermal. Another convection-driven weather effect is the sea breeze.
Thunderstorms
Warm air has a lower density than cool air, so warm air rises within cooler air, similar to hot air balloons. Clouds form as relatively warmer air carrying moisture rises within cooler air. As the moist air rises, it cools causing some of the water vapor in the rising packet of air to condense. When the moisture condenses, it releases energy known as latent heat of vaporization which allows the rising packet of air to cool less than its surrounding air, continuing the cloud's ascension. If enough instability is present in the atmosphere, this process will continue long enough for cumulonimbus clouds to form, which supports lightning and thunder. Generally, thunderstorms require three conditions to form: moisture, an unstable airmass, and a lifting force (heat).
All thunderstorms, regardless of type, go through three stages: the developing stage, the mature stage, and the dissipation stage. The average thunderstorm has a diameter of roughly 24 km (15 mi). Depending on the conditions present in the atmosphere, these three stages take an average of 30 minutes to go through.
Types
There are four main types of thunderstorms: single-cell, multicell, squall line (also called multicell line), and supercell. Which type forms depends on the instability and relative wind conditions at different layers of the atmosphere ("wind shear"). Single-cell thunderstorms form in environments of low vertical wind shear and last only 20–30 minutes. Organized thunderstorms and thunderstorm clusters/lines can have longer life cycles as they form in environments of significant vertical wind shear, which aids the development of stronger updrafts as well as various forms of severe weather. The supercell is the strongest of the thunderstorms, most commonly associated with large hail, high winds, and tornado formation.
The latent heat release from condensation is the determinant between significant convection and almost no convection at all. The fact that air is generally cooler during winter months, and therefore cannot hold as much water vapor and associated latent heat, is why significant convection (thunderstorms) are infrequent in cooler areas during that period. Thundersnow is one situation where forcing mechanisms provide support for very steep environmental lapse rates, which as mentioned before is an archetype for favored convection. The small amount of latent heat released from air rising and condensing moisture in a thundersnow also serves to increase this convective potential, although minimally. There are also three types of thunderstorms: orographic, air mass, and frontal.
Boundaries and forcing
Despite the fact that there might be a layer in the atmosphere that has positive values of CAPE, if the parcel does not reach or begin rising to that level, the most significant convection that occurs in the FCL will not be realized. This can occur for numerous reasons. Primarily, it is the result of a cap, or convective inhibition (CIN/CINH). Processes that can erode this inhibition are heating of the Earth's surface and forcing. Such forcing mechanisms encourage upward vertical velocity, characterized by a speed that is relatively low compared to what one finds in a thunderstorm updraft. Because of this, it is not the actual air being pushed to its LFC that "breaks through" the inhibition, but rather the forcing cools the inhibition adiabatically. This would counter, or "erode", the increase of temperature with height that is present during a capping inversion.
Forcing mechanisms that can lead to the eroding of inhibition are ones that create some sort of evacuation of mass in the upper parts of the atmosphere, or a surplus of mass in the low levels of the atmosphere, which would lead to upper-level divergence or lower-level convergence, respectively. An upward vertical motion will often follow. Specific examples include a cold front, a sea/lake breeze, an outflow boundary, or forcing through the vorticity dynamics of the atmosphere (differential positive vorticity advection), such as with troughs, both shortwave and longwave. Jet streak dynamics, through the imbalance of Coriolis and pressure gradient forces causing subgeostrophic and supergeostrophic flows, can also create upward vertical velocities. There are numerous other atmospheric setups in which upward vertical velocities can be created.
Concerns regarding severe deep moist convection
Buoyancy is a key to thunderstorm growth and is necessary for any of the severe threats within a thunderstorm. There are other processes, not necessarily thermodynamic, that can increase updraft strength. These include updraft rotation, low-level convergence, and evacuation of mass out of the top of the updraft via strong upper-level winds and the jet stream.
Hail
Like other precipitation in cumulonimbus clouds hail begins as water droplets. As the droplets rise and the temperature goes below freezing, they become supercooled water and will freeze on contact with condensation nuclei. A cross-section through a large hailstone shows an onion-like structure. This means the hailstone is made of thick and translucent layers, alternating with layers that are thin, white, and opaque. Former theory suggested that hailstones were subjected to multiple descents and ascents, falling into a zone of humidity and refreezing as they were uplifted. This up-and-down motion was thought to be responsible for the successive layers of the hailstone. New research (based on theory and field study) has shown this is not necessarily true.
The storm's updraft, with its strong upwardly directed winds, blows the forming hailstones up the cloud. As the hailstone ascends it passes into areas of the cloud where the concentration of humidity and supercooled water droplets varies. The hailstone's growth rate changes depending on the variation in humidity and supercooled water droplets that it encounters. The accretion rate of these water droplets is another factor in the hailstone's growth. When the hailstone moves into an area with a high concentration of water droplets, it captures the latter and acquires a translucent layer. Should the hailstone move into an area where mostly water vapour is available, it acquires a layer of opaque white ice.
Furthermore, the hailstone's speed depends on its position in the cloud's updraft and its mass. This determines the varying thicknesses of the layers of the hailstone. The accretion rate of supercooled water droplets onto the hailstone depends on the relative velocities between these water droplets and the hailstone itself. This means that, generally, the larger hailstones will form some distance from the stronger updraft, where they can spend more time growing. As the hailstone grows it releases latent heat, which keeps its exterior in a liquid phase. Undergoing "wet growth", the outer layer is sticky, or more adhesive, so a single hailstone may grow by collision with other smaller hailstones, forming a larger entity with an irregular shape.
The hailstone will keep rising in the thunderstorm until its mass can no longer be supported by the updraft. This may take at least 30 minutes, based on the force of the updrafts in the hail-producing thunderstorm, whose cloud top usually reaches above 10 km (6 mi). It then falls toward the ground while continuing to grow, based on the same processes, until it leaves the cloud. It will later begin to melt as it passes into air above freezing temperature.
Thus, a unique trajectory in the thunderstorm is sufficient to explain the layer-like structure of the hailstone. The only case in which we can discuss multiple trajectories is in a multicellular thunderstorm where the hailstone may be ejected from the top of the "mother" cell and captured in the updraft of a more intense "daughter cell". This however is an exceptional case.
Downburst
A downburst is created by a column of sinking air that, after hitting ground level, spreads out in all directions and is capable of producing damaging straight-line winds of over 240 km/h (150 mph), often producing damage similar to, but distinguishable from, that caused by tornadoes. This is because the physical properties of a downburst are completely different from those of a tornado. Downburst damage will radiate from a central point as the descending column spreads out when impacting the surface, whereas tornado damage tends towards convergent damage consistent with rotating winds. To differentiate between tornado damage and damage from a downburst, the term straight-line winds is applied to damage from microbursts.
Downbursts are particularly strong downdrafts from thunderstorms. Downbursts in air that is precipitation-free or contains virga are known as dry downbursts; those accompanied by precipitation are known as wet downbursts. Most downbursts are less than 4 km (2.5 mi) in extent: these are called microbursts. Downbursts larger than 4 km (2.5 mi) in extent are sometimes called macrobursts. Downbursts can occur over large areas. In the extreme case, a derecho can cover a huge area, hundreds of kilometres long and tens of kilometres wide, lasting up to 12 hours or more, and is associated with some of the most intense straight-line winds, but the generative process is somewhat different from that of most downbursts.
Tornado
A tornado is a dangerous rotating column of air in contact with both the surface of the earth and the base of a cumulonimbus cloud (thundercloud), or a cumulus cloud in rare cases. Tornadoes come in many sizes but typically form a visible condensation funnel whose narrowest end reaches the earth and is surrounded by a cloud of debris and dust.
Tornado wind speeds generally average between 64 km/h (40 mph) and 180 km/h (110 mph). They are approximately 75 m (250 ft) across and travel a few kilometers before dissipating. Some attain wind speeds in excess of 480 km/h (300 mph), may stretch more than 3 km (2 mi) across, and maintain contact with the ground for more than 100 km (62 mi).
Tornadoes, despite being one of the most destructive weather phenomena, are generally short-lived. A long-lived tornado generally lasts no more than an hour, but some have been known to last for 2 hours or longer (for example, the Tri-state tornado). Due to their relatively short duration, less information is known about the development and formation of tornadoes.
Generally, cyclones of different sizes and intensities exhibit different instability dynamics; the most unstable azimuthal wavenumber is higher for bigger cyclones.
Measurement
The potential for convection in the atmosphere is often measured by an atmospheric temperature/dewpoint profile with height. This is often displayed on a Skew-T chart or other similar thermodynamic diagram. These can be plotted from a measured sounding analysis, in which a radiosonde attached to a balloon is sent into the atmosphere to take measurements with height. Forecast models can also create these diagrams, but are less accurate due to model uncertainties and biases, and have lower spatial resolution. However, the temporal resolution of forecast model soundings is greater than that of the direct measurements: the former can provide plots at intervals of up to every 3 hours, while the latter are available only twice per day (although when a convective event is expected, a special sounding might be taken outside of the normal schedule of 00Z and 12Z).
Other forecasting concerns
Atmospheric convection can also be responsible for and have implications on a number of other weather conditions. A few examples on the smaller scale would include: Convection mixing the planetary boundary layer (PBL) and allowing drier air aloft to the surface thereby decreasing dew points, creating cumulus-type clouds that can limit a small amount of sunshine, increasing surface winds, making outflow boundaries/and other smaller boundaries more diffuse, and the eastward propagation of the dryline during the day. On a larger scale, the rising of the air can lead to warm core surface lows, often found in the desert southwest.
See also
Air parcel
Atmospheric subsidence
Atmospheric thermodynamics
Buoyancy
Convective storm detection
Thermal
References
Severe weather and convection
Atmospheric thermodynamics | 0.771352 | 0.988091 | 0.762166 |
ICanHazPDF | #ICanHazPDF is a hashtag used on Twitter to request access to academic journal articles which are behind paywalls. It was started in 2011 by the scientist Andrea Kuszewski. The name is derived from the meme I Can Has Cheezburger?
Process
Users request articles by tweeting an article's title, DOI or other linked information like a publisher's link, their email address, and the hashtag "#ICanHazPDF". Someone who has access to the article might then email it to them. The user then deletes the original tweet. Alternatively, users who do not wish to post their email address in the clear can use direct messaging to exchange contact information with a volunteer who has offered to share the article of interest.
Use and popularity
The practice amounts to copyright infringement in numerous countries, and so is arguably part of the 'black open access' trend. The majority of requests are for articles published in the last five years, and most users are from English-speaking countries. Requests for biology papers are more common than papers in other fields, despite subscription prices for chemistry, physics, and astronomy being, on average, higher than for biology. Possible reasons for people to use the hashtag include the reluctance of readers to pay for article access and the speed of the process compared to most university interlibrary loans.
See also
Academic journal publishing reform
Anna's Archive
Open Access Button
Library Genesis
Sci-Hub
Shadow library
Z-Library
References
External links
Hashtags
Copyright campaigns
Academic publishing
2011 introductions | 0.770744 | 0.988868 | 0.762164 |
Ultra-high vacuum | Ultra-high vacuum (often spelled ultrahigh in American English, UHV) is the vacuum regime characterised by pressures lower than about 100 nanopascals (10⁻⁷ Pa; roughly 10⁻⁹ mbar). UHV conditions are created by pumping the gas out of a UHV chamber. At these low pressures the mean free path of a gas molecule is greater than approximately 40 km, so the gas is in free molecular flow, and gas molecules will collide with the chamber walls many times before colliding with each other. Almost all molecular interactions therefore take place on various surfaces in the chamber.
UHV conditions are integral to scientific research. Surface science experiments often require a chemically clean sample surface with the absence of any unwanted adsorbates. Surface analysis tools such as X-ray photoelectron spectroscopy and low energy ion scattering require UHV conditions for the transmission of electron or ion beams. For the same reason, beam pipes in particle accelerators such as the Large Hadron Collider are kept at UHV.
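The mean-free-path figure quoted above can be checked with the standard kinetic-theory formula; the molecular diameter used in this sketch is a typical assumed value for nitrogen, and the temperature defaults to room temperature.

```python
import math

def mean_free_path(pressure_pa, temperature_k=295.0, molecule_diameter_m=3.7e-10):
    """Kinetic-theory mean free path: lambda = k_B * T / (sqrt(2) * pi * d^2 * p).
    The default diameter is a typical value assumed for N2."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    return k_b * temperature_k / (math.sqrt(2) * math.pi * molecule_diameter_m**2 * pressure_pa)

# At the upper edge of the UHV range (~1e-7 Pa) the mean free path is tens of km
print(f"{mean_free_path(1e-7) / 1000.0:.0f} km")  # roughly 60-70 km
```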
Overview
Maintaining UHV conditions requires the use of unusual materials for equipment. Useful concepts for UHV include:
Sorption of gases
Kinetic theory of gases
Gas transport and pumping
Vacuum pumps and systems
Vapour pressure
Typically, UHV requires:
High pumping speed — possibly multiple vacuum pumps in series and/or parallel
Minimized surface area in the chamber
High conductance tubing to pumps — short and fat, without obstruction
Use of low-outgassing materials such as certain stainless steels
Avoid creating pits of trapped gas behind bolts, welding voids, etc.
Electropolishing of all metal parts after machining or welding
Use of low vapor pressure materials (ceramics, glass, metals, teflon if unbaked)
Baking of the system to remove water or hydrocarbons adsorbed to the walls
Chilling of chamber walls to cryogenic temperatures during use
Avoiding all traces of hydrocarbons, including skin oils in a fingerprint — gloves must always be used
Hydrogen and carbon monoxide are the most common background gases in a well-designed, well-baked UHV system. Both hydrogen and CO diffuse out from the grain boundaries in stainless steel. Helium can diffuse in through the steel and glass from the outside air, but this effect is usually negligible due to the low abundance of He in the atmosphere.
Measurement
Pressure
Measurement of high vacuum is done using a nonabsolute gauge that measures a pressure-related property of the vacuum. See, for example, Pacey. These gauges must be calibrated. The gauges capable of measuring the lowest pressures are magnetic gauges based upon the pressure dependence of the current in a spontaneous gas discharge in intersecting electric and magnetic fields.
UHV pressures are measured with an ion gauge, either of the hot filament or inverted magnetron type.
Leak rate
In any vacuum system, some gas will continue to escape into the chamber over time and slowly increase the pressure if it is not pumped out. This leak rate is usually measured in mbar L/s or torr L/s. While some gas release is inevitable, if the leak rate is too high, it can slow down or even prevent the system from reaching low pressure.
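To show how a gas load expressed in mbar·L/s translates into a pressure rise, here is a minimal sketch of the rate-of-rise relation dP/dt = Q/V; the 50-litre chamber volume and the 1e-9 mbar·L/s load are purely illustrative assumptions.
```python
def pressure_rise_mbar_per_s(gas_load_mbar_l_s, volume_l):
    # With the pumps valved off, the pressure in a sealed chamber rises at
    # dP/dt = Q / V, with Q in mbar*L/s and V in litres.
    return gas_load_mbar_l_s / volume_l

rate = pressure_rise_mbar_per_s(1e-9, 50.0)   # 1e-9 mbar*L/s into a 50 L chamber
print(f"{rate:.1e} mbar/s, i.e. about {rate * 3600:.1e} mbar per hour")
```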
There are a variety of possible reasons for an increase in pressure. These include simple air leaks, virtual leaks, and desorption (either from surfaces or from the bulk). A variety of methods for leak detection exist. Large leaks can be found by pressurizing the chamber and looking for bubbles in soapy water, while tiny leaks can require more sensitive methods, up to the use of a tracer gas and a specialized helium mass spectrometer.
Outgassing
Outgassing is a problem for UHV systems. Outgassing can occur from two sources: surfaces and bulk materials. Outgassing from bulk materials is minimized by selection of materials with low vapor pressures (such as glass, stainless steel, and ceramics) for everything inside the system. Materials which are not generally considered absorbent can outgas, including most plastics and some metals. For example, vessels lined with a highly gas-permeable material such as palladium (which is a high-capacity hydrogen sponge) create special outgassing problems.
Outgassing from surfaces is a subtler problem. At extremely low pressures, more gas molecules are adsorbed on the walls than are floating in the chamber, so the total surface area inside a chamber is more important than its volume for reaching UHV. Water is a significant source of outgassing because a thin layer of water vapor rapidly adsorbs to everything whenever the chamber is opened to air. Water evaporates from surfaces too slowly to be fully removed at room temperature, but just fast enough to present a continuous level of background contamination. Removal of water and similar gases generally requires baking the UHV system at elevated temperature while vacuum pumps are running. During chamber use, the walls of the chamber may be chilled using liquid nitrogen to reduce outgassing further.
Bake-out
In order to reach low pressures, it is often useful to heat the entire system to an elevated temperature for many hours (a process known as bake-out) to remove water and other trace gases which adsorb on the surfaces of the chamber. This may also be required upon "cycling" the equipment to atmosphere. Baking significantly speeds up outgassing, allowing low pressures to be reached much faster. After baking, to prevent humidity from getting back into the system once it is exposed to atmospheric pressure, a nitrogen gas flow that creates a small positive pressure can be maintained to keep the system dry.
System design
Pumping
There is no single vacuum pump that can operate all the way from atmospheric pressure to ultra-high vacuum. Instead, a series of different pumps is used, according to the appropriate pressure range for each pump. In the first stage, a roughing pump clears most of the gas from the chamber. This is followed by one or more vacuum pumps that operate at low pressures. Pumps commonly used in this second stage to achieve UHV include:
Turbomolecular pumps (especially compound pumps which incorporate a molecular drag section and/or magnetic bearing types)
Ion pumps
Titanium sublimation pumps
Non-evaporable getter (NEG) pumps
Cryopumps
Diffusion pumps, especially when used with a cryogenic trap designed to minimize backstreaming of pump oil into the system.
Turbomolecular pumps and diffusion pumps both remove gas by imparting momentum to gas molecules, the former with high-speed rotor blades and the latter with a high-speed vapor stream.
Airlocks
To save time, energy, and the integrity of the UHV volume, an airlock or load-lock vacuum system is often used. The airlock volume has one door or valve, such as a gate valve or UHV angle valve, facing the UHV side of the volume, and another door against atmospheric pressure through which samples or workpieces are initially introduced. After sample introduction and confirmation that the door against atmosphere is closed, the airlock volume is typically pumped down to a medium-high vacuum. In some cases the workpiece itself is baked out or otherwise pre-cleaned under this medium-high vacuum. The gateway to the UHV chamber is then opened, the workpiece is transferred to the UHV by robotic means or by other contrivance if necessary, and the UHV valve is re-closed. While the initial workpiece is being processed under UHV, a subsequent sample can be introduced into the airlock volume and pre-cleaned, and so on, saving much time. Although a "puff" of gas is generally released into the UHV system when the valve to the airlock volume is opened, the UHV system pumps can generally remove this gas before it has time to adsorb onto the UHV surfaces. In a system well designed with suitable airlocks, the UHV components seldom need bakeout and the UHV may improve over time even as workpieces are introduced and removed.
Seals
Metal seals, with knife edges on both sides cutting into a soft copper gasket, are employed. This metal-to-metal seal can maintain pressures down to . Although the gaskets are generally considered single-use, a skilled operator can obtain several uses from one by using feeler gauges of decreasing size with each iteration, as long as the knife edges are in perfect condition. For SRF cavities, indium seals are more commonly used, sealing two flat surfaces together with clamps that bring the surfaces together. The clamps need to be tightened slowly to ensure that the indium seals compress uniformly all around.
Material limitations
Many common materials are used sparingly if at all due to high vapor pressure, high adsorptivity or absorptivity resulting in subsequent troublesome outgassing, or high permeability in the face of differential pressure (i.e.: "through-gassing"):
The majority of organic compounds cannot be used:
Plastics, other than PTFE and PEEK: plastics in other uses are replaced with ceramics or metals. Limited use of fluoroelastomers (such as Viton) and perfluoroelastomers (such as Kalrez) as gasket materials can be considered if metal gaskets are inconvenient, though these polymers can be expensive. Although through-gassing of elastomers cannot be avoided, experiments have shown that slow out-gassing of water vapor is, initially at least, the more important limitation. This effect can be minimized by pre-baking under medium vacuum. When selecting O-rings, the permeation rate and permeation coefficient need to be considered. For example, the permeation rate of nitrogen through Viton seals is 100 times lower than through silicone seals, which affects the ultimate vacuum that can be achieved.
Glues: special glues for high vacuum must be used, generally epoxies with a high mineral filler content. Some of the most popular of these include asbestos in the formulation, which allows for an epoxy with good initial properties that retains reasonable performance across multiple bake-outs.
Some steels: because oxidization of carbon steel greatly increases its adsorption area, only stainless steel is used. In particular, non-leaded and low-sulfur austenitic grades such as 304 and 316 are preferred. These steels include at least 18% chromium and 8% nickel. Variants of stainless steel include low-carbon grades (such as 304L and 316L) and grades with additives such as niobium and molybdenum to reduce the formation of chromium carbide (which provides no corrosion resistance). Common designations include 316L (low carbon) and 316LN (low carbon with nitrogen); the latter, with special welding techniques, can have a significantly lower magnetic permeability, making it preferable for particle accelerator applications. Chromium carbide precipitation at the grain boundaries can render a stainless steel less resistant to oxidation.
Lead: Soldering is performed using lead-free solder. Occasionally pure lead is used as a gasket material between flat surfaces in lieu of a copper/knife edge system.
Indium: Indium is sometimes used as a deformable gasket material for vacuum seals, especially in cryogenic apparatus, but its low melting point prevents use in baked systems. In a more esoteric application, the low melting point of indium is exploited to make a renewable seal in high-vacuum valves. These valves are used several times, generally with the aid of a torque wrench set to a higher torque with each iteration. When the indium seal is exhausted, it is melted and re-forms, and is thus ready for another round of uses.
Zinc, cadmium: High vapor pressures during system bake-out virtually preclude their use.
Aluminum: Although aluminum itself has a vapor pressure which makes it unsuitable for use in UHV systems, the same oxides which protect aluminum against corrosion improve its characteristics under UHV. Although initial experiments with aluminum suggested milling under mineral oil to maintain a thin, consistent layer of oxide, it has become increasingly accepted that aluminum is a suitable UHV material without special preparation. Paradoxically, aluminum oxide, especially when embedded as particles in stainless steel as for example from sanding in an attempt to reduce the surface area of the steel, is considered a problematic contaminant.
Cleaning is very important for UHV. Common cleaning procedures include degreasing with detergents, organic solvents, or chlorinated hydrocarbons. Electropolishing is often used to reduce the surface area from which adsorbed gases can be emitted. Etching of stainless steel using hydrofluoric and nitric acid forms a chromium rich surface, followed by a nitric acid passivation step, which forms a chromium oxide rich surface. This surface retards the diffusion of hydrogen into the chamber.
Technical limitations:
Screws: Threads have a high surface area and tend to "trap" gases, and are therefore avoided. Blind holes are especially avoided, due to the trapped gas at the base of the screw and slow venting through the threads, which is commonly known as a "virtual leak". This can be mitigated by designing components to include through-holes for all threaded connections, or by using vented screws (which have a hole drilled through their central axis or a notch along the threads). Vented screws allow trapped gases to flow freely from the base of the screw, eliminating virtual leaks and speeding up the pump-down process.
Welding: Processes such as gas metal arc welding and shielded metal arc welding cannot be used, due to the deposition of impure material and potential introduction of voids or porosity. Gas tungsten arc welding (with an appropriate heat profile and properly selected filler material) is necessary. Other clean processes, such as electron beam welding or laser beam welding, are also acceptable; however, those that involve potential slag inclusions (such as submerged arc welding and flux-cored arc welding) are obviously not. To avoid trapping gas or high vapor pressure molecules, welds must fully penetrate the joint or be made from the interior surface, otherwise a virtual leak might appear.
UHV manipulator
A UHV manipulator allows an object which is inside a vacuum chamber and under vacuum to be mechanically positioned. It may provide rotary motion, linear motion, or a combination of both. The most complex devices give motion in three axes and rotations around two of those axes. To generate the mechanical movement inside the chamber, three basic mechanisms are commonly employed: a mechanical coupling through the vacuum wall (using a vacuum-tight seal around the coupling, for example a welded metal bellows); a magnetic coupling that transfers motion from the air side to the vacuum side; or a sliding seal using special greases of very low vapor pressure or ferromagnetic fluid. Such special greases can cost more than US$400 per kilogram. Various forms of motion control are available for manipulators, such as knobs, handwheels, motors, stepping motors, piezoelectric motors, and pneumatics. The use of motors in a vacuum environment often requires special design or other special considerations, as the convective cooling taken for granted under atmospheric conditions is not available in a UHV environment.
The manipulator or sample holder may include features that allow additional control and testing of a sample, such as the ability to apply heat, cooling, voltage, or a magnetic field. Sample heating can be accomplished by electron bombardment or thermal radiation. For electron bombardment, the sample holder is equipped with a filament which emits electrons when biased at a high negative potential. The impact of the electrons bombarding the sample at high energy causes it to heat. For thermal radiation, a filament is mounted close to the sample and resistively heated to high temperature. The infrared energy from the filament heats the sample.
Typical uses
Ultra-high vacuum is necessary for many surface analytic techniques such as:
X-ray photoelectron spectroscopy (XPS)
Auger electron spectroscopy (AES)
Secondary ion mass spectrometry (SIMS)
Thermal desorption spectroscopy, also known as temperature-programmed desorption (TPD)
Thin film growth and preparation techniques with stringent requirements for purity, such as molecular beam epitaxy (MBE), UHV chemical vapor deposition (CVD), atomic layer deposition (ALD) and UHV pulsed laser deposition (PLD)
Angle resolved photoemission spectroscopy (ARPES)
Field emission microscopy and Field ion microscopy
Atom Probe Tomography (APT)
UHV is necessary for these applications to reduce surface contamination, by reducing the number of molecules reaching the sample over a given time period. At , it only takes 1 second to cover a surface with a contaminant, so much lower pressures are needed for long experiments.
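A rough sketch of the monolayer-formation estimate behind this rule of thumb, using the kinetic-theory impingement flux; the nitrogen-like molecular mass, the surface site density of about 1e19 sites/m², the sticking coefficient of one, and the example pressures are all assumptions chosen only for illustration.
```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def monolayer_time_s(pressure_pa, temperature_k=295.0, molecular_mass_kg=4.65e-26,
                     site_density_per_m2=1e19, sticking_coeff=1.0):
    # Impingement flux from kinetic theory: Phi = p / sqrt(2*pi*m*k_B*T).
    # Time to build one monolayer if a fraction s of impinging molecules sticks.
    flux = pressure_pa / math.sqrt(2 * math.pi * molecular_mass_kg * K_B * temperature_k)
    return site_density_per_m2 / (sticking_coeff * flux)

print(f"{monolayer_time_s(1.3e-4):.1f} s at ~1e-6 torr")         # of order a second
print(f"{monolayer_time_s(1.3e-8) / 3600:.0f} h at ~1e-10 torr")  # many hours at UHV
```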
UHV is also required for:
Particle accelerators: the Large Hadron Collider (LHC), for example, has three UHV systems. The lowest pressure is found in the pipes the proton beam speeds through near the interaction (collision) points, where helium cooling pipes also act as cryopumps. The maximum allowable pressure is
Gravitational wave detectors such as LIGO, VIRGO, GEO 600, and TAMA 300. The LIGO experimental apparatus is housed in a vacuum chamber at in order to eliminate temperature fluctuations and sound waves which would jostle the mirrors far too much for gravitational waves to be sensed.
Atomic physics experiments which use cold atoms, such as ion trapping or making Bose–Einstein condensates.
While not compulsory, it can prove beneficial in applications such as:
Molecular beam epitaxy, E-beam evaporation, sputtering and other deposition techniques.
Atomic force microscopy. High vacuum enables high Q factors on the cantilever oscillation.
Scanning tunneling microscopy. High vacuum reduces oxidation and contamination, hence enables imaging and the achievement of atomic resolution on clean metal and semiconductor surfaces, e.g. imaging the surface reconstruction of the unoxidized silicon surface.
Electron-beam lithography
See also
Journal of Vacuum Science and Technology
Orders of magnitude (pressure)
Vacuum engineering
Vacuum gauge
Vacuum state
References
External links
Online Surface Science Course
Vacuum systems
Vacuum | 0.770438 | 0.989242 | 0.76215 |
Axial tilt | In astronomy, axial tilt, also known as obliquity, is the angle between an object's rotational axis and its orbital axis, which is the line perpendicular to its orbital plane; equivalently, it is the angle between its equatorial plane and orbital plane. It differs from orbital inclination.
At an obliquity of 0 degrees, the two axes point in the same direction; that is, the rotational axis is perpendicular to the orbital plane.
The rotational axis of Earth, for example, is the imaginary line that passes through both the North Pole and South Pole, whereas the Earth's orbital axis is the line perpendicular to the imaginary plane through which the Earth moves as it revolves around the Sun; the Earth's obliquity or axial tilt is the angle between these two lines.
Over the course of an orbital period, the obliquity usually does not change considerably, and the orientation of the axis remains the same relative to the background of stars. This causes one pole to be pointed more toward the Sun on one side of the orbit, and more away from the Sun on the other side—the cause of the seasons on Earth.
Standards
There are two standard methods of specifying a planet's tilt. One way is based on the planet's north pole, defined in relation to the direction of Earth's north pole, and the other way is based on the planet's positive pole, defined by the right-hand rule:
The International Astronomical Union (IAU) defines the north pole of a planet as that which lies on Earth's north side of the invariable plane of the Solar System; under this system, Venus is tilted 3° and rotates retrograde, opposite that of most of the other planets.
The IAU also uses the right-hand rule to define a positive pole for the purpose of determining orientation. Using this convention, Venus is tilted 177° ("upside down") and rotates prograde.
Earth
Earth's orbital plane is known as the ecliptic plane, and Earth's tilt is known to astronomers as the obliquity of the ecliptic, being the angle between the ecliptic and the celestial equator on the celestial sphere. It is denoted by the Greek letter Epsilon ε.
Earth currently has an axial tilt of about 23.44°. This value remains about the same relative to a stationary orbital plane throughout the cycles of axial precession. But the ecliptic (i.e., Earth's orbit) moves due to planetary perturbations, and the obliquity of the ecliptic is not a fixed quantity. At present, it is decreasing at a rate of about 46.8″ per century (see details in Short term below).
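For a sense of scale, the sketch below linearly extrapolates the mean obliquity using only the two figures quoted above (about 23.44° at present, decreasing by roughly 46.8″ per century); this is a first-order approximation valid for a few centuries around the present, not one of the standard ephemeris series.
```python
def mean_obliquity_deg(years_from_j2000):
    # Linear approximation: 23.44 degrees around the year 2000,
    # decreasing by roughly 46.8 arcseconds per century.
    centuries = years_from_j2000 / 100.0
    return 23.44 - 46.8 * centuries / 3600.0

for year in (1900, 2000, 2100):
    print(year, f"{mean_obliquity_deg(year - 2000):.4f} deg")
```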
History
The ancient Greeks had good measurements of the obliquity since about 350 BCE, when Pytheas of Marseilles measured the shadow of a gnomon at the summer solstice. About 830 CE, the Caliph Al-Mamun of Baghdad directed his astronomers to measure the obliquity, and the result was used in the Arab world for many years. In 1437, Ulugh Beg determined the Earth's axial tilt as 23°30′17″ (23.5047°).
During the Middle Ages, it was widely believed that both precession and Earth's obliquity oscillated around a mean value, with a period of 672 years, an idea known as trepidation of the equinoxes. Perhaps the first to realize this was incorrect (during historic time) was Ibn al-Shatir in the fourteenth century and the first to realize that the obliquity is decreasing at a relatively constant rate was Fracastoro in 1538. The first accurate, modern, western observations of the obliquity were probably those of Tycho Brahe from Denmark, about 1584, although observations by several others, including al-Ma'mun, al-Tusi, Purbach, Regiomontanus, and Walther, could have provided similar information.
Seasons
Earth's axis remains tilted in the same direction with reference to the background stars throughout a year (regardless of where it is in its orbit) due to the gyroscope effect. This means that one pole (and the associated hemisphere of Earth) will be directed away from the Sun at one side of the orbit, and half an orbit later (half a year later) this pole will be directed towards the Sun. This is the cause of Earth's seasons. Summer occurs in the Northern hemisphere when the north pole is directed toward the Sun. Variations in Earth's axial tilt can influence the seasons and are likely a factor in long-term climatic change (also see Milankovitch cycles).
Oscillation
Short term
The exact angular value of the obliquity is found by observation of the motions of Earth and planets over many years. Astronomers produce new fundamental ephemerides as the accuracy of observation improves and as the understanding of the dynamics increases, and from these ephemerides various astronomical values, including the obliquity, are derived.
Annual almanacs are published listing the derived values and methods of use. Until 1983, the Astronomical Almanac's angular value of the mean obliquity for any date was calculated based on the work of Newcomb, who analyzed positions of the planets until about 1895:
where ε is the obliquity and T is the number of tropical centuries from B1900.0 to the date in question.
From 1984, the Jet Propulsion Laboratory's DE series of computer-generated ephemerides took over as the fundamental ephemeris of the Astronomical Almanac. Obliquity based on DE200, which analyzed observations from 1911 to 1979, was calculated:
where, hereafter, T is the number of Julian centuries from J2000.0.
JPL's fundamental ephemerides have been continually updated. For instance, according to IAU resolution in 2006 in favor of the P03 astronomical model, the Astronomical Almanac for 2010 specifies:
These expressions for the obliquity are intended for high precision over a relatively short time span, perhaps several centuries. Jacques Laskar computed a higher-order expression good to 0.02″ over 1000 years and to several arcseconds over 10,000 years.
where here t is measured in multiples of 10,000 Julian years from J2000.0.
These expressions are for the so-called mean obliquity, that is, the obliquity free from short-term variations. Periodic motions of the Moon and of Earth in its orbit cause much smaller (9.2 arcseconds) short-period (about 18.6 years) oscillations of the rotation axis of Earth, known as nutation, which add a periodic component to Earth's obliquity. The true or instantaneous obliquity includes this nutation.
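A toy illustration of how the nutation term modulates the mean obliquity, using the approximately 9.2-arcsecond amplitude and 18.6-year period quoted above; the simple cosine form, the zero phase, and the example mean value of 23.44° are simplifying assumptions, not the rigorous nutation series.
```python
import math

def true_obliquity_deg(mean_obliquity_deg, years_since_node_epoch):
    # Nutation in obliquity modelled as a single ~9.2 arcsecond oscillation
    # with a ~18.6-year period, added to the mean obliquity.
    nutation_arcsec = 9.2 * math.cos(2.0 * math.pi * years_since_node_epoch / 18.6)
    return mean_obliquity_deg + nutation_arcsec / 3600.0

for years in (0.0, 9.3):  # peak and trough, half a lunar-node period apart
    print(f"{true_obliquity_deg(23.44, years):.5f} deg")
```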
Long term
Using numerical methods to simulate Solar System behavior over a period of several million years, long-term changes in Earth's orbit, and hence its obliquity, have been investigated. For the past 5 million years, Earth's obliquity has varied between and , with a mean period of 41,040 years. This cycle is a combination of precession and the largest term in the motion of the ecliptic. For the next 1 million years, the cycle will carry the obliquity between and .
The Moon has a stabilizing effect on Earth's obliquity. Frequency map analysis conducted in 1993 suggested that, in the absence of the Moon, the obliquity could change rapidly due to orbital resonances and chaotic behavior of the Solar System, reaching as high as 90° in as little as a few million years (also see Orbit of the Moon). However, more recent numerical simulations made in 2011 indicated that even in the absence of the Moon, Earth's obliquity might not be quite so unstable; varying only by about 20–25°. To resolve this contradiction, diffusion rate of obliquity has been calculated, and it was found that it takes more than billions of years for Earth's obliquity to reach near 90°. The Moon's stabilizing effect will continue for less than two billion years. As the Moon continues to recede from Earth due to tidal acceleration, resonances may occur which will cause large oscillations of the obliquity.
Solar System bodies
All four of the innermost, rocky planets of the Solar System may have had large variations of their obliquity in the past. Since obliquity is the angle between the axis of rotation and the direction perpendicular to the orbital plane, it changes as the orbital plane changes due to the influence of other planets. But the axis of rotation can also move (axial precession), due to torque exerted by the Sun on a planet's equatorial bulge. Like Earth, all of the rocky planets show axial precession. If the precession rate were very fast the obliquity would actually remain fairly constant even as the orbital plane changes. The rate varies due to tidal dissipation and core-mantle interaction, among other things. When a planet's precession rate approaches certain values, orbital resonances may cause large changes in obliquity. The amplitude of the contribution having one of the resonant rates is divided by the difference between the resonant rate and the precession rate, so it becomes large when the two are similar.
Mercury and Venus have most likely been stabilized by the tidal dissipation of the Sun. Earth was stabilized by the Moon, as mentioned above, but before its formation, Earth, too, could have passed through times of instability. Mars's obliquity is quite variable over millions of years and may be in a chaotic state; it varies as much as 0° to 60° over some millions of years, depending on perturbations of the planets. Some authors dispute that Mars's obliquity is chaotic, and show that tidal dissipation and viscous core-mantle coupling are adequate for it to have reached a fully damped state, similar to Mercury and Venus.
The occasional shifts in the axial tilt of Mars have been suggested as an explanation for the appearance and disappearance of rivers and lakes over the course of the existence of Mars. A shift could cause a burst of methane into the atmosphere, causing warming, but then the methane would be destroyed and the climate would become arid again.
The obliquities of the outer planets are considered relatively stable.
Extrasolar planets
The stellar obliquity, i.e. the axial tilt of a star with respect to the orbital plane of one of its planets, has been determined for only a few systems. By 2012, the sky-projected spin-orbit misalignment had been observed for 49 stars, and it serves as a lower limit to the stellar obliquity. Most of these measurements rely on the Rossiter–McLaughlin effect. Since the launch of space-based telescopes such as the Kepler space telescope, it has become possible to determine and estimate the obliquity of an extrasolar planet. The rotational flattening of the planet and the entourage of moons and/or rings, which are traceable with high-precision photometry, provide access to the planetary obliquity. Many extrasolar planets have since had their obliquity determined, such as Kepler-186f and Kepler-413b.
Astrophysicists have applied tidal theories to predict the obliquity of extrasolar planets. It has been shown that the obliquities of exoplanets in the habitable zone around low-mass stars tend to be eroded in less than 10⁹ years, which means that they would not have tilt-induced seasons as Earth has.
See also
Axial parallelism
Milankovitch cycles
Polar motion
Pole shift
Rotation around a fixed axis
True polar wander
References
External links
National Space Science Data Center
Obliquity of the Ecliptic Calculator
Precession
Planetary science | 0.76402 | 0.99755 | 0.762149 |
Landslide | Landslides, also known as landslips, or rockslides, are several forms of mass wasting that may include a wide range of ground movements, such as rockfalls, mudflows, shallow or deep-seated slope failures and debris flows. Landslides occur in a variety of environments, characterized by either steep or gentle slope gradients, from mountain ranges to coastal cliffs or even underwater, in which case they are called submarine landslides.
Gravity is the primary driving force for a landslide to occur, but there are other factors affecting slope stability that produce specific conditions that make a slope prone to failure. In many cases, the landslide is triggered by a specific event (such as a heavy rainfall, an earthquake, a slope cut to build a road, and many others), although this is not always identifiable.
Landslides are frequently made worse by human development (such as urban sprawl) and resource exploitation (such as mining and deforestation). Land degradation frequently leads to less stabilization of soil by vegetation. Additionally, global warming and other human impacts on the environment can increase the frequency of natural events (such as extreme weather) which trigger landslides. Landslide mitigation describes the policies and practices for reducing the risk that landslides pose to people and property, thereby reducing the risk of natural disaster.
Causes
Landslides occur when the slope (or a portion of it) undergoes some processes that change its condition from stable to unstable. This is essentially due to a decrease in the shear strength of the slope material, an increase in the shear stress borne by the material, or a combination of the two. A change in the stability of a slope can be caused by a number of factors, acting together or alone. Natural causes of landslides include:
increase in water content (loss of suction) or saturation by rain water infiltration, snow melting, or glaciers melting;
rising of groundwater or increase of pore water pressure (e.g. due to aquifer recharge in rainy seasons, or by rain water infiltration);
increase of hydrostatic pressure in cracks and fractures;
loss or absence of vertical vegetative structure, soil nutrients, and soil structure (e.g. after a wildfire);
erosion of the top of a slope by rivers or sea waves;
physical and chemical weathering (e.g. by repeated freezing and thawing, heating and cooling, salt leaking in the groundwater or mineral dissolution);
ground shaking caused by earthquakes, which can destabilize the slope directly (e.g., by inducing soil liquefaction) or weaken the material and cause cracks that will eventually produce a landslide;
volcanic eruptions;
changes in pore fluid composition;
changes in temperature (seasonal or induced by climate change).
Landslides are aggravated by human activities, such as:
deforestation, cultivation and construction;
vibrations from machinery or traffic;
blasting and mining;
earthwork (e.g. by altering the shape of a slope, or imposing new loads);
in shallow soils, the removal of deep-rooted vegetation that binds colluvium to bedrock;
agricultural or forestry activities (logging), and urbanization, which change the amount of water infiltrating the soil.
temporal variation in land use and land cover (LULC): it includes the human abandonment of farming areas, e.g. due to the economic and social transformations which occurred in Europe after the Second World War. Land degradation and extreme rainfall can increase the frequency of erosion and landslide phenomena.
Types
Hungr-Leroueil-Picarelli classification
In traditional usage, the term landslide has at one time or another been used to cover almost all forms of mass movement of rocks and regolith at the Earth's surface. In 1978, geologist David Varnes noted this imprecise usage and proposed a new, much tighter scheme for the classification of mass movements and subsidence processes. This scheme was later modified by Cruden and Varnes in 1996, and refined by Hutchinson (1988), Hungr et al. (2001), and finally by Hungr, Leroueil and Picarelli (2014). The classification resulting from the latest update is provided below.
Under this classification, six types of movement are recognized. Each type can be seen both in rock and in soil. A fall is a movement of isolated blocks or chunks of soil in free-fall. The term topple refers to blocks coming away by rotation from a vertical face. A slide is the movement of a body of material that generally remains intact while moving over one or several inclined surfaces or thin layers of material (also called shear zones) in which large deformations are concentrated. Slides are also sub-classified by the form of the surface(s) or shear zone(s) on which movement happens. The planes may be broadly parallel to the surface ("planar slides") or spoon-shaped ("rotational slides"). Slides can occur catastrophically, but movement on the surface can also be gradual and progressive. Spreads are a form of subsidence, in which a layer of material cracks, opens up, and expands laterally. Flows are the movement of fluidised material, which can be both dry or rich in water (such as in mud flows). Flows can move imperceptibly for years, or accelerate rapidly and cause disasters. Slope deformations are slow, distributed movements that can affect entire mountain slopes or portions of it. Some landslides are complex in the sense that they feature different movement types in different portions of the moving body, or they evolve from one movement type to another over time. For example, a landslide can initiate as a rock fall or topple and then, as the blocks disintegrate upon the impact, transform into a debris slide or flow. An avalanching effect can also be present, in which the moving mass entrains additional material along its path.
Flows
Slope material that becomes saturated with water may produce a debris flow or mud flow; however, dry debris can also exhibit flow-like movement. Flowing debris or mud may pick up trees, houses and cars, and block bridges and rivers, causing flooding along its path. This phenomenon is particularly hazardous in alpine areas, where narrow gorges and steep valleys are conducive to faster flows. Debris and mud flows may initiate on the slopes or result from the fluidization of landslide material as it gains speed or incorporates further debris and water along its path. River blockages as the flow reaches a main stream can generate temporary dams. As the impoundments fail, a domino effect may be created, with a remarkable growth in the volume of the flowing mass and in its destructive power.
An earthflow is the downslope movement of mostly fine-grained material. Earthflows can move at speeds within a very wide range, from as low as 1 mm/yr to many km/h. Though they are similar to mudflows, they are generally slower-moving and are covered with solid material carried along by the flow from within. Clay, fine sand and silt, and fine-grained pyroclastic material are all susceptible to earthflows. These flows are usually controlled by the pore water pressures within the mass, which must be high enough to produce a low shearing resistance. On slopes, some earthflows may be recognized by their elongated shape, with one or more lobes at their toes. As these lobes spread out, drainage of the mass increases and the margins dry out, lowering the overall velocity of the flow. This process also causes the flow to thicken. Earthflows occur more often during periods of high precipitation, which saturates the ground and builds up water pressures. However, earthflows that keep advancing even during dry seasons are not uncommon. Fissures may develop during the movement of clayey materials, which facilitate the intrusion of water into the moving mass and produce faster responses to precipitation.
A rock avalanche, sometimes referred to as sturzstrom, is a large and fast-moving landslide of the flow type. It is rarer than other types of landslides but it is often very destructive. It exhibits typically a long runout, flowing very far over a low-angle, flat, or even slightly uphill terrain. The mechanisms favoring the long runout can be different, but they typically result in the weakening of the sliding mass as the speed increases. The causes of this weakening are not completely understood. Especially for the largest landslides, it may involve the very quick heating of the shear zone due to friction, which may even cause the water that is present to vaporize and build up a large pressure, producing a sort of hovercraft effect. In some cases, the very high temperature may even cause some of the minerals to melt. During the movement, the rock in the shear zone may also be finely ground, producing a nanometer-size mineral powder that may act as a lubricant, reducing the resistance to motion and promoting larger speeds and longer runouts. The weakening mechanisms in large rock avalanches are similar to those occurring in seismic faults.
Slides
Slides can occur in any rock or soil material and are characterized by the movement of a mass over a planar or curvilinear surface or shear zone.
A debris slide is a type of slide characterized by the chaotic movement of material mixed with water and/or ice. It is usually triggered by the saturation of thickly vegetated slopes which results in an incoherent mixture of broken timber, smaller vegetation and other debris. Debris flows and avalanches differ from debris slides because their movement is fluid-like and generally much more rapid. This is usually a result of lower shear resistances and steeper slopes. Typically, debris slides start with the detachment of large rock fragments high on the slopes, which break apart as they descend.
Clay and silt slides are usually slow but can experience episodic acceleration in response to heavy rainfall or rapid snowmelt. They are often seen on gentle slopes and move over planar surfaces, such as over the underlying bedrock. Failure surfaces can also form within the clay or silt layer itself, and they usually have concave shapes, resulting in rotational slides.
Shallow and deep-seated landslides
Slope failure mechanisms often involve large uncertainties and can be significantly affected by the heterogeneity of soil properties. A landslide in which the sliding surface is located within the soil mantle or weathered bedrock (typically at a depth from a few decimeters to some meters) is called a shallow landslide. Debris slides and debris flows are usually shallow. Shallow landslides often happen in areas with slopes of highly permeable soil on top of soil of low permeability. The low-permeability soil traps water in the shallower soil, generating high pore water pressures. As the top soil becomes filled with water, it can become unstable and slide downslope.
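The role of pore water pressure in shallow failures can be sketched with the classical infinite-slope factor of safety, where FS < 1 indicates that the driving shear stress exceeds the available shear strength; the soil parameters and pore pressures in the example below are illustrative assumptions, not values from any particular case.
```python
import math

def infinite_slope_fs(slope_deg, depth_m, unit_weight_kn_m3=19.0,
                      cohesion_kpa=5.0, friction_angle_deg=30.0,
                      pore_pressure_kpa=0.0):
    # Classical infinite-slope model:
    # FS = (c' + (gamma*z*cos^2(beta) - u) * tan(phi')) / (gamma*z*sin(beta)*cos(beta))
    beta = math.radians(slope_deg)
    phi = math.radians(friction_angle_deg)
    normal_stress = unit_weight_kn_m3 * depth_m * math.cos(beta) ** 2
    shear_stress = unit_weight_kn_m3 * depth_m * math.sin(beta) * math.cos(beta)
    shear_strength = cohesion_kpa + (normal_stress - pore_pressure_kpa) * math.tan(phi)
    return shear_strength / shear_stress

# A 2 m soil layer on a 35-degree slope: marginally stable when dry,
# but driven below FS = 1 by a rise in pore water pressure (e.g. after heavy rain).
print(round(infinite_slope_fs(35, 2.0), 2))                          # ~1.1
print(round(infinite_slope_fs(35, 2.0, pore_pressure_kpa=15.0), 2))  # ~0.6
```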
Deep-seated landslides are those in which the sliding surface is mostly deeply located, for instance well below the maximum rooting depth of trees. They usually involve deep regolith, weathered rock, and/or bedrock and include large slope failures associated with translational, rotational, or complex movements. They tend to form along a plane of weakness such as a fault or bedding plane. They can be visually identified by concave scarps at the top and steep areas at the toe. Deep-seated landslides also shape landscapes over geological timescales and produce sediment that strongly alters the course of fluvial streams.
Related phenomena
An avalanche, similar in mechanism to a landslide, involves a large amount of ice, snow and rock falling quickly down the side of a mountain.
A pyroclastic flow is caused by a collapsing cloud of hot ash, gas and rocks from a volcanic explosion that moves rapidly down an erupting volcano.
Extreme precipitation and flow can cause gully formation in flatter environments not susceptible to landslides.
Resulting tsunamis
Landslides that occur undersea, or have impact into water e.g. significant rockfall or volcanic collapse into the sea, can generate tsunamis. Massive landslides can also generate megatsunamis, which are usually hundreds of meters high. In 1958, one such tsunami occurred in Lituya Bay in Alaska.
Landslide prediction mapping
Landslide hazard analysis and mapping can provide useful information for catastrophic loss reduction, and assist in the development of guidelines for sustainable land-use planning. The analysis is used to identify the factors that are related to landslides, estimate the relative contribution of factors causing slope failures, establish a relation between the factors and landslides, and to predict the landslide hazard in the future based on such a relationship. The factors that have been used for landslide hazard analysis can usually be grouped into geomorphology, geology, land use/land cover, and hydrogeology. Since many factors are considered for landslide hazard mapping, GIS is an appropriate tool because it has functions of collection, storage, manipulation, display, and analysis of large amounts of spatially referenced data which can be handled fast and effectively. Cardenas reported evidence on the exhaustive use of GIS in conjunction of uncertainty modelling tools for landslide mapping. Remote sensing techniques are also highly employed for landslide hazard assessment and analysis. Before and after aerial photographs and satellite imagery are used to gather landslide characteristics, like distribution and classification, and factors like slope, lithology, and land use/land cover to be used to help predict future events. Before and after imagery also helps to reveal how the landscape changed after an event, what may have triggered the landslide, and shows the process of regeneration and recovery.
Using satellite imagery in combination with GIS and on-the-ground studies, it is possible to generate maps of likely occurrences of future landslides. Such maps should show the locations of previous events as well as clearly indicate the probable locations of future events. In general, to predict landslides, one must assume that their occurrence is determined by certain geologic factors, and that future landslides will occur under the same conditions as past events. Therefore, it is necessary to establish a relationship between the geomorphologic conditions in which the past events took place and the expected future conditions.
Natural disasters are a dramatic example of people living in conflict with the environment. Early predictions and warnings are essential for the reduction of property damage and loss of life. Because landslides occur frequently and can represent some of the most destructive forces on earth, it is imperative to have a good understanding as to what causes them and how people can either help prevent them from occurring or simply avoid them when they do occur. Sustainable land management and development is also an essential key to reducing the negative impacts felt by landslides.
GIS offers a superior method for landslide analysis because it allows one to capture, store, manipulate, analyze, and display large amounts of data quickly and effectively. Because so many variables are involved, it is important to be able to overlay the many layers of data to develop a full and accurate portrayal of what is taking place on the Earth's surface. Researchers need to know which variables are the most important factors that trigger landslides in any given location. Using GIS, extremely detailed maps can be generated to show past events and likely future events which have the potential to save lives, property, and money.
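A minimal sketch of the kind of weighted GIS overlay described above, combining normalised factor rasters into a relative susceptibility index; the two toy rasters, the factor choice, and the weights are assumptions that in practice would be calibrated against a landslide inventory (for instance by logistic regression), and NumPy is assumed to be available.
```python
import numpy as np

def susceptibility_index(factor_rasters, weights):
    # Min-max normalise each factor raster, then combine them with a weighted
    # linear overlay into a relative susceptibility index (0 = low, 1 = high).
    norm = [(r - r.min()) / (np.ptp(r) + 1e-9) for r in factor_rasters]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * ri for wi, ri in zip(w, norm))

# Toy 3x3 "rasters": slope angle (degrees) and annual rainfall (mm)
slope = np.array([[10, 25, 40], [12, 30, 45], [8, 20, 35]], dtype=float)
rain = np.array([[800, 900, 1200], [850, 950, 1300], [780, 880, 1100]], dtype=float)
print(susceptibility_index([slope, rain], weights=[0.6, 0.4]).round(2))
```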
Since the 1990s, GIS has also been used successfully in conjunction with decision support systems to show real-time risk evaluations on a map, based on monitoring data gathered in the area of the Val Pola disaster (Italy).
Prehistoric landslides
Storegga Slide, some 8,000 years ago off the western coast of Norway. Caused massive tsunamis in Doggerland and other areas connected to the North Sea. A total volume of debris was involved; comparable to a thick area the size of Iceland. The landslide is thought to be among the largest in history.
Landslide which moved Heart Mountain to its current location, the largest continental landslide discovered so far. In the 48 million years since the slide occurred, erosion has removed most of the portion of the slide.
Flims Rockslide, about , Switzerland, some 10,000 years ago in post-glacial Pleistocene/Holocene, the largest so far described in the Alps and on dry land that can be easily identified in a modestly eroded state.
The landslide around 200 BC which formed Lake Waikaremoana on the North Island of New Zealand, where a large block of the Ngamoko Range slid and dammed a gorge of Waikaretaheke River, forming a natural reservoir up to deep.
Cheekye Fan, British Columbia, Canada, about , Late Pleistocene in age.
The Manang-Braga rock avalanche/debris flow may have formed Marsyangdi Valley in the Annapurna Region, Nepal, during an interstadial period belonging to the last glacial period. Over of material are estimated to have been moved in the single event, making it one of the largest continental landslides.
Tsergo Ri landslide, a massive slope failure north of Kathmandu, Nepal, involving an estimated . Prior to this landslide the mountain may have been the world's 15th mountain above .
Historical landslides
The 1806 Goldau landslide on 2 September 1806
The Cap Diamant Québec rockslide on 19 September 1889
Frank Slide, Turtle Mountain, Alberta, Canada, on 29 April 1903
Khait landslide, Khait, Tajikistan, Soviet Union, on 10 July 1949
A magnitude 7.5 earthquake in Yellowstone Park (17 August 1959) caused a landslide that blocked the Madison River, and created Quake Lake.
Monte Toc landslide falling into the Vajont Dam basin in Italy, causing a megatsunami and about 2000 deaths, on 9 October 1963
Hope Slide landslide near Hope, British Columbia on 9 January 1965.
The 1966 Aberfan disaster
Tuve landslide in Gothenburg, Sweden on 30 November 1977.
The 1979 Abbotsford landslip, Dunedin, New Zealand on 8 August 1979.
The eruption of Mount St. Helens (18 May 1980) caused an enormous landslide when the top 1300 feet of the volcano suddenly gave way.
Val Pola landslide during Valtellina disaster (1987) Italy
Thredbo landslide, Australia on 30 July 1997, destroyed hostel.
Vargas mudslides, due to heavy rains in Vargas State, Venezuela, in December, 1999, causing tens of thousands of deaths.
2005 La Conchita landslide in Ventura, California causing 10 deaths.
2006 Southern Leyte mudslide in Saint Bernard, Southern Leyte, causing 1,126 deaths and buried the village of Guinsaugon.
2007 Chittagong mudslide, in Chittagong, Bangladesh, on 11 June 2007.
2008 Cairo landslide on 6 September 2008.
The 2009 Peloritani Mountains disaster caused 37 deaths, on October 1.
The 2010 Uganda landslide caused over 100 deaths following heavy rain in Bududa region.
Zhouqu county mudslide in Gansu, China on 8 August 2010.
Devil's Slide, an ongoing landslide in San Mateo County, California
2011 Rio de Janeiro landslide in Rio de Janeiro, Brazil on 11 January 2011, causing 610 deaths.
2014 Pune landslide, in Pune, India.
2014 Oso mudslide, in Oso, Washington
2017 Mocoa landslide, in Mocoa, Colombia
2022 Ischia landslide
2024 Gofa landslides, in Gofa, Ethiopia
2024 Wayanad landslides, in Wayanad, Kerala, India
Extraterrestrial landslides
Evidence of past landslides has been detected on many bodies in the solar system, but since most observations are made by probes that only observe for a limited time and most bodies in the solar system appear to be geologically inactive not many landslides are known to have happened in recent times. Both Venus and Mars have been subject to long-term mapping by orbiting satellites, and examples of landslides have been observed on both planets.
Landslide mitigation
Climate-change impact on landslides
Climate-change impacts on temperature, on average rainfall and rainfall extremes, and on evapotranspiration may affect landslide distribution, frequency and intensity (62). However, this impact shows strong variability in different areas (63). Therefore, the effects of climate change on landslides need to be studied on a regional scale.
Climate change can have both positive and negative impacts on landslide occurrence:
Temperature rise may increase evapotranspiration, leading to a reduction in soil moisture, and may stimulate vegetation growth, partly due to the increase of CO2 in the atmosphere. Both effects may reduce landslides in some conditions.
On the other hand, temperature rise can increase landslide activity due to:
the acceleration of snowmelt and an increase of rain on snow during spring, leading to strong infiltration events (64).
Permafrost degradation that reduces the cohesion of soils and rock masses due to the loss of interstitial ice (65). This mainly occurs at high elevation.
Glacier retreat that has the dual effect of relieving mountain slopes and increasing their steepness.
Since the average precipitation is expected to decrease or increase regionally (63), rainfall induced landslides may change accordingly, due to changes in infiltration, groundwater levels and river bank erosion.
Weather extremes, including heavy precipitation, are expected to increase due to climate change (63). This worsens landslide activity, owing to focused infiltration into soil and rock (66) and an increase in runoff events, which may trigger debris flows.
See also
Avalanche
California landslides
Deformation monitoring
Earthquake engineering
Geotechnical engineering
Huayco
Landslide dam
Natural disaster
Railway slide fence
Rockslide
Sector collapse
Slump (geology)
Urban search and rescue
Washaway
References
External links
United States Geological Survey site (archived 25 March 2002)
British Geological Survey landslides site
British Geological Survey National Landslide Database
International Consortium on Landslides
Environmental soil science
Hazards of outdoor recreation
Natural disasters
no:Skred | 0.762988 | 0.998866 | 0.762123 |
The Road to Reality | The Road to Reality: A Complete Guide to the Laws of the Universe is a book on modern physics by the British mathematical physicist Roger Penrose, published in 2004. It covers the basics of the Standard Model of particle physics, discussing general relativity and quantum mechanics, and discusses the possible unification of these two theories.
Overview
The book discusses the physical world. Many fields that 19th century scientists believed were separate, such as electricity and magnetism, are aspects of more fundamental properties. Some texts, both popular and university level, introduce these topics as separate concepts, and then reveal their combination much later. The Road to Reality reverses this process, first expounding the underlying mathematics of space–time, then showing how electromagnetism and other phenomena fall out fully formed.
The book is just over 1100 pages, of which the first 383 are dedicated to mathematics—Penrose's goal is to acquaint inquisitive readers with the mathematical tools needed to understand the remainder of the book in depth. Physics enters the discussion on page 383 with the topic of spacetime. From there it moves on to fields in spacetime, deriving the classical electrical and magnetic forces from first principles; that is, if one lives in spacetime of a particular sort, these fields develop naturally as a consequence. Energy and conservation laws appear in the discussion of Lagrangians and Hamiltonians, before moving on to a full discussion of quantum physics, particle theory and quantum field theory. A discussion of the measurement problem in quantum mechanics is given a full chapter; superstrings are given a chapter near the end of the book, as are loop gravity and twistor theory. The book ends with an exploration of other theories and possible ways forward.
The final chapters reflect Penrose's personal perspective, which differs in some respects from what he regards as the current fashion among theoretical physicists. He is skeptical about string theory, to which he prefers loop quantum gravity. He is optimistic about his own approach, twistor theory. He also holds some controversial views about the role of consciousness in physics, as laid out in his earlier books (see Shadows of the Mind).
Reception
According to Brian Blank:
According to Nicholas Lezard:
According to Lee Smolin:
According to Frank Wilczek:
Editions
Jonathan Cape (1st edition), 2004, hardcover,
Alfred A. Knopf (publisher), February 2005, hardcover,
Vintage Books, 2005, softcover,
Vintage Books, 2006, softcover,
Vintage Books, 2007, softcover,
References
External links
Site with errata and solutions to some exercises from the first few chapters. Not sponsored by Penrose.
Archive of the Road to Reality internet forum, now defunct.
Solutions for many Road to Reality exercises.
2004 non-fiction books
Alfred A. Knopf books
Cosmology books
Mathematics books
Popular physics books
Quantum mind
String theory books
Works by Roger Penrose | 0.778926 | 0.978409 | 0.762109 |
Thrust vectoring | Thrust vectoring, also known as thrust vector control (TVC), is the ability of an aircraft, rocket or other vehicle to manipulate the direction of the thrust from its engine(s) or motor(s) to control the attitude or angular velocity of the vehicle.
In rocketry and ballistic missiles that fly outside the atmosphere, aerodynamic control surfaces are ineffective, so thrust vectoring is the primary means of attitude control. Exhaust vanes and gimbaled engines were used in the 1930s by Robert Goddard.
For aircraft, the method was originally envisaged to provide upward vertical thrust as a means to give aircraft vertical (VTOL) or short (STOL) takeoff and landing ability. Subsequently, it was realized that using vectored thrust in combat situations enabled aircraft to perform various maneuvers not available to conventional-engined planes. To perform turns, aircraft that use no thrust vectoring must rely on aerodynamic control surfaces only, such as ailerons or elevator; aircraft with vectoring must still use control surfaces, but to a lesser extent.
In missile literature originating from Russian sources, thrust vectoring is referred to as gas-dynamic steering or gas-dynamic control.
Methods
Rockets and ballistic missiles
Nominally, the line of action of the thrust vector of a rocket nozzle passes through the vehicle's centre of mass, generating zero net torque about the mass centre. It is possible to generate pitch and yaw moments by deflecting the main rocket thrust vector so that it does not pass through the mass centre. Because the line of action is generally oriented nearly parallel to the roll axis, roll control usually requires the use of two or more separately hinged nozzles or a separate system altogether, such as fins, or vanes in the exhaust plume of the rocket engine, deflecting the main thrust. Thrust vector control (TVC) is only possible when the propulsion system is creating thrust; separate mechanisms are required for attitude and flight path control during other stages of flight.
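A minimal sketch of the geometry described above: deflecting the nozzle by an angle delta produces a lateral thrust component T·sin(delta) acting a lever arm L behind the centre of mass, and hence a pitching (or yawing) moment T·sin(delta)·L; the thrust, deflection angle, and lever arm used in the example are illustrative assumptions.
```python
import math

def tvc_pitch_moment(thrust_n, deflection_deg, nozzle_to_cg_m):
    # Lateral thrust component from a small nozzle deflection, and the
    # resulting moment about the centre of mass: M = T * sin(delta) * L.
    delta = math.radians(deflection_deg)
    lateral_force_n = thrust_n * math.sin(delta)
    axial_force_n = thrust_n * math.cos(delta)
    return lateral_force_n * nozzle_to_cg_m, axial_force_n

moment_nm, axial_n = tvc_pitch_moment(thrust_n=1.0e6, deflection_deg=3.0, nozzle_to_cg_m=10.0)
print(f"pitch moment ~ {moment_nm / 1e3:.0f} kN*m, axial thrust retained ~ {axial_n / 1e6:.3f} MN")
```
Note that the axial component T·cos(delta) is barely reduced at small deflection angles, which is why modest gimbal travel is enough for attitude control without a significant thrust penalty.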
Thrust vectoring can be achieved by four basic means:
Gimbaled engine(s) or nozzle(s)
Reactive fluid injection
Auxiliary "Vernier" thrusters
Exhaust vanes, also known as jet vanes
Gimbaled thrust
Thrust vectoring for many liquid rockets is achieved by gimbaling the whole engine. This involves moving the entire combustion chamber and outer engine bell as on the Titan II's twin first-stage motors, or even the entire engine assembly including the related fuel and oxidizer pumps. The Saturn V and the Space Shuttle used gimbaled engines.
A later method developed for solid propellant ballistic missiles achieves thrust vectoring by deflecting only the nozzle of the rocket using electric actuators or hydraulic cylinders. The nozzle is attached to the missile via a ball joint with a hole in the centre, or a flexible seal made of a thermally resistant material, the latter generally requiring more torque and a higher power actuation system. The Trident C4 and D5 systems are controlled via hydraulically actuated nozzles. The STS SRBs used gimbaled nozzles.
Propellant injection
Another method of thrust vectoring used on solid propellant ballistic missiles is liquid injection, in which the rocket nozzle is fixed but a fluid is introduced into the exhaust flow from injectors mounted around the aft end of the missile. If the liquid is injected on only one side of the missile, it modifies that side of the exhaust plume, resulting in different thrust on that side and thus an asymmetric net force on the missile. This was the control system used on the Minuteman II and the early SLBMs of the United States Navy.
Vernier thrusters
An effect similar to thrust vectoring can be produced with multiple vernier thrusters, small auxiliary combustion chambers which lack their own turbopumps and can gimbal on one axis. These were used on the Atlas and R-7 missiles and are still used on the Soyuz rocket, which is descended from the R-7, but are seldom used on new designs due to their complexity and weight. These are distinct from reaction control system thrusters, which are fixed and independent rocket engines used for maneuvering in space.
Exhaust vanes
One of the earliest methods of thrust vectoring in rocket engines was to place vanes in the engine's exhaust stream. These exhaust vanes or jet vanes allow the thrust to be deflected without moving any parts of the engine, but reduce the rocket's efficiency. They have the benefit of allowing roll control with only a single engine, which nozzle gimbaling does not. The V-2 used graphite exhaust vanes and aerodynamic vanes, as did the Redstone, derived from the V-2. The Sapphire and Nexo rockets of the amateur group Copenhagen Suborbitals provide a modern example of jet vanes. Jet vanes must be made of a refractory material or actively cooled to prevent them from melting. Sapphire used solid copper vanes for copper's high heat capacity and thermal conductivity, and Nexo used graphite for its high melting point, but unless actively cooled, jet vanes will undergo significant erosion. This, combined with jet vanes' inefficiency, mostly precludes their use in new rockets.
Tactical missiles and small projectiles
Some smaller sized atmospheric tactical missiles, such as the AIM-9X Sidewinder, eschew flight control surfaces and instead use mechanical vanes to deflect rocket motor exhaust to one side.
By using mechanical vanes to deflect the exhaust of the missile's rocket motor, a missile can steer itself even shortly after being launched (when the missile is moving slowly, before it has reached a high speed). This is because even though the missile is moving at a low speed, the rocket motor's exhaust has a high enough speed to provide sufficient forces on the mechanical vanes. Thus, thrust vectoring can reduce a missile's minimum range. For example, anti-tank missiles such as the Eryx and the PARS 3 LR use thrust vectoring for this reason.
Some other projectiles that use thrust-vectoring:
9M330
Strix mortar round uses twelve midsection lateral thruster rockets to provide terminal course corrections
AAD uses jet vanes
Astra (missile)
Akash (missile)
BrahMos
QRSAM uses jet vanes
MPATGM uses jet vanes
AAM-5
Barak 8 uses jet vanes
A-Darter uses jet vanes
ASRAAM uses jet vanes
R-73 (missile) uses jet vanes
HQ-9 uses jet vanes
PL-10 (ASR) uses jet vanes
MICA (missile) uses jet vanes
PARS 3 LR uses jet vanes
IRIS-T
Aster missile family combines aerodynamic control and the direct thrust vector control called "PIF-PAF"
AIM-9X uses four jet vanes inside the exhaust that move as the fins move.
9M96E uses a gas-dynamic control system that enables maneuvering at altitudes of up to 35 km at accelerations of over 20 g, which permits engagement of non-strategic ballistic missiles.
9K720 Iskander is controlled during the whole flight with gas-dynamic and aerodynamic control surfaces.
Dongfeng subclasses/JL-2/JL-3 ballistic missiles (allegedly fitted with TVC control)
Aircraft
Most currently operational vectored thrust aircraft use turbofans with rotating nozzles or vanes to deflect the exhaust stream. This method allows designs to deflect thrust through as much as 90 degrees relative to the aircraft centreline. If an aircraft uses thrust vectoring for VTOL operations the engine must be sized for vertical lift, rather than normal flight, which results in a weight penalty. Afterburning (or Plenum Chamber Burning, PCB, in the bypass stream) is difficult to incorporate and is impractical for take-off and landing thrust vectoring, because the very hot exhaust can damage runway surfaces. Without afterburning it is hard to reach supersonic flight speeds. A PCB engine, the Bristol Siddeley BS100, was cancelled in 1965.
Tiltrotor aircraft vector thrust via rotating turboprop engine nacelles. The mechanical complexities of this design are quite troublesome, including twisting flexible internal components and driveshaft power transfer between engines. Most current tiltrotor designs feature two rotors in a side-by-side configuration. If such a craft is flown in a way where it enters a vortex ring state, one of the rotors will always enter slightly before the other, causing the aircraft to perform a drastic and unplanned roll.
Thrust vectoring is also used as a control mechanism for airships. An early application was the British Army airship Delta, which first flew in 1912. It was later used on HMA (His Majesty's Airship) No. 9r, a British rigid airship that first flew in 1916, and on the twin 1930s-era U.S. Navy rigid airships USS Akron and USS Macon, which were used as airborne aircraft carriers. A similar form of thrust vectoring is also particularly valuable today for the control of modern non-rigid airships. In this use, most of the load is usually supported by buoyancy and vectored thrust is used to control the motion of the aircraft. The first airship that used a control system based on pressurized air was Enrico Forlanini's Omnia Dir in the 1930s.
A design for a jet incorporating thrust vectoring was submitted in 1949 to the British Air Ministry by Percy Walwyn; Walwyn's drawings are preserved at the National Aerospace Library at Farnborough. Official interest was curtailed when it was realised that the designer was a patient in a mental hospital.
Now being researched, Fluidic Thrust Vectoring (FTV) diverts thrust via secondary fluidic injections. Tests show that air forced into a jet engine exhaust stream can deflect thrust up to 15 degrees. Such nozzles are desirable for their lower mass and cost (up to 50% less), inertia (for faster, stronger control response), complexity (mechanically simpler, fewer or no moving parts or surfaces, less maintenance), and radar cross section for stealth. This will likely be used in many unmanned aerial vehicles (UAVs) and sixth-generation fighter aircraft.
Vectoring nozzles
Thrust-vectoring flight control (TVFC) is obtained through deflection of the aircraft jets in some or all of the pitch, yaw and roll directions. In the extreme, deflection of the jets in yaw, pitch and roll creates desired forces and moments enabling complete directional control of the aircraft flight path without the implementation of the conventional aerodynamic flight controls (CAFC). TVFC can also be used to hold stationary flight in areas of the flight envelope where the main aerodynamic surfaces are stalled. TVFC includes control of STOVL aircraft during the hover and during the transition between hover and forward speeds below 50 knots where aerodynamic surfaces are ineffective.
When vectored thrust control uses a single propelling jet, as with a single-engined aircraft, it may not be possible to produce rolling moments. An example is an afterburning supersonic nozzle where nozzle functions are throat area, exit area, pitch vectoring and yaw vectoring. These functions are controlled by four separate actuators. A simpler variant using only three actuators would not have independent exit area control.
When TVFC is implemented to complement CAFC, agility and safety of the aircraft are maximized. Increased safety may occur in the event of malfunctioning CAFC as a result of battle damage.
To implement TVFC a variety of nozzles both mechanical and fluidic may be applied. This includes convergent and convergent-divergent nozzles that may be fixed or geometrically variable. It also includes variable mechanisms within a fixed nozzle, such as rotating cascades and rotating exit vanes. Within these aircraft nozzles, the geometry itself may vary from two-dimensional (2-D) to axisymmetric or elliptic. The number of nozzles on a given aircraft to achieve TVFC can vary from one on a CTOL aircraft to a minimum of four in the case of STOVL aircraft.
Definitions
Axisymmetric Nozzles with circular exits.
Conventional aerodynamic flight control (CAFC) Pitch, yaw-pitch, yaw-pitch-roll or any other combination of aircraft control through aerodynamic deflection using rudders, flaps, elevators and/or ailerons.
Converging-diverging nozzle (C-D) Generally used on supersonic jet aircraft where nozzle pressure ratio (npr) > 3. The engine exhaust is expanded through a converging section to achieve Mach 1 and then expanded through a diverging section to achieve supersonic speed at the exit plane, or less at low npr.
Converging nozzle Generally used on subsonic and transonic jet aircraft where npr < 3. The engine exhaust is expanded through a converging section to achieve Mach 1 at the exit plane, or less at low npr. (A short worked example of these pressure-ratio thresholds follows the definitions below.)
Effective Vectoring Angle The average angle of deflection of the jet stream centreline at any given moment in time.
Fixed nozzle A thrust-vectoring nozzle of invariant geometry or one of variant geometry maintaining a constant geometric area ratio, during vectoring. This will also be referred to as a civil aircraft nozzle and represents the nozzle thrust vectoring control applicable to passenger, transport, cargo and other subsonic aircraft.
Fluidic thrust vectoring The manipulation or control of the exhaust flow with the use of a secondary air source, typically bleed air from the engine compressor or fan.
Geometric vectoring angle Geometric centreline of the nozzle during vectoring. For those nozzles vectored at the geometric throat and beyond, this can differ considerably from the effective vectoring angle.
Three-bearing swivel duct nozzle (3BSD) Three angled segments of engine exhaust duct rotate relative to one another about duct centreline to produce nozzle thrust axis pitch and yaw.
Three-dimensional (3-D) Nozzles with multi-axis or pitch and yaw control.
Thrust vectoring (TV) The deflection of the jet away from the body-axis through the implementation of a flexible nozzle, flaps, paddles, auxiliary fluid mechanics or similar methods.
Thrust-vectoring flight control (TVFC) Pitch, yaw-pitch, yaw-pitch-roll, or any other combination of aircraft control through deflection of thrust generally issuing from an air-breathing turbofan engine.
Two-dimensional (2-D) Nozzles with square or rectangular exits. In addition to the geometrical shape 2-D can also refer to the degree-of-freedom (DOF) controlled which is single axis, or pitch-only, in which case round nozzles are included.
Two-dimensional converging-diverging (2-D C-D) Square, rectangular, or round supersonic nozzles on fighter aircraft with pitch-only control.
Variable nozzle A thrust-vectoring nozzle of variable geometry maintaining a constant, or allowing a variable, effective nozzle area ratio, during vectoring. This will also be referred to as a military aircraft nozzle as it represents the nozzle thrust vectoring control applicable to fighter and other supersonic aircraft with afterburning. The convergent section may be fully controlled with the divergent section following a pre-determined relationship to the convergent throat area. Alternatively, the throat area and the exit area may be controlled independently, to allow the divergent section to match the exact flight condition.
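The npr figures quoted in the converging and converging-diverging nozzle definitions above can be related to the standard isentropic-flow relations for a perfect gas. The Python sketch below is an illustrative aside, not part of the original text; it assumes γ = 1.4 for air and computes the critical pressure ratio at which a converging nozzle chokes (about 1.9), along with the subsonic exit Mach number at lower npr. Well above this, around npr > 3, a converging-diverging nozzle is needed to expand the flow to supersonic exit speeds.

```python
import math

def critical_pressure_ratio(gamma=1.4):
    """Nozzle pressure ratio above which a converging nozzle chokes (exit Mach 1)."""
    return ((gamma + 1.0) / 2.0) ** (gamma / (gamma - 1.0))

def exit_mach_unchoked(npr, gamma=1.4):
    """Exit Mach number of an unchoked converging nozzle, from the isentropic
    relation p0/p = (1 + (gamma - 1)/2 * M^2)^(gamma/(gamma - 1))."""
    return math.sqrt(2.0 / (gamma - 1.0) * (npr ** ((gamma - 1.0) / gamma) - 1.0))

npr_crit = critical_pressure_ratio()  # roughly 1.89 for air
print(f"critical npr for choking: {npr_crit:.2f}")
print(f"exit Mach at npr = 1.5: {exit_mach_unchoked(1.5):.2f}")  # subsonic exit
```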
Methods of nozzle control
Geometric area ratios Maintaining a fixed geometric area ratio from the throat to the exit during vectoring. The effective throat is constricted as the vectoring angle increases.
Effective area ratios Maintaining a fixed effective area ratio from the throat to the exit during vectoring. The geometric throat is opened as the vectoring angle increases.
Differential area ratios Maximizing nozzle expansion efficiency generally through predicting the optimal effective area as a function of the mass flow rate.
Methods of thrust vectoring
Type I Nozzles whose baseframe is mechanically rotated before the geometrical throat.
Type II Nozzles whose baseframe is mechanically rotated at the geometrical throat.
Type III Nozzles whose baseframe is not rotated. Rather, the addition of mechanical deflection post-exit vanes or paddles enables jet deflection.
Type IV Jet deflection through counter-flowing or co-flowing (by shock-vector control or throat shifting) auxiliary jet streams. Fluid-based jet deflection using secondary fluidic injection.
Additional type Nozzles whose upstream exhaust duct consists of wedge-shaped segments which rotate relative to each other about the duct centreline.
Operational examples
Aircraft
An example of 2D thrust vectoring is the Rolls-Royce Pegasus engine used in the Hawker Siddeley Harrier, as well as in the AV-8B Harrier II variant.
Widespread use of thrust vectoring for enhanced maneuverability in Western production-model fighter aircraft did not occur until the deployment of the Lockheed Martin F-22 Raptor fifth-generation jet fighter in 2005, with its afterburning, 2D thrust-vectoring Pratt & Whitney F119 turbofan.
While the Lockheed Martin F-35 Lightning II uses a conventional afterburning turbofan (Pratt & Whitney F135) to facilitate supersonic operation, its F-35B variant, developed for joint usage by the US Marine Corps, Royal Air Force, Royal Navy, and Italian Navy, also incorporates a vertically mounted, low-pressure shaft-driven remote fan, which is driven from the engine through a clutch during landing. Both the exhaust from this fan and the main engine's fan are deflected by thrust vectoring nozzles, to provide the appropriate combination of lift and propulsive thrust. It is not conceived for enhanced maneuverability in combat, only for VTOL operation, and the F-35A and F-35C do not use thrust vectoring at all.
The Sukhoi Su-30MKI, produced by India under licence at Hindustan Aeronautics Limited, is in active service with the Indian Air Force. The TVC makes the aircraft highly maneuverable, capable of near-zero airspeed at high angles of attack without stalling, and dynamic aerobatics at low speeds. The Su-30MKI is powered by two Al-31FP afterburning turbofans. The TVC nozzles of the MKI are mounted 32 degrees outward to longitudinal engine axis (i.e. in the horizontal plane) and can be deflected ±15 degrees in the vertical plane. This produces a corkscrew effect, greatly enhancing the turning capability of the aircraft.
A few computational studies have added thrust vectoring to existing passenger airliners, such as the Boeing 727 and 747, to prevent catastrophic failures, while the experimental X-48C may be jet-steered in the future.
Other
Examples of rockets and missiles which use thrust vectoring include both large systems such as the Space Shuttle Solid Rocket Booster (SRB), S-300P (SA-10) surface-to-air missile, UGM-27 Polaris nuclear ballistic missile and RT-23 (SS-24) ballistic missile and smaller battlefield weapons such as Swingfire.
The principles of air thrust vectoring have recently been adapted to military sea applications in the form of fast water-jet steering that provides super-agility. Examples are the fast patrol boat Dvora Mk-III, the Hamina class missile boat and the US Navy's littoral combat ships.
List of vectored thrust aircraft
Thrust vectoring can convey two main benefits: VTOL/STOL, and higher maneuverability. Aircraft are usually optimized to maximally exploit one benefit, though they may gain in the other.
For VTOL ability
Bell Model 65
Bell X-14
Bell Boeing V-22 Osprey
Boeing X-32
Dornier Do 31
EWR VJ 101
Harrier jump jet
British Aerospace Harrier II
British Aerospace Sea Harrier
Hawker Siddeley Harrier
McDonnell Douglas AV-8B Harrier II
Hawker Siddeley Kestrel
Hawker Siddeley P.1127
Lockheed Martin F-35B Lightning II
VFW VAK 191B
Yakovlev Yak-38
Yakovlev Yak-141
For higher maneuverability
Vectoring in two dimensions
McDonnell Douglas F-15 STOL/MTD (experimental)
Lockheed Martin F-22 Raptor (pitch only)
Chengdu J-20 (earlier variants with WS-10C, pitch and roll)
McDonnell Douglas X-36 (yaw only)
Boeing X-45A (yaw only)
Me 163 B experimentally used a rocket steering paddle for the yaw axis
Sukhoi Su-30MKI /MKM/ MKA/ SM (pitch and roll)
Sukhoi Su-35S
Vectoring in three dimensions
McDonnell Douglas F-15 ACTIVE (experimental)
Mitsubishi X-2 (experimental)
McDonnell Douglas F-18 HARV (experimental)
General Dynamics F-16 VISTA (experimental)
Rockwell-MBB X-31 (experimental)
Chengdu J-10B TVC testbed (experimental)
Mikoyan MiG-35 (MiG-29OVT, not in production aircraft)
Sukhoi Su-37 (demonstrator)
Sukhoi Su-47 (experimental)
Sukhoi Su-57
Airships
23 class airship, a series of British World War I airships
Airship Industries Skyship 600 modern airship
Zeppelin NT, a modern thrust-vectoring airship
Helicopters
Sikorsky XV-2
NOTAR
See also
Index of aviation articles
Gimbaled thrust
Reverse thrust
Tiltjet
Tiltrotor
Tiltwing
Tail-sitter
VTOL
References
Wilson, Erich A., "An Introduction to Thrust-Vectored Aircraft Nozzles".
External links
Application of Thrust Vectoring to Reduce Vertical Tail Size
Jet engines
Airship technology
Reciprocating motion
Reciprocating motion, also called reciprocation, is a repetitive up-and-down or back-and-forth linear motion. It is found in a wide range of mechanisms, including reciprocating engines and pumps. The two opposite motions that comprise a single reciprocation cycle are called strokes.
A crank can be used to convert circular motion into reciprocating motion, or conversely to turn reciprocating motion into circular motion.
For example, inside an internal combustion engine (a type of reciprocating engine), the expansion of burning fuel in the cylinders periodically pushes the piston down, which, through the connecting rod, turns the crankshaft. The continuing rotation of the crankshaft drives the piston back up, ready for the next cycle. The piston moves in a reciprocating motion, which is converted into the circular motion of the crankshaft, which ultimately propels the vehicle or does other useful work.
The reciprocating motion of a pump piston is close to, but different from, sinusoidal simple harmonic motion. Assuming the wheel is driven at a perfectly constant rotational velocity, the point on the crankshaft which connects to the connecting rod rotates smoothly at a constant velocity in a circle. Thus, the displacement of that point is indeed exactly sinusoidal by definition. However, during the cycle, the angle of the connecting rod changes continuously, so the horizontal displacement of the "far" end of the connecting rod (i.e., connected to the piston) differs slightly from sinusoidal. Additionally, if the wheel is not spinning with perfect constant rotational velocity, such as in a steam locomotive starting up from a stop, the motion will be even less sinusoidal.
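The deviation from a pure sinusoid can be made concrete with the standard slider-crank displacement relation x(θ) = r·cos θ + sqrt(l² − r²·sin²θ), where r is the crank radius and l the connecting-rod length. The short Python sketch below uses arbitrary example dimensions (an assumption for illustration only) to compare the piston position with what purely sinusoidal motion would give.

```python
import math

def piston_position(theta, crank_radius, rod_length):
    """Slider-crank piston displacement measured from the crank axis."""
    return (crank_radius * math.cos(theta)
            + math.sqrt(rod_length**2 - (crank_radius * math.sin(theta))**2))

r, l = 0.05, 0.15  # example crank radius and connecting-rod length, metres
for deg in range(0, 181, 45):
    theta = math.radians(deg)
    exact = piston_position(theta, r, l)
    sinusoid = l + r * math.cos(theta)  # what pure simple harmonic motion would give
    print(f"{deg:3d} deg  exact = {exact:.4f} m  sinusoidal = {sinusoid:.4f} m")
```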
See also
References
Mechanical engineering
Viscoelasticity
In materials science and continuum mechanics, viscoelasticity is the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation. Viscous materials, like water, resist both shear flow and strain linearly with time when a stress is applied. Elastic materials strain when stretched and immediately return to their original state once the stress is removed.
Viscoelastic materials have elements of both of these properties and, as such, exhibit time-dependent strain. Whereas elasticity is usually the result of bond stretching along crystallographic planes in an ordered solid, viscosity is the result of the diffusion of atoms or molecules inside an amorphous material.
Background
In the nineteenth century, physicists such as James Clerk Maxwell, Ludwig Boltzmann, and Lord Kelvin researched and experimented with creep and recovery of glasses, metals, and rubbers. Viscoelasticity was further examined in the late twentieth century when synthetic polymers were engineered and used in a variety of applications. Viscoelasticity calculations depend heavily on the viscosity variable, η. The inverse of η is also known as fluidity, φ. The value of either can be derived as a function of temperature or as a given value (i.e. for a dashpot).
Depending on the change of strain rate versus stress inside a material, the viscosity can be categorized as having a linear, non-linear, or plastic response. When a material exhibits a linear response it is categorized as a Newtonian material. In this case the stress is linearly proportional to the strain rate. If the material exhibits a non-linear response to the strain rate, it is categorized as non-Newtonian fluid. There is also an interesting case where the viscosity decreases as the shear/strain rate remains constant. A material which exhibits this type of behavior is known as thixotropic. In addition, when the stress is independent of this strain rate, the material exhibits plastic deformation. Many viscoelastic materials exhibit rubber like behavior explained by the thermodynamic theory of polymer elasticity.
Some examples of viscoelastic materials are amorphous polymers, semicrystalline polymers, biopolymers, metals at very high temperatures, and bitumen materials. Cracking occurs when the strain is applied quickly and outside of the elastic limit. Ligaments and tendons are viscoelastic, so the extent of the potential damage to them depends on both the rate of the change of their length and the force applied.
A viscoelastic material has the following properties:
hysteresis is seen in the stress–strain curve
stress relaxation occurs: step constant strain causes decreasing stress
creep occurs: step constant stress causes increasing strain
its stiffness depends on the strain rate or the stress rate
Elastic versus viscoelastic behavior
Unlike purely elastic substances, a viscoelastic substance has an elastic component and a viscous component. The viscosity of a viscoelastic substance gives the substance a strain rate dependence on time. Purely elastic materials do not dissipate energy (heat) when a load is applied, then removed. However, a viscoelastic substance dissipates energy when a load is applied, then removed. Hysteresis is observed in the stress–strain curve, with the area of the loop being equal to the energy lost during the loading cycle. Since viscosity is the resistance to thermally activated plastic deformation, a viscous material will lose energy through a loading cycle. Plastic deformation results in lost energy, which is uncharacteristic of a purely elastic material's reaction to a loading cycle.
Specifically, viscoelasticity is a molecular rearrangement. When a stress is applied to a viscoelastic material such as a polymer, parts of the long polymer chain change positions. This movement or rearrangement is called creep. Polymers remain a solid material even when these parts of their chains are rearranging in order to accommodate the stress, and as this occurs, it creates a back stress in the material. When the back stress is the same magnitude as the applied stress, the material no longer creeps. When the original stress is taken away, the accumulated back stresses will cause the polymer to return to its original form. The material creeps, which gives the prefix visco-, and the material fully recovers, which gives the suffix -elasticity.
Linear viscoelasticity and nonlinear viscoelasticity
Linear viscoelasticity is when the function is separable in both creep response and load. All linear viscoelastic models can be represented by a Volterra equation connecting stress and strain:
ε(t) = σ(t)/E_inst,creep + ∫₀ᵗ K(t − t′) (dσ(t′)/dt′) dt′
or
σ(t) = ε(t)·E_inst,relax + ∫₀ᵗ F(t − t′) (dε(t′)/dt′) dt′
where
t is time
σ(t) is stress
ε(t) is strain
E_inst,creep and E_inst,relax are instantaneous elastic moduli for creep and relaxation
K(t) is the creep function
F(t) is the relaxation function
Linear viscoelasticity is usually applicable only for small deformations.
Nonlinear viscoelasticity is when the function is not separable. It usually happens when the deformations are large or if the material changes its properties under deformations. Nonlinear viscoelasticity also elucidates observed phenomena such as normal stresses, shear thinning, and extensional thickening in viscoelastic fluids.
An anelastic material is a special case of a viscoelastic material: an anelastic material will fully recover to its original state on the removal of load.
When distinguishing between elastic, viscous, and viscoelastic forms of behavior, it is helpful to reference the time scale of the measurement relative to the relaxation times of the material being observed, known as the Deborah number (De):
De = λ / t
where
λ is the relaxation time of the material
t is time
Dynamic modulus
Viscoelasticity is studied using dynamic mechanical analysis, applying a small oscillatory stress and measuring the resulting strain.
Purely elastic materials have stress and strain in phase, so that the response of one caused by the other is immediate.
In purely viscous materials, strain lags stress by a 90 degree phase.
Viscoelastic materials exhibit behavior somewhere in the middle of these two types of material, exhibiting some lag in strain.
A complex dynamic modulus G can be used to represent the relations between the oscillating stress and strain:
G = G′ + iG″
where i² = −1; G′ is the storage modulus and G″ is the loss modulus:
G′ = (σ₀/ε₀) cos δ
G″ = (σ₀/ε₀) sin δ
where σ₀ and ε₀ are the amplitudes of stress and strain respectively, and δ is the phase shift between them.
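A minimal numerical sketch of these relations computes the storage modulus, loss modulus and loss tangent from the stress amplitude, strain amplitude and phase lag of an oscillatory test; the input values below are hypothetical and chosen only for illustration.

```python
import math

def dynamic_moduli(stress_amplitude, strain_amplitude, phase_lag_rad):
    """Storage (G') and loss (G'') moduli from an oscillatory test."""
    ratio = stress_amplitude / strain_amplitude
    g_storage = ratio * math.cos(phase_lag_rad)  # elastic (in-phase) part
    g_loss = ratio * math.sin(phase_lag_rad)     # viscous (out-of-phase) part
    return g_storage, g_loss

# Hypothetical amplitudes (Pa, dimensionless strain) and a 30-degree phase lag.
g1, g2 = dynamic_moduli(1.0e4, 0.01, math.radians(30.0))
print(f"G' = {g1:.3e} Pa, G'' = {g2:.3e} Pa, tan(delta) = {g2/g1:.3f}")
```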
Constitutive models of linear viscoelasticity
Viscoelastic materials, such as amorphous polymers, semicrystalline polymers, biopolymers and even the living tissue and cells, can be modeled in order to determine their stress and strain or force and displacement interactions as well as their temporal dependencies. These models, which include the Maxwell model, the Kelvin–Voigt model, the standard linear solid model, and the Burgers model, are used to predict a material's response under different loading conditions.
Viscoelastic behavior has elastic and viscous components modeled as linear combinations of springs and dashpots, respectively. Each model differs in the arrangement of these elements, and all of these viscoelastic models can be equivalently modeled as electrical circuits.
In an equivalent electrical circuit, stress is represented by current, and strain rate by voltage. The elastic modulus of a spring is analogous to the inverse of a circuit's inductance (it stores energy) and the viscosity of a dashpot to a circuit's resistance (it dissipates energy).
The elastic components, as previously mentioned, can be modeled as springs of elastic constant E, given the formula:
σ = E·ε
where σ is the stress, E is the elastic modulus of the material, and ε is the strain that occurs under the given stress, similar to Hooke's law.
The viscous components can be modeled as dashpots such that the stress–strain rate relationship can be given as
σ = η·(dε/dt)
where σ is the stress, η is the viscosity of the material, and dε/dt is the time derivative of strain.
The relationship between stress and strain can be simplified for specific stress or strain rates. For high stress or strain rates/short time periods, the time derivative components of the stress–strain relationship dominate. In these conditions it can be approximated as a rigid rod capable of sustaining high loads without deforming. Hence, the dashpot can be considered to be a "short-circuit".
Conversely, for low stress states/longer time periods, the time derivative components are negligible and the dashpot can be effectively removed from the system – an "open" circuit. As a result, only the spring connected in parallel to the dashpot will contribute to the total strain in the system.
Maxwell model
The Maxwell model can be represented by a purely viscous damper and a purely elastic spring connected in series, as shown in the diagram. The model can be represented by the following equation:
dε/dt = σ/η + (1/E)·(dσ/dt)
Under this model, if the material is put under a constant strain, the stresses gradually relax. When a material is put under a constant stress, the strain has two components. First, an elastic component occurs instantaneously, corresponding to the spring, and relaxes immediately upon release of the stress. The second is a viscous component that grows with time as long as the stress is applied. The Maxwell model predicts that stress decays exponentially with time, which is accurate for most polymers. One limitation of this model is that it does not predict creep accurately. The Maxwell model for creep or constant-stress conditions postulates that strain will increase linearly with time. However, polymers for the most part show the strain rate to be decreasing with time.
This model can be applied to soft solids: thermoplastic polymers in the vicinity of their melting temperature, fresh concrete (neglecting its aging), and numerous metals at a temperature close to their melting point.
The equation introduced here, however, lacks a consistent derivation from a more microscopic model and is not observer independent. The upper-convected Maxwell model is its sound formulation in terms of the Cauchy stress tensor and constitutes the simplest tensorial constitutive model for viscoelasticity.
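Under a step strain the one-dimensional Maxwell element predicts exponential stress decay with relaxation time η/E. The sketch below, assuming arbitrary illustrative values for E and η, integrates the model equation at constant strain with a simple Euler scheme and compares the result with the analytic exponential.

```python
import math

E = 1.0e6      # elastic modulus, Pa (illustrative value)
eta = 1.0e7    # viscosity, Pa*s (illustrative value)
tau = eta / E  # relaxation time, s

sigma0 = 1.0e4  # stress immediately after the step strain is applied, Pa
sigma = sigma0
dt = 0.1        # time step, s
steps = 100
for _ in range(steps):
    # At constant strain d(epsilon)/dt = 0, so the Maxwell equation reduces to
    # d(sigma)/dt = -(E/eta) * sigma; integrate with a forward Euler step.
    sigma += dt * (-(E / eta) * sigma)

t = steps * dt
print(f"numerical sigma({t:.0f} s) = {sigma:.0f} Pa, "
      f"analytic = {sigma0 * math.exp(-t / tau):.0f} Pa")
```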
Kelvin–Voigt model
The Kelvin–Voigt model, also known as the Voigt model, consists of a Newtonian damper and Hookean elastic spring connected in parallel, as shown in the picture. It is used to explain the creep behaviour of polymers.
The constitutive relation is expressed as a linear first-order differential equation:
σ(t) = E·ε(t) + η·(dε(t)/dt)
This model represents a solid undergoing reversible, viscoelastic strain. Upon application of a constant stress, the material deforms at a decreasing rate, asymptotically approaching the steady-state strain. When the stress is released, the material gradually relaxes to its undeformed state. At constant stress (creep), the model is quite realistic as it predicts strain to tend to σ/E as time continues to infinity. Similar to the Maxwell model, the Kelvin–Voigt model also has limitations. The model is extremely good with modelling creep in materials, but with regards to relaxation the model is much less accurate.
This model can be applied to organic polymers, rubber, and wood when the load is not too high.
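For a constant stress σ₀ applied at t = 0, the Kelvin–Voigt element gives the creep response ε(t) = (σ₀/E)(1 − exp(−t/τ)) with retardation time τ = η/E, approaching σ₀/E asymptotically as described above. A brief sketch with illustrative parameter values:

```python
import math

def kelvin_voigt_creep(t, stress, modulus, viscosity):
    """Strain of a Kelvin-Voigt element under a constant stress applied at t = 0."""
    tau = viscosity / modulus  # retardation time
    return (stress / modulus) * (1.0 - math.exp(-t / tau))

E, eta, sigma0 = 1.0e6, 1.0e7, 5.0e3  # illustrative values (Pa, Pa*s, Pa)
for t in (0.0, 5.0, 10.0, 50.0):
    print(f"t = {t:5.1f} s  strain = {kelvin_voigt_creep(t, sigma0, E, eta):.5f}")
```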
Standard linear solid model
The standard linear solid model, also known as the Zener model, consists of two springs and a dashpot. It is the simplest model that describes both the creep and stress relaxation behaviors of a viscoelastic material properly. For this model, the governing constitutive relations are:
Under a constant stress, the modeled material will instantaneously deform to some strain, which is the instantaneous elastic portion of the strain. After that it will continue to deform and asymptotically approach a steady-state strain, which is the retarded elastic portion of the strain. Although the standard linear solid model is more accurate than the Maxwell and Kelvin–Voigt models in predicting material responses, mathematically it returns inaccurate results for strain under specific loading conditions.
Jeffreys model
The Jeffreys model, like the Zener model, is a three-element model. It consists of two dashpots and a spring.
It was proposed in 1929 by Harold Jeffreys to study Earth's mantle.
Burgers model
The Burgers model consists of either two Maxwell components in parallel or a Kelvin–Voigt component, a spring and a dashpot in series. For this model, the governing constitutive relations are:
This model incorporates viscous flow into the standard linear solid model, giving a linearly increasing asymptote for strain under fixed loading conditions.
Generalized Maxwell model
The generalized Maxwell model, also known as the Wiechert model, is the most general form of the linear model for viscoelasticity. It takes into account that the relaxation does not occur at a single time, but at a distribution of times. Due to molecular segments of different lengths with shorter ones contributing less than longer ones, there is a varying time distribution. The Wiechert model shows this by having as many spring–dashpot Maxwell elements as necessary to accurately represent the distribution. The figure on the right shows the generalised Wiechert model.
Applications: metals and alloys at temperatures lower than one quarter of their absolute melting temperature (expressed in K).
Constitutive models for nonlinear viscoelasticity
Non-linear viscoelastic constitutive equations are needed to quantitatively account for phenomena in fluids like differences in normal stresses, shear thinning, and extensional thickening. Necessarily, the history experienced by the material is needed to account for time-dependent behavior, and is typically included in models as a history kernel K.
Second-order fluid
The second-order fluid is typically considered the simplest nonlinear viscoelastic model, and typically occurs in a narrow region of materials behavior occurring at high strain amplitudes and Deborah number between Newtonian fluids and other more complicated nonlinear viscoelastic fluids. The second-order fluid constitutive equation is given by:
where:
is the identity tensor
is the deformation tensor
denote viscosity, and first and second normal stress coefficients, respectively
denotes the upper-convected derivative of the deformation tensor where and is the material time derivative of the deformation tensor.
Upper-convected Maxwell model
The upper-convected Maxwell model incorporates nonlinear time behavior into the viscoelastic Maxwell model, given by:
where denotes the stress tensor.
Oldroyd-B model
The Oldroyd-B model is an extension of the upper-convected Maxwell model and is interpreted as a solvent filled with elastic bead-and-spring dumbbells.
The model is named after its creator James G. Oldroyd.
The model can be written as:
where:
is the stress tensor;
is the relaxation time;
is the retardation time = ;
is the upper convected time derivative of stress tensor:
is the fluid velocity;
is the total viscosity composed of solvent and polymer components, ;
is the deformation rate tensor or rate of strain tensor, .
Whilst the model gives good approximations of viscoelastic fluids in shear flow, it has an unphysical singularity in extensional flow, where the dumbbells are infinitely stretched. This is, however, specific to idealised flow; in the case of a cross-slot geometry the extensional flow is not ideal, so the stress, although singular, remains integrable, although the stress is infinite in a correspondingly infinitely small region.
If the solvent viscosity is zero, the Oldroyd-B becomes the upper convected Maxwell model.
Wagner model
The Wagner model might be considered a simplified practical form of the Bernstein–Kearsley–Zapas model. It was developed by German rheologist Manfred Wagner.
For the isothermal conditions the model can be written as:
where:
is the Cauchy stress tensor as function of time t,
p is the pressure
is the unity tensor
M is the memory function, usually expressed as a sum of exponential terms, one for each mode of relaxation, each characterized by a relaxation modulus and a relaxation time;
is the strain damping function that depends upon the first and second invariants of the Finger tensor.
The strain damping function is usually written as:
If the value of the strain damping function is equal to one, then the deformation is small; if it approaches zero, then the deformations are large.
Prony series
In a one-dimensional relaxation test, the material is subjected to a sudden strain that is kept constant over the duration of the test, and the stress is measured over time. The initial stress is due to the elastic response of the material. Then, the stress relaxes over time due to the viscous effects in the material. Typically, either a tensile, compressive, bulk compression, or shear strain is applied. The resulting stress vs. time data can be fitted with a number of equations, called models. Only the notation changes depending on the type of strain applied: tensile-compressive relaxation is denoted E, shear is denoted G, bulk is denoted K. The Prony series for the shear relaxation is
G(t) = G∞ + Σᵢ Gᵢ exp(−t/τᵢ)
where G∞ is the long term modulus once the material is totally relaxed, and τᵢ are the relaxation times; the higher their values, the longer it takes for the stress to relax. The data is fitted with the equation by using a minimization algorithm that adjusts the parameters to minimize the error between the predicted and data values.
An alternative form is obtained by noting that the elastic modulus is related to the long term modulus by
G(0) = G∞ + Σᵢ Gᵢ
Therefore,
G(t) = G(0) − Σᵢ Gᵢ [1 − exp(−t/τᵢ)]
This form is convenient when the elastic shear modulus is obtained from data independent from the relaxation data, and/or for computer implementation, when it is desired to specify the elastic properties separately from the viscous properties, as in Simulia (2010).
A creep experiment is usually easier to perform than a relaxation one, so most data is available as (creep) compliance vs. time. Unfortunately, there is no known closed form for the (creep) compliance in terms of the coefficients of the Prony series. So, if one has creep data, it is not easy to get the coefficients of the (relaxation) Prony series, which are needed, for example, for the computer implementations mentioned above. An expedient way to obtain these coefficients is the following. First, fit the creep data with a model that has closed form solutions in both compliance and relaxation; for example the Maxwell-Kelvin model (eq. 7.18-7.19) in Barbero (2007) or the Standard Solid Model (eq. 7.20-7.21) in Barbero (2007) (section 7.1.3). Once the parameters of the creep model are known, produce relaxation pseudo-data with the conjugate relaxation model for the same times as the original data. Finally, fit the pseudo-data with the Prony series.
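A minimal sketch of such a fit, assuming SciPy is available and using a two-term Prony series; the synthetic "measured" data, noise level and initial guesses are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def prony_two_term(t, g_inf, g1, tau1, g2, tau2):
    """Two-term Prony series for the shear relaxation modulus."""
    return g_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

# Synthetic "measured" relaxation data, for illustration only.
t = np.logspace(-2, 3, 60)
g_data = prony_two_term(t, 1.0e5, 4.0e5, 0.5, 2.0e5, 50.0)
g_data *= 1.0 + 0.02 * np.random.default_rng(0).standard_normal(t.size)  # add noise

p0 = [1e5, 1e5, 1.0, 1e5, 10.0]  # rough initial guesses for the parameters
params, _ = curve_fit(prony_two_term, t, g_data, p0=p0, maxfev=20000)
print("fitted [G_inf, G1, tau1, G2, tau2]:", np.round(params, 1))
```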
Effect of temperature
The secondary bonds of a polymer constantly break and reform due to thermal motion. Application of a stress favors some conformations over others, so the molecules of the polymer will gradually "flow" into the favored conformations over time. Because thermal motion is one factor contributing to the deformation of polymers, viscoelastic properties change with increasing or decreasing temperature. In most cases, the creep modulus, defined as the ratio of applied stress to the time-dependent strain, decreases with increasing temperature. Generally speaking, an increase in temperature correlates to a logarithmic decrease in the time required to impart equal strain under a constant stress. In other words, it takes less work to stretch a viscoelastic material an equal distance at a higher temperature than it does at a lower temperature.
More detailed effect of temperature on the viscoelastic behavior of polymer can be plotted as shown.
There are five main regions (sometimes counted as four, with regions IV and V combined) in typical polymers.
Region I: Glassy state of the polymer is presented in this region. The temperature in this region for a given polymer is too low to endow molecular motion. Hence the motion of the molecules is frozen in this area. The mechanical property is hard and brittle in this region.
Region II: Polymer passes glass transition temperature in this region. Beyond Tg, the thermal energy provided by the environment is enough to unfreeze the motion of molecules. The molecules are allowed to have local motion in this region hence leading to a sharp drop in stiffness compared to Region I.
Region III: Rubbery plateau region. Materials in this region exhibit long-range elasticity driven by entropy. For instance, a rubber band is disordered in its initial state; stretching the rubber band aligns its chains into a more ordered arrangement. When released, the rubber band therefore spontaneously seeks a higher-entropy state and returns to its initial shape. This is called entropy-driven elastic shape recovery.
Region IV: The behavior in the rubbery flow region is highly time-dependent. Polymers in this region require a time-temperature superposition analysis to obtain more detailed information before deciding how to use the material. For instance, when the material is used for short interaction times it can present as a 'hard' material, while for long interaction times it acts as a 'soft' material.
Region V: The viscous polymer flows easily in this region, showing another significant drop in stiffness.
Extreme cold temperatures can cause viscoelastic materials to change to the glass phase and become brittle. For example, exposure of pressure sensitive adhesives to extreme cold (dry ice, freeze spray, etc.) causes them to lose their tack, resulting in debonding.
Viscoelastic creep
When subjected to a step constant stress, viscoelastic materials experience a time-dependent increase in strain. This phenomenon is known as viscoelastic creep.
At time t₀, a viscoelastic material is loaded with a constant stress that is maintained for a sufficiently long time period. The material responds to the stress with a strain that increases until the material ultimately fails, if it is a viscoelastic liquid. If, on the other hand, it is a viscoelastic solid, it may or may not fail depending on the applied stress versus the material's ultimate resistance. When the stress is maintained for a shorter time period, the material undergoes an initial strain until a time t₁, after which the strain immediately decreases (discontinuity) then gradually decreases at times t > t₁ to a residual strain.
Viscoelastic creep data can be presented by plotting the creep modulus (constant applied stress divided by total strain at a particular time) as a function of time. Below its critical stress, the viscoelastic creep modulus is independent of stress applied. A family of curves describing strain versus time response to various applied stress may be represented by a single viscoelastic creep modulus versus time curve if the applied stresses are below the material's critical stress value.
Viscoelastic creep is important when considering long-term structural design. Given loading and temperature conditions, designers can choose materials that best suit component lifetimes.
Measurement
Shear rheometry
Shear rheometers are based on the idea of putting the material to be measured between two plates, one or both of which move in a shear direction to induce stresses and strains in the material. The testing can be done at constant strain rate, stress, or in an oscillatory fashion (a form of dynamic mechanical analysis). Shear rheometers are typically limited by edge effects where the material may leak out from between the two plates and slipping at the material/plate interface.
Extensional rheometry
Extensional rheometers, also known as extensiometers, measure viscoelastic properties by pulling a viscoelastic fluid, typically uniaxially. Because this typically makes use of capillary forces and confines the fluid to a narrow geometry, the technique is often limited to fluids with relatively low viscosity like dilute polymer solutions or some molten polymers. Extensional rheometers are also limited by edge effects at the ends of the extensiometer and pressure differences between inside and outside the capillary.
Despite the apparent limitations mentioned above, extensional rheometry can also be performed on high viscosity fluids. Although this requires the use of different instruments, these techniques and apparatuses allow for the study of the extensional viscoelastic properties of materials such as polymer melts. Three of the most common extensional rheometry instruments developed within the last 50 years are the Meissner-type rheometer, the filament stretching rheometer (FiSER), and the Sentmanat Extensional Rheometer (SER).
The Meissner-type rheometer, developed by Meissner and Hostettler in 1996, uses two sets of counter-rotating rollers to strain a sample uniaxially. This method uses a constant sample length throughout the experiment, and supports the sample in between the rollers via an air cushion to eliminate sample sagging effects. It does suffer from a few issues – for one, the fluid may slip at the belts which leads to lower strain rates than one would expect. Additionally, this equipment is challenging to operate and costly to purchase and maintain.
The FiSER rheometer simply contains fluid in between two plates. During an experiment, the top plate is held steady and a force is applied to the bottom plate, moving it away from the top one. The strain rate is measured by the rate of change of the sample radius at its middle. It is calculated using the following equation:
where is the mid-radius value and is the strain rate. The viscosity of the sample is then calculated using the following equation:
where is the sample viscosity, and is the force applied to the sample to pull it apart.
Much like the Meissner-type rheometer, the SER rheometer uses a set of two rollers to strain a sample at a given rate. It then calculates the sample viscosity using the well-known equation
σ = η·(dε/dt)
where σ is the stress, η is the viscosity and dε/dt is the strain rate. The stress in this case is determined via torque transducers present in the instrument. The small size of this instrument makes it easy to use and eliminates sample sagging between the rollers. A schematic detailing the operation of the SER extensional rheometer can be found on the right.
Other methods
Though there are many instruments that test the mechanical and viscoelastic response of materials, broadband viscoelastic spectroscopy (BVS) and resonant ultrasound spectroscopy (RUS) are more commonly used to test viscoelastic behavior because they can be used above and below ambient temperatures and are more specific to testing viscoelasticity. These two instruments employ a damping mechanism at various frequencies and time ranges with no appeal to time–temperature superposition. Using BVS and RUS to study the mechanical properties of materials is important to understanding how a material exhibiting viscoelasticity will perform.
See also
Bingham plastic
Biomaterial
Biomechanics
Blood viscoelasticity
Constant viscosity elastic fluids
Deformation index
Glass transition
Pressure-sensitive adhesive
Rheology
Rubber elasticity
Silly Putty
Viscoelasticity of bone
Viscoplasticity
Visco-elastic jets
References
Silbey and Alberty (2001): Physical Chemistry, 857. John Wiley & Sons, Inc.
Alan S. Wineman and K. R. Rajagopal (2000): Mechanical Response of Polymers: An Introduction
Allen and Thomas (1999): The Structure of Materials, 51.
Crandal et al. (1999): An Introduction to the Mechanics of Solids 348
J. Lemaitre and J. L. Chaboche (1994) Mechanics of solid materials
Yu. Dimitrienko (2011) Nonlinear continuum mechanics and Large Inelastic Deformations, Springer, 772p
Materials science
Elasticity (physics)
Non-Newtonian fluids
Continuum mechanics
Rubber properties
Hysteresis
Kepler problem
In classical mechanics, the Kepler problem is a special case of the two-body problem, in which the two bodies interact by a central force that varies in strength as the inverse square of the distance between them. The force may be either attractive or repulsive. The problem is to find the position or speed of the two bodies over time given their masses, positions, and velocities. Using classical mechanics, the solution can be expressed as a Kepler orbit using six orbital elements.
The Kepler problem is named after Johannes Kepler, who proposed Kepler's laws of planetary motion (which are part of classical mechanics and solved the problem for the orbits of the planets) and investigated the types of forces that would result in orbits obeying those laws (called Kepler's inverse problem).
For a discussion of the Kepler problem specific to radial orbits, see Radial trajectory. General relativity provides more accurate solutions to the two-body problem, especially in strong gravitational fields.
Applications
The inverse square law behind the Kepler problem is the most important central force law.
The Kepler problem is important in celestial mechanics, since Newtonian gravity obeys an inverse square law. Examples include a satellite moving about a planet, a planet about its sun, or two binary stars about each other. The Kepler problem is also important in the motion of two charged particles, since Coulomb’s law of electrostatics also obeys an inverse square law.
The Kepler problem and the simple harmonic oscillator problem are the two most fundamental problems in classical mechanics. They are the only two problems that have closed orbits for every possible set of initial conditions, i.e., return to their starting point with the same velocity (Bertrand's theorem).
The Kepler problem also conserves the Laplace–Runge–Lenz vector, which has since been generalized to include other interactions. The solution of the Kepler problem allowed scientists to show that planetary motion could be explained entirely by classical mechanics and Newton’s law of gravity; the scientific explanation of planetary motion played an important role in ushering in the Enlightenment.
History
The Kepler problem begins with the empirical results of Johannes Kepler arduously derived by analysis of the astronomical observations of Tycho Brahe. After some 70 attempts to match the data to circular orbits, Kepler hit upon the idea of the elliptic orbit. He eventually summarized his results in the form of three laws of planetary motion.
What is now called the Kepler problem was first discussed by Isaac Newton as a major part of his Principia. His "Theorema I" begins with the first two of his three axioms or laws of motion and results in Kepler's second law of planetary motion. Next Newton proves his "Theorema II" which shows that if Kepler's second law results, then the force involved must be along the line between the two bodies. In other words, Newton proves what today might be called the "inverse Kepler problem": the orbit characteristics require the force to depend on the inverse square of the distance.
Mathematical definition
The central force F between two objects varies in strength as the inverse square of the distance r between them:
F(r) = (k/r²) r̂
where k is a constant and r̂ represents the unit vector along the line between them. The force may be either attractive (k < 0) or repulsive (k > 0). The corresponding scalar potential is:
V(r) = k/r
Solution of the Kepler problem
The equation of motion for the radius r of a particle of mass m moving in a central potential V(r) is given by Lagrange's equations
and the angular momentum L is conserved. For illustration, the first term on the left-hand side is zero for circular orbits, and the applied inwards force equals the centripetal force requirement, as expected.
If L is not zero, the definition of angular momentum allows a change of independent variable from t to θ, giving a new equation of motion that is independent of time.
The expansion of the first term is
This equation becomes quasilinear on making the change of variables u = 1/r and multiplying both sides by mr²/L².
After substitution and rearrangement:
For an inverse-square force law such as the gravitational or electrostatic potential, the scalar potential can be written V(r) = k/r = ku.
The orbit can be derived from the general equation
whose solution is a constant plus a simple sinusoid:
u(θ) ≡ 1/r = −(mk/L²) [1 + e cos(θ − θ₀)]
where e (the eccentricity) and θ₀ (the phase offset) are constants of integration.
This is the general formula for a conic section that has one focus at the origin; e = 0 corresponds to a circle, 0 < e < 1 corresponds to an ellipse, e = 1 corresponds to a parabola, and e > 1 corresponds to a hyperbola. The eccentricity is related to the total energy E (cf. the Laplace–Runge–Lenz vector):
e = √(1 + 2EL²/(mk²))
Comparing these formulae shows that E < 0 corresponds to an ellipse (all solutions which are closed orbits are ellipses), E = 0 corresponds to a parabola, and E > 0 corresponds to a hyperbola. In particular, E = −mk²/(2L²) for perfectly circular orbits (the central force exactly equals the centripetal force requirement, which determines the required angular velocity for a given circular radius).
For a repulsive force (k > 0) only e > 1 applies.
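The relations above can be evaluated numerically. The sketch below, written for the attractive case (k < 0) and using arbitrary illustrative values in consistent units, computes the eccentricity from the energy and angular momentum, classifies the resulting conic, and evaluates the orbit radius from the solution given above.

```python
import math

def eccentricity(energy, ang_momentum, mass, k):
    """Eccentricity from total energy E and angular momentum L: e = sqrt(1 + 2EL^2/(m k^2))."""
    return math.sqrt(1.0 + 2.0 * energy * ang_momentum**2 / (mass * k**2))

def orbit_radius(theta, ang_momentum, mass, k, e, theta0=0.0):
    """Radius of the conic-section orbit, from 1/r = -(m k / L^2) * (1 + e*cos(theta - theta0))."""
    u = -(mass * k / ang_momentum**2) * (1.0 + e * math.cos(theta - theta0))
    return 1.0 / u

def classify(e):
    if math.isclose(e, 0.0, abs_tol=1e-12):
        return "circle"
    if e < 1.0:
        return "ellipse"
    return "parabola" if math.isclose(e, 1.0) else "hyperbola"

# Illustrative attractive-force case (k < 0), arbitrary consistent units.
m, k, L, E = 1.0, -1.0, 1.0, -0.3
e = eccentricity(E, L, m, k)
print(f"e = {e:.3f} -> {classify(e)}, r(theta=0) = {orbit_radius(0.0, L, m, k, e):.3f}")
```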
See also
Action-angle coordinates
Bertrand's theorem
Binet equation
Hamilton–Jacobi equation
Laplace–Runge–Lenz vector
Kepler orbit
Kepler problem in general relativity
Kepler's equation
Kepler's laws of planetary motion
References
Classical mechanics
Johannes Kepler
Faint young Sun paradox
The faint young Sun paradox or faint young Sun problem describes the apparent contradiction between observations of liquid water early in Earth's history and the astrophysical expectation that the Sun's output would be only 70 percent as intense during that epoch as it is during the modern epoch. The paradox is this: with the young Sun's output at only 70 percent of its current output, early Earth would be expected to be completely frozen, but early Earth seems to have had liquid water and supported life.
The issue was raised by astronomers Carl Sagan and George Mullen in 1972.
Proposed resolutions of this paradox have taken into account greenhouse effects, changes to planetary albedo, astrophysical influences, or combinations of these suggestions. The predominant theory is that the greenhouse gas carbon dioxide contributed most to the warming of the Earth.
Solar evolution
Models of stellar structure, especially the standard solar model predict a brightening of the Sun. The brightening is caused by a decrease in the number of particles per unit mass due to nuclear fusion in the Sun's core, from four protons and electrons each to one helium nucleus and two electrons. Fewer particles would exert less pressure. A collapse under the enormous gravity is prevented by an increase in temperature, which is both cause and effect of a higher rate of nuclear fusion.
More recent modeling studies have shown that the Sun is currently 1.4 times as bright as it was 4.6 billion years ago (Ga), and that the brightening has accelerated considerably. At the surface of the Sun, more fusion power means a higher solar luminosity (via slight increases in temperature and radius), which on Earth is termed radiative forcing.
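A widely used approximation for this brightening, due to Gough (1981) and not taken from this article, is L(t)/L⊙ = [1 + (2/5)(1 − t/t⊙)]⁻¹, where t is the Sun's age and t⊙ ≈ 4.57 Gyr its present age; it reproduces the roughly 70 percent relative luminosity of the early Sun. A quick check in Python:

```python
def relative_solar_luminosity(age_gyr, present_age_gyr=4.57):
    """Gough (1981) approximation: L(t)/L_present = 1 / (1 + 0.4*(1 - t/t_present))."""
    return 1.0 / (1.0 + 0.4 * (1.0 - age_gyr / present_age_gyr))

for age in (0.1, 2.0, 4.57):
    print(f"solar age {age:4.2f} Gyr: L/L_present = {relative_solar_luminosity(age):.2f}")
```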
Theories
Greenhouse gases
Sagan and Mullen suggested during their descriptions of the paradox that it might be solved by high concentrations of ammonia gas, NH3. However, it has since been shown that while ammonia is an effective greenhouse gas, it is easily destroyed photochemically in the atmosphere and converted to nitrogen (N2) and hydrogen (H2) gases. It was suggested (again by Sagan) that a photochemical haze could have prevented this destruction of ammonia and allowed it to continue acting as a greenhouse gas during this time; however, by 2001, this idea was tested using a photochemical model and discounted. Furthermore, such a haze is thought to have cooled Earth's surface beneath it and counteracted the greenhouse effect. Around 2010, scholars at the University of Colorado revived the idea, arguing that the ammonia hypothesis is a viable contributor if the haze formed a fractal pattern.
It is now thought that carbon dioxide was present in higher concentrations during this period of lower solar radiation. It was first proposed and tested as part of Earth's atmospheric evolution in the late 1970s. An atmosphere that contained about 1,000 times the present atmospheric level (or PAL) was found to be consistent with the evolutionary path of Earth's carbon cycle and solar evolution.
The primary mechanism for attaining such high CO2 concentrations is the carbon cycle. On large timescales, the inorganic branch of the carbon cycle, which is known as the carbonate–silicate cycle, is responsible for determining the partitioning of CO2 between the atmosphere and the surface of Earth. In particular, during a time of low surface temperatures, rainfall and weathering rates would be reduced, allowing for the build-up of carbon dioxide in the atmosphere on timescales of 0.5 million years.
Specifically, using 1-D models, which represent Earth as a single point (instead of something that varies across 3 dimensions) scientists have determined that at 4.5 Ga, with a 30% dimmer Sun, a minimum partial pressure of 0.1 bar of CO2 is required to maintain an above-freezing surface temperature; 10 bar of CO2 has been suggested as a plausible upper limit.
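The expectation of a frozen early Earth can be illustrated with a zero-dimensional radiative balance, T_eff = [S(1 − A)/(4σ)]^(1/4); this is a textbook relation rather than one of the 1-D models cited here, and the solar constant and albedo used below are approximate modern values chosen only for illustration. Reducing the solar constant to 70 percent pushes the effective temperature well below the freezing point of water before any greenhouse warming is added.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(solar_constant, albedo=0.3):
    """Zero-dimensional radiative-equilibrium (effective) temperature in kelvin."""
    return (solar_constant * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

S_MODERN = 1361.0  # W/m^2, approximate modern solar constant
print(f"modern Sun:    T_eff = {effective_temperature(S_MODERN):.0f} K")
print(f"70% faint Sun: T_eff = {effective_temperature(0.7 * S_MODERN):.0f} K")
```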
The level of carbon dioxide in the early atmosphere is still under debate. In 2001, Sleep and Zahnle suggested that increased weathering on the sea floor of a young, tectonically active Earth could have reduced carbon dioxide levels. Then in 2010, Rosing et al. analyzed marine sediments called banded iron formations and found large amounts of various iron-rich minerals, including magnetite (Fe3O4), an oxidized mineral, alongside siderite (FeCO3), a reduced mineral, and saw that they formed during the first half of Earth's history (and not afterward). The minerals' relative coexistence suggested an analogous balance between CO2 and H2. In the analysis, Rosing et al. connected the atmospheric H2 concentrations with regulation by biotic methanogenesis. Anaerobic, single-celled organisms that produced methane (CH4) may therefore have contributed to the warming in addition to carbon dioxide.
Tidal heating
The Moon was originally much closer to the Earth, which rotated faster than it does today, resulting in greater tidal heating than experienced today. Original estimates found that even early tidal heating would be minimal, perhaps 0.02 watts per square meter. (For comparison, the solar energy incident on the Earth's atmosphere is on the order of 1,000 watts per square meter.)
However, around 2021, a team led by René Heller in Germany argued that such estimates were simplistic and that in some plausible models tidal heating might have contributed on the order of 10 watts per square meter and increased the equilibrium temperature by up to five degrees Celsius on a timescale of 100 million years. Such a contribution would partially resolve the paradox but is insufficient to solve the faint young Sun paradox on its own without additional factors such as greenhouse heating. The underlying assumption of the Moon's formation just outside the Roche limit is not certain, however: a magnetized disk of debris could have transported angular momentum, leading to a less massive Moon in a higher orbit.
Cosmic rays
A minority view propounded by the Israeli-American physicist Nir Shaviv uses climatological influences of solar wind combined with a hypothesis of Danish physicist Henrik Svensmark for a cooling effect of cosmic rays. According to Shaviv, the early Sun had emitted a stronger solar wind that produced a protective effect against cosmic rays. In that early age, a moderate greenhouse effect comparable to today's would have been sufficient to explain a largely ice-free Earth. Evidence for a more active early Sun has been found in meteorites.
The temperature minimum around 2.4 Ga coincides with a modulation of the cosmic ray flux by a variable star formation rate in the Milky Way. The later decline in solar activity would have allowed a stronger cosmic ray flux to reach Earth, which is hypothesized to be linked to climatological variations.
Mass loss from Sun
It has been proposed several times that mass loss from the faint young Sun in the form of stronger solar winds could have compensated for the low temperatures from greenhouse gas forcing. In this framework, the early Sun underwent an extended period of higher solar wind output. Comparison with exoplanetary data suggests this would have caused a mass loss from the Sun of 5–6 percent over its lifetime, resulting in a more consistent level of solar luminosity (since the early Sun, being more massive, would have produced more energy than standard models predict).
In order to explain the warm conditions in the Archean eon, this mass loss must have occurred over an interval of about one billion years. Records of ion implantation from meteorites and lunar samples show that the elevated rate of solar wind flux only lasted for a period of 100 million years. Observations of the young Sun-like star π1 Ursae Majoris match this rate of decline in the stellar wind output, suggesting that a higher mass loss rate cannot by itself resolve the paradox.
Changes in clouds
If greenhouse gas concentrations did not compensate completely for the fainter Sun, the moderate temperature range may be explained by a lower surface albedo. At the time, a smaller area of exposed continental land would have resulted in fewer cloud condensation nuclei, both in the form of wind-blown dust and biogenic sources. A lower albedo allows a higher fraction of solar radiation to penetrate to the surface. Goldblatt and Zahnle (2011) investigated whether a change in cloud fraction could have provided sufficient warming and found that the net effect was as likely to have been negative as positive. At most the effect could have raised surface temperatures to just above freezing on average.
Another proposed mechanism of cloud cover reduction relates a decrease in cosmic rays during this time to reduced cloud fraction. However, this mechanism does not work for several reasons, including the fact that ions do not limit cloud formation as much as cloud condensation nuclei, and cosmic rays have been found to have little impact on global mean temperature. Clouds continue to be the dominant source of uncertainty in 3-D global climate models, and a consensus has yet to be reached on how changes in cloud spatial patterns and cloud type may have affected Earth's climate during this time.
Local Hubble expansion
Although both simulations and direct measurements of the effects of Hubble's law on gravitationally bound systems remained inconclusive as of 2022, it has been noted that orbital expansion at a fraction of the local Hubble expansion rate may explain the observed anomalies in orbital evolution, including the faint young Sun paradox.
Gaia hypothesis
The Gaia hypothesis holds that biological processes work to maintain a stable surface climate on Earth, preserving habitability through various negative feedback mechanisms. Although organic processes, such as the organic carbon cycle, work to regulate dramatic climate changes, and the surface of Earth has presumably remained habitable, this hypothesis has been criticized as intractable. Furthermore, life has existed on the surface of Earth through dramatic changes in climate, including Snowball Earth episodes. There are also strong and weak versions of the Gaia hypothesis, which has caused some tension in this research area.
On other planets
Mars
Mars has its own version of the faint young Sun paradox. Martian terrains show clear signs of past liquid water on the surface, including outflow channels, gullies, modified craters, and valley networks. These geomorphic features suggest Mars had an ocean on its surface and river networks resembling current Earth's during the late Noachian (4.1–3.7 Ga). It is unclear how Mars's orbit, which places it even farther from the Sun, and the faintness of the young Sun could have produced what is thought to have been a very warm and wet climate on Mars. Scientists debate which geomorphological features can be attributed to shorelines or other water flow markers and which can be ascribed to other mechanisms. Nevertheless, the geologic evidence, including observations of widespread fluvial erosion in the southern highlands, is generally consistent with an early warm and semi-arid climate.
Given the orbital and solar conditions of early Mars, a greenhouse effect would have been necessary to increase surface temperatures by at least 65 K in order for these surface features to have been carved by flowing water. A much denser, CO2-dominated atmosphere has been proposed as a way to produce such a temperature increase. This would depend upon the carbon cycle and the rate of volcanism throughout the pre-Noachian and Noachian, which is not well known. Volatile outgassing is thought to have occurred during these periods.
One way to ascertain whether Mars possessed a thick CO2-rich atmosphere is to examine carbonate deposits. A primary sink for carbon in Earth's atmosphere is the carbonate–silicate cycle. However, it would have been difficult for CO2 to have accumulated in the Martian atmosphere in this way because the greenhouse effect would have been outstripped by CO2 condensation.
A volcanically-outgassed CO2-H2 greenhouse is a plausible scenario suggested recently for early Mars. Intermittent bursts of methane may have been another possibility. Such greenhouse gas combinations appear necessary because carbon dioxide alone, even at pressures exceeding a few bar, cannot explain the temperatures required for the presence of surface liquid water on early Mars.
Venus
Venus's atmosphere is composed of 96% carbon dioxide. Billions of years ago, when the Sun was 25 to 30% dimmer, Venus's surface temperature could have been much cooler, and its climate could have resembled current Earth's, complete with a hydrological cycle—before it experienced a runaway greenhouse effect.
See also
Cool early Earth
Effective temperature – the equilibrium temperature of a planet, dependent on the reflectivity of its surface and clouds.
Isua Greenstone Belt
Paleoclimatology
References
Further reading
Sun
Climate history
Paradoxes
1972 in science
Unsolved problems in astronomy
Debye length
In plasmas and electrolytes, the Debye length (Debye radius or Debye–Hückel screening length) is a measure of a charge carrier's net electrostatic effect in a solution and how far its electrostatic effect persists. With each Debye length the charges are increasingly electrically screened and the electric potential decreases in magnitude by 1/e. A Debye sphere is a volume whose radius is the Debye length. The Debye length is an important parameter in plasma physics, electrolytes, and colloids (DLVO theory). The corresponding Debye screening wave vector $k_{\rm D} = 1/\lambda_{\rm D}$ for particles of density $n$ and charge $q$ at a temperature $T$ is given by $k_{\rm D}^2 = 4\pi n q^2/(k_{\rm B}T)$ in Gaussian units. Expressions in MKS units will be given below. The analogous quantities at very low temperatures are known as the Thomas–Fermi length and the Thomas–Fermi wave vector. They are of interest in describing the behaviour of electrons in metals at room temperature.
The Debye length is named after the Dutch-American physicist and chemist Peter Debye (1884–1966), a Nobel laureate in Chemistry.
Physical origin
The Debye length arises naturally in the thermodynamic description of large systems of mobile charges. In a system of $N$ different species of charges, the $j$-th species carries charge $q_j$ and has concentration $c_j(\mathbf{r})$ at position $\mathbf{r}$. According to the so-called "primitive model", these charges are distributed in a continuous medium that is characterized only by its relative static permittivity, $\varepsilon_r$.
This distribution of charges within this medium gives rise to an electric potential $\Phi(\mathbf{r})$ that satisfies Poisson's equation:
$$ \varepsilon \nabla^2 \Phi(\mathbf{r}) = -\sum_{j=1}^{N} q_j\, c_j(\mathbf{r}) - \rho_{\rm ext}(\mathbf{r}), $$
where $\varepsilon \equiv \varepsilon_r \varepsilon_0$, $\varepsilon_0$ is the electric constant, and $\rho_{\rm ext}$ is a charge density external (logically, not spatially) to the medium.
The mobile charges not only contribute in establishing $\Phi(\mathbf{r})$ but also move in response to the associated Coulomb force, $-q_j\, \nabla \Phi(\mathbf{r})$.
If we further assume the system to be in thermodynamic equilibrium with a heat bath at absolute temperature $T$, then the concentrations of discrete charges, $c_j(\mathbf{r})$, may be considered to be thermodynamic (ensemble) averages and the associated electric potential to be a thermodynamic mean field.
With these assumptions, the concentration of the $j$-th charge species is described by the Boltzmann distribution,
$$ c_j(\mathbf{r}) = c_j^{0} \exp\!\left(-\frac{q_j\, \Phi(\mathbf{r})}{k_{\rm B} T}\right), $$
where $k_{\rm B}$ is the Boltzmann constant and $c_j^{0}$ is the mean concentration of charges of species $j$.
Identifying the instantaneous concentrations and potential in the Poisson equation with their mean-field counterparts in the Boltzmann distribution yields the Poisson–Boltzmann equation:
$$ \varepsilon \nabla^2 \Phi(\mathbf{r}) = -\sum_{j=1}^{N} q_j\, c_j^{0} \exp\!\left(-\frac{q_j\, \Phi(\mathbf{r})}{k_{\rm B} T}\right) - \rho_{\rm ext}(\mathbf{r}). $$
Solutions to this nonlinear equation are known for some simple systems. Solutions for more general systems may be obtained in the high-temperature (weak coupling) limit, $q_j\, \Phi(\mathbf{r}) \ll k_{\rm B} T$, by Taylor expanding the exponential:
$$ \exp\!\left(-\frac{q_j\, \Phi(\mathbf{r})}{k_{\rm B} T}\right) \approx 1 - \frac{q_j\, \Phi(\mathbf{r})}{k_{\rm B} T}. $$
This approximation yields the linearized Poisson–Boltzmann equation
$$ \varepsilon \nabla^2 \Phi(\mathbf{r}) = \left(\sum_{j=1}^{N} \frac{c_j^{0}\, q_j^{2}}{k_{\rm B} T}\right) \Phi(\mathbf{r}) - \sum_{j=1}^{N} q_j\, c_j^{0} - \rho_{\rm ext}(\mathbf{r}), $$
which also is known as the Debye–Hückel equation.
The second term on the right-hand side vanishes for systems that are electrically neutral. The term in parentheses divided by $\varepsilon$ has the units of an inverse length squared and by dimensional analysis leads to the definition of the characteristic length scale
$$ \lambda_{\rm D} = \left(\frac{\varepsilon\, k_{\rm B} T}{\sum_{j=1}^{N} c_j^{0}\, q_j^{2}}\right)^{1/2} $$
that commonly is referred to as the Debye–Hückel length. As the only characteristic length scale in the Debye–Hückel equation, $\lambda_{\rm D}$ sets the scale for variations in the potential and in the concentrations of charged species. All charged species contribute to the Debye–Hückel length in the same way, regardless of the sign of their charges. For an electrically neutral system, the Poisson equation becomes
$$ \nabla^2 \Phi(\mathbf{r}) = \frac{\Phi(\mathbf{r})}{\lambda_{\rm D}^{2}} - \frac{\rho_{\rm ext}(\mathbf{r})}{\varepsilon}. $$
To illustrate Debye screening, the potential produced by an external point charge $\rho_{\rm ext} = Q\, \delta(\mathbf{r})$ is
$$ \Phi(\mathbf{r}) = \frac{Q}{4\pi \varepsilon r}\, e^{-r/\lambda_{\rm D}}. $$
The bare Coulomb potential is exponentially screened by the medium, over a distance of the Debye length: this is called Debye screening or shielding (Screening effect).
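As a numerical illustration of this screening, the sketch below compares the screened and bare potentials of a point charge. The elementary charge, the vacuum permittivity and the assumed 10 nm Debye length are illustrative values, not quantities taken from the text.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def bare_coulomb(q, r, eps_r=1.0):
    """Unscreened Coulomb potential of a point charge q at distance r."""
    return q / (4.0 * math.pi * eps_r * EPS0 * r)

def screened_coulomb(q, r, debye_length, eps_r=1.0):
    """Debye-screened (Yukawa-type) potential: bare Coulomb times exp(-r/lambda_D)."""
    return bare_coulomb(q, r, eps_r) * math.exp(-r / debye_length)

# Illustrative numbers: an elementary charge, assumed Debye length of 10 nm.
q = 1.602176634e-19   # C
lam_D = 10e-9         # m (assumed)
for r in (1e-9, 10e-9, 30e-9):
    ratio = screened_coulomb(q, r, lam_D) / bare_coulomb(q, r)
    print(f"r = {r * 1e9:4.0f} nm   screened/bare = {ratio:.3f}")
# At r = lambda_D the potential is down by a factor of e (~0.368); a few
# Debye lengths farther out it is essentially gone.
```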
The Debye–Hückel length may be expressed in terms of the Bjerrum length $\lambda_{\rm B}$ as
$$ \lambda_{\rm D} = \left(4\pi\, \lambda_{\rm B} \sum_{j=1}^{N} c_j^{0}\, z_j^{2}\right)^{-1/2}, $$
where $z_j = q_j/e$ is the integer charge number that relates the charge on the $j$-th ionic species to the elementary charge $e$.
In a plasma
For a weakly collisional plasma, Debye shielding can be introduced in a very intuitive way by taking into account the granular character of such a plasma. Let us imagine a sphere about one of its electrons, and compare the number of electrons crossing this sphere with and without Coulomb repulsion. With repulsion, this number is smaller. Therefore, according to Gauss theorem, the apparent charge of the first electron is smaller than in the absence of repulsion. The larger the sphere radius, the larger is the number of deflected electrons, and the smaller the apparent charge: this is Debye shielding. Since the global deflection of particles includes the contributions of many other ones, the density of the electrons does not change, at variance with the shielding at work next to a Langmuir probe (Debye sheath). Ions bring a similar contribution to shielding, because of the attractive Coulombian deflection of charges with opposite signs.
This intuitive picture leads to an effective calculation of Debye shielding (see section II.A.2 of ). The assumption of a Boltzmann distribution is not necessary in this calculation: it works for whatever particle distribution function. The calculation also avoids approximating weakly collisional plasmas as continuous media. An N-body calculation reveals that the bare Coulomb acceleration of a particle by another one is modified by a contribution mediated by all other particles, a signature of Debye shielding (see section 8 of ). When starting from random particle positions, the typical time-scale for shielding to set in is the time for a thermal particle to cross a Debye length, i.e. the inverse of the plasma frequency. Therefore in a weakly collisional plasma, collisions play an essential role by bringing a cooperative self-organization process: Debye shielding. This shielding is important to get a finite diffusion coefficient in the calculation of Coulomb scattering (Coulomb collision).
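The statement above, that shielding sets in on the time a thermal particle needs to cross one Debye length, i.e. the inverse of the plasma frequency, can be checked numerically. The electron density and temperature used below are illustrative assumptions, not values from the text.

```python
import math

EPS0 = 8.8541878128e-12   # F/m
KB   = 1.380649e-23       # J/K
QE   = 1.602176634e-19    # C
ME   = 9.1093837015e-31   # electron mass, kg

# Illustrative plasma parameters (assumed).
n_e = 1e16                # electron density, m^-3
T_e = 2.0 * QE / KB       # 2 eV electron temperature expressed in kelvin

omega_pe = math.sqrt(n_e * QE**2 / (EPS0 * ME))        # plasma frequency, rad/s
v_th     = math.sqrt(KB * T_e / ME)                    # electron thermal speed, m/s
lambda_D = math.sqrt(EPS0 * KB * T_e / (n_e * QE**2))  # electron Debye length, m

# Crossing time of one Debye length equals the inverse plasma frequency.
print(f"lambda_D / v_th = {lambda_D / v_th:.3e} s")
print(f"1 / omega_pe    = {1.0 / omega_pe:.3e} s")
```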
In a non-isothermal plasma, the temperatures for electrons and heavy species may differ, while the background medium may be treated as the vacuum, and the Debye length is
$$ \lambda_{\rm D} = \sqrt{\frac{\varepsilon_0 k_{\rm B}/q_e^{2}}{n_e/T_e + \sum_{j} z_j^{2}\, n_j/T_i}}, $$
where
λD is the Debye length,
ε0 is the permittivity of free space,
kB is the Boltzmann constant,
qe is the charge of an electron,
Te and Ti are the temperatures of the electrons and ions, respectively,
ne is the density of electrons,
nj is the density of atomic species j, with positive ionic charge zjqe
Even in a quasineutral cold plasma, where the ion contribution might seem to dominate because of the lower ion temperature, the ion term is actually often dropped, giving
$$ \lambda_{\rm D} = \sqrt{\frac{\varepsilon_0 k_{\rm B} T_e}{n_e q_e^{2}}}, $$
although this is only valid when the mobility of ions is negligible compared to the process's timescale.
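As a worked example of the electron-only expression just given, the sketch below evaluates the Debye length for an assumed laboratory-style discharge; the electron density of 10^16 m^-3 and the electron temperature of about 2 eV are illustrative choices, not values from the text.

```python
import math

EPS0 = 8.8541878128e-12   # F/m
KB   = 1.380649e-23       # J/K
QE   = 1.602176634e-19    # C

def debye_length_plasma(n_e, T_e):
    """Electron Debye length, sqrt(eps0 * kB * T_e / (n_e * e^2)).

    n_e : electron number density in m^-3
    T_e : electron temperature in kelvin
    """
    return math.sqrt(EPS0 * KB * T_e / (n_e * QE**2))

# Illustrative example: n_e = 1e16 m^-3 and T_e ~ 2 eV (about 23,000 K).
T_e = 2.0 * QE / KB
print(f"lambda_D ~ {debye_length_plasma(1e16, T_e) * 1e6:.0f} micrometres")
```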
Typical values
In space plasmas where the electron density is relatively low, the Debye length may reach macroscopic values, such as in the magnetosphere, solar wind, interstellar medium and intergalactic medium. See the table here below:
In an electrolyte solution
In an electrolyte or a colloidal suspension, the Debye length for a monovalent electrolyte is usually denoted with symbol κ−1:
$$ \kappa^{-1} = \sqrt{\frac{\varepsilon_r \varepsilon_0 k_{\rm B} T}{2 e^{2} I}}, $$
where
I is the ionic strength of the electrolyte in number/m3 units,
ε0 is the permittivity of free space,
εr is the dielectric constant,
kB is the Boltzmann constant,
T is the absolute temperature in kelvins,
e is the elementary charge,
or, for a symmetric monovalent electrolyte,
$$ \kappa^{-1} = \sqrt{\frac{\varepsilon_r \varepsilon_0 R T}{2 \times 10^{3}\, F^{2} C_{0}}}, $$
where
R is the gas constant,
F is the Faraday constant,
C0 is the electrolyte concentration in molar units (M or mol/L).
Alternatively,
$$ \kappa^{-1} = \frac{1}{\sqrt{8\pi\, \lambda_{\rm B}\, N_{\rm A} \times 10^{-24}\, I}}, $$
where $\lambda_{\rm B}$ is the Bjerrum length of the medium in nm, and the factor $10^{-24}$ derives from transforming unit volume from cubic dm to cubic nm.
For deionized water at room temperature, at pH = 7, κ−1 ≈ 1 μm.
At room temperature, one can consider in water the relation:
$$ \kappa^{-1}(\mathrm{nm}) = \frac{0.304}{\sqrt{I(\mathrm{M})}}, $$
where
κ−1 is expressed in nanometres (nm)
I is the ionic strength expressed in molar (M or mol/L)
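The sketch below evaluates the full SI expression for a symmetric 1:1 electrolyte and compares it with the room-temperature rule of thumb just given. The 0.1 M salt concentration, the temperature of 298.15 K and the relative permittivity of 78.5 for water are illustrative assumptions.

```python
import math

EPS0 = 8.8541878128e-12   # F/m
KB   = 1.380649e-23       # J/K
E    = 1.602176634e-19    # C
NA   = 6.02214076e23      # 1/mol

def debye_length_electrolyte(C0_molar, eps_r=78.5, T=298.15):
    """kappa^-1 in metres for a symmetric monovalent (1:1) electrolyte.

    C0_molar : salt concentration in mol/L
    eps_r    : relative permittivity (assumed ~78.5 for water near 25 C)
    """
    n_ions = 2.0 * NA * C0_molar * 1e3                 # total ion density, m^-3
    return math.sqrt(EPS0 * eps_r * KB * T / (n_ions * E**2))

C0 = 0.1  # mol/L, illustrative
full = debye_length_electrolyte(C0) * 1e9              # nm
rule = 0.304 / math.sqrt(C0)                           # nm, rule of thumb
print(f"full SI formula: {full:.2f} nm")
print(f"0.304/sqrt(I):   {rule:.2f} nm")
```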
There is a method of estimating an approximate value of the Debye length in liquids from conductivity measurements, which is described in an ISO standard and in the book cited there.
In semiconductors
The Debye length has become increasingly significant in the modeling of solid state devices as improvements in lithographic technologies have enabled smaller geometries.
The Debye length of semiconductors is given by:
$$ L_{\rm D} = \sqrt{\frac{\varepsilon\, k_{\rm B} T}{q^{2} N_{\rm dop}}}, $$
where
ε is the dielectric constant,
kB is the Boltzmann constant,
T is the absolute temperature in kelvins,
q is the elementary charge, and
Ndop is the net density of dopants (either donors or acceptors).
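The sketch below evaluates the semiconductor expression just given; the net dopant density of 10^16 cm^-3 and the relative permittivity of 11.7 (silicon) are illustrative assumptions, not values from the text.

```python
import math

EPS0 = 8.8541878128e-12   # F/m
KB   = 1.380649e-23       # J/K
Q    = 1.602176634e-19    # C

def debye_length_semiconductor(N_dop, eps_r=11.7, T=300.0):
    """L_D = sqrt(eps * kB * T / (q^2 * N_dop)).

    N_dop : net dopant density in m^-3
    eps_r : relative permittivity (11.7 assumed for silicon)
    """
    return math.sqrt(eps_r * EPS0 * KB * T / (Q**2 * N_dop))

# Illustrative example: silicon doped at 1e16 cm^-3 = 1e22 m^-3.
print(f"L_D ~ {debye_length_semiconductor(1e22) * 1e9:.1f} nm")
```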
When doping profiles exceed the Debye length, majority carriers no longer behave according to the distribution of the dopants. Instead, a measure of the profile of the doping gradients provides an "effective" profile that better matches the profile of the majority carrier density.
In the context of solids, Thomas–Fermi screening length may be required instead of Debye length.
See also
Bjerrum length
Debye–Falkenhagen effect
Plasma oscillation
Shielding effect
Screening effect
References
Further reading
Electricity
Electronics concepts
Colloidal chemistry
Plasma parameters
Electrochemistry
Length
Peter Debye
Michael Crichton
John Michael Crichton (October 23, 1942 – November 4, 2008) was an American author, screenwriter and filmmaker. His books have sold over 200 million copies worldwide, and over a dozen have been adapted into films. His literary works heavily feature technology and are usually within the science fiction, techno-thriller, and medical fiction genres. Crichton's novels often explore human technological advancement and attempted dominance over nature, both with frequently catastrophic results; many of his works are cautionary tales, especially regarding themes of biotechnology. Several of his stories center specifically around themes of genetic modification, hybridization, paleontology and/or zoology. Many feature medical or scientific underpinnings, reflective of his own medical training and scientific background.
Crichton received an M.D. from Harvard Medical School in 1969 but did not practice medicine, choosing to focus on his writing instead. Initially writing under a pseudonym, he eventually wrote 26 novels, including: The Andromeda Strain (1969), The Terminal Man (1972), The Great Train Robbery (1975), Congo (1980), Sphere (1987), Jurassic Park (1990), Rising Sun (1992), Disclosure (1994), The Lost World (1995), Airframe (1996), Timeline (1999), Prey (2002), State of Fear (2004), and Next (2006). Several novels, in various states of completion, were published after his death in 2008.
Crichton was also involved in the film and television industry. In 1973, he wrote and directed Westworld, the first film to use 2D computer-generated imagery. He also directed Coma (1978), The First Great Train Robbery (1978), Looker (1981), and Runaway (1984). He was the creator of the television series ER (1994–2009), and several of his novels were adapted into films, most notably the Jurassic Park franchise.
Life
Early life
John Michael Crichton was born on October 23, 1942, in Chicago, Illinois, to John Henderson Crichton, a journalist, and Zula Miller Crichton, a homemaker. He was raised on Long Island, in Roslyn, New York, and he showed a keen interest in writing from a young age; at 16, he had an article about a trip he took to Sunset Crater published in The New York Times.
Crichton later recalled, "Roslyn was another world. Looking back, it's remarkable what wasn't going on. There was no terror. No fear of children being abused. No fear of random murder. No drug use we knew about. I walked to school. I rode my bike for miles and miles, to the movie on Main Street and piano lessons and the like. Kids had freedom. It wasn't such a dangerous world... We studied our butts off, and we got a tremendously good education there."
Crichton had always planned on becoming a writer and began his studies at Harvard College in 1960. During his undergraduate study in literature, he conducted an experiment to expose a professor whom he believed was giving him abnormally low marks and criticizing his literary style. Informing another professor of his suspicions, Crichton submitted an essay by George Orwell under his own name. The paper was returned by his unwitting professor with a mark of "B−". He later said, "Now Orwell was a wonderful writer, and if a B-minus was all he could get, I thought I'd better drop English as my major." His differences with the English department led Crichton to switch his undergraduate concentration. He earned his Bachelor's degree in biological anthropology summa cum laude in 1964, and was initiated into the Phi Beta Kappa Society.
Crichton received a Henry Russell Shaw Traveling Fellowship from 1964 to 1965, which allowed him to serve as a visiting lecturer in anthropology at the University of Cambridge in the United Kingdom. He later enrolled at Harvard Medical School, of which he said: "about two weeks into medical school I realized I hated it. This isn't unusual since everyone hates medical school – even happy, practicing physicians."
Pseudonymous novels (1965–1968)
In 1965, while at Harvard Medical School, Crichton wrote a novel, Odds On. "I wrote for furniture and groceries", he said later. Odds On is a 215-page paperback novel which describes an attempted robbery at an isolated hotel on the Costa Brava in Spain. The robbery is planned scientifically with the help of a critical path analysis computer program, but unforeseen events get in the way. Crichton submitted it to Doubleday, where a reader liked it but felt it was not for the company. Doubleday passed it on to New American Library, which published it in 1966. Crichton used the pen name John Lange because he planned to become a doctor and did not want his patients to worry that he would use them for his plots. The name came from cultural anthropologist Andrew Lang. Crichton added an "e" to the surname and substituted his own real first name, John, for Andrew. The novel was successful enough to lead to a series of John Lange novels. Film rights were sold in 1969, but no movie resulted.
The second Lange novel, Scratch One (1967), relates the story of Roger Carr, a handsome, charming, privileged man who practices law, more as a means to support his playboy lifestyle than a career. Carr is sent to Nice, France, where he has notable political connections, but is mistaken for an assassin and finds his life in jeopardy. Crichton wrote the book while traveling through Europe on a travel fellowship. He visited the Cannes Film Festival and Monaco Grand Prix, and then decided, "any idiot should be able to write a potboiler set in Cannes and Monaco", and wrote it in eleven days. He later described the book as "no good". His third John Lange novel, Easy Go (1968), is the story of Harold Barnaby, a brilliant Egyptologist who discovers a concealed message while translating hieroglyphics informing him of an unnamed pharaoh whose tomb is yet to be discovered. Crichton said the book earned him $1,500. Crichton later said: "My feeling about the Lange books is that my competition is in-flight movies. One can read the books in an hour and a half, and be more satisfactorily amused than watching Doris Day. I write them fast and the reader reads them fast and I get things off my back."
Crichton's fourth novel was A Case of Need (1968), a medical thriller. The novel had a different tone from the Lange books; accordingly, Crichton used the pen name "Jeffery Hudson", based on Sir Jeffrey Hudson, a 17th-century dwarf in the court of queen consort Henrietta Maria of England. The novel would prove a turning point in Crichton's future novels, in which technology is important in the subject matter, although this novel was as much about medical practice. The novel earned him an Edgar Award in 1969. He intended to use the "Jeffery Hudson" pseudonym for other medical novels but ended up using it only once. The book was later adapted into the film The Carey Treatment (1972).
Early novels and screenplays (1969–1974)
Crichton says after he finished his third year of medical school: "I stopped believing that one day I'd love it and realized that what I loved was writing." He began publishing book reviews under his name. In 1969, Crichton wrote a review for The New Republic (as J. Michael Crichton), critiquing Kurt Vonnegut's recently published Slaughterhouse-Five. He also continued to write Lange novels: Zero Cool (1969) dealt with an American radiologist on vacation in Spain who is caught in a murderous crossfire between rival gangs seeking a precious artifact. The Venom Business (1969) relates the story of a smuggler who uses his exceptional skill as a snake handler to his advantage by importing snakes to be used by drug companies and universities for medical research.
The first novel that was published under Crichton's name was The Andromeda Strain (1969), which proved to be the most important novel of his career and established him as a bestselling author. The novel documented the efforts of a team of scientists investigating a deadly extraterrestrial microorganism that fatally clots human blood, causing death within two minutes. Crichton was inspired to write it after reading The IPCRESS File by Len Deighton while studying in England. Crichton says he was "terrifically impressed" by the book – "a lot of Andromeda is traceable to Ipcress in terms of trying to create an imaginary world using recognizable techniques and real people." He wrote the novel over three years. The novel became an instant hit, and film rights were sold for $250,000. It was adapted into a 1971 film by director Robert Wise.
During his clinical rotations at the Boston City Hospital, Crichton grew disenchanted with the culture there, which appeared to emphasize the interests and reputations of doctors over the interests of patients. He graduated from Harvard, obtaining an MD in 1969, and undertook a post-doctoral fellowship study at the Salk Institute for Biological Studies in La Jolla, California, from 1969 to 1970. He never obtained a license to practice medicine, devoting himself to his writing career instead. Reflecting on his career in medicine years later, Crichton concluded that patients too often shunned responsibility for their own health, relying on doctors as miracle workers rather than advisors. He experimented with astral projection, aura viewing, and clairvoyance, coming to believe that these included real phenomena that scientists had too eagerly dismissed as paranormal.
Three more Crichton books under pseudonyms were published in 1970. Two were Lange novels, Drug of Choice and Grave Descend. Grave Descend earned him an Edgar Award nomination the following year. There was also Dealing: or the Berkeley-to-Boston Forty-Brick Lost-Bag Blues written with his younger brother Douglas Crichton. Dealing was written under the pen name "Michael Douglas", using their first names. Michael Crichton wrote it "completely from beginning to end". Then his brother rewrote it from beginning to end, and then Crichton rewrote it again. This novel was made into a movie in 1972. Around this time Crichton also wrote and sold an original film script, Morton's Run. He also wrote the screenplay Lucifer Harkness in Darkness.
Aside from fiction, Crichton wrote several other books based on medical or scientific themes, often based upon his own observations in his field of expertise. In 1970, he published Five Patients, which recounts his experiences of hospital practices in the late 1960s at Massachusetts General Hospital in Boston. The book follows each of five patients through their hospital experience and the context of their treatment, revealing inadequacies in the hospital institution at the time. The book relates the experiences of Ralph Orlando, a construction worker seriously injured in a scaffold collapse; John O'Connor, a middle-aged dispatcher suffering from fever that has reduced him to a delirious wreck; Peter Luchesi, a young man who severs his hand in an accident; Sylvia Thompson, an airline passenger who suffers chest pains; and Edith Murphy, a mother of three who is diagnosed with a life-threatening disease. In Five Patients, Crichton examines a brief history of medicine up to 1969 to help place hospital culture and practice into context, and addresses the costs and politics of American healthcare. In 1974, he wrote a pilot script for a medical series, "24 Hours", based on his book Five Patients; however, networks were not enthusiastic.
As a personal friend of the artist Jasper Johns, Crichton compiled many of Johns' works in a coffee table book, published as Jasper Johns. It was originally published in 1970 by Harry N. Abrams, Inc. in association with the Whitney Museum of American Art and again in January 1977, with a second revised edition published in 1994. In Crichton's later novel The Terminal Man, the psychiatrist Janet Ross owns a copy of the Jasper Johns painting Numbers; the technophobic antagonist of the story finds it odd that a person would paint numbers, since they are inorganic.
In 1972, Crichton published his last novel as John Lange: Binary, which relates the story of a villainous middle-class businessman who attempts to assassinate the President of the United States by stealing an army shipment of the two precursor chemicals that form a deadly nerve agent.
The Terminal Man (1972) is about a psychomotor epilepsy sufferer, Harry Benson, who regularly suffers seizures followed by blackouts, and conducts himself inappropriately during seizures, waking up hours later with no knowledge of what he has done. Believed to be psychotic, he is investigated and electrodes are implanted in his brain. The book continued the preoccupation in Crichton's novels with machine-human interaction and technology. The novel was adapted into a 1974 film directed by Mike Hodges and starring George Segal. Crichton was hired by Warner Bros. to adapt his novel The Terminal Man into a script. The studio felt he had departed from the source material too much and had another writer adapt it for the 1974 film.
ABC TV wanted to buy the film rights to Crichton's novel Binary. The author agreed on the provision that he could direct the film. ABC agreed provided someone other than Crichton write the script. The result, Pursuit (1972) was a ratings success. Crichton then wrote and directed the 1973 low-budget science fiction western-thriller film Westworld about robots that run amok, which was his feature film directorial debut. It was the first feature film using 2D computer-generated imagery (CGI). The producer of Westworld hired Crichton to write an original script, which became the erotic thriller Extreme Close-Up (1973). Directed by Jeannot Szwarc, the movie disappointed Crichton.
Period novels and directing (1975–1988)
In 1975, Crichton wrote The Great Train Robbery, which would become a bestseller. The novel is a recreation of the Great Gold Robbery of 1855, a massive gold heist, which takes place on a train traveling through Victorian era England. A considerable portion of the book was set in London. Crichton had become aware of the story when lecturing at the University of Cambridge. He later read the transcripts of the court trial and started researching the historical period.
In 1976, Crichton published Eaters of the Dead, a novel about a 10th-century Muslim who travels with a group of Vikings to their settlement. Eaters of the Dead is narrated as a scientific commentary on an old manuscript and was inspired by two sources. The first three chapters retell Ahmad ibn Fadlan's personal account of his journey north and his experiences in encountering the Rus', a Varangian tribe, whilst the remainder is based upon the story of Beowulf, culminating in battles with the 'mist-monsters', or 'wendol', a relict group of Neanderthals.
Crichton wrote and directed the suspense film Coma (1978), adapted from the 1977 novel of the same name by Robin Cook, a friend of his. There are other similarities in terms of genre and the fact that both Cook and Crichton had medical degrees, were of similar age, and wrote about similar subjects. The film was a popular success. Crichton then wrote and directed an adaptation of his own book, The Great Train Robbery (1978), starring Sean Connery and Donald Sutherland. The film was nominated for the Best Cinematography Award by the British Society of Cinematographers and garnered an Edgar Allan Poe Award for Best Motion Picture from the Mystery Writers of America.
In 1979 it was announced that Crichton would direct a movie version of his novel Eaters of the Dead for the newly formed Orion Pictures. This did not occur. Crichton pitched the idea of a modern day King Solomon's Mines to 20th Century Fox who paid him $1.5 million for the film rights to the novel, a screenplay and directorial fee for the movie, before a word had been written. He had never worked that way before, usually writing the book then selling it. He eventually managed to finish the book, titled Congo, which became a best seller. Crichton did the screenplay for Congo after he wrote and directed Looker (1981). Looker was a financial disappointment. Crichton came close to directing a film of Congo with Sean Connery, but the film did not happen. Eventually a film version was made in 1995 by Frank Marshall.
In 1984, Telarium released a graphic adventure based on Congo. Because Crichton had sold all adaptation rights to the novel, he set the game, named Amazon, in South America, and Amy the gorilla became Paco the parrot. That year Crichton also wrote and directed Runaway (1984), a police thriller set in the near future which was a box office disappointment.
Crichton had begun writing Sphere in 1967 as a companion piece to The Andromeda Strain. His initial storyline began with American scientists discovering a 300-year-old spaceship underwater with stenciled markings in English. However, Crichton later realized that he "didn't know where to go with it" and put off completing the book until a later date. The novel was published in 1987. It relates the story of psychologist Norman Johnson, who is required by the U.S. Navy to join a team of scientists assembled by the U.S. Government to examine an enormous alien spacecraft discovered on the bed of the Pacific Ocean, and believed to have been there for over 300 years. The novel begins as a science fiction story, but rapidly changes into a psychological thriller, ultimately exploring the nature of the human imagination. The novel was adapted into the 1998 film directed by Barry Levinson and starring Dustin Hoffman.
Crichton worked, as a director only, on Physical Evidence (1989), a thriller originally conceived as a sequel to Jagged Edge.
In 1988, Crichton was a visiting writer at the Massachusetts Institute of Technology.
A book of autobiographical writings, Travels, was published in 1988.
Jurassic Park and subsequent works (1989–1999)
In 1990, Crichton published the novel Jurassic Park. Crichton utilized the presentation of "fiction as fact", used in his previous novels, Eaters of the Dead and The Andromeda Strain. In addition, chaos theory and its philosophical implications are used to explain the collapse of an amusement park in a "biological preserve" on Isla Nublar, a fictional island to the west of Costa Rica. The novel had begun as a screenplay Crichton had written in 1983, about a graduate student who recreates a dinosaur. Reasoning that genetic research is expensive and that "there is no pressing need to create a dinosaur", Crichton concluded that it would emerge from a "desire to entertain", which led him to set the novel in a wildlife park of extinct animals. The story had originally been told from the point of view of a child, but Crichton changed it because everyone who read the draft felt it would be better if told by an adult.
Steven Spielberg learned of the novel in October 1989 while he and Crichton were discussing a screenplay that would later be developed into the television series ER. Before the book was published, Crichton demanded a non-negotiable fee of $1.5 million as well as a substantial percentage of the gross. Warner Bros. and Tim Burton, Sony Pictures Entertainment and Richard Donner, and 20th Century Fox and Joe Dante bid for the rights, but Universal eventually acquired the rights in May 1990 for Spielberg. Universal paid Crichton a further $500,000 to adapt his own novel, which he had completed by the time Spielberg was filming Hook. Crichton noted that, because the book was "fairly long", his script only had about 10% to 20% of the novel's content. The film, directed by Spielberg, was released in 1993.
In 1992, Crichton published the novel Rising Sun, an internationally bestselling crime thriller about a murder in the Los Angeles headquarters of Nakamoto, a fictional Japanese corporation. The book was adapted into the 1993 film directed by Philip Kaufman and starring Sean Connery and Wesley Snipes; it was released the same year as the adaptation of Jurassic Park.
The theme of his next novel, Disclosure, published in 1994, was sexual harassment—a theme previously explored in his 1972 novel, Binary. Unlike that novel however, Disclosure centers on sexual politics in the workplace, emphasizing an array of paradoxes in traditional gender roles by featuring a male protagonist who is being sexually harassed by a female executive. As a result, the book has been criticized harshly by some feminist commentators and accused of being anti-feminist. Crichton, anticipating this response, offered a rebuttal at the close of the novel which states that a "role-reversal" story uncovers aspects of the subject that would not be seen as easily with a female protagonist. The novel was made into a film the same year, directed by Barry Levinson and starring Michael Douglas and Demi Moore.
Crichton was the creator and an executive producer of the television drama ER, based on his 1974 pilot script 24 Hours. Spielberg helped develop the show, serving as an executive producer for season one and offering advice (he insisted on Julianna Margulies becoming a regular, for example). It was also through Spielberg's Amblin Entertainment that John Wells was attached as the show's executive producer.
In 1995, Crichton published The Lost World as a sequel to Jurassic Park. The title was a reference to Arthur Conan Doyle's The Lost World (1912). It was made into the 1997 film two years later, again directed by Spielberg. In March 1994, Crichton said there would probably be a sequel novel as well as a film adaptation, stating that he had an idea for the novel's story.
Then, in 1996, Crichton published Airframe, an aero-techno-thriller. The book continued Crichton's overall theme of the failure of humans in human-machine interaction, given that the plane worked perfectly and the accident would not have occurred had the pilot reacted properly.
He also wrote Twister (1996) with Anne-Marie Martin, his wife at the time.
In 1999, Crichton published Timeline, a science-fiction novel in which experts time travel back to the medieval period. The novel, which continued Crichton's long history of combining technical details and action in his books, explores quantum physics and time travel directly; it was also warmly received by medieval scholars, who praised his depiction of the challenges involved in researching the Middle Ages. That same year, Crichton founded Timeline Computer Entertainment with David Smith. Although he signed a multi-title publishing deal with Eidos Interactive, only one game, Timeline, was ever published. Released by Eidos Interactive on November 10, 2000, for PCs, the game received negative reviews. A 2003 film based on the book was directed by Richard Donner, starring Paul Walker, Gerard Butler and Frances O'Connor.
Eaters of the Dead was adapted into the 1999 film The 13th Warrior directed by John McTiernan, who was later removed, with Crichton himself taking over direction of reshoots.
Final novels and later life (2000–2008)
In 2002, Crichton published Prey, about developments in science and technology, specifically nanotechnology. The novel explores relatively recent phenomena engendered by the work of the scientific community, such as: artificial life, emergence (and by extension, complexity), genetic algorithms, and agent-based computing.
In 2004, Crichton published State of Fear, a novel concerning eco-terrorists who attempt mass murder to support their views. The novel's central premise is that climate scientists exaggerate global warming. A review in Nature found the novel "likely to mislead the unwary". The novel had an initial print run of 1.5 million copies and reached the No. 1 bestseller position at Amazon.com and No. 2 on The New York Times Best Seller list for one week in January 2005.
The last novel published while he was still living was Next in 2006. The novel follows many characters, including transgenic animals, in a quest to survive in a world dominated by genetic research, corporate greed, and legal interventions, wherein government and private investors spend billions of dollars every year on genetic research.
In 2006, Crichton clashed with journalist Michael Crowley, a senior editor of the magazine The New Republic. In March 2006, Crowley wrote a strongly critical review of State of Fear, focusing on Crichton's stance on global warming. In the same year, Crichton published the novel Next, which contains a minor character named "Mick Crowley", who is a Yale graduate and a Washington, D.C.–based political columnist. The character was portrayed as a child molester with a small penis. The real Crowley, also a Yale graduate, alleged that by including a similarly named character Crichton had libeled him.
Posthumous works
Several novels that were in various states of completion upon Crichton's death have since been published. The first, Pirate Latitudes, was found as a manuscript on one of his computers after his death. It centers on a fictional privateer who attempts to raid a Spanish galleon. It was published in November 2009 by HarperCollins.
Additionally, Crichton had completed the outline for, and was roughly a third of the way through, a novel titled Micro, which centers on technology that shrinks humans to microscopic sizes. Micro was completed by Richard Preston using Crichton's notes and files, and was published in November 2011.
On July 28, 2016, Crichton's website and HarperCollins announced the publication of a third posthumous novel, titled Dragon Teeth, which he had written in 1974. It is a historical novel set during the Bone Wars, and includes the real life characters of Othniel Charles Marsh and Edward Drinker Cope. The novel was released in May 2017.
In addition, some of his published works are being continued by other authors. On February 26, 2019, Crichton's website and HarperCollins announced the publication of The Andromeda Evolution, the sequel to The Andromeda Strain, a collaboration with CrichtonSun LLC. and author Daniel H. Wilson. It was released on November 12, 2019.
In 2020, it was announced that his unpublished works would be adapted into TV series and films in collaboration with CrichtonSun and Range Media Partners.
On December 15, 2022, it was announced that James Patterson would coauthor a novel about a mega-eruption of Hawaii's Mauna Loa volcano, based on an unfinished manuscript by Crichton. The novel, Eruption, was released on June 3, 2024.
Scientific and legal career
Video games and computing
In 1983, Crichton wrote Electronic Life, a book that introduces BASIC programming to its readers. The book, written like a glossary with entries such as "Afraid of Computers (everybody is)," "Buying a Computer," and "Computer Crime," was intended to introduce the idea of personal computers to readers who might be faced with the hardship of using them at work or at home for the first time. It defined basic computer jargon and assured readers that they could master the machine when it inevitably arrived. In his words, being able to program a computer is liberation: "In my experience, you assert control over a computer—show it who's the boss—by making it do something unique. That means programming it... If you devote a couple of hours to programming a new machine, you'll feel better about it ever afterward." In the book, Crichton predicted several developments in computing, including that computer networks would increase in importance as a matter of convenience, enabling the sharing of information and pictures that we see online today in a way the telephone never could. He also made predictions for computer games, dismissing them as "the hula hoops of the 80s," and saying "already there are indications that the mania for twitch games may be fading." In a section of the book called "Microprocessors, or how I flunked biostatistics at Harvard," Crichton again seeks his revenge on the teacher who had given him abnormally low grades in college. Within the book, Crichton included many self-written demonstrative Applesoft (for Apple II) and BASICA (for IBM PC compatibles) programs.
Amazon is a graphical adventure game created by Crichton and produced by John Wells. Trillium released it in the United States in 1984 initially for the Apple II, Atari 8-bit computers, and Commodore 64. Amazon sold more than 100,000 copies, making it a significant commercial success at the time. It has plot elements similar to those previously used in Congo.
Crichton started a company selling a computer program he had originally written to help him create budgets for his movies. He often sought to utilize computing in films, such as Westworld, which was the first film to employ computer-generated special effects. He also pushed Spielberg to include them in the Jurassic Park films. For his pioneering use of computer programs in film production he was awarded the Academy Award for Technical Achievement in 1995.
Intellectual property cases
In November 2006, at the National Press Club in Washington, D.C., Crichton joked that he considered himself an expert in intellectual property law. He had been involved in several lawsuits with others claiming credit for his work.
In 1985, the United States Court of Appeals for the Ninth Circuit heard Berkic v. Crichton, 761 F.2d 1289 (1985). Plaintiff Ted Berkic wrote a screenplay called Reincarnation Inc., which he claims Crichton plagiarized for the movie Coma. The court ruled in Crichton's favor, stating the works were not substantially similar.
In the 1996 case, Williams v. Crichton, 84 F.3d 581 (2d Cir. 1996), Geoffrey Williams claimed that Jurassic Park violated his copyright covering his dinosaur-themed children's stories published in the late 1980s. The court granted summary judgment in favor of Crichton.
In 1998, a United States District Court in Missouri heard the case of Kessler v. Crichton, which, unlike the other cases, went all the way to a jury trial. Plaintiff Stephen Kessler claimed the movie Twister (1996) was based on his work Catch the Wind. It took the jury about 45 minutes to reach a verdict in favor of Crichton. After the verdict, Crichton refused to shake Kessler's hand.
Crichton later summarized his intellectual property legal cases: "I always win."
Global warming
Crichton became well known for attacking the science behind global warming. He testified on the subject before Congress in 2005.
His views would be contested by a number of scientists and commentators. An example is meteorologist Jeffrey Masters's review of Crichton's 2004 novel State of Fear:
Peter Doran, author of a paper in the January 2002 issue of Nature reporting that some areas of Antarctica had cooled between 1986 and 2000, wrote an opinion piece in The New York Times of July 27, 2006, in which he stated, "Our results have been misused as 'evidence' against global warming by Michael Crichton in his novel State of Fear." Al Gore said on March 21, 2007, before a U.S. House committee: "The planet has a fever. If your baby has a fever, you go to the doctor... if your doctor tells you you need to intervene here, you don't say 'Well, I read a science fiction novel that tells me it's not a problem.'" Several commentators have interpreted this as a reference to State of Fear.
Literary technique and style
Crichton's novels, including Jurassic Park, have been described by The Guardian as "harking back to the fantasy adventure fiction of Sir Arthur Conan Doyle, Jules Verne, Edgar Rice Burroughs, and Edgar Wallace, but with a contemporary spin, assisted by cutting-edge technology references made accessible for the general reader." According to The Guardian, "Michael Crichton wasn't really interested in characters, but his innate talent for storytelling enabled him to breathe new life into the science fiction thriller." Like The Guardian, The New York Times has also noted the boys' adventure quality of his novels, interfused with modern technology and science. According to The New York Times,
Crichton's works were frequently cautionary; his plots often portrayed scientific advancements going awry, commonly resulting in worst-case scenarios. A notable recurring theme in Crichton's plots is the pathological failure of complex systems and their safeguards, whether biological (Jurassic Park), militaristic/organizational (The Andromeda Strain), technological (Airframe), or cybernetic (Westworld). This theme of the inevitable breakdown of "perfect" systems and the failure of "fail-safe measures" can be seen strongly in the poster for Westworld, whose slogan was "Where nothing can possibly go worng", and in the discussion of chaos theory in Jurassic Park. His 1973 movie Westworld contains one of the earliest references to a computer virus, the first mention of the concept in a movie. Crichton believed, however, that his view of technology had been misunderstood as
The use of author surrogate was a feature of Crichton's writings from the beginning of his career. In A Case of Need, one of his pseudonymous whodunit stories, Crichton used first-person narrative to portray the hero, a Bostonian pathologist, who is running against the clock to clear a friend's name from medical malpractice in a girl's death from a hack-job abortion.
Crichton has used the literary technique known as the false document. Eaters of the Dead is a "recreation" of the Old English epic Beowulf presented as a scholarly translation of Ahmad ibn Fadlan's 10th century manuscript. The Andromeda Strain and Jurassic Park incorporate fictionalized scientific documents in the form of diagrams, computer output, DNA sequences, footnotes, and bibliography. The Terminal Man and State of Fear include authentic published scientific works that illustrate the premise point.
Crichton often employs the premise of diverse experts or specialists assembled to tackle a unique problem requiring their individual talents and knowledge. The premise was used for The Andromeda Strain, Sphere, Jurassic Park, and, to a lesser extent, Timeline. Sometimes the individual characters in this dynamic work in the private sector and are suddenly called upon by the government to form an immediate response team once some incident or discovery triggers their mobilization. This premise or plot device has been imitated and used by other authors and screenwriters in several books, movies and television shows since.
Personal life
As an adolescent, Crichton felt isolated because of his height (6 ft 9 in, or 206 cm). During the 1970s and 1980s, he consulted psychics and enlightenment gurus to make him feel more socially acceptable and to improve his positive karma. As a result of these experiences, Crichton practiced meditation throughout much of his life. While he is often regarded as a deist, he never publicly confirmed this. When asked in an online Q&A if he were a spiritual person, Crichton responded with: "Yes, but it is difficult to talk about."
Crichton was a workaholic. When drafting a novel, which would typically take him six or seven weeks, Crichton withdrew completely to follow what he called "a structured approach" of ritualistic self-denial. As he neared the end of each book, he would rise increasingly early each day, meaning that he would sleep for less than four hours, going to bed at 10 p.m. and waking at 2 a.m.
In 1992, Crichton was ranked among People magazine's 50 most beautiful people.
He married five times. Four of the marriages ended in divorce: Joan Radam (1965–1970); Kathleen St. Johns (1978–1980); Suzanna Childs (1981–1983); and actress Anne-Marie Martin (1987–2003), the mother of his daughter (born 1989). At the time of his death, Crichton was married to Sherri Alexander (married 2005), who was six months pregnant with their son, born on February 12, 2009.
Politics
From 1990 to 1995, Crichton donated $9,750 to Democratic candidates for office. According to Pat Choate, Crichton was a supporter of Reform candidate Ross Perot in the 1996 United States presidential election.
Crichton's 1992 novel Rising Sun delved into the political and economic effect of Japan–United States relations. The novel warns against foreign direct investment in the U.S. economy, with Crichton describing it in interviews as "economic suicide" for America. Crichton stated that his novel was written as a "wakeup call" to Americans.
In a 2003 speech, Crichton warned against partisanship in environmental legislation, arguing for an apolitical environmentalist movement.
In 2005, Crichton reportedly met with Republican President George W. Bush to discuss Crichton's novel State of Fear, of which Bush was a fan. According to Fred Barnes, Bush and Crichton "talked for an hour and were in near-total agreement."
In September 2005, Crichton testified on climate change before the U.S. Senate Committee on Environment and Public Works. Crichton testified about his doubts that human activities are significantly contributing to global warming, and encouraged U.S. lawmakers to more closely examine the methodology of climate science before voting on policy. His testimony received praise from Republican Senator Jim Inhofe, and criticism from Democratic Senator Hillary Clinton.
Illness and death
According to Crichton's brother Douglas, Crichton was diagnosed with lymphoma in early 2008. In accordance with the private way in which Crichton lived, his cancer was not made public until his death. He was undergoing chemotherapy treatment at the time of his death, and Crichton's physicians and relatives had been expecting him to recover. He died at age 66 on November 4, 2008.
Crichton had an extensive collection of 20th-century American art, which Christie's auctioned in May 2010.
Reception
Science fiction novels
Most of Crichton's novels address issues emerging in scientific research fields. In a number of his novels (Jurassic Park, The Lost World, Next, Congo) genomics plays an important role. Usually, the drama revolves around the sudden eruption of a scientific crisis, revealing the disruptive impacts new forms of knowledge and technology may have, as is stated in The Andromeda Strain, Crichton's first science fiction novel: "This book recounts the five-day history of a major American scientific crisis" (1969, p. 3) or The Terminal Man where unexpected behaviors are realized when electrodes are implanted into a person's brain.
Awards
Mystery Writers of America's Edgar Allan Poe Award, Best Novel, 1969 – A Case of Need
Association of American Medical Writers Award, 1970
Seiun Award for Best Foreign Novel (Best Translated Long Work), 1971 for The Andromeda Strain
Mystery Writers of America's Edgar Allan Poe Award, Best Motion Picture, 1980 – The Great Train Robbery
Named to the list of the "Fifty Most Beautiful People" by People magazine, 1992
Golden Plate Award of the American Academy of Achievement, 1992
Academy of Motion Picture Arts and Sciences Technical Achievement Award, 1994
Writers Guild of America Award, Best Long Form Television Script of 1995 (the Writers Guild lists the award under 1996)
George Foster Peabody Award, 1994 – ER
Primetime Emmy Award for Outstanding Drama Series, 1996 – ER
Ankylosaur named Crichtonsaurus bohlini, 2002
American Association of Petroleum Geologists Journalism Award, 2006
Speeches
Crichton was also a popular public speaker. He delivered a number of notable speeches in his lifetime, particularly on the topic of global warming.
"Intelligence Squared debate"
On March 14, 2007, Intelligence Squared held a debate in New York City for which the motion was Global Warming Is Not a Crisis, moderated by Brian Lehrer. Crichton was for the motion, along with Richard Lindzen and Philip Stott, versus Gavin Schmidt, Richard Somerville, and Brenda Ekwurzel, who were against the motion. Before the debate, the audience had voted largely against the motion (57% to 30%, with 13% undecided). At the end of the debate, the audience vote had shifted toward the motion (46% for to 42% against, with 12% undecided), resulting in Crichton's group winning the debate. Although Crichton inspired numerous blog responses and his contribution to the debate was considered one of his best rhetorical performances, reception of his message was mixed.
Other speeches
"Mediasaurus: The Decline of Conventional Media"
In a speech delivered at the National Press Club in Washington, D.C., on April 7, 1993, Crichton predicted the decline of mainstream media.
"Ritual Abuse, Hot Air, and Missed Opportunities: Science Views Media"
The American Association for the Advancement of Science (AAAS) invited Crichton to address scientists' concerns about how they are portrayed in the media; the speech was delivered to the AAAS in Anaheim, California on January 25, 1999.
"Environmentalism as Religion"
This was not the first discussion of environmentalism as a religion, but it caught on and was widely quoted. Crichton explains his view that religious approaches to the environment are inappropriate and cause damage to the natural world they intend to protect. The speech was delivered to the Commonwealth Club in San Francisco, California on September 15, 2003.
"Science Policy in the 21st century"
Crichton outlined several issues before a joint meeting of liberal and conservative think tanks. The speech was delivered at AEI–Brookings Institution in Washington, D.C., on January 25, 2005.
"The Case for Skepticism on Global Warming"
On January 25, 2005, at the National Press Club in Washington, D.C., Crichton delivered a detailed explanation of why he criticized the consensus view on global warming. Using published UN data, he argued that claims for catastrophic warming arouse doubt and that reducing CO2 is vastly more difficult than is commonly presumed. He spoke on why societies are morally unjustified in spending vast sums on a speculative issue when people around the world are dying of starvation and disease.
"Caltech Michelin Lecture"
"Aliens Cause Global Warming" January 17, 2003. In the spirit of his science fiction writing, Crichton details research on nuclear winter and SETI Drake equations relative to global warming science.
"Testimony before the United States Senate"
Crichton was invited to testify before the Senate in September 2005, as an "expert witness on global warming." The speech was delivered to the Committee on Environment and Public Works in Washington, D.C.
"Complexity Theory and Environmental Management"
In previous speeches, Crichton criticized environmental groups for failing to incorporate complexity theory. Here he explains in detail why complexity theory is essential to environmental management, using the history of Yellowstone Park as an example of what not to do. The speech was delivered to the Washington Center for Complexity and Public Policy in Washington, D.C., on November 6, 2005.
"Genetic Research and Legislative Needs"
While writing Next, Crichton concluded that laws covering genetic research desperately needed to be revised, and spoke to congressional staff members about problems ahead. The speech was delivered to a group of legislative staffers in Washington, D.C., on September 14, 2006.
Gell-Mann amnesia effect
In a speech in 2002, Crichton coined the term Gell-Mann amnesia effect to describe the phenomenon of experts reading articles within their fields of expertise and finding them to be error-ridden and full of misunderstanding, but seemingly forgetting those experiences when reading articles in the same publications written on topics outside of their fields of expertise, which they believe to be credible. He explained that he had chosen the name ironically, because he had once discussed the effect with physicist Murray Gell-Mann, "and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have."
The Gell-Mann amnesia effect is similar to Erwin Knoll's law of media accuracy, which states: "Everything you read in the newspapers is absolutely true except for the rare story of which you happen to have firsthand knowledge."
Legacy
In 2002, a genus of ankylosaurid, Crichtonsaurus bohlini, was named in his honor. However, the species was later concluded to be dubious, and some of the diagnostic fossil material was transferred to the new binomial Crichtonpelta benxiensis, also named in his honor.
His literary works continue to be adapted into films, making him the 20th highest grossing story creator of all time.
Works
References
Bibliography
External links
Musings on Michael Crichton—News and Analysis on his Life and Works
Michael Crichton Obituary. Associated Press. Chicago Sun-Times
Mulholland house – in the early 1980s Crichton lived in this Richard Neutra house in the Hollywood Hills.
Michael Crichton bibliography on the Internet Book List
Complete bibliography and cover gallery of the first editions
Comprehensive listing and info on Michael Crichton's complete works
1942 births
2008 deaths
20th-century American male writers
20th-century American non-fiction writers
20th-century American novelists
20th-century American screenwriters
20th-century pseudonymous writers
21st-century American male writers
21st-century American non-fiction writers
21st-century American novelists
Academics of the University of Cambridge
Academy Award for Technical Achievement winners
American futurologists
American male non-fiction writers
American male novelists
American male screenwriters
American medical writers
American men's basketball players
American science fiction writers
American science fiction film directors
American thriller writers
American travel writers
Deaths from lymphoma in California
Edgar Award winners
Environmental fiction writers
Film directors from Illinois
Film producers from Illinois
Film producers from New York (state)
Harvard College alumni
Harvard Crimson men's basketball players
Harvard Medical School alumni
Hugo Award-winning writers
Medical fiction writers
Mythopoeic writers
Novelists from Illinois
People from Roslyn, New York
Screenwriters from Illinois
Screenwriters from New York (state)
Techno-thriller writers
Television producers from Illinois
Television producers from New York (state)
Television show creators
Writers from Chicago
Writers Guild of America Award winners | 0.762176 | 0.999804 | 0.762027 |
Eyring equation | The Eyring equation (occasionally also known as Eyring–Polanyi equation) is an equation used in chemical kinetics to describe changes in the rate of a chemical reaction against temperature. It was developed almost simultaneously in 1935 by Henry Eyring, Meredith Gwynne Evans and Michael Polanyi. The equation follows from the transition state theory, also known as activated-complex theory. If one assumes a constant enthalpy of activation and constant entropy of activation, the Eyring equation is similar to the empirical Arrhenius equation, despite the Arrhenius equation being empirical and the Eyring equation based on statistical mechanical justification.
General form
The general form of the Eyring–Polanyi equation somewhat resembles the Arrhenius equation:
$$k = \kappa\,\frac{k_\mathrm{B} T}{h}\,e^{-\Delta G^{\ddagger}/RT}$$
where $k$ is the rate constant, $\Delta G^{\ddagger}$ is the Gibbs energy of activation, $\kappa$ is the transmission coefficient, $k_\mathrm{B}$ is the Boltzmann constant, $T$ is the temperature, $h$ is the Planck constant, and $R$ is the gas constant.
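A minimal numerical sketch of this relationship follows. The constants are standard CODATA values; the 80 kJ/mol barrier and the assumption κ = 1 are illustrative choices, not values taken from this article.

```python
import math

# Physical constants (CODATA / exact SI values)
K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
R = 8.314462618       # gas constant, J/(mol*K)

def eyring_rate_constant(delta_g_act, temperature, kappa=1.0):
    """Eyring rate constant k = kappa * (k_B*T/h) * exp(-dG_act / (R*T)).

    delta_g_act : Gibbs energy of activation, J/mol
    temperature : absolute temperature, K
    kappa       : transmission coefficient (often assumed to be 1)
    """
    return kappa * (K_B * temperature / H) * math.exp(-delta_g_act / (R * temperature))

# Example: a hypothetical 80 kJ/mol barrier at 298.15 K gives k on the order of 6e-2 s^-1
print(eyring_rate_constant(80e3, 298.15))
```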
The transmission coefficient is often assumed to be equal to one as it reflects what fraction of the flux through the transition state proceeds to the product without recrossing the transition state. So, a transmission coefficient equal to one means that the fundamental no-recrossing assumption of transition state theory holds perfectly. However, is typically not one because (i) the reaction coordinate chosen for the process at hand is usually not perfect and (ii) many barrier-crossing processes are somewhat or even strongly diffusive in nature. For example, the transmission coefficient of methane hopping in a gas hydrate from one site to an adjacent empty site is between 0.25 and 0.5. Typically, reactive flux correlation function (RFCF) simulations are performed in order to explicitly calculate from the resulting plateau in the RFCF. This approach is also referred to as the Bennett-Chandler approach, which yields a dynamical correction to the standard transition state theory-based rate constant.
It can be rewritten as:
$$k = \kappa\,\frac{k_\mathrm{B} T}{h}\,e^{\Delta S^{\ddagger}/R}\,e^{-\Delta H^{\ddagger}/RT}$$
One can put this equation in the following linear form:
$$\ln\frac{k}{T} = -\frac{\Delta H^{\ddagger}}{R}\cdot\frac{1}{T} + \ln\frac{\kappa\,k_\mathrm{B}}{h} + \frac{\Delta S^{\ddagger}}{R}$$
where:
$k$ = reaction rate constant
$T$ = absolute temperature
$\Delta H^{\ddagger}$ = enthalpy of activation
$R$ = gas constant
$\kappa$ = transmission coefficient
$k_\mathrm{B}$ = Boltzmann constant = $R/N_\mathrm{A}$, $N_\mathrm{A}$ = Avogadro constant
$h$ = Planck constant
$\Delta S^{\ddagger}$ = entropy of activation
If one assumes constant enthalpy of activation, constant entropy of activation, and constant transmission coefficient, this equation can be used as follows: A certain chemical reaction is performed at different temperatures and the reaction rate is determined. The plot of $\ln(k/T)$ versus $1/T$ gives a straight line with slope $-\Delta H^{\ddagger}/R$, from which the enthalpy of activation can be derived, and with intercept $\ln(\kappa\,k_\mathrm{B}/h) + \Delta S^{\ddagger}/R$, from which the entropy of activation is derived.
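The linear fit described above can be sketched in a few lines of Python. The rate-constant data here are invented purely to illustrate the procedure, and κ = 1 is assumed when extracting the entropy of activation.

```python
import numpy as np

K_B = 1.380649e-23   # J/K
H = 6.62607015e-34   # J*s
R = 8.314462618      # J/(mol*K)

# Hypothetical measured rate constants (s^-1) at several temperatures (K)
T = np.array([290.0, 300.0, 310.0, 320.0, 330.0])
k = np.array([1.2e-3, 3.9e-3, 1.2e-2, 3.4e-2, 9.1e-2])

# Linear Eyring plot: ln(k/T) = -(dH/R)(1/T) + ln(kappa*k_B/h) + dS/R
slope, intercept = np.polyfit(1.0 / T, np.log(k / T), 1)

delta_h = -slope * R                          # enthalpy of activation, J/mol
delta_s = (intercept - np.log(K_B / H)) * R   # entropy of activation, J/(mol*K), assuming kappa = 1

print(f"dH_act = {delta_h/1000:.1f} kJ/mol, dS_act = {delta_s:.1f} J/(mol*K)")
```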
Accuracy
Transition state theory requires a value of the transmission coefficient, called $\kappa$ in that theory. This value is often taken to be unity (i.e., the species passing through the transition state always proceed directly to products and never revert to reactants, so $\kappa = 1$). To avoid specifying a value of $\kappa$, the rate constant can be compared to the value of the rate constant at some fixed reference temperature (i.e., $k(T)/k(T_\mathrm{ref})$), which eliminates the $\kappa$ factor in the resulting expression if one assumes that the transmission coefficient is independent of temperature.
Error propagation formulas
Error propagation formulas for $\Delta H^{\ddagger}$ and $\Delta S^{\ddagger}$ have been published.
Notes
References
Chapman, S. and Cowling, T.G. (1991). "The Mathematical Theory of Non-uniform Gases: An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases" (3rd Edition). Cambridge University Press,
External links
Eyring equation at the University of Regensburg (archived from the original)
Online-tool to calculate the reaction rate from an energy barrier (in kJ/mol) using the Eyring equation
Chemical kinetics
Eponymous equations of physics
Reaction mechanisms
Physical chemistry
de:Eyring-Theorie | 0.770157 | 0.989425 | 0.762012 |
Self-energy | In quantum field theory, the energy that a particle has as a result of changes that it causes in its environment defines self-energy , and represents the contribution to the particle's energy, or effective mass, due to interactions between the particle and its environment. In electrostatics, the energy required to assemble the charge distribution takes the form of self-energy by bringing in the constituent charges from infinity, where the electric force goes to zero. In a condensed matter context, self-energy is used to describe interaction induced renormalization of quasiparticle mass (dispersions) and lifetime. Self-energy is especially used to describe electron-electron interactions in Fermi liquids. Another example of self-energy is found in the context of phonon softening due to electron-phonon coupling.
Characteristics
Mathematically, this energy is equal to the so-called on mass shell value of the proper self-energy operator (or proper mass operator) in the momentum-energy representation (more precisely, to $\hbar$ times this value). In this, or other representations (such as the space-time representation), the self-energy is pictorially (and economically) represented by means of Feynman diagrams, such as the one shown below. In this particular diagram, the three arrowed straight lines represent particles, or particle propagators, and the wavy line a particle-particle interaction; removing (or amputating) the left-most and the right-most straight lines in the diagram shown below (these so-called external lines correspond to prescribed values for, for instance, momentum and energy, or four-momentum), one retains a contribution to the self-energy operator (in, for instance, the momentum-energy representation). Using a small number of simple rules, each Feynman diagram can be readily expressed in its corresponding algebraic form.
In general, the on-the-mass-shell value of the self-energy operator in the momentum-energy representation is complex. In such cases, it is the real part of this self-energy that is identified with the physical self-energy (referred to above as particle's "self-energy"); the inverse of the imaginary part is a measure for the lifetime of the particle under investigation. For clarity, elementary excitations, or dressed particles (see quasi-particle), in interacting systems are distinct from stable particles in vacuum; their state functions consist of complicated superpositions of the eigenstates of the underlying many-particle system, which only momentarily, if at all, behave like those specific to isolated particles; the above-mentioned lifetime is the time over which a dressed particle behaves as if it were a single particle with well-defined momentum and energy.
The self-energy operator (often denoted by $\Sigma$, and less frequently by $M$) is related to the bare and dressed propagators (often denoted by $G_0$ and $G$ respectively) via the Dyson equation (named after Freeman Dyson):
$$G = G_0 + G_0\,\Sigma\,G$$
Multiplying on the left by the inverse $G_0^{-1}$ of the operator $G_0$ and on the right by $G^{-1}$ yields
$$\Sigma = G_0^{-1} - G^{-1}$$
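As a purely illustrative sketch of the Dyson relation, the following Python snippet dresses a single level with a constant imaginary self-energy (a wide-band-limit assumption chosen for simplicity). The level energy and broadening are arbitrary; a realistic self-energy would come from a microscopic calculation.

```python
import numpy as np

eps0 = 0.5       # bare level energy (arbitrary units)
gamma = 0.2      # level broadening induced by the environment
eta = 1e-6       # positive infinitesimal for the retarded bare propagator

w = np.linspace(-2.0, 3.0, 1001)                 # frequency grid

sigma = -0.5j * gamma * np.ones_like(w)          # assumed self-energy Sigma(w) = -i*gamma/2
g0 = 1.0 / (w - eps0 + 1j * eta)                 # bare propagator G0(w)
g = 1.0 / (1.0 / g0 - sigma)                     # Dyson equation: G^-1 = G0^-1 - Sigma

spectral = -g.imag / np.pi                       # spectral function A(w): Lorentzian of width gamma
print(spectral.max(), 2.0 / (np.pi * gamma))     # peak height agrees with 2/(pi*gamma)
```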
The photon and gluon do not get a mass through renormalization because gauge symmetry protects them from getting a mass. This is a consequence of the Ward identity. The W-boson and the Z-boson get their masses through the Higgs mechanism; they do undergo mass renormalization through the renormalization of the electroweak theory.
Neutral particles with internal quantum numbers can mix with each other through virtual pair production. The primary example of this phenomenon is the mixing of neutral kaons. Under appropriate simplifying assumptions this can be described without quantum field theory.
Other uses
In chemistry, the self-energy or Born energy of an ion is the energy associated with the field of the ion itself.
In solid state and condensed-matter physics self-energies and a myriad of related quasiparticle properties are calculated by Green's function methods and Green's function (many-body theory) of interacting low-energy excitations on the basis of electronic band structure calculations. Self-energies also find extensive application in the calculation of particle transport through open quantum systems and the embedding of sub-regions into larger systems (for example the surface of a semi-infinite crystal).
See also
Quantum field theory
QED vacuum
Renormalization
Self-force
GW approximation
Wheeler–Feynman absorber theory
References
A. L. Fetter, and J. D. Walecka, Quantum Theory of Many-Particle Systems (McGraw-Hill, New York, 1971); (Dover, New York, 2003)
J. W. Negele, and H. Orland, Quantum Many-Particle Systems (Westview Press, Boulder, 1998)
A. A. Abrikosov, L. P. Gorkov and I. E. Dzyaloshinski (1963): Methods of Quantum Field Theory in Statistical Physics Englewood Cliffs: Prentice-Hall.
A. N. Vasil'ev The Field Theoretic Renormalization Group in Critical Behavior Theory and Stochastic Dynamics (Routledge Chapman & Hall 2004); ;
Quantum electrodynamics
Quantum field theory
Renormalization group | 0.777439 | 0.980153 | 0.76201 |
Inductance | Inductance is the tendency of an electrical conductor to oppose a change in the electric current flowing through it. The electric current produces a magnetic field around the conductor. The magnetic field strength depends on the magnitude of the electric current, and follows any changes in the magnitude of the current. From Faraday's law of induction, any change in magnetic field through a circuit induces an electromotive force (EMF) (voltage) in the conductors, a process known as electromagnetic induction. This induced voltage created by the changing current has the effect of opposing the change in current. This is stated by Lenz's law, and the voltage is called back EMF.
Inductance is defined as the ratio of the induced voltage to the rate of change of current causing it. It is a proportionality constant that depends on the geometry of circuit conductors (e.g., cross-section area and length) and the magnetic permeability of the conductor and nearby materials. An electronic component designed to add inductance to a circuit is called an inductor. It typically consists of a coil or helix of wire.
The term inductance was coined by Oliver Heaviside in May 1884, as a convenient way to refer to "coefficient of self-induction". It is customary to use the symbol for inductance, in honour of the physicist Heinrich Lenz. In the SI system, the unit of inductance is the henry (H), which is the amount of inductance that causes a voltage of one volt, when the current is changing at a rate of one ampere per second. The unit is named for Joseph Henry, who discovered inductance independently of Faraday.
History
The history of electromagnetic induction, a facet of electromagnetism, began with observations of the ancients: electric charge or static electricity (rubbing silk on amber), electric current (lightning), and magnetic attraction (lodestone). Understanding the unity of these forces of nature, and the scientific theory of electromagnetism was initiated and achieved during the 19th century.
Electromagnetic induction was first described by Michael Faraday in 1831. In Faraday's experiment, he wrapped two wires around opposite sides of an iron ring. He expected that, when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. Using a galvanometer, he observed a transient current flow in the second coil of wire each time that a battery was connected or disconnected from the first coil. This current was induced by the change in magnetic flux that occurred when the battery was connected and disconnected. Faraday found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk").
Source of inductance
A current flowing through a conductor generates a magnetic field around the conductor, which is described by Ampere's circuital law. The total magnetic flux $\Phi$ through a circuit is equal to the product of the perpendicular component of the magnetic flux density and the area of the surface spanning the current path. If the current varies, the magnetic flux $\Phi$ through the circuit changes. By Faraday's law of induction, any change in flux through a circuit induces an electromotive force (EMF, $\mathcal{E}$) in the circuit, proportional to the rate of change of flux:
$$\mathcal{E} = -\frac{d\Phi}{dt}$$
The negative sign in the equation indicates that the induced voltage is in a direction which opposes the change in current that created it; this is called Lenz's law. The potential is therefore called a back EMF. If the current is increasing, the voltage is positive at the end of the conductor through which the current enters and negative at the end through which it leaves, tending to reduce the current. If the current is decreasing, the voltage is positive at the end through which the current leaves the conductor, tending to maintain the current. Self-inductance, usually just called inductance, $L$, is the ratio between the induced voltage and the rate of change of the current:
$$v(t) = L\,\frac{di}{dt} \qquad (1)$$
Thus, inductance is a property of a conductor or circuit, due to its magnetic field, which tends to oppose changes in current through the circuit. The unit of inductance in the SI system is the henry (H), named after Joseph Henry, which is the amount of inductance that generates a voltage of one volt when the current is changing at a rate of one ampere per second.
All conductors have some inductance, which may have either desirable or detrimental effects in practical electrical devices. The inductance of a circuit depends on the geometry of the current path, and on the magnetic permeability of nearby materials; ferromagnetic materials with a higher permeability like iron near a conductor tend to increase the magnetic field and inductance. Any alteration to a circuit which increases the flux (total magnetic field) through the circuit produced by a given current increases the inductance, because inductance is also equal to the ratio of magnetic flux to current
An inductor is an electrical component consisting of a conductor shaped to increase the magnetic flux, to add inductance to a circuit. Typically it consists of a wire wound into a coil or helix. A coiled wire has a higher inductance than a straight wire of the same length, because the magnetic field lines pass through the circuit multiple times, it has multiple flux linkages. The inductance is proportional to the square of the number of turns in the coil, assuming full flux linkage.
The inductance of a coil can be increased by placing a magnetic core of ferromagnetic material in the hole in the center. The magnetic field of the coil magnetizes the material of the core, aligning its magnetic domains, and the magnetic field of the core adds to that of the coil, increasing the flux through the coil. This is called a ferromagnetic core inductor. A magnetic core can increase the inductance of a coil by thousands of times.
If multiple electric circuits are located close to each other, the magnetic field of one can pass through the other; in this case the circuits are said to be inductively coupled. Due to Faraday's law of induction, a change in current in one circuit can cause a change in magnetic flux in another circuit and thus induce a voltage in another circuit. The concept of inductance can be generalized in this case by defining the mutual inductance $M_{k\ell}$ of circuit $k$ and circuit $\ell$ as the ratio of the voltage induced in circuit $\ell$ to the rate of change of current in circuit $k$. This is the principle behind a transformer. The property describing the effect of one conductor on itself is more precisely called self-inductance, and the properties describing the effects of one conductor with changing current on nearby conductors is called mutual inductance.
Self-inductance and magnetic energy
If the current through a conductor with inductance is increasing, a voltage $v(t)$ is induced across the conductor with a polarity that opposes the current—in addition to any voltage drop caused by the conductor's resistance. The charges flowing through the circuit lose potential energy. The energy from the external circuit required to overcome this "potential hill" is stored in the increased magnetic field around the conductor. Therefore, an inductor stores energy in its magnetic field. At any given time the power $p(t)$ flowing into the magnetic field, which is equal to the rate of change of the stored energy $W$, is the product of the current $i(t)$ and voltage $v(t)$ across the conductor:
$$p(t) = \frac{dW}{dt} = v(t)\,i(t)$$
From (1) above
$$\frac{dW}{dt} = L(i)\,i\,\frac{di}{dt}$$
When there is no current, there is no magnetic field and the stored energy is zero. Neglecting resistive losses, the energy (measured in joules, in SI) stored by an inductance with a current $I$ through it is equal to the amount of work required to establish the current through the inductance from zero, and therefore the magnetic field. This is given by:
$$W = \int_0^I L(i)\,i\,di$$
If the inductance $L(i)$ is constant over the current range, the stored energy is
$$W = \tfrac{1}{2}\,L\,I^2$$
Inductance is therefore also proportional to the energy stored in the magnetic field for a given current. This energy is stored as long as the current remains constant. If the current decreases, the magnetic field decreases, inducing a voltage in the conductor in the opposite direction, negative at the end through which current enters and positive at the end through which it leaves. This returns stored magnetic energy to the external circuit.
If ferromagnetic materials are located near the conductor, such as in an inductor with a magnetic core, the constant inductance equation above is only valid for linear regions of the magnetic flux, at currents below the level at which the ferromagnetic material saturates, where the inductance is approximately constant. If the magnetic field in the inductor approaches the level at which the core saturates, the inductance begins to change with current, and the integral equation must be used.
Inductive reactance
When a sinusoidal alternating current (AC) is passing through a linear inductance, the induced back-EMF is also sinusoidal. If the current through the inductance is $i(t) = I_\mathrm{peak}\sin(\omega t)$, from (1) above the voltage across it is
$$v(t) = L\,\frac{di}{dt} = \omega L\,I_\mathrm{peak}\cos(\omega t)$$
where $I_\mathrm{peak}$ is the amplitude (peak value) of the sinusoidal current in amperes, $\omega = 2\pi f$ is the angular frequency of the alternating current, with $f$ being its frequency in hertz, and $L$ is the inductance.
Thus the amplitude (peak value) of the voltage across the inductance is
$$V_p = \omega L\,I_\mathrm{peak} = 2\pi f\,L\,I_\mathrm{peak}$$
Inductive reactance is the opposition of an inductor to an alternating current. It is defined analogously to electrical resistance in a resistor, as the ratio of the amplitude (peak value) of the alternating voltage to current in the component:
$$X_L = \frac{V_p}{I_p} = \omega L = 2\pi f\,L$$
Reactance has units of ohms. It can be seen that inductive reactance of an inductor increases proportionally with frequency, so an inductor conducts less current for a given applied AC voltage as the frequency increases. Because the induced voltage is greatest when the current is increasing, the voltage and current waveforms are out of phase; the voltage peaks occur earlier in each cycle than the current peaks. The phase difference between the current and the induced voltage is $\varphi = \tfrac{\pi}{2}$ radians or 90 degrees, showing that in an ideal inductor the current lags the voltage by 90°.
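A short numerical illustration of inductive reactance; the component values are chosen arbitrarily for the example.

```python
import math

def inductive_reactance(inductance_h, frequency_hz):
    """Reactance X_L = 2*pi*f*L, in ohms."""
    return 2.0 * math.pi * frequency_hz * inductance_h

# Example: a 10 mH inductor at 50 Hz mains frequency
L = 10e-3
f = 50.0
x_l = inductive_reactance(L, f)
v_peak = 10.0                 # peak value of the applied sinusoidal voltage, V
i_peak = v_peak / x_l         # peak current; it lags the voltage by 90 degrees
print(f"X_L = {x_l:.2f} ohm, I_peak = {i_peak:.2f} A")
```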
Calculating inductance
In the most general case, inductance can be calculated from Maxwell's equations. Many important cases can be solved using simplifications. Where high frequency currents are considered, with skin effect, the surface current densities and magnetic field may be obtained by solving the Laplace equation. Where the conductors are thin wires, self-inductance still depends on the wire radius and the distribution of the current in the wire. This current distribution is approximately constant (on the surface or in the volume of the wire) for a wire radius much smaller than other length scales.
Inductance of a straight single wire
As a practical matter, longer wires have more inductance, and thicker wires have less, analogous to their electrical resistance (although the relationships aren't linear, and are different in kind from the relationships that length and diameter bear to resistance).
Separating the wire from the other parts of the circuit introduces some unavoidable error in any formulas' results. These inductances are often referred to as “partial inductances”, in part to encourage consideration of the other contributions to whole-circuit inductance which are omitted.
Practical formulas
For derivation of the formulas below, see Rosa (1908).
The total low frequency inductance (interior plus exterior) of a straight wire is:
$$L_\mathrm{DC} = 200\,\frac{\text{nH}}{\text{m}}\;\ell\,\left[\ln\!\left(\frac{2\,\ell}{r}\right) - 0.75\right]$$
where
$L_\mathrm{DC}$ is the "low-frequency" or DC inductance in nanohenry (nH or 10−9 H),
$\ell$ is the length of the wire in meters,
$r$ is the radius of the wire in meters (hence a very small decimal number),
the constant $200\,\tfrac{\text{nH}}{\text{m}}$ is the permeability of free space, commonly called $\mu_0$, divided by $2\pi$; in the absence of magnetically reactive insulation the value 200 is exact when using the classical definition of $\mu_0 = 4\pi\times10^{-7}\ \mathrm{H/m}$, and correct to 7 decimal places when using the 2019-redefined SI value of $\mu_0$.
The constant 0.75 is just one parameter value among several; different frequency ranges, different shapes, or extremely long wire lengths require a slightly different constant (see below). This result is based on the assumption that the radius is much less than the length which is the common case for wires and rods. Disks or thick cylinders have slightly different formulas.
For sufficiently high frequencies skin effects cause the interior currents to vanish, leaving only the currents on the surface of the conductor; the inductance for alternating current, $L_\mathrm{AC}$, is then given by a very similar formula:
$$L_\mathrm{AC} = 200\,\frac{\text{nH}}{\text{m}}\;\ell\,\left[\ln\!\left(\frac{2\,\ell}{r}\right) - 1\right]$$
where the variables $\ell$ and $r$ are the same as above; note the changed constant term, now 1, from 0.75 above.
In an example from everyday experience, just one of the conductors of a lamp cord long, made of 18 AWG wire, would only have an inductance of about if stretched out straight.
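The low- and high-frequency formulas above can be evaluated directly. The wire dimensions in this sketch are illustrative, and the expressions are only valid for a radius much smaller than the length.

```python
import math

def straight_wire_inductance_nh(length_m, radius_m, high_frequency=False):
    """Self-inductance of a straight round wire, in nanohenries.

    Uses L = 200 * l * [ln(2*l/r) - 0.75] nH (low frequency) or the
    high-frequency variant with the constant 1 instead of 0.75, as given above.
    Valid only for r << l.
    """
    const = 1.0 if high_frequency else 0.75
    return 200.0 * length_m * (math.log(2.0 * length_m / radius_m) - const)

# Example: 1 m of wire with 0.5 mm radius (values chosen for illustration)
print(straight_wire_inductance_nh(1.0, 0.5e-3))        # ~1.5e3 nH at DC
print(straight_wire_inductance_nh(1.0, 0.5e-3, True))  # slightly less at high frequency
```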
Mutual inductance of two parallel straight wires
There are two cases to consider:
Current travels in the same direction in each wire, and
current travels in opposing directions in the wires.
Currents in the wires need not be equal, though they often are, as in the case of a complete circuit, where one wire is the source and the other the return.
Mutual inductance of two wire loops
This is the generalized case of the paradigmatic two-loop cylindrical coil carrying a uniform low frequency current; the loops are independent closed circuits that can have different lengths, any orientation in space, and carry different currents. Nonetheless, the error terms, which are not included in the integral are only small if the geometries of the loops are mostly smooth and convex: They must not have too many kinks, sharp corners, coils, crossovers, parallel segments, concave cavities, or other topologically "close" deformations. A necessary predicate for the reduction of the 3-dimensional manifold integration formula to a double curve integral is that the current paths be filamentary circuits, i.e. thin wires where the radius of the wire is negligible compared to its length.
The mutual inductance $M_{ij}$ of a filamentary circuit $i$ on a filamentary circuit $j$ is given by the double integral Neumann formula
$$M_{ij} = \frac{\mu_0}{4\pi} \oint_{C_i}\oint_{C_j} \frac{d\mathbf{s}_i \cdot d\mathbf{s}_j}{|\mathbf{s}_i - \mathbf{s}_j|}$$
where
$C_i$ and $C_j$ are the curves followed by the wires,
$\mu_0$ is the permeability of free space,
$d\mathbf{s}_i$ is a small increment of the wire in circuit $C_i$,
$\mathbf{s}_i$ is the position of $d\mathbf{s}_i$ in space,
$d\mathbf{s}_j$ is a small increment of the wire in circuit $C_j$,
$\mathbf{s}_j$ is the position of $d\mathbf{s}_j$ in space.
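A brute-force numerical evaluation of the Neumann formula is straightforward for filamentary loops. The sketch below discretizes two coaxial circular loops and performs the double sum; the loop radii, separation, and number of segments are arbitrary illustrative choices.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # permeability of free space, H/m

def loop_points(radius, z, n):
    """Midpoints and segment vectors of a circular loop of given radius at height z."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = np.stack([radius * np.cos(phi), radius * np.sin(phi), np.full(n, z)], axis=1)
    seg = np.roll(pts, -1, axis=0) - pts    # ds vectors joining consecutive points
    mid = pts + 0.5 * seg                   # midpoints used in the double sum
    return mid, seg

def mutual_inductance(r1, r2, separation, n=400):
    """Neumann double sum M = mu0/(4*pi) * sum_i sum_j (ds_i . ds_j) / |s_i - s_j|
    for two coaxial circular filamentary loops."""
    p1, s1 = loop_points(r1, 0.0, n)
    p2, s2 = loop_points(r2, separation, n)
    diff = p1[:, None, :] - p2[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    dots = s1 @ s2.T
    return MU0 / (4.0 * np.pi) * np.sum(dots / dist)

# Example: two 10 cm radius loops separated by 5 cm (illustrative values)
print(mutual_inductance(0.10, 0.10, 0.05))   # on the order of 1e-7 H
```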
Derivation
where
is the current through the th wire, this current creates the magnetic flux through the th surface
is the magnetic flux through the ith surface due to the electrical circuit outlined by
where
Stokes' theorem has been used for the 3rd equality step. For the last equality step, we used the retarded potential expression for and we ignore the effect of the retarded time (assuming the geometry of the circuits is small enough compared to the wavelength of the current they carry). It is actually an approximation step, and is valid only for local circuits made of thin wires.
Self-inductance of a wire loop
Formally, the self-inductance of a wire loop would be given by the above equation with $C_i = C_j$. However, here $1/|\mathbf{s}_i - \mathbf{s}_j|$ becomes infinite as the two integration points approach each other, leading to a logarithmically divergent integral.
This necessitates taking the finite wire radius and the distribution of the current in the wire into account. There remains the contribution from the integral over all point pairs with $|\mathbf{s} - \mathbf{s}'| \ge a/2$ and a correction term,
where
$s$ and $s'$ are distances along the curves $C$ and $C'$ respectively
$a$ is the radius of the wire
$\ell$ is the length of the wire
$Y$ is a constant that depends on the distribution of the current in the wire:
$Y = 0$ when the current flows on the surface of the wire (total skin effect),
$Y = \tfrac{1}{2}$ when the current is evenly distributed over the cross-section of the wire.
is an error term whose size depends on the curve of the loop:
when the loop has sharp corners, and
when it is a smooth curve.
Both are small when the wire is long compared to its radius.
Inductance of a solenoid
A solenoid is a long, thin coil; i.e., a coil whose length is much greater than its diameter. Under these conditions, and without any magnetic material used, the magnetic flux density $B$ within the coil is practically constant and is given by
$$B = \frac{\mu_0\,N\,i}{\ell}$$
where $\mu_0$ is the magnetic constant, $N$ the number of turns, $i$ the current and $\ell$ the length of the coil. Ignoring end effects, the total magnetic flux through the coil is obtained by multiplying the flux density $B$ by the cross-section area $A$:
$$\Phi = \frac{\mu_0\,N\,i\,A}{\ell}$$
When this is combined with the definition of inductance $L = N\Phi/i$, it follows that the inductance of a solenoid is given by:
$$L = \frac{\mu_0\,N^2\,A}{\ell}$$
Therefore, for air-core coils, inductance is a function of coil geometry and number of turns, and is independent of current.
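A one-line calculation of the solenoid formula, with illustrative coil dimensions:

```python
import math

MU0 = 4e-7 * math.pi   # magnetic constant, H/m

def solenoid_inductance(turns, area_m2, length_m):
    """Air-core solenoid: L = mu0 * N^2 * A / l (long, thin coil, end effects ignored)."""
    return MU0 * turns**2 * area_m2 / length_m

# Example: 500 turns on a 10 cm long coil of 1 cm radius
A = math.pi * 0.01**2
print(solenoid_inductance(500, A, 0.10))   # ~ 0.99 mH
```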
Inductance of a coaxial cable
Let the inner conductor have radius $r_i$ and permeability $\mu_i$, let the dielectric between the inner and outer conductor have permeability $\mu_d$, and let the outer conductor have inner radius $r_{o1}$, outer radius $r_{o2}$, and permeability $\mu_o$. However, for a typical coaxial line application, we are interested in passing (non-DC) signals at frequencies for which the resistive skin effect cannot be neglected. In most cases, the inner and outer conductor terms are negligible, in which case one may approximate the inductance per unit length as
$$\frac{L}{\ell} \approx \frac{\mu_d}{2\pi}\,\ln\frac{r_{o1}}{r_i}$$
Inductance of multilayer coils
Most practical air-core inductors are multilayer cylindrical coils with square cross-sections to minimize average distance between turns (circular cross-sections would be better but harder to form).
Magnetic cores
Many inductors include a magnetic core at the center of or partly surrounding the winding. Over a large enough range these exhibit a nonlinear permeability with effects such as magnetic saturation. Saturation makes the resulting inductance a function of the applied current.
The secant or large-signal inductance is used in flux calculations. It is defined as:
$$L_s(i) = \frac{\Phi(i)}{i}$$
The differential or small-signal inductance, on the other hand, is used in calculating voltage. It is defined as:
$$L_d(i) = \frac{d\Phi(i)}{di}$$
The circuit voltage for a nonlinear inductor is obtained via the differential inductance as shown by Faraday's law and the chain rule of calculus:
$$v(t) = \frac{d\Phi}{dt} = \frac{d\Phi}{di}\,\frac{di}{dt} = L_d(i)\,\frac{di}{dt}$$
Similar definitions may be derived for nonlinear mutual inductance.
Mutual inductance
Mutual inductance is defined as the ratio between the EMF induced in one loop or coil by the rate of change of current in another loop or coil. Mutual inductance is given the symbol $M$.
Derivation of mutual inductance
The inductance equations above are a consequence of Maxwell's equations. For the important case of electrical circuits consisting of thin wires, the derivation is straightforward.
In a system of $K$ wire loops, each with one or several wire turns, the flux linkage of loop $m$ is given by
$$\lambda_m = N_m \Phi_m = \sum_{n=1}^{K} L_{m,n}\,i_n$$
Here $N_m$ denotes the number of turns in loop $m$, $\Phi_m$ is the magnetic flux through loop $m$, and $L_{m,n}$ are some constants described below. This equation follows from Ampere's law: magnetic fields and fluxes are linear functions of the currents. By Faraday's law of induction, we have
$$v_m = \frac{d\lambda_m}{dt} = \sum_{n=1}^{K} L_{m,n}\,\frac{di_n}{dt}$$
where $v_m$ denotes the voltage induced in circuit $m$. This agrees with the definition of inductance above if the coefficients $L_{m,n}$ are identified with the coefficients of inductance. Because the total currents $N_n i_n$ contribute to $\Phi_m$, it also follows that $L_{m,n}$ is proportional to the product of turns $N_m N_n$.
Mutual inductance and magnetic field energy
Multiplying the equation for vm above with imdt and summing over m gives the energy transferred to the system in the time interval dt,
This must agree with the change of the magnetic field energy, W, caused by the currents. The integrability condition
requires Lm,n = Ln,m. The inductance matrix, Lm,n, thus is symmetric. The integral of the energy transfer is the magnetic field energy as a function of the currents,
This equation also is a direct consequence of the linearity of Maxwell's equations. It is helpful to associate changing electric currents with a build-up or decrease of magnetic field energy. The corresponding energy transfer requires or generates a voltage. A mechanical analogy in the K = 1 case with magnetic field energy (1/2)Li2 is a body with mass M, velocity u and kinetic energy (1/2)Mu2. The rate of change of velocity (current) multiplied with mass (inductance) requires or generates a force (an electrical voltage).
Mutual inductance occurs when the change in current in one inductor induces a voltage in another nearby inductor. It is important as the mechanism by which transformers work, but it can also cause unwanted coupling between conductors in a circuit.
The mutual inductance, $M$, is also a measure of the coupling between two inductors. The mutual inductance $M_{ij}$ by circuit $i$ on circuit $j$ is given by the double integral Neumann formula; see the calculation techniques above.
The mutual inductance also has the relationship:
where
Once the mutual inductance is determined, it can be used to predict the behavior of a circuit:
where
The minus sign arises because of the sense the current has been defined in the diagram. With both currents defined going into the dots, the sign of $M$ will be positive (the equation would read with a plus sign instead).
Coupling coefficient
The coupling coefficient is the ratio of the open-circuit actual voltage ratio to the ratio that would be obtained if all the flux coupled from one magnetic circuit to the other. The coupling coefficient is related to mutual inductance and self inductances in the following way. From the two simultaneous equations expressed in the two-port matrix the open-circuit voltage ratio is found to be:
$$\left.\frac{V_2}{V_1}\right|_{\text{open circuit}} = \frac{M}{L_1}$$
while the ratio if all the flux is coupled is the ratio of the turns, hence the ratio of the square root of the inductances
$$\left.\frac{V_2}{V_1}\right|_{\text{all flux coupled}} = \sqrt{\frac{L_2}{L_1}}$$
thus,
$$M = k\,\sqrt{L_1 L_2}$$
where $k$ is the coupling coefficient.
The coupling coefficient is a convenient way to specify the relationship between a certain orientation of inductors with arbitrary inductance. Most authors define the range as $0 \le k < 1$, but some define it as $-1 < k < 1$. Allowing negative values of $k$ captures phase inversions of the coil connections and the direction of the windings.
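The relationship between mutual inductance, self-inductances, and the coupling coefficient can be evaluated trivially; the inductance values below are illustrative only.

```python
import math

def coupling_coefficient(mutual_h, l1_h, l2_h):
    """k = M / sqrt(L1 * L2); |k| <= 1 for physically realizable coupled inductors."""
    return mutual_h / math.sqrt(l1_h * l2_h)

# Example: two 100 uH coils with 60 uH mutual inductance (illustrative values)
print(coupling_coefficient(60e-6, 100e-6, 100e-6))   # k = 0.6
```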
Matrix representation
Mutually coupled inductors can be described by any of the two-port network parameter matrix representations. The most direct are the z parameters, which are given by
$$\mathbf{z}(s) = s\begin{bmatrix} L_1 & M \\ M & L_2 \end{bmatrix}$$
The y parameters are given by
$$\mathbf{y}(s) = \frac{1}{s}\begin{bmatrix} L_1 & M \\ M & L_2 \end{bmatrix}^{-1}$$
where $s$ is the complex frequency variable, $L_1$ and $L_2$ are the inductances of the primary and secondary coil, respectively, and $M$ is the mutual inductance between the coils.
Multiple Coupled Inductors
Mutual inductance may be applied to multiple inductors simultaneously. The matrix representations for multiple mutually coupled inductors are given by
Equivalent circuits
T-circuit
Mutually coupled inductors can equivalently be represented by a T-circuit of inductors as shown. If the coupling is strong and the inductors are of unequal values then the series inductor on the step-down side may take on a negative value.
This can be analyzed as a two port network. With the output terminated with some arbitrary impedance the voltage gain is given by,
where is the coupling constant and is the complex frequency variable, as above.
For tightly coupled inductors, where $k = 1$, this reduces to
which is independent of the load impedance. If the inductors are wound on the same core and with the same geometry, then this expression is equal to the turns ratio of the two inductors because inductance is proportional to the square of turns ratio.
The input impedance of the network is given by,
For this reduces to
Thus, current gain is independent of load unless the further condition
is met, in which case,
and
π-circuit
Alternatively, two coupled inductors can be modelled using a π equivalent circuit with optional ideal transformers at each port. While the circuit is more complicated than a T-circuit, it can be generalized to circuits consisting of more than two coupled inductors. Equivalent circuit elements have physical meaning, modelling respectively magnetic reluctances of coupling paths and magnetic reluctances of leakage paths. For example, electric currents flowing through these elements correspond to coupling and leakage magnetic fluxes. Ideal transformers normalize all self-inductances to 1 Henry to simplify mathematical formulas.
Equivalent circuit element values can be calculated from coupling coefficients with
where coupling coefficient matrix and its cofactors are defined as
and
For two coupled inductors, these formulas simplify to
and
and for three coupled inductors (for brevity shown only for and )
and
Resonant transformer
When a capacitor is connected across one winding of a transformer, making the winding a tuned circuit (resonant circuit) it is called a single-tuned transformer. When a capacitor is connected across each winding, it is called a double tuned transformer. These resonant transformers can store oscillating electrical energy similar to a resonant circuit and thus function as a bandpass filter, allowing frequencies near their resonant frequency to pass from the primary to secondary winding, but blocking other frequencies. The amount of mutual inductance between the two windings, together with the Q factor of the circuit, determine the shape of the frequency response curve. The advantage of the double tuned transformer is that it can have a wider bandwidth than a simple tuned circuit. The coupling of double-tuned circuits is described as loose-, critical-, or over-coupled depending on the value of the coupling coefficient When two tuned circuits are loosely coupled through mutual inductance, the bandwidth is narrow. As the amount of mutual inductance increases, the bandwidth continues to grow. When the mutual inductance is increased beyond the critical coupling, the peak in the frequency response curve splits into two peaks, and as the coupling is increased the two peaks move further apart. This is known as overcoupling.
Strongly coupled self-resonant coils can be used for wireless power transfer between devices over mid-range distances (up to two metres). Strong coupling is required for a high percentage of power transferred, which results in peak splitting of the frequency response.
Ideal transformers
When $k = 1$, the inductors are referred to as being closely coupled. If, in addition, the self-inductances go to infinity, the inductors become an ideal transformer. In this case the voltages, currents, and number of turns can be related in the following way:
where
Conversely the current:
where
The power through one inductor is the same as the power through the other. These equations neglect any forcing by current sources or voltage sources.
Self-inductance of thin wire shapes
The table below lists formulas for the self-inductance of various simple shapes made of thin cylindrical conductors (wires). In general these are only accurate if the wire radius is much smaller than the dimensions of the shape, and if no ferromagnetic materials are nearby (no magnetic core).
$Y$ is an approximately constant value between 0 and 1 that depends on the distribution of the current in the wire: $Y = 0$ when the current flows only on the surface of the wire (complete skin effect), $Y = \tfrac{1}{2}$ when the current is evenly spread over the cross-section of the wire (direct current). For round wires, Rosa (1908) gives a formula equivalent to:
where
$O(x)$ represents small term(s) that have been dropped from the formula to make it simpler. Read the term $+\,O(x)$ as "plus small corrections that vary on the order of $x$" (see big O notation).
See also
Electromagnetic induction
Gyrator
Hydraulic analogy
Leakage inductance
LC circuit, RLC circuit, RL circuit
Kinetic inductance
Footnotes
References
General references
Küpfmüller K., Einführung in die theoretische Elektrotechnik, Springer-Verlag, 1959.
Heaviside O., Electrical Papers. Vol.1. – L.; N.Y.: Macmillan, 1892, p. 429-560.
Fritz Langford-Smith, editor (1953). Radiotron Designer's Handbook, 4th Edition, Amalgamated Wireless Valve Company Pty., Ltd. Chapter 10, "Calculation of Inductance" (pp. 429–448), includes a wealth of formulas and nomographs for coils, solenoids, and mutual inductance.
F. W. Sears and M. W. Zemansky 1964 University Physics: Third Edition (Complete Volume), Addison-Wesley Publishing Company, Inc. Reading MA, LCCC 63-15265 (no ISBN).
External links
Clemson Vehicular Electronics Laboratory: Inductance Calculator
Electrodynamics
Electromagnetic quantities | 0.76337 | 0.998207 | 0.762001 |
BALL | BALL (Biochemical Algorithms Library) is a C++ class framework and set of algorithms and data structures for molecular modelling and computational structural bioinformatics, a Python interface to this library, and a graphical user interface to BALL, the molecule viewer BALLView.
BALL has evolved from a commercial product into free-of-charge open-source software licensed under the GNU Lesser General Public License (LGPL). BALLView is licensed under the GNU General Public License (GPL) license.
BALL and BALLView have been ported to the operating systems Linux, macOS, Solaris, and Windows.
The molecule viewer BALLView, also developed by the BALL project team, is a C++ application of BALL using Qt, and OpenGL with the real-time ray tracer RTFact as render back-ends. For both render back-ends, BALLView offers three-dimensional and stereoscopic visualization in several different modes, and the algorithms of the BALL library can be applied directly to loaded structures via its graphical user interface.
The BALL project is developed and maintained by groups at Saarland University, Mainz University, and University of Tübingen. Both the library and the viewer are used for education and research. BALL packages have been made available in the Debian project.
Key features
Interactive molecular drawing and conformational editing
Reading and writing of molecular file formats (PDB, MOL2, MOL, HIN, XYZ, KCF, SD, AC)
Reading secondary data sources e.g. (DCD, DSN6, GAMESS, JCAMP, SCWRL, TRR)
Generating molecules from SMILES expressions and matching SMILES and SMARTS expressions to molecules
Geometry optimization
Minimizer and molecular dynamics classes
Support for force fields (MMFF94, AMBER, CHARMM) for scoring and energy minimization
Python interface and scripting functionality
Plugin infrastructure (3D Space-Navigator)
Molecular graphics (3D, stereoscopic viewing)
comprehensive documentation (Wiki, code snippets, online class documentation, bug tracker)
comprehensive regression tests
BALL project format for presentations and collaborative data exchange
NMR
editable shortcuts
BALL library
BALL is a development framework for structural bioinformatics. Using BALL as a programming toolbox allows greatly reducing application development times and helps ensure stability and correctness by avoiding often error-prone reimplementation of complex algorithms and replacing them with calls into a library that has been tested by many developers.
File import-export
BALL supports molecular file formats including PDB, MOL2, MOL, HIN, XYZ, KCF, SD, AC, and secondary data sources like DCD, DSN6, GAMESS, JCAMP, SCWRL, and TRR. Molecules can also be created using BALL's peptide builder, or based on SMILES expressions.
General structure analysis
Further preparation and structure validation is enabled by, e.g., Kekuliser-, Aromaticity-, Bondorder-, HBond-, and Secondary Structure processors. A Fragment Library automatically infers missing information, e.g., a protein's hydrogens or bonds. A Rotamer Library allows determining, assigning, and switching between a protein's most likely side chain conformations. BALL's Transformation processors guide generation of valid 3D structures. Its selection mechanism enables to specify parts of a molecule by simple expressions (SMILES, SMARTS, element types). This selection can be used by all modeling classes like the processors or force fields.
Molecular mechanics
Implementations of the popular force fields CHARMM, Amber, and MMFF94 can be combined with BALL's minimizer and simulation classes (steepest descent, conjugate gradient, L-BFGS, and shifted L-VMM).
Python interface
SIP is used to automatically create Python classes for all relevant C++ classes in the BALL library to allow for the same class interfaces. The Python classes have the same name as the C++ classes, to aid in porting code that uses BALL from C++ to Python, and vice versa.
The Python interface is fully integrated into the viewer application BALLView and thus allows for direct visualization of results computed by python scripts. Also, BALLView can be operated from the scripting interface and recurring tasks can be automated.
BALLView
BALLView is BALL's standalone molecule modeling and visualization application. It is also a framework to develop molecular visualization functions.
BALLView offers standard visualization models for atoms, bonds, surfaces, and grid based visualization of e.g., electrostatic potentials. A large part of the functionality of the library BALL can be applied directly to the loaded molecule in BALLView. BALLView supports several visualization and input methods such as different stereo modes, space navigator, and VRPN-supported Input devices.
At CeBIT 2009, BALLView was prominently presented as the first complete integration of real-time ray tracing technology into a molecular viewer and modeling tool.
See also
List of molecular graphics systems
List of free and open-source software packages
Comparison of software for molecular mechanics modeling
Molecular design software
Molecular graphics
Molecule editor
References
Further reading
External links
BALLView web page
Code Library
Gallery
Tutorials
C++ libraries
Computational chemistry software
Molecular modelling software
Chemistry software for Linux
Science software that uses Qt
Articles with example C++ code | 0.764478 | 0.996743 | 0.761989 |
Lindbladian | In quantum mechanics, the Gorini–Kossakowski–Sudarshan–Lindblad equation (GKSL equation, named after Vittorio Gorini, Andrzej Kossakowski, George Sudarshan and Göran Lindblad), master equation in Lindblad form, quantum Liouvillian, or Lindbladian is one of the general forms of Markovian master equations describing open quantum systems. It generalizes the Schrödinger equation to open quantum systems; that is, systems in contacts with their surroundings. The resulting dynamics is no longer unitary, but still satisfies the property of being trace-preserving and completely positive for any initial condition.
The Schrödinger equation or, actually, the von Neumann equation, is a special case of the GKSL equation, which has led to some speculation that quantum mechanics may be productively extended and expanded through further application and analysis of the Lindblad equation. The Schrödinger equation deals with state vectors, which can only describe pure quantum states and are thus less general than density matrices, which can describe mixed states as well.
Motivation
In the canonical formulation of quantum mechanics, a system's time evolution is governed by unitary dynamics. This implies that there is no decay and phase coherence is maintained throughout the process, and is a consequence of the fact that all participating degrees of freedom are considered. However, any real physical system is not absolutely isolated, and will interact with its environment. This interaction with degrees of freedom external to the system results in dissipation of energy into the surroundings, causing decay and randomization of phase. More so, understanding the interaction of a quantum system with its environment is necessary for understanding many commonly observed phenomena like the spontaneous emission of light from excited atoms, or the performance of many quantum technological devices, like the laser.
Certain mathematical techniques have been introduced to treat the interaction of a quantum system with its environment. One of these is the use of the density matrix, and its associated master equation. While in principle this approach to solving quantum dynamics is equivalent to the Schrödinger picture or Heisenberg picture, it allows more easily for the inclusion of incoherent processes, which represent environmental interactions. The density operator has the property that it can represent a classical mixture of quantum states, and is thus vital to accurately describe the dynamics of so-called open quantum systems.
Definition
Diagonal form
The Lindblad master equation for the system's density matrix $\rho$ can be written as
$$\dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_i \gamma_i \left( L_i \rho L_i^\dagger - \frac{1}{2}\left\{ L_i^\dagger L_i, \rho \right\} \right)$$
where $\{\cdot,\cdot\}$ is the anticommutator.
$H$ is the system Hamiltonian, describing the unitary aspects of the dynamics.
$L_i$ are a set of jump operators, describing the dissipative part of the dynamics. The shape of the jump operators describes how the environment acts on the system, and must either be determined from microscopic models of the system-environment dynamics, or phenomenologically modelled.
$\gamma_i$ are a set of non-negative real coefficients called damping rates. If all $\gamma_i = 0$ one recovers the von Neumann equation $\dot\rho = -\frac{i}{\hbar}[H,\rho]$ describing unitary dynamics, which is the quantum analog of the classical Liouville equation.
The entire equation can be written in superoperator form:
$$\dot\rho = \mathcal{L}(\rho)$$
which resembles the classical Liouville equation. For this reason, the superoperator $\mathcal{L}$ is called the Lindbladian superoperator or the Liouvillian superoperator.
General form
More generally, the GKSL equation has the form
$$\dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_{n,m} h_{nm}\left( A_n \rho A_m^\dagger - \frac{1}{2}\left\{ A_m^\dagger A_n, \rho \right\} \right)$$
where $A_n$ are arbitrary operators and $h$ is a positive semidefinite matrix. The latter is a strict requirement to ensure the dynamics is trace-preserving and completely positive. The number of operators $A_n$ is arbitrary, and they do not have to satisfy any special properties. But if the system is $N$-dimensional, it can be shown that the master equation can be fully described by a set of $N^2 - 1$ operators, provided they form a basis for the space of operators.
The general form is not in fact more general, and can be reduced to the special form. Since the matrix $h$ is positive semidefinite, it can be diagonalized with a unitary transformation $u$:
$$u^\dagger h\,u = \operatorname{diag}(\gamma_1, \ldots, \gamma_N)$$
where the eigenvalues $\gamma_i$ are non-negative. If we define another orthonormal operator basis
$$L_i = \sum_n u_{ni}\,A_n$$
This reduces the master equation to the same form as before:
$$\dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_i \gamma_i \left( L_i \rho L_i^\dagger - \frac{1}{2}\left\{ L_i^\dagger L_i, \rho \right\} \right)$$
Quantum dynamical semigroup
The maps generated by a Lindbladian for various times are collectively referred to as a quantum dynamical semigroup—a family of quantum dynamical maps $\phi_t$ on the space of density matrices indexed by a single time parameter $t \ge 0$ that obey the semigroup property
$$\phi_{s+t} = \phi_s \circ \phi_t, \qquad s, t \ge 0$$
The Lindblad equation can be obtained by
$$\mathcal{L}(\rho) = \lim_{\Delta t \to 0^+} \frac{\phi_{\Delta t}(\rho) - \rho}{\Delta t}$$
which, by the linearity of $\phi_t$, is a linear superoperator. The semigroup can be recovered as
$$\phi_t = e^{t\mathcal{L}}$$
Invariance properties
The Lindblad equation is invariant under any unitary transformation $v$ of Lindblad operators and constants,
$$\sqrt{\gamma_i}\,L_i \;\to\; \sqrt{\gamma_i'}\,L_i' = \sum_j v_{ij}\,\sqrt{\gamma_j}\,L_j$$
and also under the inhomogeneous transformation
$$L_i \;\to\; L_i' = L_i + a_i, \qquad H \;\to\; H' = H + \frac{1}{2i}\sum_j \gamma_j\left(a_j^{*} L_j - a_j L_j^{\dagger}\right) + b$$
where $a_i$ are complex numbers and $b$ is a real number.
However, the first transformation destroys the orthonormality of the operators $L_i$ (unless all the $\gamma_i$ are equal) and the second transformation destroys the tracelessness. Therefore, up to degeneracies among the $\gamma_i$, the $L_i$ of the diagonal form of the Lindblad equation are uniquely determined by the dynamics so long as we require them to be orthonormal and traceless.
Heisenberg picture
The Lindblad-type evolution of the density matrix in the Schrödinger picture can be equivalently described in the Heisenberg picture
using the following (diagonalized) equation of motion for each quantum observable $X$:
$$\dot X = \frac{i}{\hbar}[H, X] + \sum_i \gamma_i \left( L_i^\dagger X L_i - \frac{1}{2}\left\{ L_i^\dagger L_i, X \right\} \right)$$
A similar equation describes the time evolution of the expectation values of observables, given by the Ehrenfest theorem.
Corresponding to the trace-preserving property of the Schrödinger picture Lindblad equation, the Heisenberg picture equation is unital, i.e. it preserves the identity operator.
Physical derivation
The Lindblad master equation describes the evolution of various types of open quantum systems, e.g. a system weakly coupled to a Markovian reservoir.
Note that the $H$ appearing in the equation is not necessarily equal to the bare system Hamiltonian, but may also incorporate effective unitary dynamics arising from the system-environment interaction.
A heuristic derivation, e.g., in the notes by Preskill, begins with a more general form of an open quantum system and converts it into Lindblad form by making the Markovian assumption and expanding in small time. A more physically motivated standard treatment covers three common types of derivations of the Lindbladian starting from a Hamiltonian acting on both the system and environment: the weak coupling limit (described in detail below), the low density approximation, and the singular coupling limit. Each of these relies on specific physical assumptions regarding, e.g., correlation functions of the environment. For example, in the weak coupling limit derivation, one typically assumes that (a) correlations of the system with the environment develop slowly, (b) excitations of the environment caused by system decay quickly, and (c) terms which are fast-oscillating when compared to the system timescale of interest can be neglected. These three approximations are called Born, Markov, and rotating wave, respectively.
The weak-coupling limit derivation assumes a quantum system with a finite number of degrees of freedom coupled to a bath containing an infinite number of degrees of freedom. The system and bath each possess a Hamiltonian written in terms of operators acting only on the respective subspace of the total Hilbert space. These Hamiltonians govern the internal dynamics of the uncoupled system and bath. There is a third Hamiltonian that contains products of system and bath operators, thus coupling the system and bath. The most general form of this Hamiltonian is
$$H = H_S + H_B + H_{BS}$$
The dynamics of the entire system can be described by the Liouville equation of motion, . This equation, containing an infinite number of degrees of freedom, is impossible to solve analytically except in very particular cases. What's more, under certain approximations, the bath degrees of freedom need not be considered, and an effective master equation can be derived in terms of the system density matrix, . The problem can be analyzed more easily by moving into the interaction picture, defined by the unitary transformation , where is an arbitrary operator, and . Also note that is the total unitary operator of the entire system. It is straightforward to confirm that the Liouville equation becomes
where the Hamiltonian is explicitly time dependent. Also, according to the interaction picture, , where . This equation can be integrated directly to give
This implicit equation for the density operator can be substituted back into the Liouville equation to obtain an exact integro-differential equation
We proceed with the derivation by assuming the interaction is initiated at , and at that time there are no correlations between the system and the bath. This implies that the initial condition is factorable as , where is the density operator of the bath initially.
Tracing the aforementioned integro-differential equation over the bath degrees of freedom yields
This equation is exact for the time dynamics of the system density matrix but requires full knowledge of the dynamics of the bath degrees of freedom. A simplifying assumption called the Born approximation rests on the largeness of the bath and the relative weakness of the coupling, which is to say the coupling of the system to the bath should not significantly alter the bath eigenstates. In this case the full density matrix is factorable for all times as . The master equation becomes
The equation is now explicit in the system degrees of freedom, but is very difficult to solve. A final assumption is the Born-Markov approximation that the time derivative of the density matrix depends only on its current state, and not on its past. This assumption is valid under fast bath dynamics, wherein correlations within the bath are lost extremely quickly, and amounts to replacing the system density matrix at earlier times by its current value on the right-hand side of the equation.
If the interaction Hamiltonian is assumed to have the form
for system operators and bath operators then . The master equation becomes
which can be expanded as
The expectation values are with respect to the bath degrees of freedom.
By assuming rapid decay of these correlations (ideally proportional to δ(t − t′)), the above form of the Lindblad superoperator L is obtained.
Examples
In the simplest case, there is just one jump operator L and no unitary evolution. In this case, the Lindblad equation is

ρ̇ = L ρ L† − (1/2)(L†L ρ + ρ L†L)
This case is often used in quantum optics to model either absorption or emission of photons from a reservoir.
To model both absorption and emission, one would need a jump operator for each. This leads to the most common Lindblad equation describing the damping of a quantum harmonic oscillator (representing e.g. a Fabry–Perot cavity) coupled to a thermal bath, with jump operators

L₁ = √(γ(n̄ + 1)) a and L₂ = √(γ n̄) a†.

Here n̄ is the mean number of excitations in the reservoir damping the oscillator and γ is the decay rate.
To model the quantum harmonic oscillator Hamiltonian with frequency ω of the photons, we can add a further unitary evolution generated by H = ω a†a.
Additional Lindblad operators can be included to model various forms of dephasing and vibrational relaxation. These methods have been incorporated into grid-based density matrix propagation methods.
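As a concrete illustration of the damped-oscillator example above, the following minimal sketch integrates the Lindblad master equation numerically with NumPy and SciPy. The Fock-space truncation, parameter values, and initial state are assumptions chosen for demonstration rather than values from the text, and the jump operators follow the thermal-bath form quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative sketch (truncation and parameters are assumptions): integrate the
# Lindblad equation for a damped harmonic oscillator truncated to N Fock states,
# with jump operators sqrt(gamma*(nbar+1))*a and sqrt(gamma*nbar)*a_dagger.
N = 20
omega, gamma, nbar = 1.0, 0.1, 0.5

a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator in the Fock basis
ad = a.conj().T                                # creation operator
H = omega * ad @ a                             # oscillator Hamiltonian (hbar = 1)
jump_ops = [np.sqrt(gamma * (nbar + 1)) * a,   # decay into the bath
            np.sqrt(gamma * nbar) * ad]        # thermal excitation from the bath

def lindblad_rhs(t, rho_flat):
    rho = rho_flat.reshape(N, N)
    drho = -1j * (H @ rho - rho @ H)           # unitary part
    for L in jump_ops:                         # dissipative part
        drho += L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return drho.ravel()

rho0 = np.zeros((N, N), dtype=complex)
rho0[5, 5] = 1.0                               # start in the Fock state |5>
sol = solve_ivp(lindblad_rhs, (0.0, 80.0), rho0.ravel(), t_eval=[0.0, 20.0, 40.0, 80.0])

num_op = ad @ a
for t, col in zip(sol.t, sol.y.T):
    rho = col.reshape(N, N)
    print(f"t = {t:5.1f}   <n> = {np.trace(num_op @ rho).real:.3f}")
```

In this sketch the mean photon number relaxes from its initial value toward the thermal occupation n̄ at a rate set by γ, which is the qualitative behaviour the damped-oscillator master equation predicts.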
See also
Quantum master equation
Redfield equation
Open quantum system
Quantum jump method
References
Pearle, P. (2012). "Simple derivation of the Lindblad equation". European Journal of Physics, 33(4), 805.
External links
Quantum Optics Toolbox for Matlab
mcsolve Quantum jump (Monte Carlo) solver from QuTiP.
QuantumOptics.jl the quantum optics toolbox in Julia.
The Lindblad master equation
Quantum mechanics
Equations
Negentropy | In information theory and statistics, negentropy is used as a measure of distance to normality. The concept and phrase "negative entropy" was introduced by Erwin Schrödinger in his 1944 popular-science book What is Life? Later, French physicist Léon Brillouin shortened the phrase to néguentropie (negentropy). In 1974, Albert Szent-Györgyi proposed replacing the term negentropy with syntropy. That term may have originated in the 1940s with the Italian mathematician Luigi Fantappiè, who tried to construct a unified theory of biology and physics. Buckminster Fuller tried to popularize this usage, but negentropy remains common.
In a note to What is Life? Schrödinger explained his use of this phrase.
Information theory
In information theory and statistics, negentropy is used as a measure of distance to normality. Out of all distributions with a given mean and variance, the normal or Gaussian distribution is the one with the highest entropy. Negentropy measures the difference in entropy between a given distribution and the Gaussian distribution with the same mean and variance. Thus, negentropy is always nonnegative, is invariant by any linear invertible change of coordinates, and vanishes if and only if the signal is Gaussian.
Negentropy is defined as

J(x) = S(φ_x) − S(x)

where S(φ_x) is the differential entropy of the Gaussian density φ_x with the same mean and variance as x and S(x) is the differential entropy of x:

S(x) = −∫ p_x(u) log p_x(u) du
Negentropy is used in statistics and signal processing. It is related to network entropy, which is used in independent component analysis.
The negentropy of a distribution is equal to the Kullback–Leibler divergence between that distribution and a Gaussian distribution with the same mean and variance. In particular, it is always nonnegative.
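As a small worked illustration of this definition (the choice of a uniform distribution is an assumption for the example, not taken from the text), the negentropy of a uniform distribution can be computed in closed form, since both its differential entropy and that of the matching Gaussian are known analytically:

```python
import numpy as np

# Worked example: negentropy of a uniform distribution on [a, b], using
# J(x) = S(gaussian with same mean and variance) - S(x), in nats.
a, b = 0.0, 1.0
var = (b - a) ** 2 / 12.0                          # variance of U(a, b)
S_uniform = np.log(b - a)                          # differential entropy of U(a, b)
S_gauss = 0.5 * np.log(2.0 * np.pi * np.e * var)   # entropy of the matching Gaussian
J = S_gauss - S_uniform
print(f"negentropy of U({a}, {b}) = {J:.4f} nats")  # about 0.176, and always >= 0
```

The result, about 0.176 nats, is independent of the interval width, consistent with negentropy being invariant under invertible linear changes of coordinates.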
Correlation between statistical negentropy and Gibbs' free energy
There is a physical quantity closely linked to free energy (free enthalpy), with a unit of entropy, that is isomorphic to the negentropy known in statistics and information theory. In 1873, Willard Gibbs created a diagram illustrating the concept of free energy corresponding to free enthalpy. On the diagram one can see the quantity called the capacity for entropy. This quantity is the amount by which the entropy may be increased without changing the internal energy or increasing the volume. In other words, it is the difference between the maximum possible entropy, under the assumed conditions, and the actual entropy. It corresponds exactly to the definition of negentropy adopted in statistics and information theory. A similar physical quantity was introduced in 1869 by Massieu for the isothermal process (the two quantities differ only in sign) and then by Planck for the isothermal-isobaric process. More recently, the Massieu–Planck thermodynamic potential, known also as free entropy, has been shown to play a great role in the so-called entropic formulation of statistical mechanics, applied among other fields in molecular biology and in thermodynamic non-equilibrium processes.
where:
S is entropy
J is negentropy (Gibbs' "capacity for entropy")
Φ is the Massieu potential
Z is the partition function
k the Boltzmann constant
In particular, mathematically the negentropy (the negative entropy function, in physics interpreted as free entropy) is the convex conjugate of LogSumExp (in physics interpreted as the free energy).
Brillouin's negentropy principle of information
In 1953, Léon Brillouin derived a general equation stating that changing an information bit value requires at least kT ln 2 of energy. This is the same energy as the work Leó Szilárd's engine produces in the idealized case. In his book, he further explored this problem, concluding that any cause of this bit value change (measurement, decision about a yes/no question, erasure, display, etc.) will require the same amount of energy.
See also
Exergy
Free entropy
Entropy in thermodynamics and information theory
Notes
Entropy and information
Statistical deviation and dispersion
Thermodynamic entropy | 0.768587 | 0.991384 | 0.761965 |
CK-12 Foundation | The CK-12 Foundation is a California-based non-profit organization which aims to increase access to low-cost K-12 education in the United States and abroad. CK-12 provides free and customizable K-12 open educational resources aligned to state curriculum standards. As of 2022, the foundation's tools were used by over 200,000,000 students worldwide.
CK-12 was set up to support K-12 Science, Technology, Engineering, and Math (STEM) education. It first produced content via a web-based platform called "FlexBook."
History
CK-12 was established in 2007 by Neeru Khosla and Murugan Pal as a not-for-profit educational organization. Teacher-generated content was initially made available under Creative Commons Attribution licenses so as to make it simpler, easier, and more affordable for children to access educational resources. However, they later switched to a Creative Commons Non Commercial licence, and then to their own "CK-12" license.
Originally, the "C" in CK-12 stood for "connect", indicating that the material was the missing connection in K-12 education. Subsequently, it took on a more open meaning, variously standing for "content, classroom, customizable, connections, collaboration".
In 2010, NASA teamed up with CK-12 to produce physics-related resources.
In March 2013, Microsoft announced a partnership with CK-12 to provide content to Microsoft's Windows 8 customers.
FlexBook System
The foundation's FlexBook website permits the assembly and creation of downloadable educational resources, which can be customized to meet classroom needs. Some FlexBooks are also available in Spanish and Hindi. Content is offered under a Creative Commons license, removing many of the restrictions that limit distribution of traditional textbooks, and are available in various formats.
Approach
The CK-12 Foundation's approach to supporting education in schools is by providing it as small, individual elements, rather than as large textbooks. As of 2012, some 5,000 individual elements were available in various formats such as textual descriptions, video lectures, multi-media simulations, photo galleries, practical experiments or flash cards.
Other products
In addition to its 88 FlexBooks, the CK-12 Foundation also offers the following online resources to K-12 students:
CK-12 Braingenie – a repository of math and science practice materials.
CK-12 FlexMath – an interactive, year-long Algebra 1 curriculum.
CK-12 INeedAPencil – a free SAT preparation website, founded in 2007 by then high school student Jason Shah.
Recognition
CK-12 has been listed in the Top 25 Websites for Teaching by the American Association of School Librarians.
The National Tech Plan of the Office of Educational Technology, U.S. Department of Education, mentioned the CK-12 model in "Transforming American Education – Learning Powered by Technology".
The Tech Awards 2010 listed CK-12 among "15 innovations that could save the world".
In introducing Washington state bill HB 2337, "Regarding open educational resources in K-12 education," Representative Reuven Carlyle testified to the benefit CK-12 materials can have for school districts around the country.
Fortune Magazine described CK-12 as a threat to the traditional textbook industry, and wrote about CK-12's push towards concept-based learning.
National Public Radio wrote about CK-12, including its use of "Real World Applications" as teaching devices.
Neeru and CK-12 have been featured in the New York Times, the Gates Notes, Mercury News, TechCrunch, Education Week, EduKindle, The Patriot News, Getting Smart, and Teachinghistory.org
References
External links
CK-12 Community Site.
O'Reilly Radar blog, Feb 12, 2008 : "Remix and Share Your Own Text Books as FlexBooks"
Flexmath website
General Student Learning website
Book publishing companies based in the San Francisco Bay Area
Textbook publishing companies
American educational websites
Publishing companies established in 2007
Educational organizations based in the United States
Non-profit organizations based in California | 0.775471 | 0.982574 | 0.761957 |
Bragg's law | In many areas of science, Bragg's law, Wulff–Bragg's condition, or Laue–Bragg interference are a special case of Laue diffraction, giving the angles for coherent scattering of waves from a large crystal lattice. It describes how the superposition of wave fronts scattered by lattice planes leads to a strict relation between the wavelength and scattering angle. This law was initially formulated for X-rays, but it also applies to all types of matter waves including neutron and electron waves if there are a large number of atoms, as well as visible light with artificial periodic microscale lattices.
History
Bragg diffraction (also referred to as the Bragg formulation of X-ray diffraction) was first proposed by Lawrence Bragg and his father, William Henry Bragg, in 1913 after their discovery that crystalline solids produced surprising patterns of reflected X-rays (in contrast to those produced with, for instance, a liquid). They found that these crystals, at certain specific wavelengths and incident angles, produced intense peaks of reflected radiation.
Lawrence Bragg explained this result by modeling the crystal as a set of discrete parallel planes separated by a constant parameter . He proposed that the incident X-ray radiation would produce a Bragg peak if reflections off the various planes interfered constructively. The interference is constructive when the phase difference between the wave reflected off different atomic planes is a multiple of ; this condition (see Bragg condition section below) was first presented by Lawrence Bragg on 11 November 1912 to the Cambridge Philosophical Society. Although simple, Bragg's law confirmed the existence of real particles at the atomic scale, as well as providing a powerful new tool for studying crystals. Lawrence Bragg and his father, William Henry Bragg, were awarded the Nobel Prize in physics in 1915 for their work in determining crystal structures beginning with NaCl, ZnS, and diamond. They are the only father-son team to jointly win.
The concept of Bragg diffraction applies equally to neutron diffraction and approximately to electron diffraction. In both cases the wavelengths are comparable with inter-atomic distances (~ 150 pm). Many other types of matter waves have also been shown to diffract, and also light from objects with a larger ordered structure such as opals.
Bragg condition
Bragg diffraction occurs when radiation of a wavelength λ comparable to atomic spacings is scattered in a specular fashion (mirror-like reflection) by planes of atoms in a crystalline material, and undergoes constructive interference. When the scattered waves are incident at a specific angle, they remain in phase and constructively interfere. The glancing angle θ (see figure on the right, and note that this differs from the convention in Snell's law where θ is measured from the surface normal), the wavelength λ, and the "grating constant" d of the crystal are connected by the relation

nλ = 2d sin θ

where n is the diffraction order (n = 1 is first order, n = 2 is second order, n = 3 is third order). This equation, Bragg's law, describes the condition on θ for constructive interference.
A map of the intensities of the scattered waves as a function of their angle is called a diffraction pattern. Strong intensities known as Bragg peaks are obtained in the diffraction pattern when the scattering angles satisfy Bragg condition. This is a special case of the more general Laue equations, and the Laue equations can be shown to reduce to the Bragg condition with additional assumptions.
Heuristic derivation
Suppose that a plane wave (of any type) is incident on planes of lattice points, with separation , at an angle as shown in the Figure. Points A and C are on one plane, and B is on the plane below. Points ABCC' form a quadrilateral.
There will be a path difference between the ray that gets reflected along AC' and the ray that gets transmitted along AB, then reflected along BC. This path difference is

(AB + BC) − (AC′)

The two separate waves will arrive at a point (infinitely far from these lattice planes) with the same phase, and hence undergo constructive interference, if and only if this path difference is equal to any integer value of the wavelength, i.e.

nλ = (AB + BC) − (AC′)

where n and λ are an integer and the wavelength of the incident wave respectively.
Therefore, from the geometry

AB = BC = d / sin θ and AC = 2d / tan θ

from which it follows that

AC′ = AC cos θ = (2d / tan θ) cos θ = (2d cos²θ) / sin θ

Putting everything together,

nλ = 2d / sin θ − (2d cos²θ) / sin θ = 2d (1 − cos²θ) / sin θ

which simplifies to nλ = 2d sin θ, which is Bragg's law shown above.
If only two planes of atoms were diffracting, as shown in the Figure then the transition from constructive to destructive interference would be gradual as a function of angle, with gentle maxima at the Bragg angles. However, since many atomic planes are participating in most real materials, sharp peaks are typical.
A rigorous derivation from the more general Laue equations is available (see page: Laue equations).
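The trigonometric simplification in the final step of the derivation, as reconstructed above, can be verified symbolically; the following check is illustrative only.

```python
import sympy as sp

# Symbolic check of the path-difference simplification used above:
# (AB + BC) - AC' = 2d/sin(theta) - (2d/tan(theta))*cos(theta) = 2d*sin(theta).
d, theta = sp.symbols('d theta', positive=True)
path_difference = 2*d/sp.sin(theta) - (2*d/sp.tan(theta))*sp.cos(theta)
print(sp.simplify(path_difference - 2*d*sp.sin(theta)))   # prints 0
```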
Beyond Bragg's law
The Bragg condition is correct for very large crystals. Because the scattering of X-rays and neutrons is relatively weak, in many cases quite large crystals with sizes of 100 nm or more are used. While there can be additional effects due to crystal defects, these are often quite small. In contrast, electrons interact thousands of times more strongly with solids than X-rays, and also lose energy (inelastic scattering). Therefore samples used in transmission electron diffraction are much thinner. Typical diffraction patterns, for instance the Figure, show spots for different directions (plane waves) of the electrons leaving a crystal. The angles that Bragg's law predicts are still approximately right, but in general there is a lattice of spots which are close to projections of the reciprocal lattice that is at right angles to the direction of the electron beam. (In contrast, Bragg's law predicts that only one or perhaps two would be present, not simultaneously tens to hundreds.) With low-energy electron diffraction where the electron energies are typically 30-1000 electron volts, the result is similar with the electrons reflected back from a surface. Also similar is reflection high-energy electron diffraction which typically leads to rings of diffraction spots.
With X-rays the effect of having small crystals is described by the Scherrer equation. This leads to broadening of the Bragg peaks which can be used to estimate the size of the crystals.
Bragg scattering of visible light by colloids
A colloidal crystal is a highly ordered array of particles that forms over a long range (from a few millimeters to one centimeter in length); colloidal crystals have appearance and properties roughly analogous to their atomic or molecular counterparts. It has been known for many years that, due to repulsive Coulombic interactions, electrically charged macromolecules in an aqueous environment can exhibit long-range crystal-like correlations, with interparticle separation distances often being considerably greater than the individual particle diameter. Periodic arrays of spherical particles give rise to interstitial voids (the spaces between the particles), which act as a natural diffraction grating for visible light waves, when the interstitial spacing is of the same order of magnitude as the incident lightwave. In these cases brilliant iridescence (or play of colours) is attributed to the diffraction and constructive interference of visible lightwaves according to Bragg's law, in a manner analogous to the scattering of X-rays in crystalline solids. The effects occur at visible wavelengths because the interplanar spacing is much larger than for true crystals. Precious opal is one example of a colloidal crystal with optical effects.
Volume Bragg gratings
Volume Bragg gratings (VBG) or volume holographic gratings (VHG) consist of a volume where there is a periodic change in the refractive index. Depending on the orientation of the refractive index modulation, VBG can be used either to transmit or reflect a small bandwidth of wavelengths. Bragg's law (adapted for volume hologram) dictates which wavelength will be diffracted:
where is the Bragg order (a positive integer), the diffracted wavelength, Λ the fringe spacing of the grating, the angle between the incident beam and the normal of the entrance surface and the angle between the normal and the grating vector. Radiation that does not match Bragg's law will pass through the VBG undiffracted. The output wavelength can be tuned over a few hundred nanometers by changing the incident angle. VBG are being used to produce widely tunable laser source or perform global hyperspectral imagery (see Photon etc.).
Selection rules and practical crystallography
The measurement of the angles can be used to determine crystal structure; see X-ray crystallography for more details. As a simple example, Bragg's law, as stated above, can be used to obtain the lattice spacing of a particular cubic system through the following relation:

d = a / √(h² + k² + l²)

where a is the lattice spacing of the cubic crystal, and h, k, and l are the Miller indices of the Bragg plane. Combining this relation with Bragg's law gives:

sin θ = (nλ / 2a) √(h² + k² + l²)
One can derive selection rules for the Miller indices for different cubic Bravais lattices as well as many others, a few of the selection rules are given in the table below.
These selection rules can be used for any crystal with the given crystal structure. KCl has a face-centered cubic Bravais lattice. However, the K+ and the Cl− ion have the same number of electrons and are quite close in size, so that the diffraction pattern becomes essentially the same as for a simple cubic structure with half the lattice parameter. Selection rules for other structures can be referenced elsewhere, or derived. Lattice spacing for the other crystal systems can be found here.
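To make the combined relation concrete, the short sketch below computes a first-order Bragg angle for an assumed cubic lattice and X-ray wavelength; the silicon lattice parameter and Cu Kα wavelength used here are illustrative values, not taken from the text.

```python
import numpy as np

# Illustrative sketch (assumed values): first-order Bragg angle for Cu K-alpha
# X-rays diffracting from the (111) planes of a cubic crystal.
wavelength = 1.5406e-10          # Cu K-alpha wavelength in metres (assumption)
a = 5.431e-10                    # cubic lattice parameter, here silicon (assumption)
h, k, l = 1, 1, 1                # Miller indices of the reflecting plane

d = a / np.sqrt(h**2 + k**2 + l**2)          # interplanar spacing of a cubic lattice
sin_theta = wavelength / (2.0 * d)           # Bragg condition with n = 1
theta = np.degrees(np.arcsin(sin_theta))     # glancing angle in degrees
print(f"d(111) = {d*1e10:.3f} angstrom, theta = {theta:.2f} deg, 2*theta = {2*theta:.2f} deg")
```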
See also
Bragg plane
Crystal lattice
Diffraction
Distributed Bragg reflector
Fiber Bragg grating
Dynamical theory of diffraction
Electron diffraction
Georg Wulff
Henderson limit
Laue conditions
Powder diffraction
Radar angels
Structure factor
X-ray crystallography
References
Further reading
Neil W. Ashcroft and N. David Mermin, Solid State Physics (Harcourt: Orlando, 1976).
External links
Nobel Prize in Physics – 1915
https://web.archive.org/web/20110608141639/http://www.physics.uoguelph.ca/~detong/phys3510_4500/xray.pdf
Learning crystallography
Diffraction
Neutron
X-rays
Crystallography | 0.764369 | 0.996844 | 0.761957 |
Physical education | Physical education, often abbreviated to Phys. Ed. or PE, and sometimes informally referred to as gym class or simply just gym, is a subject taught in schools around the world. PE is taught during primary and secondary education and encourages psychomotor, cognitive, and affective learning through physical activity and movement exploration to promote health and physical fitness. When taught correctly and in a positive manner, children and teens can gain a wide range of health benefits. These include reduced metabolic disease risk, improved cardiorespiratory fitness, and better mental health. In addition, PE classes can produce positive effects on students' behavior and academic performance. Research has shown that there is a positive correlation between brain development and exercise. Researchers in 2007 found a significant gain in English Arts standardized test scores among students who had 56 hours of physical education in a year, compared to those who had 28 hours of physical education a year.
Many physical education programs also include health education as part of the curriculum. Health education is the teaching of information on the prevention, control, and treatment of diseases.
Curriculum in physical education
A highly effective physical education program aims to develop physical literacy through the acquisition of skills, knowledge, physical fitness, and confidence. Physical education curricula promote healthy development of children, encourage interest in physical activity and sport, improve learning of health and physical education concepts, and accommodate for differences in student populations to ensure that every child receives health benefits. These core principles are implemented through sport participation, sports skill development, knowledge of physical fitness and health, as well as mental health and social adaptation.
Physical education curriculum at the secondary level includes a variety of team and individual sports, as well as leisure activities. Some examples of physical activities include basketball, soccer, volleyball, track and field, badminton, tennis, walking, cycling, and swimming. Chess is another activity that is included in the PE curriculum in some parts of the world. Chess helps students to develop their cognitive thinking skills and improves focus, while also teaching about sportsmanship and fair play. Gymnastics and wrestling activities offer additional opportunities for students to improve the different areas of physical fitness including flexibility, strength, aerobic endurance, balance, and coordination. Additional activities in PE include football, netball, hockey, rounders, cricket, four square, racing, and numerous other children's games. Physical education also teaches nutrition, healthy habits, and individuality of needs.
Pedagogy
The main goals in teaching modern physical education are:
To expose children and teens to a wide variety of exercise and healthy activities. Because P.E. can be accessible to nearly all children, it is one of the only opportunities that can guarantee beneficial and healthy activity in children.
To teach skills to maintain a lifetime of fitness as well as health.
To encourage self-reporting and monitoring of exercise.
To individualize duration, intensity, and type of activity.
To focus feedback on the work, rather than the result.
To provide active role models.
It is critical for physical educators to foster and strengthen developing motor skills and to provide children and teens with a basic skill set that builds their movement repertoire, which allows students to engage in various forms of games, sports, and other physical activities throughout their lifetime.
These goals can be achieved in a variety of ways. National, state, and local guidelines often dictate which standards must be taught in regards to physical education. These standards determine what content is covered, the qualifications educators must meet, and the textbooks and materials which must be used. These various standards include teaching sports education, or the use of sports as exercise; fitness education, relating to overall health and fitness; and movement education, which deals with movement in a non-sport context.
These approaches and curricula are based on pioneers in PE, namely, Francois Delsarte, Liselott Diem, and Rudolf von Laban, who, in the 1800s focused on using a child's ability to use their body for self-expression. This, in combination with approaches in the 1960s, (which featured the use of the body, spatial awareness, effort, and relationships) gave birth to the modern teaching of physical education.
Recent research has also explored the role of physical education for moral development in support of social inclusion and social justice agendas, where it is under-researched, especially in the context of disability, and the social inclusion of disabled people.
Technology use in physical education
Many physical education classes utilize technology to assist their pupils in effective exercise. One of the most affordable and popular tools is a simple video recorder. With this, students record themselves, and, upon playback, can see mistakes they are making in activities like throwing or swinging. Studies show that students find this more effective than having someone try to explain what they are doing wrong, and then trying to correct it.
Educators may also use technology such as pedometers and heart rate monitors to make step and heart rate goals for students. Implementing pedometers in physical education can improve physical activity participation, motivation and enjoyment.
Other technologies that can be used in a physical education setting include video projectors and GPS systems. Gaming systems and their associated games, such as the Kinect, Wii, and Wii Fit can also be used. Projectors are used to show students proper form or how to play certain games. GPS systems can be used to get students active in an outdoor setting, and active exergames can be used by teachers to show students a good way to stay fit in and out of a classroom setting. Exergames, or digital games that require the use of physical movement to participate, can be used as a tool to encourage physical activity and health in young children.
Technology integration can increase student motivation and engagement in the Physical Education setting. However, the ability of educators to effectively use technology in the classroom is reliant on a teacher's perceived competence in their ability to integrate technology into the curriculum.
Beyond traditional tools, recent AI advancements are introducing new methods for personalizing physical education, especially for adolescents. AI applications like adaptive coaching are starting to show promise in enhancing student motivation and program effectiveness in physical education settings.
By location
The World Health Organization (WHO) suggests that young children should participate in 60 minutes of exercise per day at least 3 times per week in order to maintain a healthy body. This 60-minute recommendation can be achieved by completing different forms of physical activity, including participation in physical education programs at school. A majority of children around the world participate in physical education programs in general education settings. According to data collected from a worldwide survey, 79% of countries require legal implementation of PE in school programming. Physical education programming can vary all over the world.
Asia
Philippines
In the Philippines, P.E. is mandatory for all years in school, unless the school gives the option for a student to do the Leaving Certificate Vocational Programme instead for their fifth and sixth year. Some schools have integrated martial arts training into their physical education curriculum.
Singapore
A biennial compulsory fitness exam, NAPFA, is conducted in every school to assess pupils' physical fitness in Singapore. This includes a series of fitness tests. Students are graded by a system of gold, silver, bronze, or as a fail. NAPFA for pre-enlistees serves as an indicator for an additional two months in the country's compulsory national service training if they attain bronze or fail.
Europe
Ireland
In Ireland, one is expected to do two semesters worth of 80-minute PE classes. This also includes showering and changing times. So, on average, classes are composed of 60–65 minutes of activity.
Poland
In Poland, pupils are expected to do at least three hours of PE a week during primary and secondary education. Universities must also organise at least 60 hours of physical education classes in undergraduate courses.
Sweden
In Sweden, the time school students spend in P.E. lessons per week varies between municipalities, but generally, years 0 to 2 have 55 minutes of PE a week; years 3 to 6 have 110 minutes a week, and years 7 to 9 have 220 minutes. In upper secondary school, all national programs have an obligatory course, containing 100 points of PE, which corresponds to 90–100 hours of PE during the course (one point per hour). Schools can regulate these hours as they like during the three years of school students attend. Most schools have students take part in this course during the first year and offer a follow-up course, which also contains 100 points/hours.
United Kingdom
In England, pupils in years 7, 8, and 9 are expected to do two hours of exercise per week. Pupils in years 10 and 11 are expected to do one hour of exercise per week.
In Wales, pupils are expected to do two hours of PE a week.
In Scotland, Scottish pupils are expected to have at least two hours of PE per week during primary and lower secondary education.
In Northern Ireland, pupils are expected to participate in at least two hours of physical education (PE) per week during years 8 to 10. PE remains part of the curriculum for years 11 and 12, though the time allocated may vary.
North America
Canada
In British Columbia, the government has mandated in the grade one curriculum that students must participate in physical activity daily five times a week. The educator is also responsible for planning Daily Physical Activity (DPA), which is thirty minutes of mild to moderate physical activity a day (not including curriculum physical education classes). The curriculum also requires students in grade one to be knowledgeable about healthy living. For example, they must be able to describe the benefits of regular exercise, identify healthy choices in activities, and describe the importance of choosing healthy food.
Ontario, Canada has a similar procedure in place. On October 6, 2005, the Ontario Ministry of Education (OME) implemented a DPA policy in elementary schools, for those in grades 1 through 8. The government also requires that all students in grades 1 through 8, including those with special needs, be provided with opportunities to participate in a minimum of twenty minutes of sustained, moderate to vigorous physical activity each school day during instructional time.
United States
The 2012 "Shape Of The Nation Report" by the National Association for Sport and Physical Education (part of SHAPE America) and the American Heart Association found that while nearly 75% of states require physical education in elementary through high school, over half of the states permit students to substitute other activities for their required physical education credit, or otherwise fail to mandate a specific amount of instructional time. According to the report, only six states (Illinois, Hawaii, Massachusetts, Mississippi, New York, and Vermont) require physical education at every grade level. A majority of states in 2016 did not require a specific amount of instructional time, and more than half allow exemptions or substitution. These loopholes can lead to reduced effectiveness of the physical education programs.
Zero Hour is a before-school physical education class first implemented by Naperville Central High School. In the state of Illinois, this program is known as Learning Readiness P.E. (LRPE). The program was based on research indicating that students who are physically fit are more academically alert and experience growth in brain cells and enhancement in brain development. NCHS pairs a P.E. class that incorporates cardiovascular exercise, core strength training, and cross-lateral movements with literacy and math strategies that enhance learning and improve achievement.
See also
Recreation
Exercise
Lack of physical education
Sports day
Worldwide Day of Play
References
External links
Education
Sports science
Education by subject | 0.76335 | 0.998174 | 0.761956 |
Physical vapor deposition | Physical vapor deposition (PVD), sometimes called physical vapor transport (PVT), describes a variety of vacuum deposition methods which can be used to produce thin films and coatings on substrates including metals, ceramics, glass, and polymers. PVD is characterized by a process in which the material transitions from a condensed phase to a vapor phase and then back to a thin film condensed phase. The most common PVD processes are sputtering and evaporation. PVD is used in the manufacturing of items which require thin films for optical, mechanical, electrical, acoustic or chemical functions. Examples include semiconductor devices such as thin-film solar cells, microelectromechanical devices such as thin film bulk acoustic resonator, aluminized PET film for food packaging and balloons, and titanium nitride coated cutting tools for metalworking. Besides PVD tools for fabrication, special smaller tools used mainly for scientific purposes have been developed.
The source material is unavoidably also deposited on most other surfaces interior to the vacuum chamber, including the fixturing used to hold the parts. This is called overshoot.
Examples
Cathodic arc deposition: a high-power electric arc discharged at the target (source) material blasts away some into highly ionized vapor to be deposited onto the workpiece.
Electron-beam physical vapor deposition: the material to be deposited is heated to a high vapor pressure by electron bombardment in "high" vacuum and is transported by diffusion to be deposited by condensation on the (cooler) workpiece.
Evaporative deposition: the material to be deposited is heated to a high vapor pressure by electrical resistance heating in "high" vacuum.
Close-space sublimation, the material, and substrate are placed close to one another and radiatively heated.
Pulsed laser deposition: a high-power laser ablates material from the target into a vapor.
Thermal laser epitaxy: a continuous-wave laser evaporates individual, free-standing elemental sources which then condense upon a substrate.
Sputter deposition: a glow plasma discharge (usually localized around the "target" by a magnet) bombards the material sputtering some away as a vapor for subsequent deposition.
Pulsed electron deposition: a highly energetic pulsed electron beam ablates material from the target generating a plasma stream under nonequilibrium conditions.
Sublimation sandwich method: used for growing human-made crystals (silicon carbide, SiC).
Metrics and testing
Various thin film characterization techniques can be used to measure the physical properties of PVD coatings, such as:
Calo tester: coating thickness test
Nanoindentation: hardness test for thin-film coatings
Pin-on-disc tester: wear and friction coefficient test
Scratch tester: coating adhesion test
X-ray micro-analyzer: investigation of structural features and heterogeneity of elemental composition for the growth surfaces
Comparison to other deposition techniques
Advantages
PVD coatings are sometimes harder and more corrosion-resistant than coatings applied by electroplating processes. Most coatings have high temperature tolerance and good impact strength, excellent abrasion resistance, and are so durable that protective topcoats are rarely necessary.
PVD coatings have the ability to utilize virtually any type of inorganic and some organic coating materials on an equally diverse group of substrates and surfaces using a wide variety of finishes.
PVD processes are often more environmentally friendly than traditional coating processes such as electroplating and painting.
More than one technique can be used to deposit a given film.
PVD can be performed at lower temperatures compared to chemical vapor deposition (CVD) and other thermal processes. This makes it suitable for coating temperature-sensitive substrates, such as plastics and certain metals, without causing damage or deformation.
PVD technologies can be scaled from small laboratory setups to large industrial systems, offering flexibility for different production volumes and sizes. This scalability makes it accessible for both research and commercial applications.
Disadvantages
Specific technologies can impose constraints; for example, line-of-sight transfer is typical of most PVD coating techniques. However, some methods allow full coverage of complex geometries.
Some PVD technologies operate at high temperatures and vacuums, requiring special attention by operating personnel and sometimes a cooling water system to dissipate large heat loads.
Applications
Anisotropic glasses
PVD can be used to make anisotropic glasses of low molecular weight for organic semiconductors. The properties needed for the formation of this type of glass are molecular mobility and an anisotropic structure at the free surface of the glass. The configuration of the polymer is important: it needs to settle into a lower energy state before newly added molecules bury the material during deposition. As molecules are added, the structure equilibrates, gains mass, and bulks out, giving it greater kinetic stability. The packing of molecules through PVD is face-on, meaning not at the long tail end, which allows further overlap of pi orbitals and thereby increases the stability of the added molecules and their bonds. The orientation of the added material depends mainly on the temperature at which molecules are deposited or extracted. This equilibration of the molecules is what gives the glass its anisotropic characteristics. The anisotropy of these glasses is valuable as it allows a higher charge carrier mobility. This anisotropic packing is also valuable for its versatility, and because glass provides benefits beyond crystals, such as homogeneity and flexibility of composition.
Decorative applications
By varying the composition and duration of the process, a range of colors can be produced by PVD on stainless steel. The resulting colored stainless steel product can appear as brass, bronze, and other metals or alloys.
This PVD-colored stainless steel can be used as exterior cladding for buildings and structures, such as the Vessel sculpture in New York City and The Bund in Shanghai. It is also used for interior hardware, paneling, and fixtures, and is even used on some consumer electronics, like the Space Gray and Gold finishes of the iPhone and Apple Watch.
Cutting tools
PVD is used to enhance the wear resistance of steel cutting tools' surfaces and decrease the risk of adhesion and sticking between tools and a workpiece. This includes tools used in metalworking or plastic injection molding. The coating is typically a thin ceramic layer less than 4 μm thick that has very high hardness and low friction. The workpiece must have high hardness to ensure the dimensional stability of the coating and to avoid brittle cracking. It is possible to combine PVD with a plasma nitriding treatment of the steel to increase the load-bearing capacity of the coating. Chromium nitride (CrN), titanium nitride (TiN), and titanium carbonitride (TiCN) may be used for PVD coating of plastic molding dies.
Other applications
PVD coatings are generally used to improve hardness, increase wear resistance, and prevent oxidation. They can also be used for aesthetic purposes. Thus, such coatings are used in a wide range of applications such as:
Aerospace industry
Architectural ironmongery, panels, and sheets
Automotive industry
Dies and molds
Firearms
Optics
Watches
Jewelry
Thin film applications (window tint, food packaging, etc.)
See also
References
Further reading
External links
Coatings
Physical vapor deposition
Plasma processing
Semiconductor device fabrication
Thin film deposition | 0.76631 | 0.994314 | 0.761953 |
Fick's laws of diffusion | Fick's laws of diffusion describe diffusion and were first posited by Adolf Fick in 1855 on the basis of largely experimental results. They can be used to solve for the diffusion coefficient, D. Fick's first law can be used to derive his second law which in turn is identical to the diffusion equation.
Fick's first law: Movement of particles from high to low concentration (diffusive flux) is directly proportional to the particle's concentration gradient.
Fick's second law: Prediction of change in concentration gradient with time due to diffusion.
A diffusion process that obeys Fick's laws is called normal or Fickian diffusion; otherwise, it is called anomalous diffusion or non-Fickian diffusion.
History
In 1855, physiologist Adolf Fick first reported his now well-known laws governing the transport of mass through diffusive means. Fick's work was inspired by the earlier experiments of Thomas Graham, which fell short of proposing the fundamental laws for which Fick would become famous. Fick's law is analogous to the relationships discovered at the same epoch by other eminent scientists: Darcy's law (hydraulic flow), Ohm's law (charge transport), and Fourier's law (heat transport).
Fick's experiments (modeled on Graham's) dealt with measuring the concentrations and fluxes of salt, diffusing between two reservoirs through tubes of water. It is notable that Fick's work primarily concerned diffusion in fluids, because at the time, diffusion in solids was not considered generally possible. Today, Fick's laws form the core of our understanding of diffusion in solids, liquids, and gases (in the absence of bulk fluid motion in the latter two cases). When a diffusion process does not follow Fick's laws (which happens in cases of diffusion through porous media and diffusion of swelling penetrants, among others), it is referred to as non-Fickian.
Fick's first law
Fick's first law relates the diffusive flux to the gradient of the concentration. It postulates that the flux goes from regions of high concentration to regions of low concentration, with a magnitude that is proportional to the concentration gradient (spatial derivative), or in simplistic terms the concept that a solute will move from a region of high concentration to a region of low concentration across a concentration gradient. In one (spatial) dimension, the law can be written in various forms, where the most common form is on a molar basis:

J = −D dφ/dx

where
J is the diffusion flux, of which the dimension is the amount of substance per unit area per unit time. J measures the amount of substance that will flow through a unit area during a unit time interval,
D is the diffusion coefficient or diffusivity. Its dimension is area per unit time,
dφ/dx is the concentration gradient,
φ (for ideal mixtures) is the concentration, with a dimension of amount of substance per unit volume,
x is position, the dimension of which is length.
D is proportional to the squared velocity of the diffusing particles, which depends on the temperature, viscosity of the fluid and the size of the particles according to the Stokes–Einstein relation. In dilute aqueous solutions the diffusion coefficients of most ions are similar and have values that at room temperature are in the range of (0.6–2)×10⁻⁹ m²/s. For biological molecules the diffusion coefficients normally range from 10⁻¹⁰ to 10⁻¹¹ m²/s.
In two or more dimensions we must use ∇, the del or gradient operator, which generalises the first derivative, obtaining

J = −D ∇φ

where J denotes the diffusion flux vector.
The driving force for the one-dimensional diffusion is the quantity −∂φ/∂x, which for ideal mixtures is the concentration gradient.
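As a minimal numerical illustration of the first law (the layer thickness, concentrations, and diffusion coefficient below are assumed example values), the flux can be evaluated from a discretized concentration profile:

```python
import numpy as np

# Minimal sketch (assumed numbers): evaluate Fick's first law J = -D dphi/dx
# for a linear concentration profile across a 1 mm layer.
D = 1.0e-9                                  # diffusion coefficient, m^2/s (assumption)
x = np.linspace(0.0, 1.0e-3, 101)           # position grid from 0 to 1 mm
phi = 100.0 - 100.0 * x / 1.0e-3            # concentration in mol/m^3, dropping 100 -> 0

dphi_dx = np.gradient(phi, x)               # concentration gradient, mol/m^4
J = -D * dphi_dx                            # diffusive flux, mol/(m^2 s)
print(f"gradient = {dphi_dx[50]:.3e} mol/m^4, flux = {J[50]:.3e} mol/(m^2 s)")
# For this linear profile the flux is uniform and directed down the gradient.
```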
Variations of the first law
Another form for the first law is to write it with the primary variable as mass fraction (y_i, given for example in kg/kg), then the equation changes to:

J_i = −(ρ D_i / M_i) ∇y_i

where
the index i denotes the ith species,
J_i is the diffusion flux vector of the ith species (for example in mol/(m²·s)),
M_i is the molar mass of the ith species,
ρ is the mixture density (for example in kg/m³).
The ρ is outside the gradient operator. This is because:

y_i = ρ_i / ρ

where ρ_i is the partial density of the ith species.
Beyond this, in chemical systems other than ideal solutions or mixtures, the driving force for diffusion of each species is the gradient of chemical potential of this species. Then Fick's first law (one-dimensional case) can be written

J_i = −(D_i c_i / RT) ∂μ_i/∂x

where
the index i denotes the ith species,
c_i is the concentration (mol/m³),
R is the universal gas constant (J/(K·mol)),
T is the absolute temperature (K),
μ_i is the chemical potential (J/mol).
The driving force of Fick's law can also be expressed in terms of a fugacity difference, where f_i is the fugacity of component i in Pa, a partial pressure of the component in a vapor or liquid phase. At vapor–liquid equilibrium the evaporation flux is zero because f_i in the vapor equals f_i in the liquid.
Derivation of Fick's first law for gases
Four versions of Fick's law for binary gas mixtures are given below. These assume: thermal diffusion is negligible; the body force per unit mass is the same on both species; and either pressure is constant or both species have the same molar mass. Under these conditions, the cited reference shows in detail how the diffusion equation from the kinetic theory of gases reduces to this version of Fick's law:
where is the diffusion velocity of species . In terms of species flux this is
If, additionally, , this reduces to the most common form of Fick's law,
If (instead of or in addition to ) both species have the same molar mass, Fick's law becomes
where is the mole fraction of species .
Fick's second law
Fick's second law predicts how diffusion causes the concentration to change with respect to time. It is a partial differential equation which in one dimension reads:

∂φ/∂t = D ∂²φ/∂x²

where
φ is the concentration, in dimensions of amount of substance per unit volume, example mol/m³; φ = φ(x, t) is a function that depends on location x and time t,
t is time, example s,
D is the diffusion coefficient, in dimensions of area per unit time, example m²/s,
x is the position, example m.
In two or more dimensions we must use the Laplacian Δ = ∇², which generalises the second derivative, obtaining the equation

∂φ/∂t = D Δφ

Fick's second law has the same mathematical form as the Heat equation and its fundamental solution is the same as the Heat kernel, except switching thermal conductivity with diffusion coefficient D:

φ(x, t) = (1 / √(4πDt)) exp(−x² / (4Dt))
Derivation of Fick's second law
Fick's second law can be derived from Fick's first law and the mass conservation in absence of any chemical reactions:

∂φ/∂t + ∂J/∂x = 0  ⇒  ∂φ/∂t − ∂/∂x (D ∂φ/∂x) = 0

Assuming the diffusion coefficient D to be a constant, one can exchange the orders of the differentiation and multiply by the constant:

∂/∂x (D ∂φ/∂x) = D ∂²φ/∂x²

and, thus, receive the form of the Fick's equations as was stated above.
For the case of diffusion in two or more dimensions Fick's second law becomes

∂φ/∂t = D Δφ

which is analogous to the heat equation.
If the diffusion coefficient is not a constant, but depends upon the coordinate or concentration, Fick's second law yields

∂φ/∂t = ∇·(D ∇φ)

An important example is the case where φ is at a steady state, i.e. the concentration does not change by time, so that the left part of the above equation is identically zero. In one dimension with constant D, the solution for the concentration will be a linear change of concentrations along x. In two or more dimensions we obtain

Δφ = 0

which is Laplace's equation, the solutions to which are referred to by mathematicians as harmonic functions.
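For intuition about how Fick's second law behaves in practice, the following sketch integrates the one-dimensional equation with an explicit finite-difference scheme; the domain size, boundary values, and diffusion coefficient are assumptions chosen for illustration.

```python
import numpy as np

# Minimal sketch (assumed grid and parameters): explicit finite-difference
# integration of dphi/dt = D d2phi/dx2 with constant D and fixed end values.
D = 1.0e-9                   # m^2/s (assumption)
length, nx = 1.0e-3, 101     # 1 mm domain, 101 grid points
dx = length / (nx - 1)
dt = 0.4 * dx**2 / D         # time step within the stability limit dt <= dx^2/(2D)

phi = np.zeros(nx)
phi[0] = 1.0                 # fixed concentration at the left boundary

for _ in range(20000):
    lap = (phi[2:] - 2.0 * phi[1:-1] + phi[:-2]) / dx**2   # discrete second derivative
    phi[1:-1] += dt * D * lap                              # forward-time update
    phi[0], phi[-1] = 1.0, 0.0                             # Dirichlet boundary conditions

print(f"after {20000 * dt:.0f} s, phi at mid-domain = {phi[nx // 2]:.3f}")
```

With fixed concentrations at the two ends, the profile relaxes toward the linear steady state discussed above, as expected for the one-dimensional Laplace equation.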
Example solutions and generalization
Fick's second law is a special case of the convection–diffusion equation in which there is no advective flux and no net volumetric source. It can be derived from the continuity equation:

∂φ/∂t + ∇·J = R

where J is the total flux and R is a net volumetric source for φ. The only source of flux in this situation is assumed to be diffusive flux:

J_diffusion = −D ∇φ

Plugging the definition of diffusive flux to the continuity equation and assuming there is no source (R = 0), we arrive at Fick's second law:

∂φ/∂t = D ∂²φ/∂x²

If flux were the result of both diffusive flux and advective flux, the convection–diffusion equation is the result.
Example solution 1: constant concentration source and diffusion length
A simple case of diffusion with time t in one dimension (taken as the x-axis) from a boundary located at position x = 0, where the concentration is maintained at a value n₀, is

n(x, t) = n₀ erfc(x / (2√(Dt)))

where erfc is the complementary error function. This is the case when corrosive gases diffuse through the oxidative layer towards the metal surface (if we assume that concentration of gases in the environment is constant and the diffusion space – that is, the corrosion product layer – is semi-infinite, starting at 0 at the surface and spreading infinitely deep in the material). If, in its turn, the diffusion space is infinite (lasting both through the layer with n(x, 0) = 0, x > 0, and that with n(x, 0) = n₀, x ≤ 0), then the solution is amended only with a coefficient 1/2 in front of n₀ (as the diffusion now occurs in both directions). This case is valid when some solution with concentration n₀ is put in contact with a layer of pure solvent. (Bokstein, 2005) The length 2√(Dt) is called the diffusion length and provides a measure of how far the concentration has propagated in the x-direction by diffusion in time t (Bird, 1976).
As a quick approximation of the error function, the first two terms of its Taylor series can be used:

n(x, t) ≈ n₀ [1 − x / √(πDt)]
If D is time-dependent, the diffusion length becomes

2 √(∫₀ᵗ D(t′) dt′)

This idea is useful for estimating a diffusion length over a heating and cooling cycle, where D varies with temperature.
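The constant-source solution above is straightforward to evaluate numerically; in the sketch below the diffusion coefficient, boundary concentration, and elapsed time are assumed example values.

```python
import numpy as np
from scipy.special import erfc

# Minimal sketch (assumed parameters): evaluate n(x, t) = n0 * erfc(x / (2*sqrt(D*t)))
# and the diffusion length 2*sqrt(D*t).
D = 1.0e-9           # m^2/s (assumption)
n0 = 1.0             # boundary concentration, arbitrary units
t = 3600.0           # one hour

x = np.array([0.0, 0.5e-3, 1.0e-3, 2.0e-3])        # positions in metres
n = n0 * erfc(x / (2.0 * np.sqrt(D * t)))
print(f"diffusion length 2*sqrt(D*t) = {2.0 * np.sqrt(D * t):.2e} m")
for xi, ni in zip(x, n):
    print(f"x = {xi*1e3:4.1f} mm   n/n0 = {ni:.3f}")
```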
Example solution 2: Brownian particle and mean squared displacement
Another simple case of diffusion is the Brownian motion of one particle. The particle's mean squared displacement from its original position is:

MSD ≡ ⟨(x − x₀)²⟩ = 2nDt

where n is the dimension of the particle's Brownian motion. For example, the diffusion of a molecule across a cell membrane 8 nm thick is 1-D diffusion because of the spherical symmetry; however, the diffusion of a molecule from the membrane to the center of a eukaryotic cell is a 3-D diffusion. For a cylindrical cactus, the diffusion from photosynthetic cells on its surface to its center (the axis of its cylindrical symmetry) is a 2-D diffusion.
The square root of MSD, √(2nDt), is often used as a characterization of how far the particle has moved after time t has elapsed. The MSD is symmetrically distributed over the 1D, 2D, and 3D space. Thus, the probability distribution of the magnitude of the displacement in 1D is Gaussian, and in 3D it is a Maxwell–Boltzmann distribution.
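A quick random-walk simulation (with assumed step size, particle count, and diffusion coefficient) illustrates the 2nDt scaling quoted above for n = 3 dimensions:

```python
import numpy as np

# Random-walk check (illustrative, parameters are assumptions) that the mean
# squared displacement of Brownian particles grows as 2*n*D*t, here with n = 3.
rng = np.random.default_rng(0)
D, dt, steps, n_particles = 1.0e-9, 1.0e-3, 500, 2000

# Each Cartesian component of each step is Gaussian with variance 2*D*dt.
displacements = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_particles, steps, 3))
final_positions = displacements.sum(axis=1)

t = dt * steps
msd = np.mean(np.sum(final_positions**2, axis=1))
print(f"simulated MSD = {msd:.3e} m^2,  theory 2*3*D*t = {6.0 * D * t:.3e} m^2")
```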
Generalizations
In non-homogeneous media, the diffusion coefficient varies in space, D = D(x). This dependence does not affect Fick's first law but the second law changes:

∂φ(x, t)/∂t = ∇·(D(x) ∇φ(x, t))

In anisotropic media, the diffusion coefficient depends on the direction. It is a symmetric tensor D = D_ij. Fick's first law changes to

J = −D ∇φ,

it is the product of a tensor and a vector:

J_i = −Σ_j D_ij ∂φ/∂x_j

For the diffusion equation this formula gives

∂φ/∂t = ∇·(D ∇φ) = Σ_ij D_ij ∂²φ/(∂x_i ∂x_j)

The symmetric matrix of diffusion coefficients D_ij should be positive definite. It is needed to make the right-hand side operator elliptic.
For inhomogeneous anisotropic media these two forms of the diffusion equation should be combined in

∂φ/∂t = Σ_ij ∂/∂x_i (D_ij(x) ∂φ/∂x_j)

The approach based on Einstein's mobility and Teorell formula gives the following generalization of Fick's equation for the multicomponent diffusion of the perfect components:

∂φ_i/∂t = Σ_j ∇·(D_ij ∇φ_j)

where φ_i are concentrations of the components and D_ij is the matrix of coefficients. Here, indices i and j are related to the various components and not to the space coordinates.
The Chapman–Enskog formulae for diffusion in gases include exactly the same terms. These physical models of diffusion are different from the test models which are valid for very small deviations from the uniform equilibrium. Earlier, such terms were introduced in the Maxwell–Stefan diffusion equation.
For anisotropic multicomponent diffusion coefficients one needs a rank-four tensor, for example D_ij,αβ, where i and j refer to the components and α and β correspond to the space coordinates.
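As a small illustration of the anisotropic form of the first law described above (the tensor and gradient values are assumptions, not from the text):

```python
import numpy as np

# Minimal sketch: anisotropic Fick's first law, J = -D_tensor @ grad_phi, with a
# symmetric positive-definite diffusion tensor (assumed values).
D_tensor = np.array([[2.0e-9, 0.5e-9],
                     [0.5e-9, 1.0e-9]])        # m^2/s
grad_phi = np.array([1.0e5, -2.0e5])           # concentration gradient, mol/m^4

assert np.all(np.linalg.eigvalsh(D_tensor) > 0)   # positive definiteness check
J = -D_tensor @ grad_phi
print("flux vector J =", J, "mol/(m^2 s)")
# Note that J is generally not antiparallel to grad_phi in the anisotropic case.
```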
Applications
Equations based on Fick's law have been commonly used to model transport processes in foods, neurons, biopolymers, pharmaceuticals, porous soils, population dynamics, nuclear materials, plasma physics, and semiconductor doping processes. The theory of voltammetric methods is based on solutions of Fick's equation. On the other hand, in some cases a "Fickian" description (another common approximation of the transport equation is that of the diffusion theory) is inadequate. For example, in polymer science and food science a more general approach is required to describe transport of components in materials undergoing a glass transition. One more general framework is the Maxwell–Stefan diffusion equations
of multi-component mass transfer, from which Fick's law can be obtained as a limiting case, when the mixture is extremely dilute and every chemical species is interacting only with the bulk mixture and not with other species. To account for the presence of multiple species in a non-dilute mixture, several variations of the Maxwell–Stefan equations are used. See also non-diagonal coupled transport processes (Onsager relationship).
Fick's flow in liquids
When two miscible liquids are brought into contact, and diffusion takes place, the macroscopic (or average) concentration evolves following Fick's law. On a mesoscopic scale, that is, between the macroscopic scale described by Fick's law and molecular scale, where molecular random walks take place, fluctuations cannot be neglected. Such situations can be successfully modeled with Landau-Lifshitz fluctuating hydrodynamics. In this theoretical framework, diffusion is due to fluctuations whose dimensions range from the molecular scale to the macroscopic scale.
In particular, fluctuating hydrodynamic equations include a Fick's flow term, with a given diffusion coefficient, along with hydrodynamics equations and stochastic terms describing fluctuations. When calculating the fluctuations with a perturbative approach, the zero order approximation is Fick's law. The first order gives the fluctuations, and it comes out that fluctuations contribute to diffusion. In a sense this is circular, since the phenomena described by a lower order approximation are the result of a higher approximation: this problem is solved only by renormalizing the fluctuating hydrodynamics equations.
Sorption rate and collision frequency of diluted solute
Adsorption, absorption, and collision of molecules, particles, and surfaces are important problems in many fields. These fundamental processes regulate chemical, biological, and environmental reactions. Their rate can be calculated using the diffusion constant and Fick's laws of diffusion especially when these interactions happen in diluted solutions.
Typically, the diffusion constant of molecules and particles defined by Fick's equation can be calculated using the Stokes–Einstein equation. In the ultrashort time limit, in the order of the diffusion time a²/D, where a is the particle radius, the diffusion is described by the Langevin equation. At a longer time, the Langevin equation merges into the Stokes–Einstein equation. The latter is appropriate for the condition of the diluted solution, where long-range diffusion is considered. According to the fluctuation-dissipation theorem based on the Langevin equation in the long-time limit and when the particle is significantly denser than the surrounding fluid, the time-dependent diffusion constant is:

D(t) = μ kB T [1 − exp(−t / (m μ))]
where (all in SI units)
kB is the Boltzmann constant,
T is the absolute temperature,
μ is the mobility of the particle in the fluid or gas, which can be calculated using the Einstein relation (kinetic theory),
m is the mass of the particle,
t is time.
For a single molecule such as organic molecules or biomolecules (e.g. proteins) in water, the exponential term is negligible due to the small product of mμ in the ultrafast picosecond region, thus irrelevant to the relatively slower adsorption of diluted solute.
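To get a feel for the crossover time scale mμ, the following sketch evaluates the time-dependent diffusion constant in the form quoted above for an assumed micron-sized silica sphere in water; all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumed particle and fluid properties): evaluate
# D(t) = mu * kB * T * (1 - exp(-t / (m * mu))) across the crossover region.
kB, T = 1.380649e-23, 298.0
radius, eta = 0.5e-6, 8.9e-4              # particle radius (m), water viscosity (Pa s)
rho_p = 2200.0                            # particle density, kg/m^3 (silica)
m = rho_p * 4.0 / 3.0 * np.pi * radius**3
mu = 1.0 / (6.0 * np.pi * eta * radius)   # Stokes mobility

for t in (1e-8, 1e-7, 1e-6, 1e-5):
    D_t = mu * kB * T * (1.0 - np.exp(-t / (m * mu)))
    print(f"t = {t:.0e} s   D(t) = {D_t:.3e} m^2/s")
# For t >> m*mu this approaches the Stokes-Einstein value mu*kB*T.
```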
The adsorption or absorption rate of a dilute solute to a surface or interface in a (gas or liquid) solution can be calculated using Fick's laws of diffusion. The accumulated number of molecules adsorbed on the surface is expressed by the Langmuir–Schaefer equation, obtained by integrating the diffusion flux equation over time as shown in the simulated molecular diffusion in the first section of this page:

Γ = 2 A C_b √(Dt / π)

where
A is the surface area (m²),
C_b is the number concentration of the adsorber molecules (solute) in the bulk solution (#/m³),
D is the diffusion coefficient of the adsorber (m²/s),
t is the elapsed time (s),
Γ is the accumulated number of molecules, in units of # molecules adsorbed during the time t.
The equation is named after American chemists Irving Langmuir and Vincent Schaefer.
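A minimal sketch of this cumulative adsorption, assuming the widely used form Γ(t) = 2·A·C·√(Dt/π), obtained by integrating the diffusion-limited flux J(t) = C·√(D/(πt)) over time and surface area; the symbol names and numerical values are illustrative assumptions, since the formula itself is not reproduced above.

```python
import math

def langmuir_schaefer(A, C, D, t):
    """Accumulated number of molecules adsorbed on a perfectly absorbing surface.

    Assumed Langmuir-Schaefer form: Gamma(t) = 2 * A * C * sqrt(D * t / pi),
    i.e. the diffusion flux J(t) = C * sqrt(D / (pi * t)) integrated over time and area.
    """
    return 2.0 * A * C * math.sqrt(D * t / math.pi)

# Illustrative (hypothetical) numbers: a 1 mm^2 sensor patch in a 1 nM protein solution.
A = 1e-6                    # surface area, m^2
C = 1e-9 * 6.022e23 * 1e3   # 1 nmol/L converted to molecules per m^3
D = 1e-10                   # diffusion coefficient, m^2/s

for t in (1.0, 10.0, 100.0):
    print(f"t = {t:6.1f} s  ->  adsorbed molecules ~ {langmuir_schaefer(A, C, D, t):.3e}")
```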
Briefly, the concentration gradient profile near a newly created absorptive surface in an initially uniform bulk solution is obtained in the sections above from Fick's equation,
where is the number concentration of adsorber molecules at (#/m3).
The concentration gradient at the subsurface at is simplified to the pre-exponential factor of the distribution
And the rate of diffusion (flux) across area of the plane is
Integrating over time,
The Langmuir–Schaefer equation can be extended to the Ward–Tordai Equation to account for the "back-diffusion" of rejected molecules from the surface:
where is the bulk concentration, is the sub-surface concentration (which is a function of time depending on the reaction model of the adsorption), and is a dummy variable.
Monte Carlo simulations show that these two equations predict the adsorption rate well for systems that form predictable concentration gradients near the surface, but have trouble with systems in which the concentration gradient is absent or unpredictable, such as typical biosensing systems or situations where flow and convection are significant.
A brief history of diffusive adsorption is shown in the right figure. A noticeable challenge in understanding diffusive adsorption at the single-molecule level is the fractal nature of diffusion. Most computer simulations pick a time step for diffusion and ignore the fact that there are self-similar finer diffusion events (fractal behaviour) within each step. Simulating fractal diffusion shows that a correction factor of two should be applied to the result of a fixed-time-step adsorption simulation, bringing it into agreement with the above two equations.
A more problematic aspect of the above equations is that they predict the lower limit of adsorption under ideal conditions, while actual adsorption rates are very difficult to predict. The equations are derived in the long-time limit, when a stable concentration gradient has formed near the surface. Real adsorption, however, often proceeds much faster than this infinite-time limit: the concentration gradient (the decay of concentration at the sub-surface) is only partially formed before the surface becomes saturated, or flow is present to maintain a certain gradient. The measured adsorption rate is therefore almost always faster than the equations predict for adsorption with little or no energy barrier (unless a significant adsorption energy barrier slows the process substantially), for example thousands to millions of times faster in the self-assembly of monolayers at water-air or water-substrate interfaces. For practical applications it is therefore necessary to calculate the evolution of the concentration gradient near the surface and to choose a suitable time at which to cut off the notionally infinite evolution. Although it is hard to predict when to stop, it is reasonably easy to calculate the shortest time that matters: the critical time at which the first nearest neighbor of the substrate surface feels the build-up of the concentration gradient. This yields the upper limit of the adsorption rate under the ideal situation in which no factors other than diffusion affect the absorber dynamics:
where:
is the adsorption rate, assuming an adsorption situation free of energy barriers, in units of #/s,
is the area of the surface of interest on an "infinite and flat" substrate (m2),
is the concentration of the absorber molecule in the bulk solution (#/m3),
is the diffusion constant of the absorber (solute) in the solution (m2/s) defined with Fick's law.
This equation can be used to predict the initial adsorption rate of any system; it can be used to predict the steady-state adsorption rate of a typical biosensing system when the binding site is only a very small fraction of the substrate surface and a near-surface concentration gradient never forms; and it can be used to predict the adsorption rate of molecules on a surface when a significant flow keeps the concentration gradient very shallow in the sub-surface.
This critical time is significantly different from the first-passenger arrival time or the mean free-path time. Using the average first-passenger time together with Fick's law of diffusion to estimate the average binding rate will significantly overestimate the concentration gradient, because the first passenger usually comes from many layers of neighbors away from the target, so its arrival time is significantly longer than the nearest-neighbor diffusion time. Using the mean free-path time plus the Langmuir equation instead creates an artificial concentration gradient between the initial location of the first passenger and the target surface, because the other neighbor layers have not yet changed, and thus significantly underestimates the actual binding time; the actual first-passenger arrival time itself, the inverse of the above rate, is difficult to calculate. If the system can be simplified to 1D diffusion, then the average first-passenger time can be calculated using the same nearest-neighbor critical diffusion time, taking the mean squared displacement to be set by the first-neighbor distance (a sketch follows the list below),
where:
(unit m) is the average nearest neighbor distance approximated as cubic packing, where is the solute concentration in the bulk solution (unit # molecule / m3),
is the diffusion coefficient defined by Fick's equation (unit m2/s),
is the critical time (unit s).
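A hedged sketch of this critical-time estimate, assuming cubic packing for the nearest-neighbor distance (⟨d⟩ = c^(−1/3)) and the 1D mean-squared-displacement convention ⟨x²⟩ = 2Dt, so that t_c ≈ ⟨d⟩²/(2D); both the packing model and the factor of 2 are assumptions, since the explicit relation is not reproduced above.

```python
def critical_time(c_bulk, D):
    """Nearest-neighbour critical diffusion time.

    Assumptions (not stated explicitly in the text above):
      - average nearest-neighbour distance <d> = c_bulk**(-1/3) (cubic packing),
      - 1D MSD convention <x^2> = 2*D*t, hence t_c = <d>**2 / (2*D).
    """
    d = c_bulk ** (-1.0 / 3.0)        # average nearest-neighbour distance, m
    return d * d / (2.0 * D)          # critical time, s

# Illustrative: 1 uM small-molecule solute with D = 5e-10 m^2/s (hypothetical values).
c = 1e-6 * 6.022e23 * 1e3             # molecules per m^3
print(f"<d> = {c ** (-1.0/3.0):.2e} m,  t_c = {critical_time(c, 5e-10):.2e} s")
```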
Within this critical time it is unlikely that the first passenger has arrived and adsorbed, but the critical time sets the pace at which the layers of neighbors arrive. With a concentration gradient that stops around the first neighbor layer, the gradient effectively does not extend further by the later time at which the actual first passenger arrives. Thus, the average first-passenger arrival rate (unit # molecules/s) for this 3D diffusion problem simplified to 1D is,
where the prefactor converts the 3D diffusive adsorption problem into a 1D diffusion problem; its value depends on the system, e.g., the fraction of the adsorption area over the solute's nearest-neighbor sphere surface area, assuming cubic packing in which each unit shares 8 neighbors with other units. This example fraction makes the result converge to the 3D diffusive adsorption solution shown above, with a slight difference in the pre-factor due to the different packing assumptions and the neglect of the other neighbors.
When the area of interest is the size of a molecule (specifically, a long cylindrical molecule such as DNA), the adsorption rate equation represents the collision frequency of two molecules in a dilute solution, with one molecule presenting a specific side and the other having no steric dependence, i.e., a randomly oriented molecule hits one side of the other. The diffusion constant needs to be updated to the relative diffusion constant between the two diffusing molecules. This estimate is especially useful in studying the interaction between a small molecule and a larger molecule such as a protein. The effective diffusion constant is dominated by the smaller molecule, whose diffusion constant can be used instead.
The above hitting-rate equation is also useful for predicting the kinetics of molecular self-assembly on a surface. Molecules are randomly oriented in the bulk solution; assuming that 1/6 of them have the right orientation towards the surface binding sites (i.e. facing one of the two z-directions among the x, y, z dimensions), the concentration of interest is just 1/6 of the bulk concentration. Putting this value into the equation, one should be able to calculate the theoretical adsorption kinetic curve using the Langmuir adsorption model. In a more rigorous picture, 1/6 can be replaced by the steric factor of the binding geometry.
The bimolecular collision frequency relevant to many reactions, including protein coagulation/aggregation, was first described by the Smoluchowski coagulation equation proposed by Marian Smoluchowski in a seminal 1916 publication, derived from Brownian motion and Fick's laws of diffusion. Under an idealized reaction condition for A + B → product in a dilute solution, Smoluchowski suggested that the molecular flux at the infinite-time limit can be calculated from Fick's laws of diffusion, yielding a fixed/stable concentration gradient from the target molecule; e.g., B is the target molecule held relatively fixed, and A is the moving molecule that creates a concentration gradient near the target molecule B due to the coagulation reaction between A and B. Smoluchowski calculated the collision frequency between A and B in the solution, in units of #/s/m3:
where:
is the radius of the collision,
is the relative diffusion constant between A and B (m2/s),
and are number concentrations of A and B respectively (#/m3).
The reaction order of this bimolecular reaction is 2, which is analogous to the result from collision theory obtained by replacing the speed of the molecule with the diffusive flux. In collision theory, the traveling time between A and B is proportional to the distance, which is a similar relationship for the diffusion case if the flux is fixed.
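A small sketch of this diffusion-limited collision frequency, assuming the standard Smoluchowski form Z = 4π·R·D_r·n_A·n_B in units of #/s/m³; the explicit expression is not reproduced above, so this form and the numbers below are assumptions based on the standard result.

```python
import math

def smoluchowski_frequency(R, D_rel, nA, nB):
    """Diffusion-limited collision frequency per unit volume (# s^-1 m^-3).

    Assumed standard Smoluchowski form: Z = 4 * pi * R * D_rel * nA * nB,
    where R is the collision radius and D_rel the relative diffusion constant.
    """
    return 4.0 * math.pi * R * D_rel * nA * nB

# Illustrative (hypothetical) numbers: two proteins, each at 1 uM.
R = 5e-9                      # collision radius, m
D_rel = 2e-10                 # relative diffusion constant, m^2/s
n = 1e-6 * 6.022e23 * 1e3     # 1 uM as molecules per m^3

Z = smoluchowski_frequency(R, D_rel, n, n)
print(f"Z ~ {Z:.3e} collisions s^-1 m^-3")
```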
However, under practical conditions the concentration gradient near the target molecule evolves over time, and the molecular flux evolves as well; on average the flux is much larger than the infinite-time-limit flux Smoluchowski proposed. Before the first-passenger arrival time, Fick's equation predicts a concentration gradient that in reality has not yet built up. Thus, the Smoluchowski frequency represents the lower limit of the real collision frequency.
In 2022, Chen calculated the upper limit of the collision frequency between A and B in a solution by assuming that the bulk concentration of the moving molecule is fixed beyond the first nearest neighbor of the target molecule; the evolution of the concentration gradient thus stops at the first-nearest-neighbor layer, giving a stop time with which to calculate the actual flux. He named this the critical time and derived the diffusive collision frequency, in units of #/s/m3:
where:
is the area of the cross-section of the collision (m2),
is the relative diffusion constant between A and B (m2/s),
and are number concentrations of A and B respectively (#/m3),
represents 1/<d>, where d is the average distance between two molecules.
This equation assumes that the upper limit of the diffusive collision frequency between A and B is reached when the first neighbor layer starts to feel the evolution of the concentration gradient, giving a reaction order different from 2. Both the Smoluchowski equation and the JChen equation satisfy dimensional checks with SI units, but the former depends on the radius and the latter on the area of the collision sphere. From dimensional analysis, one could also write an equation dependent on the volume of the collision sphere, but eventually all such equations should converge to the same numerical collision rate, which can be measured experimentally. The actual reaction order for a bimolecular unit reaction could lie between 2 and this other order, which makes sense because the diffusive collision time depends on the square of the distance between the two molecules.
These new equations also avoid the singularity in the adsorption rate at time zero that affects the Langmuir–Schaefer equation. The infinite rate is justifiable under ideal conditions: if target molecules are introduced instantaneously into a solution of probe molecules (or vice versa), there is always some probability that a pair overlaps at time zero, so the association rate of that pair is infinite; it does not matter that millions of other molecules have to wait for their first partner to diffuse and arrive, so the average rate is formally infinite. Statistically, however, this argument is meaningless. The maximum rate for a molecule over any period of time larger than zero is 1 (it either meets a partner or it does not), so the infinite rate at time zero for that molecule pair should really just be counted as one event, making the average rate 1 in millions or more and statistically negligible. This is without even counting the fact that, in reality, no two molecules can magically meet at time zero.
Biological perspective
The first law gives rise to the following formula (a sketch follows the list of symbols below):
where
is the permeability, an experimentally determined membrane "conductance" for a given gas at a given temperature,
is the difference in concentration of the gas across the membrane for the direction of flow (from to ).
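A minimal sketch of this permeability form of the first law, written here as J = −P·(c2 − c1); the sign convention, symbol names, and numbers are assumptions for illustration, since the formula itself is not reproduced above.

```python
def membrane_flux(P, c1, c2):
    """Gas flux across a membrane from Fick's first law in permeability form.

    Assumed form: J = -P * (c2 - c1), so flow runs from the high- to the
    low-concentration side; P is the experimentally determined permeability.
    """
    return -P * (c2 - c1)

# Illustrative (hypothetical) numbers: an oxygen-like gas across a thin membrane.
P = 2e-5            # permeability, m/s
c1, c2 = 8.0, 2.0   # concentrations on the two sides, mol/m^3
print(f"flux = {membrane_flux(P, c1, c2):.2e} mol m^-2 s^-1")
```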
Fick's first law is also important in radiation transfer equations. However, in this context, it becomes inaccurate when the diffusion constant is low and the radiation becomes limited by the speed of light rather than by the resistance of the material the radiation is flowing through. In this situation, one can use a flux limiter.
The exchange rate of a gas across a fluid membrane can be determined by using this law together with Graham's law.
Under dilute-solution conditions, when diffusion takes control, the membrane permeability mentioned in the above section can be calculated theoretically for the solute using the equation mentioned in the last section (to be used with particular care, because that equation is derived for dense solutes, while biological molecules are not denser than water; the equation also assumes that an ideal concentration gradient forms near the membrane and evolves):
where:
is the total area of the pores on the membrane (unit m2),
transmembrane efficiency (unitless), which can be calculated from the stochastic theory of chromatography,
D is the diffusion constant of the solute (unit m2⋅s−1),
t is time (unit s),
c2, c1 are the concentrations, which should use the unit mol m−3, so that the flux unit becomes mol s−1.
The flux decays with the square root of time, because a concentration gradient builds up near the membrane over time under ideal conditions. When there is flow and convection, the flux can differ significantly from what the equation predicts and show an effective time t with a fixed value, which makes the flux stable instead of decaying over time. A critical time has been estimated under idealized flow conditions in which no gradient forms. This strategy is exploited in biology, for example in the blood circulation.
Semiconductor fabrication applications
Semiconductor is a collective term for a series of devices, mainly comprising three categories: two-terminal, three-terminal, and four-terminal devices. A combination of such semiconductor devices forms an integrated circuit.
The relationship between Fick's law and semiconductors is that fabricating a semiconductor involves transferring chemicals or dopants from one layer to another. Fick's law can be used to control and predict this diffusion, by quantifying how far the concentration of the dopants or chemicals moves per meter and per second.
Therefore, different types and doping levels of semiconductors can be fabricated.
Integrated circuit fabrication technologies, model processes like CVD, thermal oxidation, wet oxidation, doping, etc. use diffusion equations obtained from Fick's law.
CVD method of fabricating semiconductors
The wafer is a kind of semiconductor whose silicon substrate is coated with a layer of CVD-created polymer chains and films. This film contains the n-type and p-type dopants and is responsible for dopant conduction. The principle of CVD relies on gas-phase and gas–solid chemical reactions to create thin films.
The viscous flow regime of CVD is driven by a pressure gradient. CVD also includes a diffusion component distinct from the surface diffusion of adatoms. In CVD, reactants and products must also diffuse through a boundary layer of stagnant gas that exists next to the substrate. The total number of steps required for CVD film growth are gas phase diffusion of reactants through the boundary layer, adsorption and surface diffusion of adatoms, reactions on the substrate, and gas phase diffusion of products away through the boundary layer.
The velocity profile for gas flow is:
where:
is the thickness,
is the Reynolds number,
is the length of the substrate,
at any surface,
is viscosity,
is density.
Integrating the boundary-layer thickness from 0 to the substrate length gives the average thickness:
To keep the reaction balanced, reactants must diffuse through the stagnant boundary layer to reach the substrate. So a thin boundary layer is desirable. According to the equations, increasing vo would result in more wasted reactants. The reactants will not reach the substrate uniformly if the flow becomes turbulent. Another option is to switch to a new carrier gas with lower viscosity or density.
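A sketch of a boundary-layer estimate of this kind, assuming the common laminar flat-plate scaling δ(x) ≈ 5x/√Re_x and its average over the substrate length, δ̄ ≈ (10/3)·L/√Re_L; both expressions and the carrier-gas numbers are assumptions standing in for the equations not reproduced above.

```python
import math

def delta_local(x, rho, v, eta):
    """Local boundary-layer thickness, assumed laminar flat-plate scaling delta = 5*x/sqrt(Re_x)."""
    Re_x = rho * v * x / eta
    return 5.0 * x / math.sqrt(Re_x)

def delta_average(L, rho, v, eta):
    """Average thickness over 0..L, i.e. (1/L) * integral of delta(x) dx = (10/3)*L/sqrt(Re_L)."""
    Re_L = rho * v * L / eta
    return (10.0 / 3.0) * L / math.sqrt(Re_L)

# Illustrative (hypothetical) carrier-gas values: density, velocity, viscosity, substrate length.
rho, v, eta, L = 0.5, 0.3, 2e-5, 0.2   # kg/m^3, m/s, Pa*s, m
print(f"delta at x = L : {delta_local(L, rho, v, eta) * 1e3:.2f} mm")
print(f"average delta  : {delta_average(L, rho, v, eta) * 1e3:.2f} mm")
```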
Fick's first law describes diffusion through the boundary layer. In a gas, the diffusivity is determined as a function of pressure (p) and temperature (T).
where:
is the standard pressure,
is the standard temperature,
is the standard diffusivity.
The equation shows that increasing the temperature or decreasing the pressure increases the diffusivity.
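A short sketch, assuming the gas-phase scaling D = D0·(P0/P)·(T/T0)^(3/2) relative to a standard diffusivity D0 at standard pressure P0 and temperature T0; the exponent and exact form are assumptions chosen to match the stated trend, not an expression taken from the text.

```python
def gas_diffusivity(D0, P0, T0, P, T):
    """Gas-phase diffusivity scaled from standard conditions.

    Assumed form: D = D0 * (P0 / P) * (T / T0)**1.5 -- higher T or lower P
    increases D, matching the qualitative statement in the text.
    """
    return D0 * (P0 / P) * (T / T0) ** 1.5

D0 = 2e-5                    # standard diffusivity, m^2/s (hypothetical)
P0, T0 = 101325.0, 300.0     # standard pressure (Pa) and temperature (K)
print(gas_diffusivity(D0, P0, T0, P=0.5 * P0, T=900.0))   # lower pressure, hotter reactor
```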
Fick's first law predicts the flux of the reactants to the substrate and product away from the substrate:
where:
is the thickness ,
is the first reactant's concentration.
By the ideal gas law, the concentration of the gas can be expressed in terms of its partial pressure.
where
is the gas constant,
is the partial pressure gradient.
As a result, Fick's first law tells us we can use a partial pressure gradient to control the diffusivity and control the growth of thin films of semiconductors.
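A minimal sketch combining the boundary-layer flux with the ideal gas law, using c = p/(RT) and an assumed linear gradient across a stagnant layer of thickness δ, so that J ≈ D·(p_bulk − p_surface)/(δ·R·T); the exact flux expression and the numbers below are assumptions.

```python
R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def boundary_layer_flux(D, delta, p_bulk, p_surface, T):
    """Reactant flux through a stagnant boundary layer of thickness delta.

    Uses c = p / (R*T) (ideal gas law) and an assumed linear gradient across
    the layer: J = D * (c_bulk - c_surface) / delta, in mol m^-2 s^-1.
    """
    c_bulk = p_bulk / (R_GAS * T)
    c_surface = p_surface / (R_GAS * T)
    return D * (c_bulk - c_surface) / delta

# Illustrative (hypothetical) CVD-like numbers.
print(boundary_layer_flux(D=4e-5, delta=5e-3, p_bulk=120.0, p_surface=20.0, T=900.0))
```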
In many realistic situations, the simple Fick's law is not an adequate formulation for the semiconductor problem. It only applies to certain conditions, for example, given the semiconductor boundary conditions: constant source concentration diffusion, limited source concentration, or moving boundary diffusion (where junction depth keeps moving into the substrate).
Invalidity of Fickian diffusion
Even though Fickian diffusion was used to model diffusion processes in semiconductor manufacturing (including CVD reactors) in the early days, it often fails to describe diffusion accurately in advanced semiconductor nodes (< 90 nm). This mostly stems from the inability of Fickian diffusion to model diffusion processes at the molecular level and below. In advanced semiconductor manufacturing, it is important to understand movement at atomic scales, which continuum diffusion fails to capture. Today, most semiconductor manufacturers use random-walk models to study and model diffusion processes. This allows the effects of diffusion to be studied in a discrete manner, following the movement of individual atoms, molecules, plasma, etc.
In such a process, the movements of the diffusing species (atoms, molecules, plasma, etc.) are treated as discrete entities, following a random walk through the CVD reactor, boundary layer, material structures, etc. Sometimes the movements may follow a biased random walk, depending on the processing conditions. Statistical analysis is then performed to understand the variation/stochasticity arising from the random walk of the species, which in turn affects the overall process and electrical variations.
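A toy sketch of this discrete picture: each diffusing species takes independent random steps, and ensemble statistics (mean displacement, mean-squared displacement) are examined; step length, step count, and walker count are arbitrary illustrative choices, not values from any real process model.

```python
import random
import statistics

def random_walk_1d(steps, step_length=1.0):
    """Unbiased 1D random walk: returns the final displacement after `steps` hops."""
    x = 0.0
    for _ in range(steps):
        x += random.choice((-step_length, step_length))
    return x

# Ensemble statistics: mean displacement ~ 0, mean-squared displacement ~ steps * step_length^2.
random.seed(1)
walkers = [random_walk_1d(1000) for _ in range(2000)]
msd = statistics.fmean(x * x for x in walkers)
print(f"mean displacement ~ {statistics.fmean(walkers):.2f}, MSD ~ {msd:.1f} (expected ~1000)")
```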
Food production and cooking
The formulation of Fick's first law can explain a variety of complex phenomena in the context of food and cooking: diffusion of molecules such as ethylene promotes plant growth and ripening, diffusion of salt and sugar molecules drives meat brining and marinating, and diffusion of water molecules drives dehydration. Fick's first law can also be used to predict the changing moisture profile across a spaghetti noodle as it hydrates during cooking. These phenomena all concern the spontaneous movement of solute particles driven by a concentration gradient, each situation having its own (constant) diffusivity.
By controlling the concentration gradient, the cooking time, shape of the food, and salting can be controlled.
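An illustrative finite-difference sketch of the kind of prediction mentioned above, treating the noodle as a 1D slab that hydrates from both surfaces and stepping Fick's second law ∂c/∂t = D·∂²c/∂x² explicitly in time; the geometry, diffusivity, and boundary values are hypothetical.

```python
def hydrate_noodle(D=1e-10, half_width=1e-3, n=21, dt=0.05, t_total=600.0):
    """Explicit finite-difference solution of Fick's second law in a 1D slab.

    c = 0 inside the dry noodle at t = 0; c = 1 (saturated) is held at both
    surfaces while boiling. Returns the normalized moisture profile at t_total.
    """
    dx = 2.0 * half_width / (n - 1)
    assert D * dt / dx**2 < 0.5, "explicit scheme stability condition"
    c = [0.0] * n
    c[0] = c[-1] = 1.0                      # surfaces in contact with water
    steps = int(t_total / dt)
    for _ in range(steps):
        new = c[:]
        for i in range(1, n - 1):
            new[i] = c[i] + D * dt / dx**2 * (c[i + 1] - 2.0 * c[i] + c[i - 1])
        new[0] = new[-1] = 1.0
        c = new
    return c

profile = hydrate_noodle()
print(" ".join(f"{v:.2f}" for v in profile))   # moisture from one surface to the other
```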
See also
Advection
Churchill–Bernstein equation
Diffusion
False diffusion
Gas exchange
Mass flux
Maxwell–Stefan diffusion
Nernst–Planck equation
Osmosis
Citations
Further reading
– reprinted in
External links
Fick's equations, Boltzmann's transformation, etc. (with figures and animations)
Fick's Second Law on OpenStax
Diffusion
Eponymous laws of physics
Mathematics in medicine
Physical chemistry
Statistical mechanics
de:Diffusion#Erstes Fick'sches Gesetz | 0.763214 | 0.998341 | 0.761948 |
Standard gravity | The standard acceleration of gravity or standard acceleration of free fall, often called simply standard gravity and denoted by g0 or gn, is the nominal gravitational acceleration of an object in a vacuum near the surface of the Earth. It is a constant defined by standard as 9.80665 m/s2. This value was established by the 3rd General Conference on Weights and Measures (1901, CR 70) and used to define the standard weight of an object as the product of its mass and this nominal acceleration. The acceleration of a body near the surface of the Earth is due to the combined effects of gravity and centrifugal acceleration from the rotation of the Earth (but the latter is small enough to be negligible for most purposes); the total (the apparent gravity) is about 0.5% greater at the poles than at the Equator.
Although the symbol g is sometimes used for standard gravity, g (without a suffix) can also mean the local acceleration due to local gravity and centrifugal acceleration, which varies depending on one's position on Earth (see Earth's gravity). The symbol g should not be confused with G, the gravitational constant, or g, the symbol for gram. The g is also used as a unit for any form of acceleration, with the value defined as above.
The value of defined above is a nominal midrange value on Earth, originally based on the acceleration of a body in free fall at sea level at a geodetic latitude of 45°. Although the actual acceleration of free fall on Earth varies according to location, the above standard figure is always used for metrological purposes. In particular, since it is the ratio of the kilogram-force and the kilogram, its numeric value when expressed in coherent SI units is the ratio of the kilogram-force and the newton, two units of force.
History
Already in the early days of its existence, the International Committee for Weights and Measures (CIPM) proceeded to define a standard thermometric scale, using the boiling point of water. Since the boiling point varies with the atmospheric pressure, the CIPM needed to define a standard atmospheric pressure. The definition they chose was based on the weight of a column of mercury of 760 mm. But since that weight depends on the local gravity, they now also needed a standard gravity. The 1887 CIPM meeting decided as follows:
All that was needed to obtain a numerical value for standard gravity was now to measure the gravitational strength at the International Bureau. This task was given to Gilbert Étienne Defforges of the Geographic Service of the French Army. The value he found, based on measurements taken in March and April 1888, was 9.80991(5) m⋅s−2.
This result formed the basis for determining the value still used today for standard gravity. The third General Conference on Weights and Measures, held in 1901, adopted a resolution declaring as follows:
The numeric value adopted for gn was, in accordance with the 1887 CIPM declaration, obtained by dividing Defforges's result – 980.991 cm⋅s−2 in the cgs system then en vogue – by 1.0003322, while not taking more digits than are warranted considering the uncertainty in the result.
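A quick numerical check of that division (the rounded result, 9.80665 m/s², is the familiar standard figure):

```python
g_defforges_cgs = 980.991      # Defforges's result, cm/s^2
correction = 1.0003322         # reduction factor per the 1887 CIPM declaration

g_standard_cgs = g_defforges_cgs / correction
print(f"{g_standard_cgs:.3f} cm/s^2 = {g_standard_cgs / 100:.5f} m/s^2")  # ~980.665 and 9.80665
```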
Conversions
See also
Gravity of Earth
Gravity map
Seconds pendulum
Theoretical gravity
References
Physical quantities
Gravity
Units of acceleration
Constants | 0.765373 | 0.995505 | 0.761932 |
Yerkes–Dodson law | The Yerkes–Dodson law is an empirical relationship between arousal and performance, originally developed by psychologists Robert M. Yerkes and John Dillingham Dodson in 1908. The law dictates that performance increases with physiological or mental arousal, but only up to a point. When levels of arousal become too high, performance decreases. The process is often illustrated graphically as a bell-shaped curve which increases and then decreases with higher levels of arousal. The original paper (a study of the Japanese house mouse, described as the "dancing mouse") was only referenced ten times over the next half century, yet in four of the citing articles, these findings were described as a psychological "law".
Levels of arousal
Researchers have found that different tasks require different levels of arousal for optimal performance. For example, difficult or intellectually demanding tasks may require a lower level of arousal (to facilitate concentration), whereas tasks demanding stamina or persistence may be performed better with higher levels of arousal (to increase motivation).
Because of task differences, the shape of the curve can be highly variable. For simple or well-learned tasks, the relationship is monotonic, and performance improves as arousal increases. For complex, unfamiliar, or difficult tasks, the relationship between arousal and performance reverses after a point, and performance thereafter declines as arousal increases.
The effect of task difficulty led to the hypothesis that the Yerkes–Dodson Law can be decomposed into two distinct factors as in a bathtub curve. The upward part of the inverted U can be thought of as the energizing effect of arousal. The downward part is caused by negative effects of arousal (or stress) on cognitive processes like attention (e.g., "tunnel vision"), memory, and problem-solving.
There has been research indicating that the correlation suggested by Yerkes and Dodson exists (such as that of Broadhurst (1959), Duffy (1957), and Anderson et al. (1988)), but a cause of the correlation has not yet been successfully established (Anderson, Revelle, & Lynch, 1989).
Alternative models
Other theories and models of arousal do not affirm the Hebb or Yerkes-Dodson curve. The widely supported theory of optimal flow presents a less simplistic understanding of arousal and skill-level match. Reversal theory actively opposes the Yerkes-Dodson law by demonstrating how the psyche operates on the principle of bistability rather than homeostasis.
Relationship to glucocorticoids
A 2007 review by Lupien et al. of the effects of stress hormones (glucocorticoids, GC) on human cognition revealed that memory performance versus circulating levels of glucocorticoids does follow an inverted-U-shaped curve, and the authors noted the resemblance to the Yerkes–Dodson curve. For example, long-term potentiation (LTP) (the process of forming long-term memories) is optimal when glucocorticoid levels are mildly elevated, whereas significant decreases of LTP are observed after adrenalectomy (low GC state) or after exogenous glucocorticoid administration (high GC state).
This review also revealed that in order for a situation to induce a stress response, it has to be interpreted as one or more of the following:
novel
unpredictable
not controllable by the individual
a social evaluative threat (negative social evaluation possibly leading to social rejection).
It has also been shown that elevated levels of glucocorticoids enhance memory for emotionally arousing events but lead more often than not to poor memory for material unrelated to the source of stress/emotional arousal.
See also
Drive theory
Emotion
Emotion and memory
Flashbulb memory
Low arousal theory
References
External links
Behavioral concepts | 0.766982 | 0.993401 | 0.761921 |
Magnetism | Magnetism is the class of physical attributes that occur through a magnetic field, which allows objects to attract or repel each other. Because both electric currents and magnetic moments of elementary particles give rise to a magnetic field, magnetism is one of two aspects of electromagnetism.
The most familiar effects occur in ferromagnetic materials, which are strongly attracted by magnetic fields and can be magnetized to become permanent magnets, producing magnetic fields themselves. Demagnetizing a magnet is also possible. Only a few substances are ferromagnetic; the most common ones are iron, cobalt, nickel, and their alloys.
All substances exhibit some type of magnetism. Magnetic materials are classified according to their bulk susceptibility. Ferromagnetism is responsible for most of the effects of magnetism encountered in everyday life, but there are actually several types of magnetism. Paramagnetic substances, such as aluminium and oxygen, are weakly attracted to an applied magnetic field; diamagnetic substances, such as copper and carbon, are weakly repelled; while antiferromagnetic materials, such as chromium, have a more complex relationship with a magnetic field. The force of a magnet on paramagnetic, diamagnetic, and antiferromagnetic materials is usually too weak to be felt and can be detected only by laboratory instruments, so in everyday life, these substances are often described as non-magnetic.
The strength of a magnetic field always decreases with distance from the magnetic source, though the exact mathematical relationship between strength and distance varies. Many factors can influence the magnetic field of an object including the magnetic moment of the material, the physical shape of the object, both the magnitude and direction of any electric current present within the object, and the temperature of the object.
History
Magnetism was first discovered in the ancient world when people noticed that lodestones, naturally magnetized pieces of the mineral magnetite, could attract iron. The word magnet comes from the Greek term μαγνῆτις λίθος magnētis lithos, "the Magnesian stone, lodestone". In ancient Greece, Aristotle attributed the first of what could be called a scientific discussion of magnetism to the philosopher Thales of Miletus, who lived from about 625 BC to about 545 BC. The ancient Indian medical text Sushruta Samhita describes using magnetite to remove arrows embedded in a person's body.
In ancient China, the earliest literary reference to magnetism lies in a 4th-century BC book named after its author, Guiguzi.
The 2nd-century BC annals, Lüshi Chunqiu, also notes:
"The lodestone makes iron approach; some (force) is attracting it."
The earliest mention of the attraction of a needle is in a 1st-century work Lunheng (Balanced Inquiries): "A lodestone attracts a needle."
The 11th-century Chinese scientist Shen Kuo was the first person to write—in the Dream Pool Essays—of the magnetic needle compass and that it improved the accuracy of navigation by employing the astronomical concept of true north.
By the 12th century, the Chinese were known to use the lodestone compass for navigation. They sculpted a directional spoon from lodestone in such a way that the handle of the spoon always pointed south.
Alexander Neckam, by 1187, was the first in Europe to describe the compass and its use for navigation. In 1269, Peter Peregrinus de Maricourt wrote the Epistola de magnete, the first extant treatise describing the properties of magnets. In 1282, the properties of magnets and the dry compasses were discussed by Al-Ashraf Umar II, a Yemeni physicist, astronomer, and geographer.
Leonardo Garzoni's only extant work, the Due trattati sopra la natura, e le qualità della calamita (Two treatises on the nature and qualities of the magnet), is the first known example of a modern treatment of magnetic phenomena. Written around 1580 and never published, the treatise nevertheless circulated widely. In particular, Garzoni is referred to as an expert in magnetism by Niccolò Cabeo, whose Philosophia Magnetica (1629) is largely a re-adjustment of Garzoni's work. Garzoni's treatise was also known to Giovanni Battista Della Porta.
In 1600, William Gilbert published his De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure (On the Magnet and Magnetic Bodies, and on the Great Magnet the Earth). In this work he describes many of his experiments with his model earth called the terrella. From his experiments, he concluded that the Earth was itself magnetic and that this was the reason compasses pointed north whereas, previously, some believed that it was the pole star Polaris or a large magnetic island on the north pole that attracted the compass.
An understanding of the relationship between electricity and magnetism began in 1819 with work by Hans Christian Ørsted, a professor at the University of Copenhagen, who discovered, by the accidental twitching of a compass needle near a wire, that an electric current could create a magnetic field. This landmark experiment is known as Ørsted's experiment. In 1820, Jean-Baptiste Biot and Félix Savart came up with the Biot–Savart law, which gives an equation for the magnetic field produced by a current-carrying wire. Around the same time, André-Marie Ampère carried out numerous systematic experiments and discovered that the magnetic force between two DC current loops of any shape is equal to the sum of the individual forces that each current element of one circuit exerts on each other current element of the other circuit.
In 1831, Michael Faraday discovered that a time-varying magnetic flux induces a voltage through a wire loop. In 1835, Carl Friedrich Gauss hypothesized, based on Ampère's force law in its original form, that all forms of magnetism arise as a result of elementary point charges moving relative to each other. Wilhelm Eduard Weber advanced Gauss's theory to Weber electrodynamics.
From around 1861, James Clerk Maxwell synthesized and expanded many of these insights into Maxwell's equations, unifying electricity, magnetism, and optics into the field of electromagnetism. However, Gauss's interpretation of magnetism is not fully compatible with Maxwell's electrodynamics. In 1905, Albert Einstein used Maxwell's equations in motivating his theory of special relativity, requiring that the laws held true in all inertial reference frames. Gauss's approach of interpreting the magnetic force as a mere effect of relative velocities thus found its way back into electrodynamics to some extent.
Electromagnetism has continued to develop into the 21st century, being incorporated into the more fundamental theories of gauge theory, quantum electrodynamics, electroweak theory, and finally the standard model.
Sources
Magnetism, at its root, arises from three sources:
Electric current
Spin magnetic moments of elementary particles
Changing electric fields
The magnetic properties of materials are mainly due to the magnetic moments of their atoms' orbiting electrons. The magnetic moments of the nuclei of atoms are typically thousands of times smaller than the electrons' magnetic moments, so they are negligible in the context of the magnetization of materials. Nuclear magnetic moments are nevertheless very important in other contexts, particularly in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI).
Ordinarily, the enormous number of electrons in a material are arranged such that their magnetic moments (both orbital and intrinsic) cancel out. This is due, to some extent, to electrons combining into pairs with opposite intrinsic magnetic moments as a result of the Pauli exclusion principle (see electron configuration), and combining into filled subshells with zero net orbital motion. In both cases, the electrons preferentially adopt arrangements in which the magnetic moment of each electron is canceled by the opposite moment of another electron. Moreover, even when the electron configuration is such that there are unpaired electrons and/or non-filled subshells, it is often the case that the various electrons in the solid will contribute magnetic moments that point in different, random directions so that the material will not be magnetic.
Sometimes, either spontaneously or owing to an applied external magnetic field, each of the electron magnetic moments will be, on average, lined up. A suitable material can then produce a strong net magnetic field.
The magnetic behavior of a material depends on its structure, particularly its electron configuration, for the reasons mentioned above, and also on the temperature. At high temperatures, random thermal motion makes it more difficult for the electrons to maintain alignment.
Types
Diamagnetism
Diamagnetism appears in all materials and is the tendency of a material to oppose an applied magnetic field, and therefore, to be repelled by a magnetic field. However, in a material with paramagnetic properties (that is, with a tendency to enhance an external magnetic field), the paramagnetic behavior dominates. Thus, despite its universal occurrence, diamagnetic behavior is observed only in a purely diamagnetic material. In a diamagnetic material, there are no unpaired electrons, so the intrinsic electron magnetic moments cannot produce any bulk effect. In these cases, the magnetization arises from the electrons' orbital motions, which can be understood classically as follows:
This description is meant only as a heuristic; the Bohr–Van Leeuwen theorem shows that diamagnetism is impossible according to classical physics, and that a proper understanding requires a quantum-mechanical description.
All materials undergo this orbital response. However, in paramagnetic and ferromagnetic substances, the diamagnetic effect is overwhelmed by the much stronger effects caused by the unpaired electrons.
Paramagnetism
In a paramagnetic material there are unpaired electrons; i.e., atomic or molecular orbitals with exactly one electron in them. While paired electrons are required by the Pauli exclusion principle to have their intrinsic ('spin') magnetic moments pointing in opposite directions, causing their magnetic fields to cancel out, an unpaired electron is free to align its magnetic moment in any direction. When an external magnetic field is applied, these magnetic moments will tend to align themselves in the same direction as the applied field, thus reinforcing it.
Ferromagnetism
A ferromagnet, like a paramagnetic substance, has unpaired electrons. However, in addition to the electrons' intrinsic magnetic moment's tendency to be parallel to an applied field, there is also in these materials a tendency for these magnetic moments to orient parallel to each other to maintain a lowered-energy state. Thus, even in the absence of an applied field, the magnetic moments of the electrons in the material spontaneously line up parallel to one another.
Every ferromagnetic substance has its own individual temperature, called the Curie temperature, or Curie point, above which it loses its ferromagnetic properties. This is because the thermal tendency to disorder overwhelms the energy-lowering due to ferromagnetic order.
Ferromagnetism only occurs in a few substances; common ones are iron, nickel, cobalt, their alloys, and some alloys of rare-earth metals.
Magnetic domains
The magnetic moments of atoms in a ferromagnetic material cause them to behave something like tiny permanent magnets. They stick together and align themselves into small regions of more or less uniform alignment called magnetic domains or Weiss domains. Magnetic domains can be observed with a magnetic force microscope to reveal magnetic domain boundaries that resemble white lines in the sketch. There are many scientific experiments that can physically show magnetic fields.
When a domain contains too many molecules, it becomes unstable and divides into two domains aligned in opposite directions so that they stick together more stably.
When exposed to a magnetic field, the domain boundaries move, so that the domains aligned with the magnetic field grow and dominate the structure (dotted yellow area), as shown at the left. When the magnetizing field is removed, the domains may not return to an unmagnetized state. This results in the ferromagnetic material's being magnetized, forming a permanent magnet.
When magnetized strongly enough that the prevailing domain overruns all others to result in only one single domain, the material is magnetically saturated. When a magnetized ferromagnetic material is heated to the Curie point temperature, the molecules are agitated to the point that the magnetic domains lose the organization, and the magnetic properties they cause cease. When the material is cooled, this domain alignment structure spontaneously returns, in a manner roughly analogous to how a liquid can freeze into a crystalline solid.
Antiferromagnetism
In an antiferromagnet, unlike a ferromagnet, there is a tendency for the intrinsic magnetic moments of neighboring valence electrons to point in opposite directions. When all atoms are arranged in a substance so that each neighbor is anti-parallel, the substance is antiferromagnetic. Antiferromagnets have a zero net magnetic moment because adjacent opposite moments cancel out, meaning that they produce no field. Antiferromagnetism is less common than the other types of behavior and is mostly observed at low temperatures. At varying temperatures, antiferromagnets can be seen to exhibit diamagnetic and ferromagnetic properties.
In some materials, neighboring electrons prefer to point in opposite directions, but there is no geometrical arrangement in which each pair of neighbors is anti-aligned. This is called a canted antiferromagnet or spin ice and is an example of geometrical frustration.
Ferrimagnetism
Like ferromagnetism, ferrimagnets retain their magnetization in the absence of a field. However, like antiferromagnets, neighboring pairs of electron spins tend to point in opposite directions. These two properties are not contradictory, because in the optimal geometrical arrangement, there is more magnetic moment from the sublattice of electrons that point in one direction, than from the sublattice that points in the opposite direction.
Most ferrites are ferrimagnetic. The first discovered magnetic substance, magnetite, is a ferrite and was originally believed to be a ferromagnet; Louis Néel disproved this, however, after discovering ferrimagnetism.
Superparamagnetism
When a ferromagnet or ferrimagnet is sufficiently small, it acts like a single magnetic spin that is subject to Brownian motion. Its response to a magnetic field is qualitatively similar to the response of a paramagnet, but much larger.
Nagaoka magnetism
Japanese physicist Yosuke Nagaoka conceived of a type of magnetism in a square, two-dimensional lattice where every lattice node had one electron. If one electron was removed under specific conditions, the lattice's energy would be minimal only when all electrons' spins were parallel.
A variation on this was achieved experimentally by arranging the atoms in a triangular moiré lattice of molybdenum diselenide and tungsten disulfide monolayers. Applying a weak magnetic field and a voltage led to ferromagnetic behavior when 100-150% more electrons than lattice nodes were present. The extra electrons delocalized and paired with lattice electrons to form doublons. Delocalization was prevented unless the lattice electrons had aligned spins. The doublons thus created localized ferromagnetic regions. The phenomenon took place at 140 millikelvins.
Other types of magnetism
Metamagnetism
Molecule-based magnets
Single-molecule magnet
Amorphous magnet
Electromagnet
An electromagnet is a type of magnet in which the magnetic field is produced by an electric current. The magnetic field disappears when the current is turned off. Electromagnets usually consist of a large number of closely spaced turns of wire that create the magnetic field. The wire turns are often wound around a magnetic core made from a ferromagnetic or ferrimagnetic material such as iron; the magnetic core concentrates the magnetic flux and makes a more powerful magnet.
The main advantage of an electromagnet over a permanent magnet is that the magnetic field can be quickly changed by controlling the amount of electric current in the winding. However, unlike a permanent magnet that needs no power, an electromagnet requires a continuous supply of current to maintain the magnetic field.
Electromagnets are widely used as components of other electrical devices, such as motors, generators, relays, solenoids, loudspeakers, hard disks, MRI machines, scientific instruments, and magnetic separation equipment. Electromagnets are also employed in industry for picking up and moving heavy iron objects such as scrap iron and steel. Electromagnetism was discovered in 1820.
Magnetism, electricity, and special relativity
As a consequence of Einstein's theory of special relativity, electricity and magnetism are fundamentally interlinked. Both magnetism lacking electricity, and electricity without magnetism, are inconsistent with special relativity, due to such effects as length contraction, time dilation, and the fact that the magnetic force is velocity-dependent. However, when both electricity and magnetism are taken into account, the resulting theory (electromagnetism) is fully consistent with special relativity. In particular, a phenomenon that appears purely electric or purely magnetic to one observer may be a mix of both to another, or more generally the relative contributions of electricity and magnetism are dependent on the frame of reference. Thus, special relativity "mixes" electricity and magnetism into a single, inseparable phenomenon called electromagnetism, analogous to how general relativity "mixes" space and time into spacetime.
All observations on electromagnetism apply to what might be considered to be primarily magnetism, e.g. perturbations in the magnetic field are necessarily accompanied by a nonzero electric field, and propagate at the speed of light.
Magnetic fields in a material
In a vacuum, B = μ0H, where μ0 is the vacuum permeability.
In a material, B = μ0(H + M).
The quantity μ0M is called magnetic polarization.
If the field H is small, the response of the magnetization M in a diamagnet or paramagnet is approximately linear, M = χH,
the constant of proportionality χ being called the magnetic susceptibility. If so, B = μ0(1 + χ)H.
In a hard magnet such as a ferromagnet, M is not proportional to the field and is generally nonzero even when H is zero (see Remanence).
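A small sketch of these linear-response relations, using M = χH and B = μ0(1 + χ)H; the susceptibility below is an approximate literature value for aluminium, quoted for illustration rather than taken from the text.

```python
MU_0 = 4e-7 * 3.141592653589793   # vacuum permeability, N/A^2 (pre-2019 exact value)

def b_field_in_material(H, chi):
    """Flux density in a linear material: M = chi*H and B = mu0*(H + M) = mu0*(1 + chi)*H."""
    M = chi * H
    return MU_0 * (H + M), M

# Illustrative: an aluminium-like paramagnet, chi ~ 2.2e-5 (approximate literature value).
B, M = b_field_in_material(H=1000.0, chi=2.2e-5)
print(f"M = {M:.3e} A/m, B = {B:.6e} T")
```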
Magnetic force
The phenomenon of magnetism is "mediated" by the magnetic field. An electric current or magnetic dipole creates a magnetic field, and that field, in turn, imparts magnetic forces on other particles that are in the fields.
Maxwell's equations, which simplify to the Biot–Savart law in the case of steady currents, describe the origin and behavior of the fields that govern these forces. Therefore, magnetism is seen whenever electrically charged particles are in motion—for example, from movement of electrons in an electric current, or in certain cases from the orbital motion of electrons around an atom's nucleus. They also arise from "intrinsic" magnetic dipoles arising from quantum-mechanical spin.
The same situations that create magnetic fields—charge moving in a current or in an atom, and intrinsic magnetic dipoles—are also the situations in which a magnetic field has an effect, creating a force. Following is the formula for moving charge; for the forces on an intrinsic dipole, see magnetic dipole.
When a charged particle moves through a magnetic field B, it feels a Lorentz force F given by the cross product F = q v × B,
where
q is the electric charge of the particle, and
v is the velocity vector of the particle.
Because this is a cross product, the force is perpendicular to both the motion of the particle and the magnetic field. It follows that the magnetic force does no work on the particle; it may change the direction of the particle's movement, but it cannot cause it to speed up or slow down. The magnitude of the force is F = q v B sin θ,
where θ is the angle between v and B.
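A minimal sketch of this force, computing F = q·v × B componentwise and checking the magnitude |q|·v·B·sin θ for an electron moving at right angles to a uniform field; the numerical values are illustrative.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, v, B):
    """Magnetic part of the Lorentz force, F = q (v x B)."""
    return tuple(q * c for c in cross(v, B))

# Electron (q = -1.602e-19 C) moving at 1e6 m/s at right angles to a 0.1 T field.
q = -1.602e-19
v = (1e6, 0.0, 0.0)
B = (0.0, 0.0, 0.1)
F = lorentz_force(q, v, B)
speed = math.sqrt(sum(c * c for c in v))
field = math.sqrt(sum(c * c for c in B))
magnitude = abs(q) * speed * field * math.sin(math.pi / 2)   # theta = 90 degrees here
print(F)          # force is perpendicular to both v and B
print(magnitude)  # equals the length of F in this perpendicular geometry
```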
One tool for determining the direction of the velocity vector of a moving charge, the magnetic field, and the force exerted is labeling the index finger "V", the middle finger "B", and the thumb "F" with your right hand. When making a gun-like configuration, with the middle finger crossing under the index finger, the fingers represent the velocity vector, magnetic field vector, and force vector, respectively. See also right-hand rule.
Magnetic dipoles
A very common source of magnetic field found in nature is a dipole, with a "South pole" and a "North pole", terms dating back to the use of magnets as compasses, interacting with the Earth's magnetic field to indicate North and South on the globe. Since opposite ends of magnets are attracted, the north pole of a magnet is attracted to the south pole of another magnet. The Earth's North Magnetic Pole (currently in the Arctic Ocean, north of Canada) is physically a south pole, as it attracts the north pole of a compass.
A magnetic field contains energy, and physical systems move toward configurations with lower energy. When diamagnetic material is placed in a magnetic field, a magnetic dipole tends to align itself in opposed polarity to that field, thereby lowering the net field strength. When ferromagnetic material is placed within a magnetic field, the magnetic dipoles align to the applied field, thus expanding the domain walls of the magnetic domains.
Magnetic monopoles
Since a bar magnet gets its ferromagnetism from electrons distributed evenly throughout the bar, when a bar magnet is cut in half, each of the resulting pieces is a smaller bar magnet. Even though a magnet is said to have a north pole and a south pole, these two poles cannot be separated from each other. A monopole, if such a thing exists, would be a new and fundamentally different kind of magnetic object. It would act as an isolated north pole, not attached to a south pole, or vice versa. Monopoles would carry "magnetic charge" analogous to electric charge. Despite systematic searches since 1931, they have never been observed, and could very well not exist.
Nevertheless, some theoretical physics models predict the existence of these magnetic monopoles. Paul Dirac observed in 1931 that, because electricity and magnetism show a certain symmetry, just as quantum theory predicts that individual positive or negative electric charges can be observed without the opposing charge, isolated South or North magnetic poles should be observable. Using quantum theory Dirac showed that if magnetic monopoles exist, then one could explain the quantization of electric charge—that is, why the observed elementary particles carry charges that are multiples of the charge of the electron.
Certain grand unified theories predict the existence of monopoles which, unlike elementary particles, are solitons (localized energy packets). The initial results of using these models to estimate the number of monopoles created in the Big Bang contradicted cosmological observations—the monopoles would have been so plentiful and massive that they would have long since halted the expansion of the universe. However, the idea of inflation (for which this problem served as a partial motivation) was successful in solving this problem, creating models in which monopoles existed but were rare enough to be consistent with current observations.
Units
SI
Other
gauss – the centimeter-gram-second (CGS) unit of magnetic field (denoted B).
oersted – the CGS unit of magnetizing field (denoted H)
maxwell – the CGS unit for magnetic flux
gamma – a unit of magnetic flux density that was commonly used before the tesla came into use (1.0 gamma = 1.0 nanotesla)
μ0 – common symbol for the permeability of free space (4π × 10−7 newton/(ampere-turn)2)
Living things
Some organisms can detect magnetic fields, a phenomenon known as magnetoception. Some materials in living things are ferromagnetic, though it is unclear if the magnetic properties serve a special function or are merely a byproduct of containing iron. For instance, chitons, a type of marine mollusk, produce magnetite to harden their teeth, and even humans produce magnetite in bodily tissue.
Magnetobiology studies the effects of magnetic fields on living organisms; fields naturally produced by an organism are known as biomagnetism. Many biological organisms are mostly made of water, and because water is diamagnetic, extremely strong magnetic fields can repel these living things.
Interpretation of magnetism by means of relative velocities
In the years after 1820, André-Marie Ampère carried out numerous experiments in which he measured the forces between direct currents. In particular, he also studied the magnetic forces between non-parallel wires. The final result of his work was a force law that is now named after him. In 1835, Carl Friedrich Gauss realized that Ampere's force law in its original form can be explained by a generalization of Coulomb's law.
Gauss's force law states that the electromagnetic force experienced by a point charge, with trajectory , in the vicinity of another point charge, with trajectory , in a vacuum is equal to the central force
,
where is the distance between the charges and is the relative velocity. Wilhelm Eduard Weber confirmed Gauss's hypothesis in numerous experiments. By means of Weber electrodynamics it is possible to explain the static and quasi-static effects in the non-relativistic regime of classical electrodynamics without magnetic field and Lorentz force.
Since 1870, Maxwell electrodynamics has been developed, which postulates that electric and magnetic fields exist. In Maxwell's electrodynamics, the actual electromagnetic force can be calculated using the Lorentz force, which, like the Weber force, is speed-dependent. However, Maxwell's electrodynamics is not fully compatible with the work of Ampère, Gauss and Weber in the quasi-static regime. In particular, Ampère's original force law and the Biot-Savart law are only equivalent if the field-generating conductor loop is closed. Maxwell's electrodynamics therefore represents a break with the interpretation of magnetism by Gauss and Weber, since in Maxwell's electrodynamics it is no longer possible to deduce the magnetic force from a central force.
Quantum-mechanical origin of magnetism
While heuristic explanations based on classical physics can be formulated, diamagnetism, paramagnetism and ferromagnetism can be fully explained only using quantum theory.
A successful model was developed already in 1927, by Walter Heitler and Fritz London, who derived, quantum-mechanically, how hydrogen molecules are formed from hydrogen atoms, i.e. from the atomic hydrogen orbitals and centered at the nuclei A and B, see below. That this leads to magnetism is not at all obvious, but will be explained in the following.
According to the Heitler–London theory, so-called two-body molecular -orbitals are formed, namely the resulting orbital is:
Here the last product means that a first electron, r1, is in an atomic hydrogen-orbital centered at the second nucleus, whereas the second electron runs around the first nucleus. This "exchange" phenomenon is an expression for the quantum-mechanical property that particles with identical properties cannot be distinguished. It is specific not only for the formation of chemical bonds, but also for magnetism. That is, in this connection the term exchange interaction arises, a term which is essential for the origin of magnetism, and which is stronger, roughly by factors 100 and even by 1000, than the energies arising from the electrodynamic dipole-dipole interaction.
As for the spin function , which is responsible for the magnetism, we have the already mentioned Pauli's principle, namely that a symmetric orbital (i.e. with the + sign as above) must be multiplied with an antisymmetric spin function (i.e. with a − sign), and vice versa. Thus:
,
I.e., not only and must be substituted by α and β, respectively (the first entity means "spin up", the second one "spin down"), but also the sign + by the − sign, and finally ri by the discrete values si (= ±); thereby we have and . The "singlet state", i.e. the − sign, means: the spins are antiparallel, i.e. for the solid we have antiferromagnetism, and for two-atomic molecules one has diamagnetism. The tendency to form a (homoeopolar) chemical bond (this means: the formation of a symmetric molecular orbital, i.e. with the + sign) results through the Pauli principle automatically in an antisymmetric spin state (i.e. with the − sign). In contrast, the Coulomb repulsion of the electrons, i.e. the tendency that they try to avoid each other by this repulsion, would lead to an antisymmetric orbital function (i.e. with the − sign) of these two particles, and complementary to a symmetric spin function (i.e. with the + sign, one of the so-called "triplet functions"). Thus, now the spins would be parallel (ferromagnetism in a solid, paramagnetism in two-atomic gases).
The last-mentioned tendency dominates in the metals iron, cobalt and nickel, and in some rare earths, which are ferromagnetic. Most of the other metals, where the first-mentioned tendency dominates, are nonmagnetic (e.g. sodium, aluminium, and magnesium) or antiferromagnetic (e.g. manganese). Diatomic gases are also almost exclusively diamagnetic, and not paramagnetic. However, the oxygen molecule, because of the involvement of π-orbitals, is an exception important for the life-sciences.
The Heitler-London considerations can be generalized to the Heisenberg model of magnetism (Heisenberg 1928).
The explanation of the phenomena is thus essentially based on all the subtleties of quantum mechanics, whereas electrodynamics covers mainly the phenomenology.
See also
Coercivity
Gravitomagnetism
Magnetic hysteresis
Magnetar
Magnetic bearing
Magnetic circuit
Magnetic cooling
Magnetic field viewing film
Magnetic stirrer
Switched-mode power supply
Magnetic structure
Micromagnetism
Neodymium magnet
Plastic magnet
Rare-earth magnet
Spin wave
Spontaneous magnetization
Vibrating-sample magnetometer
Textbooks in electromagnetism
References
Further reading
Bibliography
The Exploratorium Science Snacks – Subject:Physics/Electricity & Magnetism
A collection of magnetic structures – MAGNDATA
Tempora mutantur
Tempora mutantur is a Latin adage that refers to the changes brought about by the passage of time. It also appears in various longer hexametric forms, most commonly Tempora mutantur, nos et mutamur in illis, meaning "Times are changed; we also are changed with them". This hexameter is not found in Classical Latin, but is a variant of phrases of Ovid, to whom it is sometimes mis-attributed. In fact, it dates to 16th-century Germany, the time of the Protestant Reformation, and it subsequently was popularised in various forms.
Wording
Tempora mutantur, nos et mutamur in illis
can be strictly translated as:
"Times are changed; we, too, are changed within them."
Like many adages and proverbial maxims drawn from the Latin cultural tradition, this line is in the hexameter verse used in Greek and Latin epic poetry. All other Latin verses cited in this page are hexameters as well.
The fact that et follows nos and is accented in the hexameter's rhythm gives an emphasis to it. In this position et, normally meaning "and," can take an emphatic meaning and signify "also, too," or "even".
Grammar
"Tempora," a neuter plural and the subject of the first clause, means "times". "Mutantur" is a third person plural present passive, meaning "are changed." "Nos" is the personal pronoun and subject of the second clause, meaning "we," with emphatic force. "Mutamur" is the first person plural present passive, meaning "are changed" as well. "In illis" is an ablative plural referring back to "tempora" and so means "within them". The sentence is also a hexameter verse.
History
Change is an ancient theme in Western philosophy, in which the contribution of the pre-Socratic Heraclitus has been influential. It is summarized in Ancient Greek as panta rhei (πάντα ῥεῖ, "everything flows"). The Latin formulation tempora mutantur is not classical, and does not have a generally accepted attribution – it is often identified as "traditional" – though it is frequently misattributed, particularly to Ovid. It is typically considered a variant of omnia mutantur "everything is changed", specifically from Ovid's Metamorphoses, in the phrase omnia mutantur, nihil interit "everything is changed, nothing perishes". However, the earliest attestation is from the German theologian Caspar Huberinus (1500–1553), who instead uses tempora mutantur as a variant of tempora labuntur "time slips away", from Ovid's Fasti. But the phrase tempora mutantur is in the passive, whereas labuntur is a form of a deponent verb; its passive form conveys an active meaning.
Various longer Latin forms and vernacular translations appear in the 16th and early 17th centuries; these are discussed below.
German
The earliest attestations are in German Latin literature of the 16th century:
Prior to 1554, the Protestant Reformer Caspar Huberinus completes Ovid's verse in Fasti with tempora mutantur. Ovid's Fasti, VI, 771–772 reads:
Tempora labuntur, tacitisque senescimus annis,
et fugiunt freno non remorante dies.
The times slip away, and we grow old with the silent years,
and the days flee unchecked by a rein.
Fasti was popular in the 16th century, and this passage, near the end of the last extant book of the Fasti, is interpreted as expressing the poet's own old age.
Huberinus rewrites the second line as:
Tempora labuntur, tacitisque senescimus annis;
Tempora mutantur, nosque mutamur in illis.
"Times are slipping away, and we get older by (through, during, with, because of) the silent years"
(nosque = the same as nos et, with different hexameter rhythm)
The German translation is added in 1565 by Johannes Nas:
Tempora mutantur et nos mutamur in ipsis;
Die zeit wirdt verendert / und wir in der zeit.
(ipsis = "themselves")
Finally, a couplet was dedicated by Matthew Borbonius in 1595 to emperor Lothair I. It was also selected for the anthology Delitiae Poetarum Germanorum, 1612, vol. 1, p. 685.
Omnia mutantur, nos et mutamur in illis
Illa vices quasdam res habet, illa vices.
"All things are changed, and we are changed with them;
that matter has some changes, it (does have) changes."
English
In English vernacular literature it is quoted as "proverbial" in William Harrison's Description of England, 1577, p. 170, part of Holinshed's Chronicles, in the form:
Tempora mutantur, et nos mutamur in illis
with the translation:
"The times change, and we change with them."
It appears in John Lyly's Euphues I 276, 1578, as cited in Dictionary of Proverbs, by George Latimer Apperson, Martin Manser, p. 582 as
"The tymes are chaunged as Ouid sayeth, and wee are chaunged in the times."
in modern spelling:
"The times are changed, as Ovid says, and we are changed in the times."
It gained popularity as a couplet by John Owen, in his popular Epigrammata, 1613 Lib. I. ad Edoardum Noel, epigram 58 O Tempora!:
Tempora mutantur, nos et mutamur in illis;
Quo modo? fit semper tempore pejor homo.
in direct translation (of second line):
"How's that? The man (mankind) always gets worse with time"
Translated by Harvey, 1677, as:
"The Times are Chang'd, and in them Chang'd are we:
How? Man as Times grow worse, grows worse we see."
Incorrect attributions
It is incorrectly attributed to Cicero, presumably a confusion with his O tempora o mores! It is sometimes attributed to Borbonius (1595), though he was predated by over 50 years by others.
Georg Büchmann, Geflügelte Worte: Der Citatenschatz des deutschen Volkes, ed. K. Weidling, 1898 edition, p. 506, confuses historical and poetical reality naming emperor Lothair I as the source and the couplet by Matthias Borbonius printed in 1612 as the quote.
Brewer's Dictionary 1898 edition confuses Borbonius' first name (Matthew) with another poet (Nicholas), the entry reading:
"Omnia mutantur, nos et mutamur in illis," is by Nicholas Borbonius, a Latin poet of the sixteenth century. Dr. Sandys says that the Emperor Lothair, of the Holy Roman Empire, had already said, "Tempora mutantur, nos et mutamur in illis."
Cultural references
Joseph Haydn gave his Symphony No. 64 the title Tempora mutantur.
In James Joyce's novel A Portrait of the Artist as a Young Man, the cronies of the protagonist's (Stephen Dedalus's) father ask him to prove his ability in Latin by asking him "whether it was correct to say: tempora mutantur nos et mutamur or tempora mutantur et nos mutamur in illis." The phrase is meant to be an ironic reference to the decline in fortunes of the Dedalus family at this point in the novel.
In Pierson v. Post, dissenting judge and future US Supreme Court Justice Henry Brockholst Livingston argued "If any thing, therefore, in the digests or pandects shall appear to militate against the defendant in error, who, on this occasion, was foxhunter, we have only to say tempora mutantur, and if men themselves change with the times, why should not laws also undergo an alteration?"
The English print-maker William Washington (1885-1956) added the adage as an inscription to his 1929 engraving, St Olave's, Southwark, which depicts the demolition of St Olave's Church, Southwark, London, in 1928 to make way for modern development.
The adage is inscribed on the Convention Center at Caesars Palace in Las Vegas.
In July 2017 "Tempora mutantur, et nos mutamur in illis" was the first tweet of UK Conservative politician Jacob Rees-Mogg.
In the Yes, Prime Minister episode ‘The National Education Service’, Cabinet Secretary Sir Humphrey Appleby recites the phrase after Prime Minister Jim Hacker claims that "hardly anybody knows [Latin] nowadays".
See also
Impermanence
References
External links
Latin philosophical phrases
Philosophy of time
Bohr–Sommerfeld model
The Bohr–Sommerfeld model (also known as the Sommerfeld model or Bohr–Sommerfeld theory) was an extension of the Bohr model to allow elliptical orbits of electrons around an atomic nucleus. Bohr–Sommerfeld theory is named after Danish physicist Niels Bohr and German physicist Arnold Sommerfeld. Sommerfeld showed that, if electronic orbits are elliptical instead of circular (as in Bohr's model of the atom), the fine-structure of the hydrogen atom can be described.
The Bohr–Sommerfeld model added to the quantized angular momentum condition of the Bohr model a radial quantization (a condition due to William Wilson, the Wilson–Sommerfeld quantization condition):
\int_0^T p_r \, dq_r = n h
where p_r is the radial momentum canonically conjugate to the coordinate q, which is the radial position, and T is one full orbital period. The integral is the action of action-angle coordinates. This condition, suggested by the correspondence principle, is the only one possible, since the quantum numbers are adiabatic invariants.
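As an illustration of how such a quantization condition works in practice, the short Python sketch below (an assumed example in units where h = 1, not part of the original literature) evaluates the action integral numerically for a one-dimensional harmonic oscillator and confirms that setting it equal to nh reproduces the old-quantum-theory energies E = nhf.

```python
import numpy as np
from scipy.integrate import quad

# Assumed parameters, working in units where Planck's constant h = 1.
m, omega, h = 1.0, 2.0, 1.0
f = omega / (2 * np.pi)                   # orbital frequency of the oscillator

def action(E):
    """Closed-orbit action J = ∮ p dq = 2 * ∫_{-A}^{+A} sqrt(2m(E - V(x))) dx."""
    A = np.sqrt(2 * E / (m * omega**2))   # classical turning point
    p = lambda x: np.sqrt(np.maximum(2 * m * (E - 0.5 * m * omega**2 * x**2), 0.0))
    val, _ = quad(p, -A, A)
    return 2 * val

for n in (1, 2, 3):
    E = n * h * f                         # energy implied by the condition ∮ p dq = n h
    print(n, action(E) / h)               # the ratio comes out ≈ n
```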
History
In 1913, Niels Bohr displayed rudiments of the later defined correspondence principle and used it to formulate a model of the hydrogen atom which explained its line spectrum. In the next few years Arnold Sommerfeld extended the quantum rule to arbitrary integrable systems making use of the principle of adiabatic invariance of the quantum numbers introduced by Hendrik Lorentz and Albert Einstein. Sommerfeld made a crucial contribution by quantizing the z-component of the angular momentum, which in the old quantum era was called "space quantization" (German: Richtungsquantelung). This allowed the orbits of the electron to be ellipses instead of circles, and introduced the concept of quantum degeneracy. The theory would have correctly explained the Zeeman effect, except for the issue of electron spin. Sommerfeld's model was much closer to the modern quantum mechanical picture than Bohr's.
In the 1950s Joseph Keller updated Bohr–Sommerfeld quantization using Einstein's interpretation of 1917, now known as Einstein–Brillouin–Keller method. In 1971, Martin Gutzwiller took into account that this method only works for integrable systems and derived a semiclassical way of quantizing chaotic systems from path integrals.
Predictions
The Sommerfeld model predicted that the magnetic moment of an atom measured along an axis will only take on discrete values, a result which seems to contradict rotational invariance but which was confirmed by the Stern–Gerlach experiment. This was a significant step in the development of quantum mechanics. It also described the possibility of atomic energy levels being split by a magnetic field (called the Zeeman effect). Walther Kossel worked with Bohr and Sommerfeld on the Bohr–Sommerfeld model of the atom introducing two electrons in the first shell and eight in the second.
Issues
The Bohr–Sommerfeld model was fundamentally inconsistent and led to many paradoxes. The magnetic quantum number measured the tilt of the orbital plane relative to the xy plane, and it could only take a few discrete values. This contradicted the obvious fact that an atom could be turned this way and that relative to the coordinates without restriction. The Sommerfeld quantization can be performed in different canonical coordinates and sometimes gives different answers. The incorporation of radiation corrections was difficult, because it required finding action-angle coordinates for a combined radiation/atom system, which is difficult when the radiation is allowed to escape. The whole theory did not extend to non-integrable motions, which meant that many systems could not be treated even in principle. In the end, the model was replaced by the modern quantum-mechanical treatment of the hydrogen atom, which was first given by Wolfgang Pauli in 1925, using Heisenberg's matrix mechanics. The current picture of the hydrogen atom is based on the atomic orbitals of wave mechanics, which Erwin Schrödinger developed in 1926.
However, this is not to say that the Bohr–Sommerfeld model was without its successes. Calculations based on the Bohr–Sommerfeld model were able to accurately explain a number of more complex atomic spectral effects. For example, up to first-order perturbations, the Bohr model and quantum mechanics make the same predictions for the spectral line splitting in the Stark effect. At higher-order perturbations, however, the Bohr model and quantum mechanics differ, and measurements of the Stark effect under high field strengths helped confirm the correctness of quantum mechanics over the Bohr model. The prevailing theory behind this difference lies in the shapes of the orbitals of the electrons, which vary according to the energy state of the electron.
The Bohr–Sommerfeld quantization conditions lead to questions in modern mathematics. Consistent semiclassical quantization condition requires a certain type of structure on the phase space, which places topological limitations on the types of symplectic manifolds which can be quantized. In particular, the symplectic form should be the curvature form of a connection of a Hermitian line bundle, which is called a prequantization.
Relativistic orbit
Arnold Sommerfeld derived the relativistic solution of atomic energy levels. We will start this derivation with the relativistic equation for energy in the electric potential
After substitution we get
For momentum , and their ratio the equation of motion is (see Binet equation)
with solution
The angular shift of periapsis per revolution is given by
With the quantum conditions
\oint p_r \, dr = n_r h
and
\oint L \, d\varphi = n_\varphi h
we will obtain energies
W = m_\mathrm{e} c^2 \left[ \left( 1 + \frac{\alpha^2}{\left( n_r + \sqrt{n_\varphi^2 - \alpha^2} \right)^2} \right)^{-1/2} - 1 \right]
where α is the fine-structure constant. This solution (using substitutions for quantum numbers) is equivalent to the solution of the Dirac equation. Nevertheless, both solutions fail to predict the Lamb shifts.
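A small numerical sketch makes the size of the relativistic correction explicit. The script below is illustrative only (the constants and the choice of measuring energy relative to the rest energy are assumptions consistent with the formula above); it evaluates the Sommerfeld levels of hydrogen, compares them with the non-relativistic Bohr levels, and reproduces the familiar n = 2 fine-structure splitting of roughly 4.5×10⁻⁵ eV.

```python
import numpy as np

# Assumed constants; n_r = radial and n_phi = azimuthal quantum number,
# with n = n_r + n_phi the principal quantum number of the old quantum theory.
alpha = 7.2973525693e-3      # fine-structure constant
mc2_eV = 510998.95           # electron rest energy in eV

def sommerfeld_energy(n_r, n_phi):
    """Bound-state energy (eV), relative to the rest energy, from Sommerfeld's formula."""
    denom = (n_r + np.sqrt(n_phi**2 - alpha**2)) ** 2
    return mc2_eV * (1.0 / np.sqrt(1.0 + alpha**2 / denom) - 1.0)

def bohr_energy(n):
    """Non-relativistic Bohr energy (eV)."""
    return -0.5 * alpha**2 * mc2_eV / n**2

print(bohr_energy(1), sommerfeld_energy(0, 1))            # -13.606 eV vs. a slightly deeper value
# Fine structure of n = 2: the two allowed (n_r, n_phi) pairs are no longer degenerate.
print(sommerfeld_energy(1, 1) - sommerfeld_energy(0, 2))  # ≈ -4.5e-5 eV splitting
```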
See also
Bohr model
Old quantum theory
References
Atomic physics
Hydrogen physics
Foundational quantum physics
History of physics
Niels Bohr
Arnold Sommerfeld
Old quantum theory
Cyclotron resonance
Cyclotron resonance describes the interaction of external forces with charged particles experiencing a magnetic field, thus moving on a circular path. It is named after the cyclotron, a cyclic particle accelerator that utilizes an oscillating electric field tuned to this resonance to add kinetic energy to charged particles.
Cyclotron resonance frequency
The cyclotron frequency or gyrofrequency is the frequency of a charged particle moving perpendicular to the direction of a uniform magnetic field B (constant magnitude and direction).
Derivation
Since the motion in an orthogonal and constant magnetic field is always circular, the cyclotron frequency is given by equality of centripetal force and magnetic Lorentz force:
\frac{m v^2}{r} = q v B
with the particle mass m, its charge q, velocity v, and the circular path radius r, also called gyroradius.
The angular speed is then:
\omega = \frac{v}{r} = \frac{q B}{m}.
Giving the rotational frequency (being the cyclotron frequency) as:
f = \frac{\omega}{2 \pi} = \frac{q B}{2 \pi m},
It is notable that the cyclotron frequency is independent of the radius and velocity and therefore independent of the particle's kinetic energy; all particles with the same charge-to-mass ratio rotate around magnetic field lines with the same frequency. This is only true in the non-relativistic limit, and underpins the principle of operation of the cyclotron.
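As a quick numerical illustration (the 1 T field value is assumed for the example), the snippet below evaluates f = qB/(2πm) for an electron and a proton and shows that changing the particle's speed changes only the gyroradius, not the frequency.

```python
import math

# Physical constants and an assumed field value for illustration.
q_e = 1.602176634e-19      # elementary charge (C)
m_e = 9.1093837015e-31     # electron mass (kg)
m_p = 1.67262192369e-27    # proton mass (kg)
B = 1.0                    # magnetic flux density (T), chosen for the example

def cyclotron_frequency(q, m, B):
    """Non-relativistic cyclotron frequency in Hz."""
    return q * B / (2 * math.pi * m)

print(f"electron: {cyclotron_frequency(q_e, m_e, B):.3e} Hz")   # ~2.8e10 Hz
print(f"proton:   {cyclotron_frequency(q_e, m_p, B):.3e} Hz")   # ~1.5e7 Hz

# The same electron at any non-relativistic speed circles at the same frequency;
# only the gyroradius r = m*v/(q*B) grows with speed.
for v in (1e5, 1e6, 1e7):                                       # m/s
    print(v, m_e * v / (q_e * B))                               # radius changes, frequency does not
```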
The cyclotron frequency is also useful in non-uniform magnetic fields, in which (assuming slow variation of the magnitude of the magnetic field) the movement is approximately helical: in the direction parallel to the magnetic field, the motion is uniform, whereas in the plane perpendicular to the magnetic field the motion is, as before, circular. The sum of these two motions gives a trajectory in the shape of a helix.
When the charged particle begins to approach relativistic speeds, the centripetal force should be multiplied by the Lorentz factor, yielding a corresponding factor in the angular frequency:
\omega = \frac{q B}{\gamma m}.
Gaussian units
The above is for SI units. In some cases, the cyclotron frequency is given in Gaussian units. In Gaussian units, the Lorentz force differs by a factor of 1/c, the speed of light, which leads to:
f = \frac{q B}{2 \pi m c}.
For materials with little or no magnetism (i.e. μ ≈ 1), B ≈ H, so we can use the easily measured magnetic field intensity H instead of B:
f = \frac{q H}{2 \pi m c}.
Note that converting this expression to SI units introduces a factor of the vacuum permeability.
Effective mass
For some materials, the motion of electrons follows loops that depend on the applied magnetic field, but not in exactly the same way. For these materials, we define a cyclotron effective mass, m*, so that:
f = \frac{q B}{2 \pi m^*}.
See also
Ion cyclotron resonance
Electron cyclotron resonance
References
External links
Calculate Cyclotron frequency with Wolfram Alpha
Condensed matter physics
Electric and magnetic fields in matter
Accelerator physics
Scientific techniques
Neural radiance field
A neural radiance field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from two-dimensional images. The NeRF model enables downstream applications of novel view synthesis, scene geometry reconstruction, and obtaining the reflectance properties of the scene. Additional scene properties such as camera poses may also be jointly learned. First introduced in 2020, it has since gained significant attention for its potential applications in computer graphics and content creation.
Algorithm
The NeRF algorithm represents a scene as a radiance field parametrized by a deep neural network (DNN). The network predicts a volume density and view-dependent emitted radiance given the spatial location (x, y, z) and viewing direction in Euler angles (θ, Φ) of the camera. By sampling many points along camera rays, traditional volume rendering techniques can produce an image.
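The volume-rendering step can be summarized in a few lines of numpy. The sketch below is a minimal illustration of the standard quadrature (the sample spacing, density scale and colors are made-up values, and in a real NeRF the densities and colors would come from the trained network rather than being hard-coded): each sample contributes its color weighted by its opacity and by the transmittance accumulated in front of it.

```python
import numpy as np

def render_ray(sigmas, colors, t_vals):
    """Alpha-composite densities/colors sampled along one camera ray into an RGB pixel."""
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)              # distances between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)                         # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # accumulated transmittance
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)                  # (3,) RGB estimate

# Toy usage: 64 samples along a ray, with a dense red "surface" near the middle.
t_vals = np.linspace(2.0, 6.0, 64)
sigmas = 50.0 * np.exp(-0.5 * ((t_vals - 4.0) / 0.1) ** 2)
colors = np.tile(np.array([1.0, 0.1, 0.1]), (64, 1))
print(render_ray(sigmas, colors, t_vals))                           # ≈ [1.0, 0.1, 0.1]
```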
Data collection
A NeRF needs to be retrained for each unique scene. The first step is to collect images of the scene from different angles and their respective camera pose. These images are standard 2D images and do not require a specialized camera or software. Any camera is able to generate datasets, provided the settings and capture method meet the requirements for SfM (Structure from Motion).
This requires tracking of the camera position and orientation, often through some combination of SLAM, GPS, or inertial estimation. Researchers often use synthetic data to evaluate NeRF and related techniques. For such data, images (rendered through traditional non-learned methods) and respective camera poses are reproducible and error-free.
Training
For each sparse viewpoint (image and camera pose) provided, camera rays are marched through the scene, generating a set of 3D points with a given radiance direction (into the camera). For these points, volume density and emitted radiance are predicted using the multi-layer perceptron (MLP). An image is then generated through classical volume rendering. Because this process is fully differentiable, the error between the predicted image and the original image can be minimized with gradient descent over multiple viewpoints, encouraging the MLP to develop a coherent model of the scene.
Variations and improvements
Early versions of NeRF were slow to optimize and required that all input views were taken with the same camera in the same lighting conditions. These performed best when limited to orbiting around individual objects, such as a drum set, plants or small toys. Since the original paper in 2020, many improvements have been made to the NeRF algorithm, with variations for special use cases.
Fourier feature mapping
In 2020, shortly after the release of NeRF, the addition of Fourier Feature Mapping improved training speed and image accuracy. Deep neural networks struggle to learn high frequency functions in low dimensional domains; a phenomenon known as spectral bias. To overcome this shortcoming, points are mapped to a higher dimensional feature space before being fed into the MLP.
\gamma(\mathbf{v}) = \left[ a_1 \cos(2 \pi \mathbf{b}_1^\mathsf{T} \mathbf{v}),\; a_1 \sin(2 \pi \mathbf{b}_1^\mathsf{T} \mathbf{v}),\; \ldots,\; a_m \cos(2 \pi \mathbf{b}_m^\mathsf{T} \mathbf{v}),\; a_m \sin(2 \pi \mathbf{b}_m^\mathsf{T} \mathbf{v}) \right]^\mathsf{T}
where v is the input point, bj are the frequency vectors, and aj are coefficients.
This allows for rapid convergence to high frequency functions, such as pixels in a detailed image.
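A minimal sketch of such a mapping is shown below (the Gaussian random frequency matrix and its scale are illustrative choices, not values prescribed by NeRF): each low-dimensional point is projected onto a set of random frequencies and replaced by the sines and cosines of those projections before being fed to the MLP.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(v, B):
    """v: (N, d) input points;  B: (m, d) frequency matrix;  returns (N, 2m) features."""
    proj = 2.0 * np.pi * v @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

d, m = 3, 128                              # 3-D positions lifted to 256 features
B = rng.normal(scale=10.0, size=(m, d))    # larger scale -> higher-frequency detail
points = rng.uniform(size=(5, d))
print(fourier_features(points, B).shape)   # (5, 256)
```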
Bundle-adjusting neural radiance fields
One limitation of NeRFs is the requirement of knowing accurate camera poses to train the model. Often, pose estimation methods are not completely accurate, nor is the camera pose always possible to know. These imperfections result in artifacts and suboptimal convergence. So, a method was developed to optimize the camera pose along with the volumetric function itself. Called Bundle-Adjusting Neural Radiance Field (BARF), the technique uses a dynamic low-pass filter to go from coarse to fine adjustment, minimizing error by finding the geometric transformation to the desired image. This corrects imperfect camera poses and greatly improves the quality of NeRF renders.
Multiscale representation
Conventional NeRFs struggle to represent detail at all viewing distances, producing blurry images up close and overly aliased images from distant views. In 2021, researchers introduced a technique to improve the sharpness of details at different viewing scales known as mip-NeRF (the name comes from mipmap). Rather than sampling a single ray per pixel, the technique fits a Gaussian to the conical frustum cast by the camera. This improvement effectively anti-aliases across all viewing scales. mip-NeRF also reduces overall image error and converges faster at roughly half the size of a ray-based NeRF.
Learned initializations
In 2021, researchers applied meta-learning to assign initial weights to the MLP. This rapidly speeds up convergence by effectively giving the network a head start in gradient descent. Meta-learning also allowed the MLP to learn an underlying representation of certain scene types. For example, given a dataset of famous tourist landmarks, an initialized NeRF could partially reconstruct a scene given one image.
NeRF in the wild
Conventional NeRFs are vulnerable to slight variations in input images (objects, lighting) often resulting in ghosting and artifacts. As a result, NeRFs struggle to represent dynamic scenes, such as bustling city streets with changes in lighting and dynamic objects. In 2021, researchers at Google developed a new method for accounting for these variations, named NeRF in the Wild (NeRF-W). This method splits the neural network (MLP) into three separate models. The main MLP is retained to encode the static volumetric radiance. However, it operates in sequence with a separate MLP for appearance embedding (changes in lighting, camera properties) and an MLP for transient embedding (changes in scene objects). This allows the NeRF to be trained on diverse photo collections, such as those taken by mobile phones at different times of day.
Relighting
In 2021, researchers added more outputs to the MLP at the heart of NeRFs. The output now included: volume density, surface normal, material parameters, distance to the first surface intersection (in any direction), and visibility of the external environment in any direction. The inclusion of these new parameters lets the MLP learn material properties, rather than pure radiance values. This facilitates a more complex rendering pipeline, calculating direct and global illumination, specular highlights, and shadows. As a result, the NeRF can render the scene under any lighting conditions with no re-training.
Plenoctrees
Although NeRFs had reached high levels of fidelity, their costly compute time made them useless for many applications requiring real-time rendering, such as VR/AR and interactive content. Introduced in 2021, Plenoctrees (plenoptic octrees) enabled real-time rendering of pre-trained NeRFs through division of the volumetric radiance function into an octree. Rather than assigning a radiance direction into the camera, viewing direction is taken out of the network input and spherical radiance is predicted for each region. This makes rendering over 3000x faster than conventional NeRFs.
Sparse Neural Radiance Grid
Similar to Plenoctrees, this method enabled real-time rendering of pretrained NeRFs. To avoid querying the large MLP for each point, this method bakes NeRFs into Sparse Neural Radiance Grids (SNeRG). A SNeRG is a sparse voxel grid containing opacity and color, with learned feature vectors to encode view-dependent information. A lightweight, more efficient MLP is then used to produce view-dependent residuals to modify the color and opacity. To enable this compressive baking, small changes to the NeRF architecture were made, such as running the MLP once per pixel rather than for each point along the ray. These improvements make SNeRG extremely efficient, outperforming Plenoctrees.
Instant NeRFs
In 2022, researchers at Nvidia enabled real-time training of NeRFs through a technique known as Instant Neural Graphics Primitives. An innovative input encoding reduces computation, enabling real-time training of a NeRF, an improvement orders of magnitude above previous methods. The speedup stems from the use of spatial hash functions, which have constant-time access, and parallelized architectures which run fast on modern GPUs.
Related techniques
Plenoxels
Plenoxel (plenoptic volume element) uses a sparse voxel representation instead of a volumetric approach as seen in NeRFs. Plenoxel also completely removes the MLP, instead directly performing gradient descent on the voxel coefficients. Plenoxel can match the fidelity of a conventional NeRF in orders of magnitude less training time. Published in 2022, this method disproved the importance of the MLP, showing that the differentiable rendering pipeline is the critical component.
Gaussian splatting
Gaussian splatting is a newer method that can outperform NeRF in render time and fidelity. Rather than representing the scene as a volumetric function, it uses a sparse cloud of 3D gaussians. First, a point cloud is generated (through structure from motion) and converted to gaussians of initial covariance, color, and opacity. The gaussians are directly optimized through stochastic gradient descent to match the input image. This saves computation by removing empty space and foregoing the need to query a neural network for each point. Instead, simply "splat" all the gaussians onto the screen and they overlap to produce the desired image.
Photogrammetry
Traditional photogrammetry is not neural, instead using robust geometric equations to obtain 3D measurements. NeRFs, unlike photogrammetric methods, do not inherently produce dimensionally accurate 3D geometry. While their results are often sufficient for extracting accurate geometry (e.g. via marching cubes), the process is fuzzy, as with most neural methods. This limits NeRF to cases where the output image is valued, rather than raw scene geometry. However, NeRFs excel in situations with unfavorable lighting. For example, photogrammetric methods completely break down when trying to reconstruct reflective or transparent objects in a scene, while a NeRF is able to infer the geometry.
Applications
NeRFs have a wide range of applications, and are starting to grow in popularity as they become integrated into user-friendly applications.
Content creation
NeRFs have huge potential in content creation, where on-demand photorealistic views are extremely valuable. The technology democratizes a space previously only accessible by teams of VFX artists with expensive assets. Neural radiance fields now allow anyone with a camera to create compelling 3D environments. NeRF has been combined with generative AI, allowing users with no modelling experience to instruct changes in photorealistic 3D scenes. NeRFs have potential uses in video production, computer graphics, and product design.
Interactive content
The photorealism of NeRFs make them appealing for applications where immersion is important, such as virtual reality or videogames. NeRFs can be combined with classical rendering techniques to insert synthetic objects and create believable virtual experiences.
Medical imaging
NeRFs have been used to reconstruct 3D CT scans from sparse or even single X-ray views. The model demonstrated high fidelity renderings of chest and knee data. If adopted, this method can save patients from excess doses of ionizing radiation, allowing for safer diagnosis.
Robotics and autonomy
The unique ability of NeRFs to understand transparent and reflective objects makes them useful for robots interacting in such environments. The use of NeRF allowed a robot arm to precisely manipulate a transparent wine glass; a task where traditional computer vision would struggle.
NeRFs can also generate photorealistic human faces, making them valuable tools for human-computer interaction. Traditionally rendered faces can be uncanny, while other neural methods are too slow to run in real-time.
References
Machine learning algorithms
Einstein–Oppenheimer relationship
Albert Einstein and J. Robert Oppenheimer were twentieth century physicists who made pioneering contributions to physics. From 1947 to 1955 they were colleagues at the Institute for Advanced Study (IAS). Belonging to different generations, Einstein and Oppenheimer became representative figures for the relationship between "science and power", as well as for "contemplation and utility" in science.
Overview
In 1919, after the bending of light from faraway stars passing near the Sun – predicted earlier by Einstein's theory of gravity – was confirmed by observation, Albert Einstein was acclaimed as “the most revolutionary innovator in physics” since Isaac Newton. J. Robert Oppenheimer, called the American physics community's "boy-wonder" in the 1930s, became a popular figure from 1945 onwards after overseeing the first ever successful test of nuclear weapons.
Both Einstein and Oppenheimer were born into nonobservant Jewish families.
Belonging to different generations – with the full development of quantum mechanics by 1925 marking the dividing line – Einstein (1879–1955) and Oppenheimer (1904–1967) represented the shift, from the mid-1920s onward, toward being either a theoretical physicist or an experimental physicist, since the division of labor made it rare to be both.
Einstein and Oppenheimer, who incorporated different modes of approach in their achievements, became emblematic of the relationship between "science and power", as well as of "contemplation and utility" in science. When the first ever nuclear weapons were successfully tested in 1945, Oppenheimer was acknowledged for bringing forth to the world the astounding "instrumental power of science". Einstein, after facing criticism for having "participated" in the creation of the atomic bomb, answered in 1950 that, when he contemplated the relationship between mass and energy in 1905, he had no idea that it could be used for military purposes in any way, and maintained that he had always been a "convinced pacifist".
While Einstein engaged in the pursuit of what he called "Unity" in the complex phenomena of the Universe, Oppenheimer engaged in the establishment of a "Unified" framework at the Institute for Advanced Study, which would comprise all the academic disciplines of knowledge that can be pursued. Einstein was markedly individualistic in his approach to physics. He had only a few students, and was uninterested in, if not adversarial toward, formal institutions and politics. Oppenheimer was more collaborative and embraced collective scientific work. He was a more successful teacher and more immersed in political and institutional realms. Oppenheimer emerged as a powerful political 'insider', a role that Einstein never embraced but instead wondered why Oppenheimer desired such power. Despite their differences in stances, both Oppenheimer and Einstein were regarded as "deeply suspicious" figures by the authorities, specifically by J. Edgar Hoover.
With the advent of modern physics in the twentieth century changing the world radically, both Einstein and Oppenheimer grappled with a metaphysics that could provide an ethical framework for human actions. Einstein turned to the philosophical works of Spinoza and Schopenhauer, along with an attachment to the European Enlightenment heritage. Oppenheimer became engrossed in Eastern philosophy, with particular interest in the Bhagavad Gita, and an affinity with the American philosophical tradition of pragmatism.
Association with each other
Oppenheimer met Einstein for the first time in January 1932 when the latter visited Caltech as part of his round-the-world trip during 1931-32.
In 1939, Einstein published a paper that argued against the existence of black holes, using his own general theory of relativity to arrive at this conclusion. A few months after Einstein rejected the existence of black holes, Oppenheimer and his student Hartland Snyder published a paper that revealed, for the first time, using Einstein's general theory of relativity, how black holes would form. Though Oppenheimer and Einstein later met, there is no record of them having discussed black holes.
When in 1939 the general public became aware of the Einstein–Szilard letter, which urged the US government to initiate the Manhattan Project for the development of nuclear weapons, Einstein was credited for foreseeing the destructive power of the atom with his mass–energy equivalence formula. Einstein played an active role in the development of US nuclear weapons by being an advisor to the research that ensued; this was in contrast to the common belief that his role was limited to only signing a letter. During this time, the public linked Einstein with Oppenheimer, who then happened to be the scientific director of the Manhattan Project.
In 1945, when Oppenheimer and Pauli were being considered for a professorial position at an institute, Einstein and Hermann Weyl wrote a letter that recommended Pauli over Oppenheimer.
After the end of World War II, both Einstein and Oppenheimer lived and worked in Princeton at the Institute for Advanced Study: Einstein was a professor there, while Oppenheimer was its director and a professor of physics from 1947 to 1966. They had their offices down the hall from each other. Einstein and Oppenheimer became colleagues and conversed with each other occasionally. They saw each other socially, with Einstein once attending dinner at the Oppenheimers' in 1948. At the Institute, Oppenheimer considered general relativity to be an area of physics that would not be of much benefit to the efforts of physicists, partly due to lack of observational data and due to conceptual and technical difficulties. He actively prohibited people from taking up these problems at the institute. Furthermore, he forbade Institute members from having contacts with Einstein. For one of Einstein's birthdays, Oppenheimer gifted him a new FM radio and had an antenna installed on his house so that he might listen to New York Philharmonic concerts from Manhattan, about 50 miles away from Princeton. Oppenheimer did not provide an article to the July 1949 issue of Reviews of Modern Physics, which was dedicated to the seventieth birthday of Einstein.
In October 1954, when an honorary doctorate was to be conferred to Einstein at Princeton, Oppenheimer made himself unavailable at the last moment (despite being "begged" to attend the event); he informed the convocation committee that he had to be out of town on the day of convocation. Earlier, in May 1954 when the Emergency Civil Liberties Committee decided to honour Einstein on his seventy-fifth birthday, the American Committee for Cultural Freedom, concerned about the Communist ties of the honouring committee requested Oppenheimer to stop Einstein from attending the event lest it may cause people to associate Judaism with Communism, and think of scientists as naive about politics. Oppenheimer, who was then busy with his security clearance hearings, persuaded Einstein to dissociate with the honouring committee.
Views about each other
In January 1935, Oppenheimer visited Princeton University as a visiting faculty member on an invitation. After staying there and interacting with Einstein, Oppenheimer wrote to his brother Frank Oppenheimer in a letter: "Princeton is a madhouse: its solipsistic luminaries shining in separate & helpless desolation. Einstein is completely cuckoo. ... I could be of absolutely no use at such a place, but it took a lot of conversation & arm waving to get Weyl to take a no". Oppenheimer's initial harsh assessment was attributed to the fact that he found Einstein highly skeptical about quantum field theory. Einstein never accepted the quantum theory; in 1945 he said: "The quantum theory is without a doubt a useful theory, but it does not reach to the bottom of things. I never believed that it constitutes the true conception of nature". Oppenheimer also noted that Einstein became very much a loner in his working style.
After the death of Einstein in April 1955, in a public eulogy Oppenheimer wrote that "physicists lost their greatest colleague". He noted that of all the great accomplishments in Physics, the theory of general relativity is the work of one man, and it would have remained undiscovered for a long time had it not been for the work of Einstein. He ascertained that the public image of Einstein as a simple and kindhearted man “with warm humor,... wholly without pretense” was indeed right, and remembered what Einstein once said to him before his death, "You know, when it once has been given to a man to do something sensible, afterwards life is a little strange." Oppenheimer wrote that it was given to Einstein to do "something reasonable". He stated that general theory of relativity is "perhaps the single greatest theoretical synthesis in the whole of science". Oppenheimer wrote that more than anything, the one special quality, that made Einstein unique was “his faith that there exists in the natural world an order and a harmony and that this may be apprehended by the mind of man”, and that Einstein had given not just an evidence of that faith, but also its heritage.
Oppenheimer was less graceful about Einstein in private. He said Einstein had no interest in or did not understand modern physics and wasted his time in trying to unify gravity and electromagnetism. He stated that Einstein's methods in his final years had in "a certain sense failed him". Einstein in his last twenty-five years of life focused solely on working out the unified field theory without considering its reliability nor questioning his own approach. This led him to lose connections with the wider physics community. Einstein's urge to find unity had been constant throughout his life. In 1900, while still a student at ETH, he wrote in a letter to his friend Marcel Grossmann that, "It is a glorious feeling to recognize the unity of a complex of phenomena, which appear to direct sense perceptions as quite distinct things." In 1932, when questioned about his goal of work, Einstein replied, "The real goal of my research has always been the simplification and unification of the system of theoretical physics. I attained this goal satisfactorily for macroscopic phenomena, but not for the phenomena of quanta and atomic structure." And added, "I believe that despite considerable success, the modern quantum theory is still far from a satisfactory solution of the latter group of problems." Einstein was never convinced with quantum field theory, which Oppenheimer advocated. Oppenheimer noted that Einstein tried in vain to prove the existence of inconsistencies in quantum field theory, but there were none. In the 1960s Oppenheimer became skeptical about Einstein's general theory of relativity as the correct theory of gravitation. He thought Brans–Dicke theory to be a better theory. Oppenheimer also complained that Einstein did not leave any papers to the institute (IAS) in his will despite the support he received from it for twenty-five years. All of Einstein's papers went to Israel.
In December 1965, Oppenheimer visited Paris on an invitation from UNESCO to speak at the tenth anniversary of Einstein's death. He spoke on the first day of the commemoration as he had known Einstein for more than thirty years and at the IAS, they "were close colleagues and something of friends". Oppenheimer made his views of Einstein public there. He praised Einstein for his stand against violence and described his attitude towards humanity by the Sanskrit word "Ahimsa". The speech received considerable media attention, New York Times reported the story headlined “Oppenheimer View of Einstein Warm But Not Uncritical”. However, after the speech, in an interview with the French magazine L'Express, Oppenheimer said, "During all the end of his life, Einstein did no good. He worked all alone with an assistant who was there to correct his calculations... He turned his back on experiments, he even tried to rid himself of the facts that he himself had contributed to establish ... He wanted to realize the unity of knowledge. At all cost. In our days, this is impossible." But nevertheless Oppenheimer said, he was "convinced that still today, as in Einstein’s time, a solitary researcher can effect a startling discovery. He will only need more strength of character". The interviewer concluded asking Oppenheimer if he had any longing or nostalgia, to which he replied "Of course, I would have liked to be the young Einstein. This goes without saying."
Einstein appreciated Oppenheimer for his role in the drafting and advocacy of the Acheson–Lilienthal Report, and for his subsequent work to contain the nuclear arms race between the United States and Soviet Union.
At the IAS, Einstein acquired profound respect for Oppenheimer's administrative skills, and described him as an “unusually capable man of many sided education”.
In popular culture
A semifictional account of the relationship between Albert Einstein and J. Robert Oppenheimer was portrayed in the feature film Oppenheimer directed by Christopher Nolan.
Notes
See also
Einstein versus Oppenheimer
References
Citations
Sources
Quantum physicists
20th-century American physicists
20th-century American Jews
American people of German-Jewish descent
American relativity theorists
Directors of the Institute for Advanced Study
Jewish scientists
Jewish physicists
Jewish American physicists
Jewish anti-fascists
Blake Crouch
William Blake Crouch (born October 15, 1978) is an American author known for books such as Dark Matter, Recursion, Upgrade, and his Wayward Pines Trilogy, which was adapted into a television series in 2015. Dark Matter was adapted for television in 2024.
Early life and education
Crouch was born near the town of Statesville, North Carolina. He attended North Iredell High School and the University of North Carolina at Chapel Hill, graduating in 2000 with degrees in English and creative writing.
Career
Crouch published his first two novels, Desert Places and Locked Doors, in 2004 and 2005, respectively. His stories have appeared in Ellery Queen's Mystery Magazine, Alfred Hitchcock's Mystery Magazine, Thriller 2, and other anthologies. In 2016, he released the sci-fi novel Dark Matter.
Crouch's Wayward Pines Trilogy (2012–14) was adapted into the 2015 television series Wayward Pines. Another work, Good Behavior, premiered as a television series in November 2016.
In 2019, Crouch released another sci-fi novel, titled Recursion, to critical success.
In 2020, Crouch began working on a screenplay to adapt Dark Matter into a television series for Sony Pictures, which was released on May 8, 2024, on Apple TV+.
Personal life
Crouch married Rebecca Greene on June 20, 1998. They have three children together. The couple divorced in 2017. As of 2019, Crouch was dating Jacque Ben-Zekry.
In 2021, Crouch filed a court case appealing to modify his and Greene's joint agreement for medical decision-making authority for their children, to allow him to vaccinate them over Greene's religious-based objections.
Bibliography
Andrew Z. Thomas / Luther Kite series
Desert Places (2004)
Locked Doors (2005)
Break You (2011)
Stirred (with J. A. Konrath, 2011)
Wayward Pines Trilogy
Pines (2012)
Wayward (2013)
The Last Town (2014)
Stand-alone novels
Abandon (2009)
Famous (2010)
Snowbound (2010)
Run (2011)
Eerie (with Jordan Crouch) (2012)
Dark Matter (2016)
Good Behavior (2016)
Summer Frost (2019)
Recursion (2019)
Upgrade (2022)
References
External links
Bookseller news article on Blake Crouch
BlogTalkRadio interview with Modern Signed Books host Rodger Nichols October 27, 2017
1978 births
Living people
21st-century American male writers
21st-century American novelists
21st-century American short story writers
American horror novelists
American male novelists
American male short story writers
American mystery writers
American science fiction writers
American thriller writers
Novelists from North Carolina
People from Statesville, North Carolina
University of North Carolina at Chapel Hill alumni
Swing equation
A power system consists of a number of synchronous machines operating synchronously under all operating conditions. Under normal operating conditions, the relative position of the rotor axis and the resultant magnetic field axis is fixed. The angle between the two is known as the power angle, torque angle, or rotor angle. During any disturbance, the rotor decelerates or accelerates with respect to the synchronously rotating air gap magnetomotive force, creating relative motion. The equation describing the relative motion is known as the swing equation, which is a non-linear second-order differential equation that describes the swing of the rotor of a synchronous machine. The power exchange between the mechanical rotor and the electrical grid due to the rotor swing (acceleration and deceleration) is called inertial response.
Derivation
A synchronous generator is driven by a prime mover. The equation governing the rotor motion is given by:
J \frac{d^2 \theta_m}{dt^2} = T_m - T_e = T_a \quad \text{N-m}
Where:
J is the total moment of inertia of the rotor mass in kg-m2
θm is the angular position of the rotor with respect to a stationary axis in (rad)
t is time in seconds (s)
Tm is the mechanical torque supplied by the prime mover in N-m
Te is the electrical torque output of the alternator in N-m
Ta is the net accelerating torque, in N-m
Neglecting losses, the difference between the mechanical and electrical torque gives the net accelerating torque Ta. In the steady state, the electrical torque is equal to the mechanical torque and hence the accelerating power is zero. During this period the rotor moves at synchronous speed ωs in rad/s. The electric torque Te corresponds to the net air-gap power in the machine and thus accounts for the total output power of the generator plus I2R losses in the armature winding.
The angular position θm is measured with respect to a stationary reference frame. Representing it with respect to the synchronously rotating frame gives:
\theta_m = \omega_s t + \delta_m
where δm is the angular position in rad with respect to the synchronously rotating reference frame. The derivative of the above equation with respect to time is:
\frac{d \theta_m}{dt} = \omega_s + \frac{d \delta_m}{dt}
The above equations show that the rotor angular speed is equal to the synchronous speed only when dδm/dt is equal to zero. Therefore, the term dδm/dt represents the deviation of the rotor speed from synchronism in rad/s.
By taking the second-order derivative of the above equation it becomes:
\frac{d^2 \theta_m}{dt^2} = \frac{d^2 \delta_m}{dt^2}
Substituting the above equation in the equation of rotor motion gives:
J \frac{d^2 \delta_m}{dt^2} = T_m - T_e = T_a \quad \text{N-m}
Introducing the angular velocity ωm of the rotor for notational purposes, and multiplying both sides by ωm,
J \omega_m \frac{d^2 \delta_m}{dt^2} = P_m - P_e = P_a \quad \text{W}
where Pm, Pe and Pa respectively are the mechanical, electrical and accelerating power in MW.
The coefficient Jωm is the angular momentum of the rotor: at synchronous speed ωs, it is denoted by M and called the inertia constant of the machine. Normalizing it as
H = \frac{\tfrac{1}{2} J \omega_s^2}{S_{rated}} \quad \text{MJ/MVA}
where Srated is the three phase rating of the machine in MVA. Substituting in the above equation
\frac{2 H}{\omega_s^2} S_{rated} \, \omega_m \frac{d^2 \delta_m}{dt^2} = P_m - P_e = P_a.
In steady state, the machine angular speed is equal to the synchronous speed and hence ωm can be replaced in the above equation by ωs. Since Pm, Pe and Pa are given in MW, dividing them by the generator MVA rating Srated gives these quantities in per unit. Dividing the above equation on both sides by Srated gives
\frac{2 H}{\omega_s} \frac{d^2 \delta_m}{dt^2} = P_m - P_e = P_a \quad \text{per unit}
The above equation describes the behaviour of the rotor dynamics and hence is known as the swing equation. The angle δ is the angle of the internal EMF of the generator and it dictates the amount of power that can be transferred. This angle is therefore called the load angle.
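For illustration, the per-unit swing equation can be integrated numerically. The sketch below is only an example: all parameter values are assumptions chosen for the demonstration, and the classical-model relation Pe = Pmax sin δ is an added simplification not stated above. It applies a step increase in mechanical power and shows the undamped rotor-angle swings around the new equilibrium.

```python
import numpy as np

# Assumed machine and system data (per unit unless noted).
H = 5.0                        # inertia constant (MJ/MVA)
f0 = 50.0                      # nominal frequency (Hz)
w_s = 2 * np.pi * f0           # synchronous speed (rad/s)
Pmax = 2.0                     # maximum electrical power transfer
Pm = 1.0                       # pre-disturbance mechanical input

delta = np.arcsin(Pm / Pmax)   # start at the steady-state load angle
dd = 0.0                       # d(delta)/dt: zero deviation from synchronous speed
dt, T = 1e-3, 3.0

Pm_disturbed = 1.3             # step increase in mechanical power at t = 0
for step in range(int(T / dt)):
    Pe = Pmax * np.sin(delta)                      # classical-model electrical power
    acc = (w_s / (2 * H)) * (Pm_disturbed - Pe)    # d^2(delta)/dt^2 from the swing equation
    dd += acc * dt                                 # semi-implicit Euler update
    delta += dd * dt
    if step % 500 == 0:
        print(f"t = {step*dt:4.1f} s   delta = {np.degrees(delta):6.2f} deg")
# delta oscillates (undamped) around the new equilibrium arcsin(1.3/2) ≈ 40.5 degrees.
```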
References
Equations
Electric power transmission systems
Power-to-X
Power-to-X (also P2X and P2Y) are electricity conversion, energy storage, and reconversion pathways from surplus renewable energy.
Power-to-X conversion technologies allow for the decoupling of power from the electricity sector for use in other sectors (such as transport or chemicals), possibly using power that has been provided by additional investments in generation. The term is widely used in Germany and may have originated there.
The X in the terminology can refer to one of the following: power-to-ammonia, power-to-chemicals, power-to-fuel, power-to-gas (power-to-hydrogen, power-to-methane), power-to-liquid (synthetic fuel), power-to-food, and power-to-heat. Electric vehicle charging, space heating and cooling, and water heating can be shifted in time to match generation, forms of demand response that can be called power-to-mobility and power-to-heat.
Collectively, power-to-X schemes which use surplus power fall under the heading of flexibility measures and are particularly useful in energy systems with high shares of renewable generation and/or with strong decarbonization targets. A large number of pathways and technologies are encompassed by the term. In 2016 the German government funded a €30 million first-phase research project into power-to-X options.
Power-to-fuel
Surplus electric power can be converted to gas fuel energy for storage and reconversion.
Direct current electrolysis of water (efficiency 80–85% at best) can be used to produce hydrogen which can, in turn, be converted to methane (CH4) via methanation. Another possibility is converting the hydrogen, along with CO2 to methanol. Both these fuels can be stored and used to produce electricity again, hours to months later.
Storage and reconversion of power-to-fuel
Hydrogen and methane can be used as downstream fuels, fed into the natural gas grid, or used to make synthetic fuel. Alternatively they can be used as a chemical feedstock, as can ammonia.
Reconversion technologies include gas turbines, combined cycle plants, reciprocating engines and fuel cells.
Power-to-power refers to the round-trip reconversion efficiency. For hydrogen storage, the round-trip efficiency remains limited at 35–50%. Electrolysis is expensive and power-to-gas processes need substantial full-load hours to be economic.
However, while round-trip conversion efficiency of power-to-power is lower than with batteries and electrolysis can be expensive, storage of the fuels themselves is quite inexpensive. This means that large amounts of energy can be stored for long periods of time with power-to-power, which is ideal for seasonal storage. This could be particularly useful for systems with high variable renewable energy penetration, since many areas have significant seasonal variability of solar, wind, and run-of-the-river-hydroelectric generation.
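A back-of-the-envelope sketch makes the round-trip arithmetic explicit; the component efficiencies below are assumptions for illustration, not figures from this article, but their product lands inside the 35–50% range quoted above.

```python
# Assumed component efficiencies for a power-to-hydrogen-to-power chain.
electrolysis_eff = 0.80       # electricity -> hydrogen (upper end of the range quoted above)
storage_eff = 0.95            # compression / storage losses, assumed
reconversion_eff = 0.55       # hydrogen -> electricity (e.g. combined cycle or fuel cell), assumed

round_trip = electrolysis_eff * storage_eff * reconversion_eff
print(f"power-to-power round-trip efficiency ≈ {round_trip:.0%}")   # ≈ 42%

surplus_in = 100.0            # MWh of surplus renewable electricity stored
print(f"{surplus_in * round_trip:.0f} MWh recovered from {surplus_in:.0f} MWh stored")
```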
Batteries
Despite it also being based fundamentally on electrolytic chemical reactions, battery storage is not normally considered a power-to-fuel concept.
Power-to-heat
The purpose of power-to-heat systems is to utilize excess electricity generated by renewable energy sources which would otherwise be wasted. Depending on the context, the power-to-heat can either be stored as heat, or delivered as heat to meet a need.
Heating systems
In contrast to simple electric heating systems such as night storage heating, which cover the complete heating requirements, power-to-heat systems are hybrid systems which additionally have traditional heating systems using chemical fuels like wood or natural gas. When there is excess energy, heat is produced from electric energy; otherwise the traditional heating system is used. In order to increase flexibility, power-to-heat systems are often coupled with heat accumulators. The power supply occurs for the most part in local and district heating networks. Power-to-heat systems are also able to supply buildings or industrial systems with heat.
Power-to-heat involves contributing to the heat sector, either by resistance heating or via a heat pump. Resistance heaters have unity efficiency, and the corresponding coefficient of performance (COP) of heat pumps is 2–5. Back-up immersion heating of both domestic hot water and district heating offers a cheap way of using surplus renewable energy and will often displace carbon-intensive fossil fuels for the task. Large-scale heat pumps in district heating systems with thermal energy storage are an especially attractive option for power-to-heat: they offer exceptionally high efficiency for balancing excess wind and solar power, and they can be profitable investments.
Heat storage systems
Other forms of power-to-X
Power-to-mobility refers to the charging of battery electric vehicles (BEV). Given the expected uptake of EVs, dedicated dispatch will be required. As vehicles are idle for most of the time, shifting the charging time can offer considerable flexibility: the charging window is a relatively long 8–12 hours, whereas the charging duration is around 90 minutes. The EV batteries can also be discharged to the grid to make them work as electricity storage devices, but this causes additional wear to the battery.
Impact
According to the German concept of sector coupling interconnecting all the energy-using sectors will require the digitalisation and automation of numerous processes to synchronise supply and demand.
A 2023 study examined the role that power-to-X could play in a highly renewable future energy system for Japan. The P2X technologies considered include water electrolysis, methanation, Fischer–Tropsch synthesis, and Haber–Bosch synthesis, and the study used linear programming to determine least-cost system structure and operation. Results indicate that these various P2X technologies can effectively shift electricity loads and reduce curtailment by 80% or more.
See also
Grid energy storage
Flywheel
References
Energy policy
Energy policy of Germany
Energy storage
Power engineering
Theory of impetus
The theory of impetus is an auxiliary or secondary theory of Aristotelian dynamics, put forth initially to explain projectile motion against gravity. It was introduced by John Philoponus in the 6th century, and elaborated by Nur ad-Din al-Bitruji at the end of the 12th century. The theory was modified by Avicenna in the 11th century and Abu'l-Barakāt al-Baghdādī in the 12th century, before it was later established in Western scientific thought by Jean Buridan in the 14th century. It is the intellectual precursor to the concepts of inertia, momentum and acceleration in classical mechanics.
Aristotelian theory
Aristotelian physics is the form of natural philosophy described in the works of the Greek philosopher Aristotle (384–322 BC). In his work Physics, Aristotle intended to establish general principles of change that govern all natural bodies, both living and inanimate, celestial and terrestrial – including all motion, quantitative change, qualitative change, and substantial change.
Aristotle describes two kinds of motion: "violent" or "unnatural motion", such as that of a thrown stone, in Physics (254b10), and "natural motion", such as of a falling object, in On the Heavens (300a20). In violent motion, as soon as the agent stops causing it, the motion stops also: in other words, the natural state of an object is to be at rest, since Aristotle does not address friction.
Hipparchus' theory
In the 2nd century, Hipparchus assumed that the throwing force is transferred to the body at the time of the throw, and that the body dissipates it during the subsequent up-and-down motion of free fall. This is according to the Neoplatonist Simplicius of Cilicia, who quotes Hipparchus in his book Aristotelis De Caelo commentaria 264, 25 as follows: "Hipparchus says in his book On Bodies Carried Down by Their Weight that the throwing force is the cause of the upward motion of [a lump of] earth thrown upward as long as this force is stronger than that of the thrown body; the stronger the throwing force, the faster the upward motion. Then, when the force decreases, the upward motion continues at a decreased speed until the body begins to move downward under the influence of its own weight, while the throwing force still continues in some way. As this decreases, the velocity of the fall increases and reaches its highest value when this force is completely dissipated." Thus, Hipparchus does not speak of a continuous contact between the moving force and the moving body, or of the function of air as an intermediate carrier of motion, as Aristotle claims.
Philoponan theory
In the 6th century, John Philoponus partly accepted Aristotle's theory that "continuation of motion depends on continued action of a force," but modified it to include his idea that the hurled body acquires a motive power or inclination for forced movement from the agent producing the initial motion and that this power secures the continuation of such motion. However, he argued that this impressed virtue was temporary: that it was a self-expending inclination, and thus the violent motion produced comes to an end, changing back into natural motion.
In his book On Aristotle Physics 641, 12; 641, 29; 642, 9 Philoponus first argues explicitly against Aristotle's explanation that a thrown stone, after leaving the hand, cannot be propelled any further by the air behind it. Then he continues: "Instead, some immaterial kinetic force must be imparted to the projectile by the thrower. Whereby the pushed air contributes either nothing or only very little to this motion. But if moving bodies are necessarily moved in this way, it is clear that the same process will take place much more easily if an arrow or a stone is thrown necessarily and against its tendency into empty space, and that nothing is necessary for this except the thrower." This last sentence is intended to show that in empty space—which Aristotle rejects—and contrary to Aristotle's opinion, a moving body would continue to move. It should be pointed out that Philoponus in his book uses two different expressions for impetus: kinetic capacity (dynamis) and kinetic force (energeia). Both expressions designate in his theory a concept, which is close to the today's concept of energy, but they are far away from the Aristotelian conceptions of potentiality and actuality.
Philoponus' theory of imparted force cannot yet be understood as a principle of inertia. For while he rightly says that the driving quality is no longer imparted externally but has become an internal property of the body, he still accepts the Aristotelian assertion that the driving quality is a force (power) that now acts internally and to which velocity is proportional. In modern physics since Newton, however, velocity is a quality that persists in the absence of forces. The first one to grasp this persistent motion by itself was William of Ockham, who said in his Commentary on the Sentences, Book 2, Question 26, M: "I say therefore that that which moves (ipsum movens) ... after the separation of the moving body from the original projector, is the body moved by itself (ipsum motum secundum se) and not by any power in it or relative to it (virtus absoluta in eo vel respectiva), ... ." It has been claimed by some historians that by rejecting the basic Aristotelian principle "Everything that moves is moved by something else." (Omne quod movetur ab alio movetur.), Ockham took the first step toward the principle of inertia.
Iranian theories
In the 11th century, Avicenna (Ibn Sīnā) discussed Philoponus' theory in The Book of Healing (Physics IV.14), where he says:
Ibn Sīnā agreed that an impetus is imparted to a projectile by the thrower, but unlike Philoponus, who believed that it was a temporary virtue that would decline even in a vacuum, he viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sīnā made a distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gains mayl when it is moved in opposition to its natural motion. Therefore, he concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that the object will remain in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon, which is consistent with Newton's concept of inertia. This idea (which dissented from the Aristotelian view) was later described as "impetus" by Jean Buridan, who may have been influenced by Ibn Sīnā.
Arabic theories
In the 12th century, Hibat Allah Abu'l-Barakat al-Baghdādī adopted Philoponus' theory of impetus. In his Kitab al-Mu'tabar, Abu'l-Barakat stated that the mover imparts a violent inclination (mayl qasri) on the moved and that this diminishes as the moving object distances itself from the mover. Like Philoponus, and unlike Ibn Sīnā, al-Baghdādī believed that the mayl is self-extinguishing.
He also proposed an explanation of the acceleration of falling bodies where "one mayl after another" is successively applied, because it is the falling body itself which provides the mayl, as opposed to shooting a bow, where only one violent mayl is applied. According to Shlomo Pines, al-Baghdādī's theory was
the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration].
Jean Buridan and Albert of Saxony later refer to Abu'l-Barakat in explaining that the acceleration of a falling body is a result of its increasing impetus.
Buridanist impetus
In the 14th century, Jean Buridan postulated the notion of motive force, which he named impetus.
Buridan gives his theory a mathematical value: impetus = weight × velocity
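Read in modern notation (a loose parallel only, since Buridan's "weight" is not the modern concept of mass and impetus is not a conserved quantity), this proportionality resembles the definition of momentum:

```latex
\text{impetus} \;\propto\; W \, v
\qquad \text{compare} \qquad
p = m \, v
```

The analogy is suggestive rather than exact: unlike momentum, Buridan's impetus is a cause of continued motion rather than a quantity conserved in interactions.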
Buridan's pupil Dominicus de Clavasio put it in his 1357 De Caelo as follows:
Buridan's position was that a moving object would only be arrested by the resistance of the air and the weight of the body which would oppose its impetus. Buridan also maintained that impetus was proportional to speed; thus, his initial idea of impetus was similar in many ways to the modern concept of momentum. Buridan saw his theory as only a modification to Aristotle's basic philosophy, maintaining many other peripatetic views, including the belief that there was still a fundamental difference between an object in motion and an object at rest. Buridan also maintained that impetus could be not only linear, but also circular in nature, causing objects (such as celestial bodies) to move in a circle.
Buridan pointed out that neither Aristotle's unmoved movers nor Plato's souls are in the Bible, so he applied impetus theory to the eternal rotation of the celestial spheres by extension of a terrestrial example of its application to rotary motion in the form of a rotating millwheel that continues rotating for a long time after the originally propelling hand is withdrawn, driven by the impetus impressed within it. He wrote on the celestial impetus of the spheres as follows:
However, by discounting the possibility of any resistance either due to a contrary inclination to move in any opposite direction or due to any external resistance, he concluded their impetus was therefore not corrupted by any resistance. Buridan also discounted any inherent resistance to motion in the form of an inclination to rest within the spheres themselves, such as the inertia posited by Averroes and Aquinas. For otherwise that resistance would destroy their impetus, as the anti-Duhemian historian of science Anneliese Maier maintained the Parisian impetus dynamicists were forced to conclude because of their belief in an inherent inclinatio ad quietem or inertia in all bodies.
This raised the question of why the motive force of impetus does not therefore move the spheres with infinite speed. One impetus dynamics answer seemed to be that it was a secondary kind of motive force that produced uniform motion rather than infinite speed, rather than producing uniformly accelerated motion like the primary force did by producing constantly increasing amounts of impetus. However, in his Treatise on the heavens and the world in which the heavens are moved by inanimate inherent mechanical forces, Buridan's pupil Oresme offered an alternative Thomist inertial response to this problem. His response was to posit a resistance to motion inherent in the heavens (i.e. in the spheres), but which is only a resistance to acceleration beyond their natural speed, rather than to motion itself, and was thus a tendency to preserve their natural speed.
Buridan's thought was followed up by his pupil Albert of Saxony (1316–1390), by writers in Poland such as John Cantius, and the Oxford Calculators. Their work in turn was elaborated by Nicole Oresme who pioneered the practice of demonstrating laws of motion in the form of graphs.
The tunnel experiment and oscillatory motion
The Buridan impetus theory developed one of the most important thought experiments in the history of science, the 'tunnel-experiment'. This experiment incorporated oscillatory and pendulum motion into dynamical analysis and the science of motion for the first time. It also established one of the important principles of classical mechanics. The pendulum was crucially important to the development of mechanics in the 17th century. The tunnel experiment also gave rise to the more generally important axiomatic principle of Galilean, Huygenian and Leibnizian dynamics, namely that a body rises to the same height from which it has fallen, a principle of gravitational potential energy. As Galileo Galilei expressed this fundamental principle of his dynamics in his 1632 Dialogo:
This imaginary experiment predicted that a cannonball dropped down a tunnel going straight through the Earth's centre and out the other side would pass the centre and rise on the opposite surface to the same height from which it had first fallen, driven upwards by the gravitationally created impetus it had continually accumulated in falling to the centre. This impetus would require a violent motion correspondingly rising to the same height past the centre for the now opposing force of gravity to destroy it all in the same distance which it had previously required to create it. At this turning point the ball would then descend again and oscillate back and forth between the two opposing surfaces about the centre infinitely in principle. The tunnel experiment provided the first dynamical model of oscillatory motion, specifically in terms of A-B impetus dynamics.
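In modern Newtonian terms (not part of the medieval analysis, and assuming a uniform-density Earth with no resistance), the dropped ball executes simple harmonic motion about the centre, which is why it rises to the same height on the far side:

```latex
\ddot{r} = -\frac{g}{R}\, r
\qquad \Longrightarrow \qquad
T = 2\pi \sqrt{\frac{R}{g}} \approx 84 \text{ minutes},
```

where R is the Earth's radius and g the surface gravitational acceleration; under these assumptions the ball would pass the centre at about 7.9 km/s.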
This thought-experiment was then applied to the dynamical explanation of a real world oscillatory motion, namely that of the pendulum. The oscillating motion of the cannonball was compared to the motion of a pendulum bob by imagining it to be attached to the end of an immensely long cord suspended from the vault of the fixed stars centred on the Earth. The relatively short arc of its path through the distant Earth was practically a straight line along the tunnel. Real world pendula were then conceived of as just micro versions of this 'tunnel pendulum', but with far shorter cords and bobs oscillating above the Earth's surface in arcs corresponding to the tunnel as their oscillatory midpoint was dynamically assimilated to the tunnel's centre.
Through such 'lateral thinking', the bob's lateral horizontal motion was conceived of as a case of gravitational free-fall followed by violent motion in a recurring cycle, with the bob repeatedly travelling through and beyond the motion's vertically lowest but horizontally middle point that substituted for the Earth's centre in the tunnel pendulum. The lateral motions of the bob, first towards and then away from the normal in the downswing and upswing, become lateral downward and upward motions in relation to the horizontal rather than to the vertical.
The orthodox Aristotelians saw pendulum motion as a dynamical anomaly, as 'falling to rest with difficulty.' Thomas Kuhn wrote in his 1962 The Structure of Scientific Revolutions that, on the impetus theory's novel analysis, it was not falling with any dynamical difficulty at all in principle, but was rather falling in repeated and potentially endless cycles of alternating downward gravitationally natural motion and upward gravitationally violent motion. Galileo eventually appealed to pendulum motion to demonstrate that the speed of gravitational free-fall is the same for all unequal weights by virtue of dynamically modelling pendulum motion in this manner as a case of cyclically repeated gravitational free-fall along the horizontal in principle.
The tunnel experiment was a crucial experiment in favour of impetus dynamics against both orthodox Aristotelian dynamics without any auxiliary impetus theory and Aristotelian dynamics with its H-P variant. According to the latter two theories, the bob cannot possibly pass beyond the normal. In orthodox Aristotelian dynamics there is no force to carry the bob upwards beyond the centre in violent motion against its own gravity that carries it to the centre, where it stops. When conjoined with the Philoponus auxiliary theory, in the case where the cannonball is released from rest, there is no such force because either all the initial upward force of impetus originally impressed within it to hold it in static dynamical equilibrium has been exhausted, or if any remained it would act in the opposite direction and combine with gravity to prevent motion through and beyond the centre. The cannonball being positively hurled downwards could not possibly result in an oscillatory motion either. Although it could then possibly pass beyond the centre, it could never return to pass through it and rise back up again. It would be logically possible for it to pass beyond the centre if upon reaching the centre some of the constantly decaying downward impetus remained and still was sufficiently stronger than gravity to push it beyond the centre and upwards again, eventually becoming weaker than gravity. The ball would then be pulled back towards the centre by its gravity but could not then pass beyond the centre to rise up again, because it would have no force directed against gravity to overcome it. Any possibly remaining impetus would be directed 'downwards' towards the centre, in the same direction it was originally created.
Thus pendulum motion was dynamically impossible for both orthodox Aristotelian dynamics and for H-P impetus dynamics on this 'tunnel model' analogical reasoning. It was, however, predicted by the impetus theory's tunnel thought-experiment, because that theory posited that a continually accumulating downward force of impetus directed towards the centre is acquired in natural motion, sufficient then to carry the body upwards beyond the centre against gravity, rather than the body only having an initially upward force of impetus away from the centre as in the H-P theory of natural motion. So the tunnel experiment constituted a crucial experiment between three alternative theories of natural motion.
Impetus dynamics was to be preferred if the Aristotelian science of motion was to incorporate a dynamical explanation of pendulum motion. It was also to be preferred more generally if it was to explain other oscillatory motions, such as the to and fro vibrations around the normal of musical strings in tension, such as those of a guitar. The analogy made with the gravitational tunnel experiment was that the tension in the string pulling it towards the normal played the role of gravity, and thus when plucked (i.e. pulled away from the normal) and then released, it was the equivalent of pulling the cannonball to the Earth's surface and then releasing it. Thus the musical string vibrated in a continual cycle of the alternating creation of impetus towards the normal and its destruction after passing through the normal until this process starts again with the creation of fresh 'downward' impetus once all the 'upward' impetus has been destroyed.
This positing of a dynamical family resemblance of the motions of pendula and vibrating strings with the paradigmatic tunnel-experiment, the origin of all oscillations in the history of dynamics, was one of the greatest imaginative developments of medieval Aristotelian dynamics in its increasing repertoire of dynamical models of different kinds of motion.
Shortly before Galileo's theory of impetus, Giambattista Benedetti modified the growing theory of impetus to involve linear motion alone:
... [Any] portion of corporeal matter which moves by itself when an impetus has been impressed on it by any external motive force has a natural tendency to move on a rectilinear, not a curved, path.
Benedetti cites the motion of a rock in a sling as an example of the inherent linear motion of objects, forced into circular motion.
See also
Conatus
Physics in the medieval Islamic world
History of science
References and footnotes
Bibliography
Duhem, Pierre (1906–13). Études sur Léonard de Vinci.
Duhem, Pierre, History of Physics, Section IX, XVI and XVII in The Catholic Encyclopedia
Natural philosophy
Classical mechanics
Obsolete theories in physics | 0.776802 | 0.980724 | 0.761828 |
Scattering | In physics, scattering is a wide range of physical processes where moving particles or radiation of some form, such as light or sound, are forced to deviate from a straight trajectory by localized non-uniformities (including particles and radiation) in the medium through which they pass. In conventional use, this also includes deviation of reflected radiation from the angle predicted by the law of reflection. Reflections of radiation that undergo scattering are often called diffuse reflections and unscattered reflections are called specular (mirror-like) reflections. Originally, the term was confined to light scattering (going back at least as far as Isaac Newton in the 17th century). As more "ray"-like phenomena were discovered, the idea of scattering was extended to them, so that William Herschel could refer to the scattering of "heat rays" (not then recognized as electromagnetic in nature) in 1800. John Tyndall, a pioneer in light scattering research, noted the connection between light scattering and acoustic scattering in the 1870s. Near the end of the 19th century, the scattering of cathode rays (electron beams) and X-rays was observed and discussed. With the discovery of subatomic particles (e.g. Ernest Rutherford in 1911) and the development of quantum theory in the 20th century, the sense of the term became broader as it was recognized that the same mathematical frameworks used in light scattering could be applied to many other phenomena.
Scattering can refer to the consequences of particle-particle collisions between molecules, atoms, electrons, photons and other particles. Examples include: cosmic ray scattering in the Earth's upper atmosphere; particle collisions inside particle accelerators; electron scattering by gas atoms in fluorescent lamps; and neutron scattering inside nuclear reactors.
The types of non-uniformities which can cause scattering, sometimes known as scatterers or scattering centers, are too numerous to list, but a small sample includes particles, bubbles, droplets, density fluctuations in fluids, crystallites in polycrystalline solids, defects in monocrystalline solids, surface roughness, cells in organisms, and textile fibers in clothing. The effects of such features on the path of almost any type of propagating wave or moving particle can be described in the framework of scattering theory.
Some areas where scattering and scattering theory are significant include radar sensing, medical ultrasound, semiconductor wafer inspection, polymerization process monitoring, acoustic tiling, free-space communications and computer-generated imagery. Particle-particle scattering theory is important in areas such as particle physics, atomic, molecular, and optical physics, nuclear physics and astrophysics. In particle physics the quantum interaction and scattering of fundamental particles is described by the Scattering Matrix or S-Matrix, introduced and developed by John Archibald Wheeler and Werner Heisenberg.
Scattering is quantified using many different concepts, including scattering cross section (σ), attenuation coefficients, the bidirectional scattering distribution function (BSDF), S-matrices, and mean free path.
Single and multiple scattering
When radiation is only scattered by one localized scattering center, this is called single scattering. It is more common that scattering centers are grouped together; in such cases, radiation may scatter many times, in what is known as multiple scattering. The main difference between the effects of single and multiple scattering is that single scattering can usually be treated as a random phenomenon, whereas multiple scattering, somewhat counterintuitively, can be modeled as a more deterministic process because the combined results of a large number of scattering events tend to average out. Multiple scattering can thus often be modeled well with diffusion theory.
Because the location of a single scattering center is not usually well known relative to the path of the radiation, the outcome, which tends to depend strongly on the exact incoming trajectory, appears random to an observer. This type of scattering would be exemplified by an electron being fired at an atomic nucleus. In this case, the atom's exact position relative to the path of the electron is unknown and would be unmeasurable, so the exact trajectory of the electron after the collision cannot be predicted. Single scattering is therefore often described by probability distributions.
With multiple scattering, the randomness of the interaction tends to be averaged out by a large number of scattering events, so that the final path of the radiation appears to be a deterministic distribution of intensity. This is exemplified by a light beam passing through thick fog. Multiple scattering is highly analogous to diffusion, and the terms multiple scattering and diffusion are interchangeable in many contexts. Optical elements designed to produce multiple scattering are thus known as diffusers. Coherent backscattering, an enhancement of backscattering that occurs when coherent radiation is multiply scattered by a random medium, is usually attributed to weak localization.
Not all single scattering is random, however. A well-controlled laser beam can be exactly positioned to scatter off a microscopic particle with a deterministic outcome, for instance. Such situations are encountered in radar scattering as well, where the targets tend to be macroscopic objects such as people or aircraft.
Similarly, multiple scattering can sometimes have somewhat random outcomes, particularly with coherent radiation. The random fluctuations in the multiply scattered intensity of coherent radiation are called speckles. Speckle also occurs if multiple parts of a coherent wave scatter from different centers. In certain rare circumstances, multiple scattering may only involve a small number of interactions such that the randomness is not completely averaged out. These systems are considered to be some of the most difficult to model accurately.
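As a rough numerical illustration of the averaging behaviour described above (a toy 2-D random walk with assumed step statistics, not a model of any particular medium), many scattering events make the ensemble spread highly predictable even though each trajectory is random:

```python
import numpy as np

rng = np.random.default_rng(0)
n_photons, n_events, mfp = 2000, 200, 1.0   # assumed values (mfp = mean free path)

# Each photon takes steps with exponentially distributed lengths (mean = mfp)
# in uniformly random 2-D directions -- a crude stand-in for multiple scattering.
steps = rng.exponential(mfp, size=(n_photons, n_events))
angles = rng.uniform(0.0, 2.0 * np.pi, size=(n_photons, n_events))
x = np.sum(steps * np.cos(angles), axis=1)
y = np.sum(steps * np.sin(angles), axis=1)

rms = np.sqrt(np.mean(x**2 + y**2))
print(f"RMS displacement after {n_events} events: {rms:.1f} mean free paths")
print(f"diffusive estimate sqrt(2*n)*mfp:        {np.sqrt(2 * n_events) * mfp:.1f}")
```

Each photon's endpoint is random (the origin of speckle), but the root-mean-square spread follows the diffusive square-root-of-steps law.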
The description of scattering and the distinction between single and multiple scattering are tightly related to wave–particle duality.
Theory
Scattering theory is a framework for studying and understanding the scattering of waves and particles. Prosaically, wave scattering corresponds to the collision and scattering of a wave with some material object, for instance (sunlight) scattered by rain drops to form a rainbow. Scattering also includes the interaction of billiard balls on a table, the Rutherford scattering (or angle change) of alpha particles by gold nuclei, the Bragg scattering (or diffraction) of electrons and X-rays by a cluster of atoms, and the inelastic scattering of a fission fragment as it traverses a thin foil. More precisely, scattering consists of the study of how solutions of partial differential equations, propagating freely "in the distant past", come together and interact with one another or with a boundary condition, and then propagate away "to the distant future".
The direct scattering problem is the problem of determining the distribution of scattered radiation/particle flux based on the characteristics of the scatterer. The inverse scattering problem is the problem of determining the characteristics of an object (e.g., its shape, internal constitution) from measurement data of radiation or particles scattered from the object.
Attenuation due to scattering
When the target is a set of many scattering centers whose relative position varies unpredictably, it is customary to think of a range equation whose arguments take different forms in different application areas. In the simplest case consider an interaction that removes particles from the "unscattered beam" at a uniform rate that is proportional to the incident number of particles per unit area per unit time (I), i.e. that
\frac{dI}{dx} = -Q I ,
where Q is an interaction coefficient and x is the distance traveled in the target.
The above ordinary first-order differential equation has solutions of the form
I = I_o\, e^{-Q \Delta x} = I_o\, e^{-\Delta x / \lambda} = I_o\, e^{-\eta \sigma \Delta x} = I_o\, e^{-\rho \Delta x / \tau} ,
where Io is the initial flux, path length Δx ≡ x − xo, the second equality defines an interaction mean free path λ, the third uses the number of targets per unit volume η to define an area cross-section σ, and the last uses the target mass density ρ to define a density mean free path τ. Hence one converts between these quantities via Q = 1/λ = ησ = ρ/τ.
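A short numerical sketch of these relations (all material parameters below are assumed, purely for illustration):

```python
import numpy as np

I0 = 1.0            # incident flux (arbitrary units)
eta = 2.5e22        # targets per cm^3 (assumed)
sigma = 1.0e-24     # cross-section per target in cm^2, i.e. 1 barn (assumed)
Q = eta * sigma     # interaction coefficient, 1/cm
mfp = 1.0 / Q       # interaction mean free path, cm

dx = np.linspace(0.0, 5.0 * mfp, 6)   # path lengths through the target
I = I0 * np.exp(-Q * dx)              # unscattered flux remaining
for x_val, i_val in zip(dx, I):
    print(f"x = {x_val:7.1f} cm   I/I0 = {i_val:.3f}")
```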
In electromagnetic absorption spectroscopy, for example, interaction coefficient (e.g. Q in cm⁻¹) is variously called opacity, absorption coefficient, and attenuation coefficient. In nuclear physics, area cross-sections (e.g. σ in barns or units of 10⁻²⁴ cm²), density mean free path (e.g. τ in grams/cm²), and its reciprocal the mass attenuation coefficient (e.g. in cm²/gram) or area per nucleon are all popular, while in electron microscopy the inelastic mean free path (e.g. λ in nanometers) is often discussed instead.
Elastic and inelastic scattering
The term "elastic scattering" implies that the internal states of the scattering particles do not change, and hence they emerge unchanged from the scattering process. In inelastic scattering, by contrast, the particles' internal state is changed, which may amount to exciting some of the electrons of a scattering atom, or the complete annihilation of a scattering particle and the creation of entirely new particles.
The example of scattering in quantum chemistry is particularly instructive, as the theory is reasonably complex while still having a good foundation on which to build an intuitive understanding. When two atoms are scattered off one another, one can understand them as being the bound state solutions of some differential equation. Thus, for example, the hydrogen atom corresponds to a solution to the Schrödinger equation with a negative inverse-power (i.e., attractive Coulombic) central potential. The scattering of two hydrogen atoms will disturb the state of each atom, resulting in one or both becoming excited, or even ionized, representing an inelastic scattering process.
The term "deep inelastic scattering" refers to a special kind of scattering experiment in particle physics.
Mathematical framework
In mathematics, scattering theory deals with a more abstract formulation of the same set of concepts. For example, if a differential equation is known to have some simple, localized solutions, and the solutions are a function of a single parameter, that parameter can take the conceptual role of time. One then asks what might happen if two such solutions are set up far away from each other, in the "distant past", and are made to move towards each other, interact (under the constraint of the differential equation) and then move apart in the "future". The scattering matrix then pairs solutions in the "distant past" to those in the "distant future".
Solutions to differential equations are often posed on manifolds. Frequently, the means to the solution requires the study of the spectrum of an operator on the manifold. As a result, the solutions often have a spectrum that can be identified with a Hilbert space, and scattering is described by a certain map, the S matrix, on Hilbert spaces. Solutions with a discrete spectrum correspond to bound states in quantum mechanics, while a continuous spectrum is associated with scattering states. The study of inelastic scattering then asks how discrete and continuous spectra are mixed together.
An important, notable development is the inverse scattering transform, central to the solution of many exactly solvable models.
Theoretical physics
In mathematical physics, scattering theory is a framework for studying and understanding the interaction or scattering of solutions to partial differential equations. In acoustics, the differential equation is the wave equation, and scattering studies how its solutions, the sound waves, scatter from solid objects or propagate through non-uniform media (such as sound waves, in sea water, coming from a submarine). In the case of classical electrodynamics, the differential equation is again the wave equation, and the scattering of light or radio waves is studied. In particle physics, the equations are those of Quantum electrodynamics, Quantum chromodynamics and the Standard Model, the solutions of which correspond to fundamental particles.
In regular quantum mechanics, which includes quantum chemistry, the relevant equation is the Schrödinger equation, although equivalent formulations, such as the Lippmann-Schwinger equation and the Faddeev equations, are also largely used. The solutions of interest describe the long-term motion of free atoms, molecules, photons, electrons, and protons. The scenario is that several particles come together from an infinite distance away. These reagents then collide, optionally reacting, getting destroyed or creating new particles. The products and unused reagents then fly away to infinity again. (The atoms and molecules are effectively particles for our purposes. Also, under everyday circumstances, only photons are being created and destroyed.) The solutions reveal which directions the products are most likely to fly off to and how quickly. They also reveal the probability of various reactions, creations, and decays occurring. There are two predominant techniques of finding solutions to scattering problems: partial wave analysis, and the Born approximation.
Electromagnetics
Electromagnetic waves are one of the best known and most commonly encountered forms of radiation that undergo scattering. Scattering of light and radio waves (especially in radar) is particularly important. Several different aspects of electromagnetic scattering are distinct enough to have conventional names. Major forms of elastic light scattering (involving negligible energy transfer) are Rayleigh scattering and Mie scattering. Inelastic scattering includes Brillouin scattering, Raman scattering, inelastic X-ray scattering and Compton scattering.
Light scattering is one of the two major physical processes that contribute to the visible appearance of most objects, the other being absorption. Surfaces described as white owe their appearance to multiple scattering of light by internal or surface inhomogeneities in the object, for example by the boundaries of transparent microscopic crystals that make up a stone or by the microscopic fibers in a sheet of paper. More generally, the gloss (or lustre or sheen) of the surface is determined by scattering. Highly scattering surfaces are described as being dull or having a matte finish, while the absence of surface scattering leads to a glossy appearance, as with polished metal or stone.
Spectral absorption, the selective absorption of certain colors, determines the color of most objects with some modification by elastic scattering. The apparent blue color of veins in skin is a common example where both spectral absorption and scattering play important and complex roles in the coloration. Light scattering can also create color without absorption, often shades of blue, as with the sky (Rayleigh scattering), the human blue iris, and the feathers of some birds (Prum et al. 1998). However, resonant light scattering in nanoparticles can produce many different highly saturated and vibrant hues, especially when surface plasmon resonance is involved (Roqué et al. 2006).
Models of light scattering can be divided into three domains based on a dimensionless size parameter, α, which is defined as
\alpha = \frac{\pi D_\text{p}}{\lambda} ,
where πD_p is the circumference of a particle and λ is the wavelength of incident radiation in the medium. Based on the value of α, these domains are as follows (a short classification sketch in code follows the list):
α ≪ 1: Rayleigh scattering (small particle compared to wavelength of light);
α ≈ 1: Mie scattering (particle about the same size as wavelength of light, valid only for spheres);
α ≫ 1: geometric scattering (particle much larger than wavelength of light).
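A minimal classification sketch (the numerical cutoffs of 0.1 and 10 are illustrative conventions, not sharp physical boundaries):

```python
import math

def scattering_regime(diameter_m, wavelength_m):
    """Classify by the dimensionless size parameter alpha = pi * D / lambda."""
    alpha = math.pi * diameter_m / wavelength_m
    if alpha < 0.1:        # alpha << 1
        regime = "Rayleigh"
    elif alpha < 10.0:     # alpha ~ 1
        regime = "Mie"
    else:                  # alpha >> 1
        regime = "geometric"
    return alpha, regime

# Green light (wavelength ~ 550 nm) scattering off particles of various sizes.
for d in (1e-9, 100e-9, 1e-6, 100e-6):
    alpha, regime = scattering_regime(d, 550e-9)
    print(f"D = {d:9.2e} m   alpha = {alpha:10.3f}   -> {regime}")
```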
Rayleigh scattering is a process in which electromagnetic radiation (including light) is scattered by a small spherical volume of variant refractive indexes, such as a particle, bubble, droplet, or even a density fluctuation. This effect was first modeled successfully by Lord Rayleigh, from whom it gets its name. In order for Rayleigh's model to apply, the sphere must be much smaller in diameter than the wavelength (λ) of the scattered wave; typically the upper limit is taken to be about 1/10 the wavelength. In this size regime, the exact shape of the scattering center is usually not very significant and can often be treated as a sphere of equivalent volume. The inherent scattering that radiation undergoes passing through a pure gas is due to microscopic density fluctuations as the gas molecules move around, which are normally small enough in scale for Rayleigh's model to apply. This scattering mechanism is the primary cause of the blue color of the Earth's sky on a clear day, as the shorter blue wavelengths of sunlight passing overhead are more strongly scattered than the longer red wavelengths according to Rayleigh's famous 1/λ⁴ relation. Along with absorption, such scattering is a major cause of the attenuation of radiation by the atmosphere. The degree of scattering varies as a function of the ratio of the particle diameter to the wavelength of the radiation, along with many other factors including polarization, angle, and coherence.
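As a quick worked example of the 1/λ⁴ dependence (wavelengths chosen only for illustration), blue light at roughly 450 nm is scattered several times more strongly than red light at roughly 700 nm:

```latex
\frac{I_{\text{blue}}}{I_{\text{red}}}
\approx \left(\frac{\lambda_{\text{red}}}{\lambda_{\text{blue}}}\right)^{4}
= \left(\frac{700}{450}\right)^{4} \approx 5.9
```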
For larger diameters, the problem of electromagnetic scattering by spheres was first solved by Gustav Mie, and scattering by spheres larger than the Rayleigh range is therefore usually known as Mie scattering. In the Mie regime, the shape of the scattering center becomes much more significant and the theory only applies well to spheres and, with some modification, spheroids and ellipsoids. Closed-form solutions for scattering by certain other simple shapes exist, but no general closed-form solution is known for arbitrary shapes.
Both Mie and Rayleigh scattering are considered elastic scattering processes, in which the energy (and thus wavelength and frequency) of the light is not substantially changed. However, electromagnetic radiation scattered by moving scattering centers does undergo a Doppler shift, which can be detected and used to measure the velocity of the scattering center/s in forms of techniques such as lidar and radar. This shift involves a slight change in energy.
At values of the ratio of particle diameter to wavelength more than about 10, the laws of geometric optics are mostly sufficient to describe the interaction of light with the particle. Mie theory can still be used for these larger spheres, but the solution often becomes numerically unwieldy.
For modeling of scattering in cases where the Rayleigh and Mie models do not apply such as larger, irregularly shaped particles, there are many numerical methods that can be used. The most common are finite-element methods which solve Maxwell's equations to find the distribution of the scattered electromagnetic field. Sophisticated software packages exist which allow the user to specify the refractive index or indices of the scattering feature in space, creating a 2- or sometimes 3-dimensional model of the structure. For relatively large and complex structures, these models usually require substantial execution times on a computer.
Electrophoresis involves the migration of macromolecules under the influence of an electric field. Electrophoretic light scattering involves applying an electric field to a liquid, which makes the particles move; the larger the charge on a particle, the faster it moves.
See also
Attenuation#Light scattering
Backscattering
Bragg diffraction
Brillouin scattering
Characteristic mode analysis
Compton scattering
Coulomb scattering
Deep scattering layer
Diffuse sky radiation
Doppler effect
Dynamic Light Scattering
Electrophoretic light scattering
Extinction
Haag–Ruelle scattering theory
Kikuchi line
Light scattering by particles
Linewidth
Mie scattering
Mie theory
Molecular scattering
Mott scattering
Neutron scattering
Phase space measurement with forward modeling
Photon diffusion
Powder diffraction
Raman scattering
Rayleigh scattering
Resonances in scattering from potentials
Rutherford scattering
Small-angle scattering
Scattering amplitude
Scattering from rough surfaces
Scintillation (physics)
S-Matrix
Tyndall effect
Thomson scattering
Wolf effect
X-ray crystallography
References
External links
Research group on light scattering and diffusion in complex systems
Multiple light scattering from a photonic science point of view
Neutron Scattering Web
Neutron and X-Ray Scattering
World directory of neutron scattering instruments
Scattering and diffraction
Optics Classification and Indexing Scheme (OCIS), Optical Society of America, 1997
Lectures of the European school on theoretical methods for electron and positron induced chemistry, Prague, Feb. 2005
E. Koelink, Lectures on scattering theory, Delft the Netherlands 2006
Physical phenomena
Atomic physics
Nuclear physics
Particle physics
Radar theory
Scattering, absorption and radiative transfer (optics) | 0.76562 | 0.995039 | 0.761822 |
How Not to Be Wrong | How Not to Be Wrong: The Power of Mathematical Thinking, written by Jordan Ellenberg, is a New York Times Best Selling book that connects various economic and societal philosophies with basic mathematics and statistical principles.
Summary
How Not to Be Wrong explains the mathematics behind some of the simplest day-to-day thinking. It then goes into more complex decisions people make. For example, Ellenberg explains many misconceptions about lotteries and whether or not they can be mathematically beaten.
Ellenberg uses mathematics to examine real-world issues ranging from the love of straight lines in the reporting of obesity to the game theory of missing flights, from the relevance to digestion of regression to the mean to the counter-intuitive Berkson's paradox.
Chapter summaries
Part 1: Linearity
Chapter 1, Less Like Sweden: Ellenberg encourages his readers to think nonlinearly, and know that "where you should go depends on where you are". To develop his thought, he relates this to Voodoo economics and the Laffer curve of taxation. Although there are few numbers in this chapter, the point is that the overall concept still ties back to mathematical thinking.
Chapter 2, Straight Locally, Curved Globally: This chapter puts an emphasis on recognizing that "not every curve is a straight line", and makes reference to multiple mathematical concepts including the Pythagorean theorem, the derivation of Pi, Zeno's paradox, and non-standard analysis.
Chapter 3, Everyone is Obese: Here, Ellenberg dissects some common statistics about Obesity trends in the United States. He ties it into linear regression, and points out basic contradictions made by the original arguments presented. He uses many examples to make his point, including the correlation between SAT scores and tuition rates, as well as the trajectory of missiles.
Chapter 4, How Many Is That In Dead Americans: Ellenberg analyzes statistics about the number of casualties around the world in different countries resulting from war. He notes that although proportion in these cases matters, it doesn't always necessarily make sense when relating them to American deaths. He uses examples of deaths due to brain cancer, the Binomial Theorem, and voting polls to reinforce his point.
Chapter 5, More Pie Than Plate: This chapter goes in depth with number percentages relating to employment rates, and references political allegations. He emphasizes that "actual numbers in these cases aren't important, but knowing what to divide by what is mathematics in its truest form", noting that mathematics in itself is in everything.
Part 2: Inference
Chapter 6, The Baltimore Stockbroker and the Bible Code: Ellenberg tries to get across that mathematics is in every single thing that we do. To support this, he uses examples about hidden codes in the Torah determined by Equidistant Letter Sequence, a stockbroker parable, noting that "improbable things happen", and wiggle room attributes to that.
Chapter 7, Dead Fish Don't Read Minds: This chapter touches on a lot of things. The basis for this chapter are stories about a dead salmon's MRI, trial and error in algebra, and birth control statistics as well as basketball statistics (the "hot hand"). He also notes that poetry can be compared to mathematics in that it's "trained by exposure to stimuli, and manipulable in the lab". Additionally, he writes of a few other mathematical concepts, including the Null hypothesis and the Quartic function.
Chapter 8, Reductio Ad Unlikely: This chapter focuses on the works and theorems/concepts of many famous mathematicians and philosophers. These include but aren't limited to the reductio ad absurdum of Aristotle, a look into the constellation Taurus by John Michell, and Yitang "Tom" Zhang's work on the "bounded gaps" conjecture. He also delves into explaining rational numbers, the prime number theorem, and makes up his own word, "flogarithms".
Chapter 9, The International Journal of Haruspicy: Ellenberg relates the practice of haruspicy, genes that affect schizophrenia, and the accuracy of published papers, as well as other things, to the "P value" or statistical significance. He also notes at the end that Jerzy Neyman and Egon Pearson claimed that statistics is about doing, not interpreting, and then relates this to other real-world examples.
Chapter 10, Are You There, God? It's Me, Bayesian Inference: This chapter relates algorithms to things ranging from God, to Netflix movie recommendations, and to terrorism on Facebook. Ellenberg goes through quite a few mathematical concepts in this chapter, which include conditional probabilities relating back to "P value", posterior probabilities, Bayesian inference, and Bayes' theorem as they correlate to radio psychics and probability. Additionally, he uses Punnett squares and other methods to explore the probability of God's existence.
Part 3: Expectation
Chapter 11, What to Expect When You're Expecting to Win the Lottery: This chapter discusses the different probabilities of winning the lottery and expected value as it relates to lottery tickets, including the story of how MIT students managed to "win" the lottery every time in their town. Ellenberg also talks about the Law of Large numbers again, as well as introducing the Additivity of expected value and the games of Franc-Carreau or the "needle/noodle problem". Many mathematicians and other famous people are mentioned in this chapter, including Georges-Louis Leclerc, Comte de Buffon, and James Harvey.
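A minimal expected-value sketch in the spirit of that chapter (the prize tiers, odds and ticket price below are invented for illustration and are not the book's or any real lottery's numbers):

```python
ticket_price = 2.00
prize_table = [                 # (probability, payout) -- hypothetical tiers
    (1 / 10_000_000, 1_000_000.0),
    (1 / 10_000,           500.0),
    (1 / 100,                 5.0),
]

expected_winnings = sum(p * payout for p, payout in prize_table)
expected_value = expected_winnings - ticket_price
print(f"expected winnings per ticket: ${expected_winnings:.2f}")
print(f"expected value per ticket:    ${expected_value:.2f}")   # negative: a losing bet on average
```

Only when special circumstances push the expected winnings above the ticket price, as in the MIT story Ellenberg recounts, does buying tickets become mathematically favourable.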
Chapter 12, Miss More Planes: The mathematical concepts in this chapter include utility and utils, and the Laffer curve again. This chapter discusses the amount of time spent in the airport as it relates to flights being missed, Daniel Ellsberg, Blaise Pascal's Pensées, the probability of God once more, and the St. Petersburg paradox.
Chapter 13, Where the Train Tracks Meet: This chapter includes discussions about the lottery again, and geometry in renaissance paintings. It introduces some things about coding, including error correcting code, Hamming code, and code words. It also mentions Hamming distance as it relates to language. The mathematical concepts included in this chapter are variance, the projective plane, the Fano plane, and the face-centered cubic lattice.
Part 4: Regression
Chapter 14, The Triumph of Mediocrity: This chapter discusses mediocrity in everyday business according to Horace Secrist. It also includes discussions about Francis Galton's "Hereditary Genius", and baseball statistics about home runs.
Chapter 15, Galton's Ellipse: This chapter focuses on Sir Francis Galton and his work on scatter plots, as well as the ellipses formed by them, correlation and causation, and the development from linear systems to quadratics. This chapter also addresses conditional and unconditional expectation, regression to the mean, eccentricity, bivariate normal distribution, and dimensions in geometry.
Chapter 16, Does Lung Cancer Make You Smoke Cigarettes: This chapter explores the correlation between smoking cigarettes and lung cancer, using work from R.A. Fisher. It also goes into Berkson's Fallacy, and uses the attractiveness of men to develop the thought, and talks about common effect at the end.
Part 5: Existence
Chapter 17, There Is No Such Thing As Public Opinion: This chapter delves into the workings of a majority rules system, and points out the contradictions and confusion of it all, ultimately stating that public opinion doesn't exist. It uses many examples to make its point, including different election statistics, the death sentence of a mentally retarded person, and a case with Justice Antonin Scalia. It also includes mathematical terms/concepts such as independence of irrelevant alternatives, asymmetric domination effect, Australia's single transferable vote, and Condorcet paradoxes.
Chapter 18, "Out of Nothing, I Have Created a Strange New Universe": This chapter talks about János Bolyais, and his work on the parallel postulate. Others mentioned in this chapter include David Hilbert, and Gottlob Frege. It also explored points and lines, Formalism, and what the author calls a "Genius" mentality.
How to be Right
This last chapter introduces one last concept, ex falso quodlibet, and mentions Theodore Roosevelt, as well as the election between Obama and Romney. The author ends the book with encouraging statements, noting that it's okay to not know everything, and that we all learn from failure. He ends by saying that to love math is to be "touched by fire and bound by reason", and that we should all use it well.
Reception
Bill Gates endorsed How Not to Be Wrong and included it in his 2016 "5 Books to Read This Summer" list.
The Washington Post reported that the book is "brilliantly engaging... part of the sheer intellectual joy of the book is watching the author leap nimbly from topic to topic, comparing slime molds to the Bush–Gore Florida vote, criminology to Beethoven's Ninth Symphony. The final effect is of one enormous mosaic unified by mathematics."
The Wall Street Journal said, "Mathematics, Mr. Ellenberg writes, is a kind of 'X-ray specs that reveal hidden structures underneath the messy and chaotic surface of the world.'" The Guardian wrote, "Ellenberg's prose is a delight – informal and robust, irreverent yet serious."
Business Insider said it's "A collection of fascinating examples of math and its surprising applications...How Not To Be Wrong is full of interesting and weird mathematical tools and observations".
Publishers Weekly writes "Wry, accessible, and entertaining... Ellenberg finds the common-sense math at work in the every day world, and his vivid examples and clear descriptions show how 'math is woven into the way we reason'".
Times Higher Education notes "How Not To Be Wrong is beautifully written, holding the reader's attention throughout with well-chosen material, illuminating exposition, wit, and helpful examples...Ellenberg shares Gardner's remarkable ability to write clearly and entertainingly, bringing in deep mathematical ideas without the reader registering their difficulty".
Salon describes the book as "A poet-mathematician offers an empowering and entertaining primer for the age of Big Data...A rewarding popular math book for just about anyone".
References
External links
Official Website
2014 non-fiction books
Popular mathematics books
Penguin Books books | 0.778208 | 0.978937 | 0.761816 |
Naturally aspirated engine | A naturally aspirated engine, also known as a normally aspirated engine, and abbreviated to N/A or NA, is an internal combustion engine in which air intake depends solely on atmospheric pressure and does not have forced induction through a turbocharger or a supercharger.
Description
In a naturally aspirated engine, air for combustion (Diesel cycle in a diesel engine or specific types of Otto cycle in petrol engines, namely petrol direct injection) or an air/fuel mixture (traditional Otto cycle petrol engines), is drawn into the engine's cylinders by atmospheric pressure acting against a partial vacuum that occurs as the piston travels downwards toward bottom dead centre during the intake stroke. Owing to innate restriction in the engine's inlet tract, which includes the intake manifold, a small pressure drop occurs as air is drawn in, resulting in a volumetric efficiency of less than 100 percent—and a less than complete air charge in the cylinder. The density of the air charge, and therefore the engine's maximum theoretical power output, in addition to being influenced by induction system restriction, is also affected by engine speed and atmospheric pressure, the latter of which decreases as the operating altitude increases.
This is in contrast to a forced-induction engine, in which a mechanically driven supercharger or an exhaust-driven turbocharger is employed to facilitate increasing the mass of intake air beyond what could be produced by atmospheric pressure alone. Nitrous oxide can also be used to artificially increase the mass of oxygen present in the intake air. This is accomplished by injecting liquid nitrous oxide into the intake, which supplies significantly more oxygen in a given volume than is possible with atmospheric air. Nitrous oxide is 36.3% available oxygen by mass after it decomposes, as compared with atmospheric air at 20.95%. Nitrous oxide also boils at about −88.5 °C (−127 °F) at atmospheric pressure and offers significant cooling from the latent heat of vaporization, which also aids in increasing the overall air charge density significantly compared to natural aspiration.
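A quick arithmetic check of the oxygen-mass-fraction figure quoted above (standard atomic masses assumed):

```python
m_N, m_O = 14.007, 15.999        # atomic masses in g/mol
m_N2O = 2 * m_N + m_O            # molar mass of nitrous oxide, ~44.01 g/mol
oxygen_mass_fraction = m_O / m_N2O
print(f"oxygen mass fraction of N2O: {oxygen_mass_fraction:.1%}")  # ~36.4%, close to the 36.3% quoted
```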
Applications
Most automobile petrol engines, as well as many small engines used for non-automotive purposes, are naturally aspirated. Most modern diesel engines powering highway vehicles are turbocharged to produce a more favourable power-to-weight ratio, a higher torque curve, as well as better fuel efficiency and lower exhaust emissions. Turbocharging is nearly universal on diesel engines used in railroad, marine, and commercial stationary applications (electrical power generation, for example). Forced induction is also used with reciprocating aircraft engines to negate some of the power loss that occurs as the aircraft climbs to higher altitudes.
Advantages and disadvantages
The advantages and disadvantages of a naturally aspirated engine in relation to a same-sized engine relying on forced induction include:
Advantages
Easier to maintain and repair
Lower development and production costs
Increased reliability, partly due to fewer separate, moving parts
More direct throttle response than a turbo system due to the lack of turbo lag (an advantage also shared with superchargers)
Less potential for overheating and/or uncontrolled combustion (pinging/knocking)
Disadvantages
Decreased efficiency
Decreased power-to-weight ratio
Decreased potential for tuning
Increased power loss at higher elevation (due to lower air pressure) compared to forced induction engines
See also
Carburetor
Fuel injection
Manifold vacuum
References
Engine technology
Internal combustion engine | 0.764657 | 0.996266 | 0.761802 |
D'Alembert operator | In special relativity, electromagnetism and wave theory, the d'Alembert operator (denoted by a box: ), also called the d'Alembertian, wave operator, box operator or sometimes quabla operator (cf. nabla symbol) is the Laplace operator of Minkowski space. The operator is named after French mathematician and physicist Jean le Rond d'Alembert.
In Minkowski space, in standard coordinates (t, x, y, z), it has the form
\Box = \partial^{\mu} \partial_{\mu} = \eta^{\mu\nu} \partial_{\nu} \partial_{\mu} = \frac{1}{c^{2}} \frac{\partial^{2}}{\partial t^{2}} - \frac{\partial^{2}}{\partial x^{2}} - \frac{\partial^{2}}{\partial y^{2}} - \frac{\partial^{2}}{\partial z^{2}} = \frac{1}{c^{2}} \frac{\partial^{2}}{\partial t^{2}} - \nabla^{2} .
Here ∇² is the 3-dimensional Laplacian and η^{μν} is the inverse Minkowski metric with
\eta_{00} = 1, \qquad \eta_{11} = \eta_{22} = \eta_{33} = -1, \qquad \eta_{\mu\nu} = 0 \quad \text{for } \mu \neq \nu .
Note that the μ and ν summation indices range from 0 to 3: see Einstein notation.
(Some authors alternatively use the negative metric signature (−, +, +, +), with η₀₀ = −1 and η₁₁ = η₂₂ = η₃₃ = 1.)
Lorentz transformations leave the Minkowski metric invariant, so the d'Alembertian yields a Lorentz scalar. The above coordinate expressions remain valid for the standard coordinates in every inertial frame.
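A quick symbolic check (a sketch using SymPy and the (+, −, −, −) convention above): the d'Alembertian annihilates a plane wave exactly when ω = ck, the light-cone dispersion relation.

```python
import sympy as sp

t, x, y, z, c, k, w = sp.symbols('t x y z c k omega', positive=True)
f = sp.exp(sp.I * (k * x - w * t))   # plane wave travelling in the +x direction

def dalembertian(u):
    """(1/c^2) d^2/dt^2 - d^2/dx^2 - d^2/dy^2 - d^2/dz^2 acting on u."""
    return (sp.diff(u, t, 2) / c**2
            - sp.diff(u, x, 2) - sp.diff(u, y, 2) - sp.diff(u, z, 2))

box_f = sp.simplify(dalembertian(f))
print(box_f)                            # proportional to (k**2 - omega**2/c**2)
print(sp.simplify(box_f.subs(w, c*k)))  # 0 on the dispersion relation omega = c*k
```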
The box symbol and alternate notations
There are a variety of notations for the d'Alembertian. The most common are the box symbol □, whose four sides represent the four dimensions of space-time, and the box-squared symbol □², which emphasizes the scalar property through the squared term (much like the Laplacian). In keeping with the triangular notation for the Laplacian, a corresponding triangle-based symbol is also sometimes used.
Another way to write the d'Alembertian in flat standard coordinates is ∂². This notation is used extensively in quantum field theory, where partial derivatives are usually indexed, so the lack of an index with the squared partial derivative signals the presence of the d'Alembertian.
Sometimes the box symbol is used to represent the four-dimensional Levi-Civita covariant derivative. The symbol ∇ is then used to represent the space derivatives, but this is coordinate chart dependent.
Applications
The wave equation for small vibrations is of the form
\Box\, u(x, t) = 0 ,
where u(x, t) is the displacement.
The wave equation for the electromagnetic field in vacuum is
\Box A^{\mu} = 0 ,
where A^μ is the electromagnetic four-potential in Lorenz gauge.
The Klein–Gordon equation has the form
\left( \Box + \frac{m^{2} c^{2}}{\hbar^{2}} \right) \psi = 0 .
Green's function
The Green's function, G(x − x′), for the d'Alembertian is defined by the equation
\Box\, G(x - x') = \delta(x - x') ,
where δ is the multidimensional Dirac delta function and x and x′ are two points in Minkowski space.
A special solution is given by the retarded Green's function, which corresponds to signal propagation only forward in time:
G(\vec{r}, t) = \frac{1}{4 \pi |\vec{r}\,|} \, \Theta(t) \, \delta\!\left( t - \frac{|\vec{r}\,|}{c} \right) ,
where Θ is the Heaviside step function.
See also
Four-gradient
d'Alembert's formula
Klein–Gordon equation
Relativistic heat conduction
Ricci calculus
Wave equation
References
External links
, originally printed in Rendiconti del Circolo Matematico di Palermo.
Differential operators
Hyperbolic partial differential equations | 0.769393 | 0.990127 | 0.761796 |
Airfoil | An airfoil (American English) or aerofoil (British English) is a streamlined body that is capable of generating significantly more lift than drag. Wings, sails and propeller blades are examples of airfoils. Foils of similar function designed with water as the working fluid are called hydrofoils.
When oriented at a suitable angle, a solid body moving through a fluid deflects the oncoming fluid (for fixed-wing aircraft, a downward deflection), resulting in a force on the airfoil in the direction opposite to the deflection. This force is known as aerodynamic force and can be resolved into two components: lift (perpendicular to the remote freestream velocity) and drag (parallel to the freestream velocity).
The lift on an airfoil is primarily the result of its angle of attack. Most foil shapes require a positive angle of attack to generate lift, but cambered airfoils can generate lift at zero angle of attack. Airfoils can be designed for use at different speeds by modifying their geometry: those for subsonic flight generally have a rounded leading edge, while those designed for supersonic flight tend to be slimmer with a sharp leading edge. All have a sharp trailing edge.
The air deflected by an airfoil causes it to generate a lower-pressure "shadow" above and behind itself. This pressure difference is accompanied by a velocity difference, via Bernoulli's principle, so the resulting flowfield about the airfoil has a higher average velocity on the upper surface than on the lower surface. In some situations (e.g. inviscid potential flow) the lift force can be related directly to the average top/bottom velocity difference without computing the pressure by using the concept of circulation and the Kutta–Joukowski theorem.
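For a two-dimensional airfoil section in steady, inviscid flow, the Kutta–Joukowski theorem mentioned above takes the standard form

```latex
L' = \rho_{\infty} \, V_{\infty} \, \Gamma ,
```

where L′ is the lift per unit span, ρ∞ and V∞ are the freestream density and speed, and Γ is the circulation around the airfoil.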
Overview
The wings and stabilizers of fixed-wing aircraft, as well as helicopter rotor blades, are built with airfoil-shaped cross sections. Airfoils are also found in propellers, fans, compressors and turbines. Sails are also airfoils, and the underwater surfaces of sailboats, such as the centerboard, rudder, and keel, are similar in cross-section and operate on the same principles as airfoils. Swimming and flying creatures and even many plants and sessile organisms employ airfoils/hydrofoils: common examples being bird wings, the bodies of fish, and the shape of sand dollars. An airfoil-shaped wing can create downforce on an automobile or other motor vehicle, improving traction.
When the wind is obstructed by an object such as a flat plate, a building, or the deck of a bridge, the object will experience drag and also an aerodynamic force perpendicular to the wind. This does not mean the object qualifies as an airfoil. Airfoils are highly-efficient lifting shapes, able to generate more lift than similarly sized flat plates of the same area, and able to generate lift with significantly less drag. Airfoils are used in the design of aircraft, propellers, rotor blades, wind turbines and other applications of aeronautical engineering.
A lift and drag curve obtained in wind tunnel testing is shown on the right. The curve represents an airfoil with a positive camber so some lift is produced at zero angle of attack. With increased angle of attack, lift increases in a roughly linear relation, called the slope of the lift curve. At about 18 degrees this airfoil stalls, and lift falls off quickly beyond that. The drop in lift can be explained by the action of the upper-surface boundary layer, which separates and greatly thickens over the upper surface at and past the stall angle. The thickened boundary layer's displacement thickness changes the airfoil's effective shape, in particular it reduces its effective camber, which modifies the overall flow field so as to reduce the circulation and the lift. The thicker boundary layer also causes a large increase in pressure drag, so that the overall drag increases sharply near and past the stall point.
Airfoil design is a major facet of aerodynamics. Various airfoils serve different flight regimes. Asymmetric airfoils can generate lift at zero angle of attack, while a symmetric airfoil may better suit frequent inverted flight as in an aerobatic airplane. In the region of the ailerons and near a wingtip a symmetric airfoil can be used to increase the range of angles of attack to avoid spin–stall. Thus a large range of angles can be used without boundary layer separation. Subsonic airfoils have a round leading edge, which is naturally insensitive to the angle of attack. The cross section is not strictly circular, however: the radius of curvature is increased before the wing achieves maximum thickness to minimize the chance of boundary layer separation. This elongates the wing and moves the point of maximum thickness back from the leading edge.
Supersonic airfoils are much more angular in shape and can have a very sharp leading edge, which is very sensitive to angle of attack. A supercritical airfoil has its maximum thickness close to the leading edge to have a lot of length to slowly shock the supersonic flow back to subsonic speeds. Generally such transonic airfoils and also the supersonic airfoils have a low camber to reduce drag divergence. Modern aircraft wings may have different airfoil sections along the wing span, each one optimized for the conditions in each section of the wing.
Movable high-lift devices, flaps and sometimes slats, are fitted to airfoils on almost every aircraft. A trailing-edge flap acts similarly to an aileron; unlike an aileron, however, it can be partially retracted into the wing when not in use.
A laminar-flow wing has its maximum thickness near the middle of the camber line. Analyzing the Navier–Stokes equations in the linear regime shows that a negative pressure gradient along the flow has the same effect as reducing the speed. So with the maximum camber in the middle, maintaining a laminar flow over a larger percentage of the wing at a higher cruising speed is possible. However, some surface contamination will disrupt the laminar flow, making it turbulent. For example, with rain on the wing, the flow will be turbulent. Under certain conditions, insect debris on the wing will cause the loss of small regions of laminar flow as well. Before NASA's research in the 1970s and 1980s, the aircraft design community understood from application attempts in the WW II era that laminar-flow wing designs were not practical using common manufacturing tolerances and surface imperfections. That belief changed after new manufacturing methods were developed with composite materials (e.g. laminar-flow airfoils developed by Professor Franz Wortmann for use with wings made of fibre-reinforced plastic). Machined metal methods were also introduced. NASA's research in the 1980s revealed the practicality and usefulness of laminar flow wing designs and opened the way for laminar-flow applications on modern practical aircraft surfaces, from subsonic general aviation aircraft to transonic large transport aircraft, to supersonic designs.
Schemes have been devised to define airfoils – an example is the NACA system. Various airfoil generation systems are also used. An example of a general purpose airfoil that finds wide application, and pre–dates the NACA system, is the Clark-Y. Today, airfoils can be designed for specific functions by the use of computer programs.
Airfoil terminology
The various terms related to airfoils are defined below:
The suction surface (a.k.a. upper surface) is generally associated with higher velocity and lower static pressure.
The pressure surface (a.k.a. lower surface) has a comparatively higher static pressure than the suction surface. The pressure gradient between these two surfaces contributes to the lift force generated for a given airfoil.
The geometry of the airfoil is described with a variety of terms:
The leading edge is the point at the front of the airfoil that has maximum curvature (minimum radius).
The trailing edge is the point on the airfoil most remote from the leading edge. The angle between the upper and lower surfaces at the trailing edge is the trailing edge angle.
The chord line is the straight line connecting leading and trailing edges. The chord length, or simply chord, $c$, is the length of the chord line. That is the reference dimension of the airfoil section.
The shape of the airfoil is defined using the following geometrical parameters:
The mean camber line or mean line is the locus of points midway between the upper and lower surfaces. Its shape depends on the thickness distribution along the chord;
The thickness of an airfoil varies along the chord. It may be measured in either of two ways:
Thickness measured perpendicular to the camber line. This is sometimes described as the "American convention";
Thickness measured perpendicular to the chord line. This is sometimes described as the "British convention".
Some important parameters to describe an airfoil's shape are its camber and its thickness. For example, an airfoil of the NACA 4-digit series such as the NACA 2415 (to be read as 2 – 4 – 15) describes an airfoil with a camber of 0.02 chord located at 0.40 chord, with 0.15 chord of maximum thickness.
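A small Python helper (an added sketch, not an official NACA tool; the function name is my own) makes the decoding rule explicit:
```python
def parse_naca4(code: str) -> dict:
    """Unpack a 4-digit NACA designation: maximum camber (% chord), its
    position (tenths of chord), and maximum thickness (% chord)."""
    if len(code) != 4 or not code.isdigit():
        raise ValueError("expected a 4-digit NACA code such as '2415'")
    return {
        "max_camber": int(code[0]) / 100.0,        # fraction of chord
        "camber_position": int(code[1]) / 10.0,    # fraction of chord
        "thickness": int(code[2:]) / 100.0,        # fraction of chord
    }

print(parse_naca4("2415"))
# {'max_camber': 0.02, 'camber_position': 0.4, 'thickness': 0.15}
```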
Finally, important concepts used to describe the airfoil's behaviour when moving through a fluid are:
The aerodynamic center, which is the chord-wise location about which the pitching moment is independent of the lift coefficient and the angle of attack.
The center of pressure, which is the chord-wise location about which the pitching moment is momentarily zero. On a cambered airfoil, the center of pressure is not a fixed location as it moves in response to changes in angle of attack and lift coefficient.
In two-dimensional flow around a uniform wing of infinite span, the slope of the lift curve is determined primarily by the trailing edge angle. The slope is greatest if the angle is zero; and decreases as the angle increases. For a wing of finite span, the aspect ratio of the wing also significantly influences the slope of the curve. As aspect ratio decreases, the slope also decreases.
Thin airfoil theory
Thin airfoil theory is a simple theory of airfoils that relates angle of attack to lift for incompressible, inviscid flows. It was devised by German mathematician Max Munk and further refined by British aerodynamicist Hermann Glauert and others in the 1920s. The theory idealizes the flow around an airfoil as two-dimensional flow around a thin airfoil. It can be imagined as addressing an airfoil of zero thickness and infinite wingspan.
Thin airfoil theory was particularly notable in its day because it provided a sound theoretical basis for the following important properties of airfoils in two-dimensional inviscid flow:
1. On a symmetric airfoil, the center of pressure and aerodynamic center are coincident and lie exactly one quarter of the chord behind the leading edge.
2. On a cambered airfoil, the aerodynamic center lies exactly one quarter of the chord behind the leading edge, but the position of the center of pressure moves when the angle of attack changes.
3. The slope of the lift coefficient versus angle of attack line is $2\pi$ units per radian.
As a consequence of (3), the section lift coefficient of a thin symmetric airfoil of infinite wingspan is
$c_L = 2\pi\alpha,$
where $c_L$ is the section lift coefficient and
$\alpha$ is the angle of attack in radians, measured relative to the chord line.
(The above expression is also applicable to a cambered airfoil where $\alpha$ is the angle of attack measured relative to the zero-lift line instead of the chord line.)
Also as a consequence of (3), the section lift coefficient of a cambered airfoil of infinite wingspan is
$c_L = c_{L_0} + 2\pi\alpha,$
where $c_{L_0}$ is the section lift coefficient when the angle of attack is zero.
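For a quick sense of scale, a tiny Python computation (an added illustration; the 5° angle of attack and the zero-angle coefficient of 0.2 are made-up inputs):
```python
import math

alpha = math.radians(5.0)                     # angle of attack in radians
cl_symmetric = 2.0 * math.pi * alpha          # symmetric thin airfoil
cl_cambered = 0.2 + 2.0 * math.pi * alpha     # cambered thin airfoil with c_L0 = 0.2

print(f"c_L (symmetric) = {cl_symmetric:.3f}")   # about 0.548
print(f"c_L (cambered)  = {cl_cambered:.3f}")    # about 0.748
```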
Thin airfoil theory assumes the air is an inviscid fluid so does not account for the stall of the airfoil, which usually occurs at an angle of attack between 10° and 15° for typical airfoils. In the mid-late 2000s, however, a theory predicting the onset of leading-edge stall was proposed by Wallace J. Morris II in his doctoral thesis. Morris's subsequent refinements contain the details on the current state of theoretical knowledge on the leading-edge stall phenomenon. Morris's theory predicts the critical angle of attack for leading-edge stall onset as the condition at which a global separation zone is predicted in the solution for the inner flow. Morris's theory demonstrates that a subsonic flow about a thin airfoil can be described in terms of an outer region, around most of the airfoil chord, and an inner region, around the nose, that asymptotically match each other. As the flow in the outer region is dominated by classical thin airfoil theory, Morris's equations exhibit many components of thin airfoil theory.
Derivation
In thin airfoil theory, the width of the (2D) airfoil is assumed negligible, and the airfoil itself replaced with a 1D blade along its camber line, oriented at the angle of attack $\alpha$. Let the position along the blade be $x$, ranging from $x = 0$ at the wing's front to $x = c$ at the trailing edge; the camber of the airfoil, $y(x)$, is assumed sufficiently small that one need not distinguish between $x$ and position relative to the fuselage.
The flow across the airfoil generates a circulation around the blade, which can be modeled as a vortex sheet of position-varying strength $\gamma(x)$. The Kutta condition implies that $\gamma(c) = 0$, but the strength is singular at the bladefront, with $\gamma(x) \propto 1/\sqrt{x}$ for $x \to 0$. If the main flow has density $\rho$ and speed $V$, then the Kutta–Joukowski theorem gives that the total lift force is proportional to $\int_0^c \gamma(x)\,\mathrm{d}x$ and its moment about the leading edge proportional to $\int_0^c x\,\gamma(x)\,\mathrm{d}x$.
From the Biot–Savart law, the vorticity produces a flow field oriented normal to the airfoil at $x$ with magnitude $\frac{1}{2\pi}\int_0^c \frac{\gamma(x')}{x - x'}\,\mathrm{d}x'$. Since the airfoil is an impermeable surface, this flow must balance an inverse flow from $V$. By the small-angle approximation, $V$ is inclined at angle $\alpha - \frac{\mathrm{d}y}{\mathrm{d}x}$ relative to the blade at position $x$, and the normal component is correspondingly $V\left(\alpha - \frac{\mathrm{d}y}{\mathrm{d}x}\right)$. Thus, $\gamma(x)$ must satisfy the convolution equation
$\frac{1}{2\pi}\int_0^c \frac{\gamma(x')}{x - x'}\,\mathrm{d}x' = V\left(\alpha - \frac{\mathrm{d}y}{\mathrm{d}x}\right),$
which uniquely determines it in terms of known quantities.
An explicit solution can be obtained through first the change of variables $x = \frac{c}{2}\left(1 - \cos\theta\right)$ and then expanding both $\frac{\mathrm{d}y}{\mathrm{d}x}$ and $\gamma(x)$ as a nondimensionalized Fourier series in $\theta$ with a modified lead term:
$\gamma(\theta) = 2V\left(A_0\,\frac{1 + \cos\theta}{\sin\theta} + \sum_{n \geq 1} A_n \sin(n\theta)\right).$
The resulting lift and moment depend on only the first few terms of this series.
The lift coefficient satisfies
$c_L = 2\pi\left(A_0 + \frac{A_1}{2}\right)$
and the moment coefficient about the leading edge
$c_M = -\frac{\pi}{2}\left(A_0 + A_1 - \frac{A_2}{2}\right).$
The moment about the 1/4 chord point will thus be
$c_{M,c/4} = -\frac{\pi}{4}\left(A_1 - A_2\right).$
From this it follows that the center of pressure is aft of the 'quarter-chord' point $0.25\,c$, by
$\Delta x = \frac{\pi}{4}\,\frac{A_1 - A_2}{c_L}\,c.$
The aerodynamic center is the position at which the pitching moment does not vary with a change in lift coefficient, i.e. $\frac{\partial c_M}{\partial c_L} = 0$:
Thin-airfoil theory shows that, in two-dimensional inviscid flow, the aerodynamic center is at the quarter-chord position.
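The derivation above can be exercised numerically. The following Python sketch (an added illustration; the coefficient formulas A0 = α − (1/π)∫(dy/dx)dθ and An = (2/π)∫(dy/dx)cos(nθ)dθ are classical thin-airfoil results, and the 2% parabolic camber line and 3° angle of attack are made-up inputs) recovers the closed-form lift and quarter-chord moment:
```python
import numpy as np

def integrate(f, x):
    """Composite trapezoidal rule, to avoid depending on a specific NumPy version."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

alpha = np.radians(3.0)                  # angle of attack
m = 0.02                                 # maximum camber, fraction of chord

theta = np.linspace(0.0, np.pi, 2001)
dydx = 4.0 * m * np.cos(theta)           # slope of the parabolic camber line y = 4m(x/c)(1 - x/c)

A0 = alpha - integrate(dydx, theta) / np.pi
A1 = 2.0 / np.pi * integrate(dydx * np.cos(theta), theta)
A2 = 2.0 / np.pi * integrate(dydx * np.cos(2.0 * theta), theta)

cl = 2.0 * np.pi * (A0 + A1 / 2.0)       # lift coefficient
cm_quarter = -np.pi / 4.0 * (A1 - A2)    # moment about the quarter-chord point

print(f"c_L = {cl:.4f}   (closed form 2*pi*(alpha + 2m) = {2*np.pi*(alpha + 2*m):.4f})")
print(f"c_M,c/4 = {cm_quarter:.4f}   (closed form -pi*m = {-np.pi*m:.4f})")
```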
See also
Circulation control wing
Hydrofoil
Kline–Fogleman airfoil
Küssner effect
Parafoil
Wing configuration
References
Citations
General Sources
Further reading
Ali Kamranpay, Alireza Mehrabadi. Numerical Analysis of NACA Airfoil 0012 at Different Attack Angles and Obtaining its Aerodynamic Coefficients. Journal of Mechatronics and Automation. 2019; 6(3): 8–16p.
External links
UIUC Airfoil Coordinates Database
Airfoil & Hydrofoil Reference Application
FoilSim An airfoil simulator from NASA
Airfoil Playground - Interactive WebApp
Desktopaero
Airflow across a wing (University of Cambridge)
Aerodynamics
Aircraft wing design
VALS
VALS (Values and Lifestyle Survey) is a proprietary research methodology used for psychographic market segmentation. Market segmentation is designed to guide companies in tailoring their products and services in order to appeal to the people most likely to purchase them.
History and description
VALS was developed in 1978 by social scientist and consumer futurist Arnold Mitchell and his colleagues at SRI International. It was immediately embraced by advertising agencies and is currently offered as a product of SRI's consulting services division. VALS draws heavily on the work of Harvard sociologist David Riesman and psychologist Abraham Maslow.
Mitchell used statistics to identify attitudinal and demographic questions that helped categorize adult American consumers into one of nine lifestyle types: survivors (4%), sustainers (7%), belongers (35%), emulators (9%), achievers (22%), I-am-me (5%), experiential (7%), societally conscious (9%), and integrated (2%). The questions were weighted using data developed from a sample of 1,635 Americans and their significant others, who responded to an SRI International survey in 1980.
The main dimensions of the VALS framework are resources (the vertical dimension) and primary motivation (the horizontal dimension). The vertical dimension segments people based on the degree to which they are innovative and have resources such as income, education, self-confidence, intelligence, leadership skills, and energy. The horizontal dimension represents primary motivations and includes three distinct types:
Consumers driven by knowledge and principles are motivated primarily by ideals. These consumers include groups called Thinkers and Believers.
Consumers driven by demonstrating success to their peers are motivated primarily by achievement. These consumers include groups referred to as Achievers and Strivers.
Consumers driven by a desire for social or physical activity, variety, and risk taking are motivated primarily by self-expression. These consumers include the groups known as Experiencers and Makers.
At the top of the rectangle are the Innovators, who have such high resources that they could have any of the three primary motivations. At the bottom of the rectangle are the Survivors, who live complacently and within their means without a strong primary motivation of the types listed above. The VALS Framework gives more details about each of the groups.
VALS
Researchers faced some problems with the VALS method, and in response SRI developed the VALS2 program in 1978 and significantly revised it in 1989. VALS2 places less emphasis on activities and interests and more on a psychological base in order to tap relatively enduring attitudes and values. The VALS2 program has two dimensions. The first dimension, Self-orientation, determines the type of goals and behaviours that individuals will pursue, and refers to patterns of attitudes and activities which help individuals reinforce, sustain, or modify their social self-image. This is a fundamental human need.
The second dimension, Resources, reflects the ability of individuals to pursue their dominant self-orientation and includes the full range of physical, psychological, demographic, and material means such as self-confidence, interpersonal skills, inventiveness, intelligence, eagerness to buy, money, position, education, etc. According to VALS2, a consumer purchases certain products and services because the individual is a specific type of person. The purchase is believed to reflect a consumer's lifestyle, which is a function of self-orientation and resources.
In 1991, the name VALS2 was switched back to VALS, because of brand equity.
Criticisms
Psychographic segmentation has been criticized by well-known public opinion analyst and social scientist Daniel Yankelovich, who says psychographics are "very weak" at predicting people's purchases, making it a "very poor" tool for corporate decision-makers.
The VALS Framework has also been criticized as too culturally specific for international use.
Segments
The following types correspond to VALS segments of US adults based on two concepts for understanding consumers: primary motivation and resources.
Innovators. These consumers are on the leading edge of change, have the highest incomes, and such high self-esteem and abundant resources that they can indulge in any or all self-orientations. They are located above the rectangle. Image is important to them as an expression of taste, independence, and character. Their consumer choices are directed toward the "finer things in life."
Thinkers. These consumers are the high-resource group of those who are motivated by ideals. They are mature, responsible, well-educated professionals. Their leisure activities center on their homes, but they are well informed about what goes on in the world and are open to new ideas and social change. They have high incomes but are practical consumers and rational decision makers.
Believers. These consumers are the low-resource group of those who are motivated by ideals. They are conservative and predictable consumers who favor local products and established brands. Their lives are centered on family, community, and the nation. They have modest incomes.
Achievers. These consumers are the high-resource group of those who are motivated by achievement. They are successful work-oriented people who get their satisfaction from their jobs and families. They are politically conservative and respect authority and the status quo. They favor established products and services that show off their success to their peers.
Strivers. These consumers are the low-resource group of those who are motivated by achievements. They have values very similar to achievers but have fewer economic, social, and psychological resources. Style is extremely important to them as they strive to emulate people they admire.
Experiencers. These consumers are the high-resource group of those who are motivated by self-expression. They are the youngest of all the segments, with a median age of 25. They have a lot of energy, which they pour into physical exercise and social activities. They are avid consumers, spending heavily on clothing, fast-foods, music, and other youthful favorites, with particular emphasis on new products and services.
Makers. These consumers are the low-resource group of those who are motivated by self-expression. They are practical people who value self-sufficiency. They are focused on the familiar - family, work, and physical recreation - and have little interest in the broader world. As consumers, they appreciate practical and functional products.
Survivors. These consumers have the lowest incomes. They have too few resources to be included in any consumer self-orientation and are thus located below the rectangle. They are the oldest of all the segments, with a median age of 61. Within their limited means, they tend to be brand-loyal consumers.
See also
Advertising
Data mining
Demographics
Fear, uncertainty, and doubt
Marketing
Psychographics
References
Further reading
External links
Strategic Business Insights Official website (was formerly SRI Consulting Business Intelligence)
Market research
Market segmentation
Electromagnetic tensor
In electromagnetism, the electromagnetic tensor or electromagnetic field tensor (sometimes called the field strength tensor, Faraday tensor or Maxwell bivector) is a mathematical object that describes the electromagnetic field in spacetime. The field tensor was first used after the four-dimensional tensor formulation of special relativity was introduced by Hermann Minkowski. The tensor allows related physical laws to be written concisely, and allows for the quantization of the electromagnetic field by the Lagrangian formulation described below.
Definition
The electromagnetic tensor, conventionally labelled F, is defined as the exterior derivative of the electromagnetic four-potential, A, a differential 1-form:
$F \;\stackrel{\text{def}}{=}\; \mathrm{d}A.$
Therefore, F is a differential 2-form (an antisymmetric rank-2 tensor field) on Minkowski space. In component form,
$F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu},$
where $\partial$ is the four-gradient and $A$ is the four-potential.
SI units for Maxwell's equations and the particle physicist's sign convention for the signature of Minkowski space, (+,−,−,−), will be used throughout this article.
Relationship with the classical fields
The Faraday differential 2-form is given by
$F = \tfrac{1}{2} F_{\mu\nu}\,\mathrm{d}x^{\mu} \wedge \mathrm{d}x^{\nu} = E_x\,\mathrm{d}t \wedge \mathrm{d}x + E_y\,\mathrm{d}t \wedge \mathrm{d}y + E_z\,\mathrm{d}t \wedge \mathrm{d}z - \left( B_x\,\mathrm{d}y \wedge \mathrm{d}z + B_y\,\mathrm{d}z \wedge \mathrm{d}x + B_z\,\mathrm{d}x \wedge \mathrm{d}y \right),$
where $\mathrm{d}x^{0} = c\,\mathrm{d}t$ is the time element times the speed of light $c$.
This is the exterior derivative of its 1-form antiderivative
$A = A_{\mu}\,\mathrm{d}x^{\mu},$
where $A_{0} = \phi/c$ ($\phi$ is a scalar potential for the irrotational/conservative vector field $\vec{E}$) and the spatial components are built from $\vec{A}$ ($\vec{A}$ is a vector potential for the solenoidal vector field $\vec{B}$).
Note that
$\mathrm{d}F = 0, \qquad {\star}\,\mathrm{d}{\star}F = \mu_0\,J,$
where $\mathrm{d}$ is the exterior derivative, $\star$ is the Hodge star, and $J$ (built from the electric current density $\vec{J}$ and the electric charge density $\rho$) is the 4-current density 1-form; this pair is the differential forms version of Maxwell's equations.
The electric and magnetic fields can be obtained from the components of the electromagnetic tensor. The relationship is simplest in Cartesian coordinates:
$E_i = c\,F_{0i},$
where c is the speed of light, and
$B_i = -\frac{1}{2}\,\epsilon_{ijk}\,F^{jk},$
where $\epsilon_{ijk}$ is the Levi-Civita tensor. This gives the fields in a particular reference frame; if the reference frame is changed, the components of the electromagnetic tensor will transform covariantly, and the fields in the new frame will be given by the new components.
In contravariant matrix form with metric signature (+,−,−,−),
$F^{\mu\nu} = \begin{bmatrix} 0 & -E_x/c & -E_y/c & -E_z/c \\ E_x/c & 0 & -B_z & B_y \\ E_y/c & B_z & 0 & -B_x \\ E_z/c & -B_y & B_x & 0 \end{bmatrix}.$
The covariant form is given by index lowering,
$F_{\mu\nu} = \eta_{\mu\alpha}\,F^{\alpha\beta}\,\eta_{\beta\nu} = \begin{bmatrix} 0 & E_x/c & E_y/c & E_z/c \\ -E_x/c & 0 & -B_z & B_y \\ -E_y/c & B_z & 0 & -B_x \\ -E_z/c & -B_y & B_x & 0 \end{bmatrix}.$
The Faraday tensor's Hodge dual is
$G^{\alpha\beta} = \frac{1}{2}\epsilon^{\alpha\beta\gamma\delta}F_{\gamma\delta} = \begin{bmatrix} 0 & -B_x & -B_y & -B_z \\ B_x & 0 & E_z/c & -E_y/c \\ B_y & -E_z/c & 0 & E_x/c \\ B_z & E_y/c & -E_x/c & 0 \end{bmatrix}.$
From now on in this article, when the electric or magnetic fields are mentioned, a Cartesian coordinate system is assumed, and the electric and magnetic fields are with respect to the coordinate system's reference frame, as in the equations above.
Properties
The matrix form of the field tensor yields the following properties:
Antisymmetry: $F^{\mu\nu} = -F^{\nu\mu}.$
Six independent components: In Cartesian coordinates, these are simply the three spatial components of the electric field (Ex, Ey, Ez) and magnetic field (Bx, By, Bz).
Inner product: If one forms an inner product of the field strength tensor with itself, a Lorentz invariant is formed: $F_{\mu\nu}F^{\mu\nu} = 2\left(B^2 - \frac{E^2}{c^2}\right),$ meaning this number does not change from one frame of reference to another.
Pseudoscalar invariant: The product of the tensor with its Hodge dual gives a Lorentz invariant: $G_{\gamma\delta}F^{\gamma\delta} = \frac{1}{2}\epsilon_{\alpha\beta\gamma\delta}F^{\alpha\beta}F^{\gamma\delta} = -\frac{4}{c}\left(\vec{B}\cdot\vec{E}\right),$ where $\epsilon_{\alpha\beta\gamma\delta}$ is the rank-4 Levi-Civita symbol. The sign for the above depends on the convention used for the Levi-Civita symbol. The convention used here is $\epsilon^{0123} = +1$ (so that $\epsilon_{0123} = -1$).
Determinant: $\det\left(F\right) = \frac{1}{c^2}\left(\vec{B}\cdot\vec{E}\right)^2,$ which is proportional to the square of the above invariant.
Trace: $F^{\mu}{}_{\mu} = 0$; the trace of the field tensor vanishes.
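The properties above can be checked numerically. The following Python sketch (an added illustration; the field values are made up, with SI units and signature (+,−,−,−) assumed) builds the contravariant matrix from E and B and verifies antisymmetry, the vanishing trace, the inner-product invariant, and the determinant:
```python
import numpy as np

c = 299_792_458.0                        # speed of light, m/s
E = np.array([1.0e3, -2.0e3, 0.5e3])     # electric field in V/m (made-up values)
B = np.array([1.0e-4, 3.0e-4, -2.0e-4])  # magnetic field in T (made-up values)

Ex, Ey, Ez = E / c
Bx, By, Bz = B
F_upper = np.array([                     # contravariant F^{mu nu}
    [0.0, -Ex, -Ey, -Ez],
    [ Ex, 0.0, -Bz,  By],
    [ Ey,  Bz, 0.0, -Bx],
    [ Ez, -By,  Bx, 0.0],
])

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)
F_lower = eta @ F_upper @ eta            # index lowering gives F_{mu nu}

assert np.allclose(F_upper, -F_upper.T)  # antisymmetry
assert np.isclose(np.trace(F_upper), 0.0)  # zero trace

inner = np.einsum('ab,ab->', F_lower, F_upper)       # F_{mu nu} F^{mu nu}
print(inner, 2.0 * (B @ B - (E @ E) / c**2))         # the two numbers agree

print(np.linalg.det(F_upper), (E @ B)**2 / c**2)     # det F = (E.B)^2 / c^2
```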
Significance
This tensor simplifies and reduces Maxwell's equations as four vector calculus equations into two tensor field equations. In electrostatics and electrodynamics, Gauss's law and Ampère's circuital law are respectively:
$\nabla \cdot \vec{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \times \vec{B} - \frac{1}{c^2}\frac{\partial \vec{E}}{\partial t} = \mu_0 \vec{J},$
and reduce to the inhomogeneous Maxwell equation:
$\partial_{\alpha} F^{\alpha\beta} = \mu_0 J^{\beta},$
where $J^{\alpha} = (c\rho, \vec{J})$ is the four-current.
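As a worked check of this compact form (added for illustration, using the matrix components and conventions given above), the $\beta = 0$ component reproduces Gauss's law:
\[
\partial_\alpha F^{\alpha 0} = \partial_1 F^{10} + \partial_2 F^{20} + \partial_3 F^{30} = \frac{1}{c}\,\nabla \cdot \vec{E} = \mu_0 J^{0} = \mu_0 c \rho
\quad\Longrightarrow\quad
\nabla \cdot \vec{E} = \mu_0 c^{2} \rho = \frac{\rho}{\varepsilon_0}.
\]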
In magnetostatics and magnetodynamics, Gauss's law for magnetism and the Maxwell–Faraday equation are respectively:
$\nabla \cdot \vec{B} = 0, \qquad \frac{\partial \vec{B}}{\partial t} + \nabla \times \vec{E} = 0,$
which reduce to the Bianchi identity:
$\partial_{\gamma} F_{\alpha\beta} + \partial_{\alpha} F_{\beta\gamma} + \partial_{\beta} F_{\gamma\alpha} = 0,$
or using the index notation with square brackets for the antisymmetric part of the tensor:
$\partial_{[\alpha} F_{\beta\gamma]} = 0.$
Using the expression relating the Faraday tensor to the four-potential, one can prove that the above antisymmetric quantity turns to zero identically. The implication of that identity is far-reaching: it means that the EM field theory leaves no room for magnetic monopoles and currents of such.
Relativity
The field tensor derives its name from the fact that the electromagnetic field is found to obey the tensor transformation law, this general property of physical laws being recognised after the advent of special relativity. This theory stipulated that all the laws of physics should take the same form in all coordinate systems – this led to the introduction of tensors. The tensor formalism also leads to a mathematically simpler presentation of physical laws.
The inhomogeneous Maxwell equation leads to the continuity equation:
$\partial_{\alpha} J^{\alpha} = J^{\alpha}{}_{,\alpha} = 0,$
implying conservation of charge.
Maxwell's laws above can be generalised to curved spacetime by simply replacing partial derivatives with covariant derivatives:
$F_{[\alpha\beta;\gamma]} = 0$
and
$F^{\alpha\beta}{}_{;\beta} = \mu_0 J^{\alpha},$
where the semicolon notation represents a covariant derivative, as opposed to a partial derivative. These equations are sometimes referred to as the curved space Maxwell equations. Again, the second equation implies charge conservation (in curved spacetime):
$J^{\alpha}{}_{;\alpha} = 0.$
Lagrangian formulation of classical electromagnetism
Classical electromagnetism and Maxwell's equations can be derived from the action:
$\mathcal{S} = \int \left( -\frac{1}{4\mu_0} F_{\mu\nu} F^{\mu\nu} - J^{\mu} A_{\mu} \right) \mathrm{d}^4 x,$
where $\mathrm{d}^4 x$ is over space and time.
This means the Lagrangian density is
$\mathcal{L} = -\frac{1}{4\mu_0} F_{\mu\nu} F^{\mu\nu} - J^{\mu} A_{\mu} = -\frac{1}{4\mu_0} \left( \partial_{\mu} A_{\nu} \partial^{\mu} A^{\nu} - \partial_{\nu} A_{\mu} \partial^{\mu} A^{\nu} - \partial_{\mu} A_{\nu} \partial^{\nu} A^{\mu} + \partial_{\nu} A_{\mu} \partial^{\nu} A^{\mu} \right) - J^{\mu} A_{\mu}.$
The two middle terms in the parentheses are the same, as are the two outer terms, so the Lagrangian density is
$\mathcal{L} = -\frac{1}{2\mu_0} \left( \partial_{\mu} A_{\nu} \partial^{\mu} A^{\nu} - \partial_{\nu} A_{\mu} \partial^{\mu} A^{\nu} \right) - J^{\mu} A_{\mu}.$
Substituting this into the Euler–Lagrange equation of motion for a field:
$\partial_{\mu} \left( \frac{\partial \mathcal{L}}{\partial\left( \partial_{\mu} A_{\nu} \right)} \right) - \frac{\partial \mathcal{L}}{\partial A_{\nu}} = 0.$
So the Euler–Lagrange equation becomes:
$- \partial_{\mu} \frac{1}{\mu_0} \left( \partial^{\mu} A^{\nu} - \partial^{\nu} A^{\mu} \right) + J^{\nu} = 0.$
The quantity in parentheses above is just the field tensor, so this finally simplifies to
$\partial_{\mu} F^{\mu\nu} = \mu_0 J^{\nu}.$
That equation is another way of writing the two inhomogeneous Maxwell's equations (namely, Gauss's law and Ampère's circuital law) using the substitutions:
$\frac{1}{c} E^{i} = -F^{0i}, \qquad \epsilon^{ijk} B_{k} = -F^{ij},$
where i, j, k take the values 1, 2, and 3.
Hamiltonian form
The Hamiltonian density can be obtained with the usual relation,
$\mathcal{H}\left(\phi^{i}, \pi_{i}\right) = \pi_{i}\,\dot{\phi}^{i} - \mathcal{L}, \qquad \pi_{i} = \frac{\partial \mathcal{L}}{\partial \dot{\phi}^{i}}.$
Quantum electrodynamics and field theory
The Lagrangian of quantum electrodynamics extends beyond the classical Lagrangian established in relativity to incorporate the creation and annihilation of photons (and electrons):
$\mathcal{L} = \bar{\psi}\left( i\hbar c\,\gamma^{\alpha} D_{\alpha} - mc^{2} \right)\psi - \frac{1}{4\mu_0} F_{\alpha\beta} F^{\alpha\beta},$
where the first part on the right-hand side, containing the Dirac spinor $\psi$, represents the Dirac field. In quantum field theory it is used as the template for the gauge field strength tensor. By being employed in addition to the local interaction Lagrangian it reprises its usual role in QED.
See also
Classification of electromagnetic fields
Covariant formulation of classical electromagnetism
Electromagnetic stress–energy tensor
Gluon field strength tensor
Ricci calculus
Riemann–Silberstein vector
Notes
References
Electromagnetism
Minkowski spacetime
Theory of relativity
Tensor physical quantities
Tensors in general relativity
Zeno's paradoxes
Zeno's paradoxes are a series of philosophical arguments presented by the ancient Greek philosopher Zeno of Elea (c. 490–430 BC), primarily known through the works of Plato, Aristotle, and later commentators like Simplicius of Cilicia. Zeno devised these paradoxes to support his teacher Parmenides's philosophy of monism, which posits that despite our sensory experiences, reality is singular and unchanging. The paradoxes famously challenge the notions of plurality (the existence of many things), motion, space, and time by suggesting they lead to logical contradictions.
Zeno's work, primarily known from second-hand accounts since his original texts are lost, comprises forty "paradoxes of plurality," which argue against the coherence of believing in multiple existences, and several arguments against motion and change. Of these, only a few are definitively known today, including the renowned "Achilles Paradox", which illustrates the problematic concept of infinite divisibility in space and time. In this paradox, Zeno argues that a swift runner like Achilles cannot overtake a slower moving tortoise with a head start, because the distance between them can be infinitely subdivided, implying Achilles would require an infinite number of steps to catch the tortoise.
These paradoxes have stirred extensive philosophical and mathematical discussion throughout history, particularly regarding the nature of infinity and the continuity of space and time. Initially, Aristotle's interpretation, suggesting a potential rather than actual infinity, was widely accepted. However, modern solutions leveraging the mathematical framework of calculus have provided a different perspective, highlighting Zeno's significant early insight into the complexities of infinity and continuous motion. Zeno's paradoxes remain a pivotal reference point in the philosophical and mathematical exploration of reality, motion, and the infinite, influencing both ancient thought and modern scientific understanding.
History
The origins of the paradoxes are somewhat unclear, but they are generally thought to have been developed to support Parmenides' doctrine of monism, that all of reality is one, and that all change is impossible, that is, that nothing ever changes in location or in any other respect. Diogenes Laërtius, citing Favorinus, says that Zeno's teacher Parmenides was the first to introduce the paradox of Achilles and the tortoise. But in a later passage, Laërtius attributes the origin of the paradox to Zeno, explaining that Favorinus disagrees. Modern academics attribute the paradox to Zeno.
Many of these paradoxes argue that contrary to the evidence of one's senses, motion is nothing but an illusion. In Plato's Parmenides (128a–d), Zeno is characterized as taking on the project of creating these paradoxes because other philosophers claimed paradoxes arise when considering Parmenides' view. Zeno's arguments may then be early examples of a method of proof called reductio ad absurdum, also known as proof by contradiction. Thus Plato has Zeno say the purpose of the paradoxes "is to show that their hypothesis that existences are many, if properly followed up, leads to still more absurd results than the hypothesis that they are one." Plato has Socrates claim that Zeno and Parmenides were essentially arguing exactly the same point. They are also credited as a source of the dialectic method used by Socrates.
Paradoxes
Some of Zeno's nine surviving paradoxes (preserved in Aristotle's Physics and Simplicius's commentary thereon) are essentially equivalent to one another. Aristotle offered a response to some of them. Popular literature often misrepresents Zeno's arguments. For example, Zeno is often said to have argued that the sum of an infinite number of terms must itself be infinite–with the result that not only the time, but also the distance to be travelled, become infinite. However, none of the original ancient sources has Zeno discussing the sum of any infinite series. Simplicius has Zeno saying "it is impossible to traverse an infinite number of things in a finite time". This presents Zeno's problem not with finding the sum, but rather with finishing a task with an infinite number of steps: how can one ever get from A to B, if an infinite number of (non-instantaneous) events can be identified that need to precede the arrival at B, and one cannot reach even the beginning of a "last event"?
Paradoxes of motion
Three of the strongest and most famous—that of Achilles and the tortoise, the Dichotomy argument, and that of an arrow in flight—are presented in detail below.
Dichotomy paradox
Suppose Atalanta wishes to walk to the end of a path. Before she can get there, she must get halfway there. Before she can get halfway there, she must get a quarter of the way there. Before traveling a quarter, she must travel one-eighth; before an eighth, one-sixteenth; and so on.
The resulting sequence can be represented as:
$\left\{ \cdots, \frac{1}{16}, \frac{1}{8}, \frac{1}{4}, \frac{1}{2}, 1 \right\}$
This description requires one to complete an infinite number of tasks, which Zeno maintains is an impossibility.
This sequence also presents a second problem in that it contains no first distance to run, for any possible (finite) first distance could be divided in half, and hence would not be first after all. Hence, the trip cannot even begin. The paradoxical conclusion then would be that travel over any finite distance can be neither completed nor begun, and so all motion must be an illusion.
This argument is called the "Dichotomy" because it involves repeatedly splitting a distance into two parts. An example with the original sense can be found in an asymptote. It is also known as the Race Course paradox.
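A short numerical sketch in Python (an added illustration, not part of Zeno's argument) makes the modern point concrete: the partial sums of the infinitely many sub-journeys 1/2 + 1/4 + 1/8 + ... approach 1, so the total distance is finite even though the number of steps is not.
```python
partial_sum = 0.0
for n in range(1, 31):
    partial_sum += 0.5 ** n            # the n-th sub-journey covers (1/2)^n of the path
    if n in (1, 2, 5, 10, 20, 30):
        print(f"after {n:2d} sub-journeys: {partial_sum:.10f}")
# after 30 sub-journeys: 0.9999999991 ... the series converges to exactly 1
```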
Achilles and the tortoise
In the paradox of Achilles and the tortoise, Achilles is in a footrace with a tortoise. Achilles allows the tortoise a head start of 100 meters, for example. Suppose that each racer starts running at some constant speed, one faster than the other. After some finite time, Achilles will have run 100 meters, bringing him to the tortoise's starting point. During this time, the tortoise has run a much shorter distance, say 2 meters. It will then take Achilles some further time to run that distance, by which time the tortoise will have advanced farther; and then more time still to reach this third point, while the tortoise moves ahead. Thus, whenever Achilles arrives somewhere the tortoise has been, he still has some distance to go before he can even reach the tortoise. As Aristotle noted, this argument is similar to the Dichotomy. It lacks, however, the apparent conclusion of motionlessness.
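A small Python sketch (an added illustration; the 100-metre head start is from the text, the speeds are made-up values) shows that summing Zeno's infinitely many catch-up stages gives the same meeting time as elementary kinematics:
```python
v_achilles, v_tortoise = 10.0, 0.2   # assumed speeds in m/s
head_start = 100.0                   # metres, as in the text

gap, t_stages = head_start, 0.0
for _ in range(60):                  # each stage: Achilles runs to where the tortoise just was
    dt = gap / v_achilles            # time to close the current gap
    t_stages += dt
    gap = v_tortoise * dt            # the tortoise's advance becomes the new (smaller) gap

t_direct = head_start / (v_achilles - v_tortoise)
print(t_stages, t_direct)            # both are ~10.2040816 s; Achilles passes the tortoise there
```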
Arrow paradox
In the arrow paradox, Zeno states that for motion to occur, an object must change the position which it occupies. He gives an example of an arrow in flight. He states that at any one (durationless) instant of time, the arrow is neither moving to where it is, nor to where it is not.
It cannot move to where it is not, because no time elapses for it to move there; it cannot move to where it is, because it is already there. In other words, at every instant of time there is no motion occurring. If everything is motionless at every instant, and time is entirely composed of instants, then motion is impossible.
Whereas the first two paradoxes divide space, this paradox starts by dividing time—and not into segments, but into points.
Other paradoxes
Aristotle gives three other paradoxes.
Paradox of place
From Aristotle:
Paradox of the grain of millet
Description of the paradox from the Routledge Dictionary of Philosophy:
Aristotle's response:
Description from Nick Huggett:
The moving rows (or stadium)
From Aristotle:
An expanded account of Zeno's arguments, as presented by Aristotle, is given in Simplicius's commentary On Aristotle's Physics.
According to Angie Hobbs of the University of Sheffield, this paradox is intended to be considered together with the paradox of Achilles and the tortoise: it problematizes the concept of discrete space and time, whereas the other problematizes the concept of infinitely divisible space and time.
Proposed solutions
In classical antiquity
According to Simplicius, Diogenes the Cynic said nothing upon hearing Zeno's arguments, but stood up and walked, in order to demonstrate the falsity of Zeno's conclusions. To fully solve any of the paradoxes, however, one needs to show what is wrong with the argument, not just the conclusions. Throughout history several solutions have been proposed, among the earliest recorded being those of Aristotle and Archimedes.
Aristotle (384 BC–322 BC) remarked that as the distance decreases, the time needed to cover those distances also decreases, so that the time needed also becomes increasingly small.
Aristotle also distinguished "things infinite in respect of divisibility" (such as a unit of space that can be mentally divided into ever smaller units while remaining spatially the same) from things (or distances) that are infinite in extension ("with respect to their extremities").
Aristotle's objection to the arrow paradox was that "Time is not composed of indivisible nows any more than any other magnitude is composed of indivisibles." Thomas Aquinas, commenting on Aristotle's objection, wrote "Instants are not parts of time, for time is not made up of instants any more than a magnitude is made of points, as we have already proved. Hence it does not follow that a thing is not in motion in a given time, just because it is not in motion in any instant of that time."
In modern mathematics
Some mathematicians and historians, such as Carl Boyer, hold that Zeno's paradoxes are simply mathematical problems, for which modern calculus provides a mathematical solution. Infinite processes remained theoretically troublesome in mathematics until the late 19th century. With the epsilon-delta definition of limit, Weierstrass and Cauchy developed a rigorous formulation of the logic and calculus involved. These works resolved the mathematics involving infinite processes.
Some philosophers, however, say that Zeno's paradoxes and their variations (see Thomson's lamp) remain relevant metaphysical problems. While mathematics can calculate where and when the moving Achilles will overtake the Tortoise of Zeno's paradox, philosophers such as Kevin Brown and Francis Moorcroft hold that mathematics does not address the central point in Zeno's argument, and that solving the mathematical issues does not solve every issue the paradoxes raise. Brown concludes "Given the history of 'final resolutions', from Aristotle onwards, it's probably foolhardy to think we've reached the end. It may be that Zeno's arguments on motion, because of their simplicity and universality, will always serve as a kind of 'Rorschach image' onto which people can project their most fundamental phenomenological concerns (if they have any)."
Henri Bergson
An alternative conclusion, proposed by Henri Bergson in his 1896 book Matter and Memory, is that, while the path is divisible, the motion is not.
Peter Lynds
In 2003, Peter Lynds argued that all of Zeno's motion paradoxes are resolved by the conclusion that instants in time and instantaneous magnitudes do not physically exist. Lynds argues that an object in relative motion cannot have an instantaneous or determined relative position (for if it did, it could not be in motion), and so cannot have its motion fractionally dissected as if it does, as is assumed by the paradoxes. Nick Huggett argues that Zeno is assuming the conclusion when he says that objects that occupy the same space as they do at rest must be at rest.
Bertrand Russell
Based on the work of Georg Cantor, Bertrand Russell offered a solution to the paradoxes, what is known as the "at-at theory of motion". It agrees that there can be no motion "during" a durationless instant, and contends that all that is required for motion is that the arrow be at one point at one time, at another point another time, and at appropriate points between those two points for intervening times. In this view motion is just change in position over time.
Hermann Weyl
Another proposed solution is to question one of the assumptions Zeno used in his paradoxes (particularly the Dichotomy), which is that between any two different points in space (or time), there is always another point. Without this assumption there are only a finite number of distances between two points, hence there is no infinite sequence of movements, and the paradox is resolved. According to Hermann Weyl, the assumption that space is made of finite and discrete units is subject to a further problem, given by the "tile argument" or "distance function problem". According to this, the length of the hypotenuse of a right angled triangle in discretized space is always equal to the length of one of the two sides, in contradiction to geometry. Jean Paul Van Bendegem has argued that the Tile Argument can be resolved, and that discretization can therefore remove the paradox.
Applications
Quantum Zeno effect
In 1977, physicists E. C. George Sudarshan and B. Misra discovered that the dynamical evolution (motion) of a quantum system can be hindered (or even inhibited) through observation of the system. This effect is usually called the "Quantum Zeno effect" as it is strongly reminiscent of Zeno's arrow paradox. This effect was first theorized in 1958.
Zeno behaviour
In the field of verification and design of timed and hybrid systems, the system behaviour is called Zeno if it includes an infinite number of discrete steps in a finite amount of time. Some formal verification techniques exclude these behaviours from analysis, if they are not equivalent to non-Zeno behaviour. In systems design these behaviours will also often be excluded from system models, since they cannot be implemented with a digital controller.
Similar paradoxes
School of Names
Roughly contemporaneously during the Warring States period (475–221 BCE), ancient Chinese philosophers from the School of Names, a school of thought similarly concerned with logic and dialectics, developed paradoxes similar to those of Zeno. The works of the School of Names have largely been lost, with the exception of portions of the Gongsun Longzi. The second of the Ten Theses of Hui Shi suggests knowledge of infinitesimals: "That which has no thickness cannot be piled up; yet it is a thousand li in dimension." Among the many puzzles of his recorded in the Zhuangzi is one very similar to Zeno's Dichotomy: "If from a stick a foot long you every day take the half of it, in a myriad ages it will not be exhausted." The Mohist canon appears to propose a solution to this paradox by arguing that in moving across a measured length, the distance is not covered in successive fractions of the length, but in one stage. Due to the lack of surviving works from the School of Names, most of the other paradoxes listed are difficult to interpret.
Lewis Carroll's "What the Tortoise Said to Achilles"
"What the Tortoise Said to Achilles", written in 1895 by Lewis Carroll, describes a paradoxical infinite regress argument in the realm of pure logic. It uses Achilles and the Tortoise as characters in a clear reference to Zeno's paradox of Achilles.
See also
Incommensurable magnitudes
Infinite regress
Philosophy of space and time
Renormalization
Ross–Littlewood paradox
Supertask
Zeno machine
List of paradoxes
Notes
References
Kirk, G. S., J. E. Raven, M. Schofield (1984) The Presocratic Philosophers: A Critical History with a Selection of Texts, 2nd ed. Cambridge University Press. .
Plato (1926) Plato: Cratylus. Parmenides. Greater Hippias. Lesser Hippias, H. N. Fowler (Translator), Loeb Classical Library. .
Sainsbury, R.M. (2003) Paradoxes, 2nd ed. Cambridge University Press. .
External links
Dowden, Bradley. "Zeno’s Paradoxes." Entry in the Internet Encyclopedia of Philosophy.
Introduction to Mathematical Philosophy, Ludwig-Maximilians-Universität München
Silagadze, Z. K. "Zeno meets modern science,"
Zeno's Paradox: Achilles and the Tortoise by Jon McLoone, Wolfram Demonstrations Project.
Kevin Brown on Zeno and the Paradox of Motion
Eponymous paradoxes
Philosophical paradoxes
Supertasks
Mathematical paradoxes
Paradoxes of infinity
Physical paradoxes
Hess's law
Hess' law of constant heat summation, also known simply as Hess' law, is a relationship in physical chemistry named after Germain Hess, a Swiss-born Russian chemist and physician who published it in 1840. The law states that the total enthalpy change during the complete course of a chemical reaction is independent of the sequence of steps taken.
Hess' law is now understood as an expression of the fact that the enthalpy of a chemical process is independent of the path taken from the initial to the final state (i.e. enthalpy is a state function). According to the first law of thermodynamics, the enthalpy change in a system due to a reaction at constant pressure is equal to the heat absorbed (or the negative of the heat released), which can be determined by calorimetry for many reactions. The values are usually stated for reactions with the same initial and final temperatures and pressures (while conditions are allowed to vary during the course of the reactions). Hess' law can be used to determine the overall energy required for a chemical reaction that can be divided into synthetic steps that are individually easier to characterize. This affords the compilation of standard enthalpies of formation, which may be used to predict the enthalpy change in complex synthesis.
Theory
Hess’ law states that the change of enthalpy in a chemical reaction is the same regardless of whether the reaction takes place in one step or several steps, provided the initial and final states of the reactants and products are the same. Enthalpy is an extensive property, meaning that its value is proportional to the system size. Because of this, the enthalpy change is proportional to the number of moles participating in a given reaction.
In other words, if a chemical change takes place by several different routes, the overall enthalpy change is the same, regardless of the route by which the chemical change occurs (provided the initial and final condition are the same). If this were not true, then one could violate the first law of thermodynamics.
Hess' law allows the enthalpy change (ΔH) for a reaction to be calculated even when it cannot be measured directly. This is accomplished by performing basic algebraic operations based on the chemical equations of reactions using previously determined values for the enthalpies of formation.
Combination of chemical equations leads to a net or overall equation. If the enthalpy changes are known for all the equations in the sequence, their sum will be the enthalpy change for the net equation. If the net enthalpy change is negative, the reaction is exothermic and is more likely to be spontaneous; positive ΔH values correspond to endothermic reactions. (Entropy also plays an important role in determining spontaneity, as some reactions with a positive enthalpy change are nevertheless spontaneous due to an entropy increase in the reaction system.)
Use of enthalpies of formation
Hess' law states that enthalpy changes are additive. Thus the value of the standard enthalpy of reaction can be calculated from standard enthalpies of formation of products and reactants as follows:
$\Delta H^{\circ}_{\text{reaction}} = \sum \nu_{p}\,\Delta_{\mathrm{f}}H^{\circ}_{\text{products}} - \sum \nu_{r}\,\Delta_{\mathrm{f}}H^{\circ}_{\text{reactants}}.$
Here, the first sum is over all products and the second over all reactants, $\nu_{p}$ and $\nu_{r}$ are the stoichiometric coefficients of products and reactants respectively, $\Delta_{\mathrm{f}}H^{\circ}$ are the standard enthalpies of formation of products and reactants respectively, and the ° superscript indicates standard state values. This may be considered as the sum of two (real or fictitious) reactions:
Reactants → Elements (in their standard states)
and Elements → Products
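A minimal Python sketch (an added illustration; the function and dictionary layout are my own, not part of the law) of the bookkeeping this enables, using the standard formation enthalpy of CO2(g):
```python
def reaction_enthalpy(products, reactants):
    """Each mapping is species -> (stoichiometric coefficient, standard formation enthalpy in kJ/mol)."""
    dh_products = sum(nu * dhf for nu, dhf in products.values())
    dh_reactants = sum(nu * dhf for nu, dhf in reactants.values())
    return dh_products - dh_reactants

# Combustion of graphite; formation enthalpies of elements in their standard states are zero.
dH = reaction_enthalpy(
    products={"CO2(g)": (1, -393.5)},
    reactants={"C(graphite)": (1, 0.0), "O2(g)": (1, 0.0)},
)
print(dH)   # -393.5 kJ/mol, matching reaction (a) in the example below
```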
Examples
Given:
(a) C(graphite) + O2(g) → CO2(g) (ΔH = −393.5 kJ/mol) (direct step)
(b) C(graphite) + 1/2 O2(g) → CO(g) (ΔH = −110.5 kJ/mol)
(c) CO(g) + 1/2 O2(g) → CO2(g) (ΔH = −283.0 kJ/mol)
Reaction (a) is the sum of reactions (b) and (c), for which the total ΔH = −393.5 kJ/mol, which is equal to ΔH in (a).
Given:
B2O3(s) + 3H2O(g) → 3O2(g) + B2H6(g) (ΔH = 2035 kJ/mol)
H2O(l) → H2O(g) (ΔH = 44 kJ/mol)
H2(g) + 1/2 O2(g) → H2O(l) (ΔH = −286 kJ/mol)
2B(s) + 3H2(g) → B2H6(g) (ΔH = 36 kJ/mol)
Find the ΔfH of:
2B(s) + 3/2 O2(g) → B2O3(s)
After multiplying the equations (and their enthalpy changes) by appropriate factors and reversing the direction when necessary, the result is:
B2H6(g) + 3O2(g) → B2O3(s) + 3H2O(g) (ΔH = 2035 × (−1) = −2035 kJ/mol)
3H2O(g) → 3H2O(l) (ΔH = 44 × (−3) = −132 kJ/mol)
3H2O(l) → 3H2(g) + (3/2) O2(g) (ΔH = −286 × (−3) = 858 kJ/mol)
2B(s) + 3H2(g) → B2H6(g) (ΔH = 36 kJ/mol)
Adding these equations and canceling out the common terms on both sides, we obtain
2B(s) + 3/2 O2(g) → B2O3(s) (ΔH = −1273 kJ/mol)
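The arithmetic of the example can be checked with a few lines of Python (an added illustration; the values are those quoted above):
```python
steps_kj_per_mol = [
    2035 * (-1),   # reversed combustion of B2H6
    44 * (-3),     # 3 x reversed vaporisation of water (condensation)
    -286 * (-3),   # 3 x reversed formation of liquid water (decomposition)
    36,            # formation of B2H6 from the elements
]
print(sum(steps_kj_per_mol))   # -1273 kJ/mol for 2B(s) + 3/2 O2(g) -> B2O3(s)
```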
Extension to free energy and entropy
The concepts of Hess' law can be expanded to include changes in entropy and in Gibbs free energy, since these are also state functions. The Bordwell thermodynamic cycle is an example of such an extension that takes advantage of easily measured equilibria and redox potentials to determine experimentally inaccessible Gibbs free energy values. Combining ΔG° values from Bordwell thermodynamic cycles and ΔH° values found with Hess' law can be helpful in determining entropy values that have not been measured directly and therefore need to be calculated through alternative paths.
For the free energy:
$\Delta G^{\circ}_{\text{reaction}} = \sum \nu_{p}\,\Delta_{\mathrm{f}}G^{\circ}_{\text{products}} - \sum \nu_{r}\,\Delta_{\mathrm{f}}G^{\circ}_{\text{reactants}}.$
For entropy, the situation is a little different. Because entropy can be measured as an absolute value, not relative to those of the elements in their reference states (as with ΔH° and ΔG°), there is no need to use the entropy of formation; one simply uses the absolute entropies for products and reactants:
$\Delta S^{\circ}_{\text{reaction}} = \sum \nu_{p}\,S^{\circ}_{\text{products}} - \sum \nu_{r}\,S^{\circ}_{\text{reactants}}.$
Applications
Hess' law is useful in the determination of enthalpies of the following:
Heats of formation of unstable intermediates like CO(g) and NO(g).
Heat changes in phase transitions and allotropic transitions.
Lattice energies of ionic substances by constructing Born–Haber cycles if the electron affinity to form the anion is known, or
Electron affinities using a Born–Haber cycle with a theoretical lattice energy.
See also
Thermochemistry
Thermodynamics
References
Further reading
External links
Hess' paper (1840) on which his law is based (at ChemTeam site)
a Hess’ Law experiment
Chemical thermodynamics
Physical chemistry
Thermochemistry