Etendue
Etendue or étendue is a property of light in an optical system, which characterizes how "spread out" the light is in area and angle. It corresponds to the beam parameter product (BPP) in Gaussian beam optics. Other names for etendue include acceptance, throughput, light grasp, light-gathering power, optical extent, and the AΩ product. Throughput and AΩ product are especially used in radiometry and radiative transfer where it is related to the view factor (or shape factor). It is a central concept in nonimaging optics.
From the source point of view, etendue is the product of the area of the source and the solid angle that the system's entrance pupil subtends as seen from the source. Equivalently, from the system point of view, the etendue equals the area of the entrance pupil times the solid angle the source subtends as seen from the pupil. These definitions must be applied for infinitesimally small "elements" of area and solid angle, which must then be summed over both the source and the diaphragm as shown below. Etendue may be considered to be a volume in phase space.
Etendue never decreases in any optical system where optical power is conserved. A perfect optical system produces an image with the same etendue as the source. The etendue is related to the Lagrange invariant and the optical invariant, which share the property of being constant in an ideal optical system. The radiance of an optical system is equal to the derivative of the radiant flux with respect to the etendue.
Definition
An infinitesimal surface element, $dS$, with normal $\hat{\mathbf{n}}$ is immersed in a medium of refractive index $n$. The surface is crossed by (or emits) light confined to a solid angle, $d\Omega$, at an angle $\theta$ with the normal $\hat{\mathbf{n}}$. The area of $dS$ projected in the direction of the light propagation is $dS \cos\theta$. The etendue of an infinitesimal bundle of light crossing $dS$ is defined as
$$dG = n^2\, dS \cos\theta\, d\Omega.$$
Etendue is the product of the geometric extent and the squared refractive index of the medium through which the beam propagates. Because angles, solid angles, and refractive indices are dimensionless quantities, etendue is often expressed in units of area (given by $dS$). However, it can alternatively be expressed in units of area (square meters) multiplied by solid angle (steradians).
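As a rough numerical illustration of the definition above, the following Python sketch estimates the etendue of a small emitting patch radiating into a narrow cone; the patch area, cone half-angle, and refractive index are illustrative values, not taken from the text.

```python
import math

def etendue_small_patch(area_m2, half_angle_rad, n=1.0, tilt_rad=0.0):
    """Approximate G = n^2 * dS * cos(theta) * dOmega for a small patch.

    Valid when the patch and the cone of directions are both small, so the
    integrand n^2 cos(theta) is roughly constant over the bundle.
    """
    solid_angle = 2.0 * math.pi * (1.0 - math.cos(half_angle_rad))  # solid angle of a cone
    return n**2 * area_m2 * math.cos(tilt_rad) * solid_angle

# 1 mm^2 patch emitting into a 5 degree half-angle cone in air
G = etendue_small_patch(1e-6, math.radians(5.0))
print(f"etendue ~ {G:.3e} m^2*sr")
```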
In free space
Consider a light source $\Sigma$, and a light detector $S$, both of which are extended surfaces (rather than differential elements), and which are separated by a perfectly transparent medium of refractive index $n$. To compute the etendue of the system, one must consider the contribution of each point on the surface of the light source as it casts rays to each point on the receiver.
According to the definition above, the etendue of the light crossing $d\Sigma$ towards $dS$ is given by:
$$dG_\Sigma = n^2\, d\Sigma \cos\theta_\Sigma\, d\Omega_\Sigma = n^2\, d\Sigma \cos\theta_\Sigma\, \frac{dS \cos\theta_S}{d^2},$$
where $d\Omega_\Sigma$ is the solid angle defined by area $dS$ at area $d\Sigma$, and $d$ is the distance between the two areas. Similarly, the etendue of the light crossing $dS$ coming from $d\Sigma$ is given by:
$$dG_S = n^2\, dS \cos\theta_S\, d\Omega_S = n^2\, dS \cos\theta_S\, \frac{d\Sigma \cos\theta_\Sigma}{d^2},$$
where $d\Omega_S$ is the solid angle defined by area $d\Sigma$ at area $dS$. These expressions result in
$$dG_\Sigma = dG_S,$$
showing that etendue is conserved as light propagates in free space.
The etendue of the whole system is then:
$$G = \int_\Sigma \int_S dG.$$
If both surfaces $d\Sigma$ and $dS$ are immersed in air (or in vacuum), $n = 1$ and the expression above for the etendue may be written as
$$dG = d\Sigma \cos\theta_\Sigma\, \frac{dS \cos\theta_S}{d^2} = \pi\, d\Sigma \left(\frac{\cos\theta_\Sigma \cos\theta_S}{\pi d^2}\, dS\right) = \pi\, d\Sigma\, dF_{d\Sigma \to dS},$$
where $dF_{d\Sigma \to dS}$ is the view factor between differential surfaces $d\Sigma$ and $dS$. Integration on $d\Sigma$ and $dS$ results in $G = \pi\, \Sigma\, F_{\Sigma \to S}$, which allows the etendue between two surfaces to be obtained from the view factors between those surfaces, as provided in a list of view factors for specific geometry cases or in several heat transfer textbooks.
Conservation
The etendue of a given bundle of light is conserved: etendue can be increased, but not decreased in any optical system. This means that any system that concentrates light from some source onto a smaller area must always increase the solid angle of incidence (that is, the area of the sky that the source subtends). For example, a magnifying glass can increase the intensity of sunlight onto a small spot, but does so because, viewed from the spot that the light is concentrated onto, the apparent size of the sun is increased proportional to the concentration.
As shown below, etendue is conserved as light travels through free space and at refractions or reflections. It is then also conserved as light travels through optical systems where it undergoes perfect reflections or refractions. However, if light were to hit, say, a diffuser, its solid angle would increase, increasing the etendue. Etendue can therefore remain constant or increase as light propagates through an optic, but it cannot decrease. This is a direct result of the fact that entropy must be constant or increasing.
Conservation of etendue can be derived in different contexts, such as from optical first principles, from Hamiltonian optics or from the second law of thermodynamics.
From the perspective of thermodynamics, etendue is a form of entropy. Specifically, the etendue of a bundle of light contributes to its entropy. Etendue may be exponentially decreased by an increase in entropy elsewhere. For example, a material might absorb photons and emit lower-frequency photons, releasing the difference in energy as heat. This increases entropy due to heat, allowing a corresponding decrease in etendue.
The conservation of etendue in free space is related to the reciprocity theorem for view factors.
In refractions and reflections
The conservation of etendue discussed above applies to the case of light propagation in free space, or more generally, in a medium of any refractive index. In particular, etendue is conserved in refractions and reflections. Consider an infinitesimal surface $dS$, on the $xy$ plane, separating two media of refractive indices $n_\Sigma$ and $n_S$.
The normal to $dS$ points in the direction of the $z$-axis. Incoming light is confined to a solid angle $d\Omega_\Sigma$ and reaches $dS$ at an angle $\theta_\Sigma$ to its normal. Refracted light is confined to a solid angle $d\Omega_S$ and leaves $dS$ at an angle $\theta_S$ to its normal. The directions of the incoming and refracted light are contained in a plane making an angle $\varphi$ to the $x$-axis, defining these directions in a spherical coordinate system. With these definitions, Snell's law of refraction can be written as
$$n_\Sigma \sin\theta_\Sigma = n_S \sin\theta_S$$
and its derivative relative to $\theta$,
$$n_\Sigma \cos\theta_\Sigma\, d\theta_\Sigma = n_S \cos\theta_S\, d\theta_S,$$
multiplied by each other result in
$$n_\Sigma^2 \cos\theta_\Sigma \left(\sin\theta_\Sigma\, d\theta_\Sigma\, d\varphi\right) = n_S^2 \cos\theta_S \left(\sin\theta_S\, d\theta_S\, d\varphi\right),$$
where both sides of the equation were also multiplied by $d\varphi$, which does not change on refraction. This expression can now be written as
$$n_\Sigma^2 \cos\theta_\Sigma\, d\Omega_\Sigma = n_S^2 \cos\theta_S\, d\Omega_S.$$
Multiplying both sides by $dS$ we get
$$n_\Sigma^2\, dS \cos\theta_\Sigma\, d\Omega_\Sigma = n_S^2\, dS \cos\theta_S\, d\Omega_S,$$
that is
$$dG_\Sigma = dG_S,$$
showing that the etendue of the light refracted at $dS$ is conserved. The same result is also valid for the case of a reflection at a surface $dS$, in which case $n_\Sigma = n_S$ and $\theta_\Sigma = \theta_S$.
Brightness theorem
A consequence of the conservation of etendue is the brightness theorem, which states that no linear optical system can increase the brightness of the light emitted from a source to a higher value than the brightness of the surface of that source (where "brightness" is defined as the optical power emitted per unit solid angle per unit emitting or receiving area).
Conservation of basic radiance
Radiance of a surface is related to etendue by:
$$L = n^2\, \frac{\partial \Phi}{\partial G},$$
where
$\Phi$ is the radiant flux emitted, reflected, transmitted or received;
$n$ is the refractive index in which that surface is immersed;
$G$ is the étendue of the light beam.
As the light travels through an ideal optical system, both the etendue and the radiant flux are conserved. Therefore, the basic radiance, defined as
$$L^* = \frac{L}{n^2} = \frac{\partial \Phi}{\partial G},$$
is also conserved. In real systems, the etendue may increase (for example due to scattering) or the radiant flux may decrease (for example due to absorption) and, therefore, basic radiance may decrease. However, etendue may not decrease and radiant flux may not increase and, therefore, basic radiance may not increase.
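To make the bookkeeping concrete, here is a small Python check of the relations above; the flux, etendue, and refractive indices are illustrative numbers, and the optic is assumed lossless so that flux and etendue are both conserved.

```python
def radiance(flux_w, etendue_m2sr, n):
    """Radiance L = n^2 * (flux / etendue) for a uniform beam."""
    return n**2 * flux_w / etendue_m2sr

def basic_radiance(flux_w, etendue_m2sr):
    """Basic radiance L* = L / n^2 = flux / etendue."""
    return flux_w / etendue_m2sr

flux = 1.0      # W, conserved by an ideal (loss-free) optic
G = 2.0e-6      # m^2*sr, conserved by the same ideal optic
for n in (1.0, 1.5):  # air, then glass
    print(n, radiance(flux, G, n), basic_radiance(flux, G))
# The radiance scales with n^2, while the basic radiance stays the same.
```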
As a volume in phase space
In the context of Hamiltonian optics, at a point in space, a light ray may be completely defined by a point $\mathbf{r} = (x, y, z)$, a unit Euclidean vector $\hat{\mathbf{v}} = (\cos\alpha_X, \cos\alpha_Y, \cos\alpha_Z)$ indicating its direction, and the refractive index $n$ at point $\mathbf{r}$. The optical momentum of the ray at that point is defined by
$$\mathbf{p} = n\,(\cos\alpha_X, \cos\alpha_Y, \cos\alpha_Z) = (p, q, r),$$
where $\|\mathbf{p}\| = n$.
In a spherical coordinate system $\mathbf{p}$ may be written as
$$\mathbf{p} = n\,(\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta),$$
from which
$$dp\, dq = \frac{\partial(p, q)}{\partial(\theta, \varphi)}\, d\theta\, d\varphi = n^2 \sin\theta\cos\theta\, d\theta\, d\varphi = n^2 \cos\theta\, d\Omega,$$
and therefore, for an infinitesimal area $dS = dx\, dy$ on the $xy$-plane immersed in a medium of refractive index $n$, the etendue is given by
$$dG = n^2\, dS \cos\theta\, d\Omega = dx\, dy\, dp\, dq,$$
which is an infinitesimal volume in phase space $(x, y, p, q)$. Conservation of etendue in phase space is the equivalent in optics to Liouville's theorem in classical mechanics. Etendue as volume in phase space is commonly used in nonimaging optics.
Maximum concentration
Consider an infinitesimal surface $dS$, immersed in a medium of refractive index $n$, crossed by (or emitting) light inside a cone of half-angle $\alpha$. The etendue of this light is given by
$$dG = n^2\, dS \int_0^{2\pi}\!\!\int_0^{\alpha} \cos\theta \sin\theta\, d\theta\, d\varphi = \pi n^2\, dS \sin^2\alpha.$$
Noting that $n\sin\alpha$ is the numerical aperture NA of the beam of light, this can also be expressed as
$$dG = \pi\, dS\, \mathrm{NA}^2.$$
Note that $d\Omega$ is expressed in a spherical coordinate system. Now, if a large surface $S$ is crossed by (or emits) light also confined to a cone of half-angle $\alpha$, the etendue of the light crossing $S$ is
$$G = \pi n^2 \sin^2\alpha \int_S dS = \pi n^2 S \sin^2\alpha.$$
The limit on maximum concentration is an optic with an entrance aperture $S$, in air ($n_i = 1$), collecting light within a solid angle of angle $2\alpha$ (its acceptance angle) and sending it to a smaller area receiver $\Sigma$ immersed in a medium of refractive index $n$, whose points are illuminated within a solid angle of angle $2\beta$. From the above expression, the etendue of the incoming light is
$$G_i = \pi S \sin^2\alpha$$
and the etendue of the light reaching the receiver is
$$G_r = \pi n^2 \Sigma \sin^2\beta.$$
Conservation of etendue $G_i = G_r$ then gives
$$C = \frac{S}{\Sigma} = n^2\, \frac{\sin^2\beta}{\sin^2\alpha},$$
where $C$ is the concentration of the optic. For a given angular aperture $\alpha$ of the incoming light, this concentration will be maximum for the maximum value of $\sin\beta$, that is $\beta = \pi/2$. The maximum possible concentration is then
$$C_{\max} = \frac{n^2}{\sin^2\alpha}.$$
In the case that the incident index is not unity, we have
$$G_i = \pi n_i^2\, S \sin^2\alpha = \pi n_r^2\, \Sigma \sin^2\beta,$$
and so
$$C = \frac{S}{\Sigma} = \frac{n_r^2 \sin^2\beta}{n_i^2 \sin^2\alpha},$$
and in the best-case limit of $\beta = \pi/2$, this becomes
$$C_{\max} = \frac{n_r^2}{n_i^2 \sin^2\alpha}.$$
If the optic were a collimator instead of a concentrator, the light direction is reversed and conservation of etendue gives us the minimum aperture, $S$, for a given output full angle $2\alpha$.
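The maximum-concentration formula above is easy to evaluate numerically. The Python sketch below uses an illustrative acceptance half-angle and receiver index (values of my choosing, not from the text) to show how strongly the limit depends on the acceptance angle.

```python
import math

def max_concentration(acceptance_half_angle_rad, n_receiver=1.0, n_incident=1.0):
    """Etendue-limited concentration C_max = (n_r / n_i)^2 / sin^2(alpha)."""
    return (n_receiver / n_incident)**2 / math.sin(acceptance_half_angle_rad)**2

# Example: a solar concentrator must accept at least the ~0.27 degree angular
# radius of the Sun; immersing the receiver in a dielectric raises the limit.
alpha = math.radians(0.27)
print(f"air-immersed receiver: C_max ~ {max_concentration(alpha):,.0f}")
print(f"n = 1.5 receiver     : C_max ~ {max_concentration(alpha, n_receiver=1.5):,.0f}")
```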
See also
Beam emittance
Beam parameter product
Light field
Noether's theorem
Symplectic geometry
References
Further reading
xkcd: author Randall Munroe explains why it's impossible to light a fire with concentrated moonlight, using an etendue-conservation argument.
Optical quantities
Projectile motion
Projectile motion is a form of motion experienced by an object or particle (a projectile) that is projected in a gravitational field, such as from Earth's surface, and moves along a curved path (a trajectory) under the action of gravity only. In the particular case of projectile motion on Earth, most calculations assume the effects of air resistance are negligible.
Galileo Galilei showed that the trajectory of a given projectile is parabolic, but the path may also be straight in the special case when the object is thrown directly upward or downward. The study of such motions is called ballistics, and such a trajectory is described as ballistic. The only force of mathematical significance that is actively exerted on the object is gravity, which acts downward, thus imparting to the object a downward acceleration towards Earth's center of mass. Due to the object's inertia, no external force is needed to maintain the horizontal velocity component of the object's motion.
Taking other forces into account, such as aerodynamic drag or internal propulsion (such as in a rocket), requires additional analysis. A ballistic missile is a missile only guided during the relatively brief initial powered phase of flight, and whose remaining course is governed by the laws of classical mechanics.
Ballistics is the science of dynamics that deals with the flight, behavior and effects of projectiles, especially bullets, unguided bombs, rockets, or the like; the science or art of designing and accelerating projectiles so as to achieve a desired performance.
The elementary equations of ballistics neglect nearly every factor except for the initial velocity, the launch angle, and a gravitational acceleration assumed constant. Practical solutions of a ballistics problem often require considerations of air resistance, cross winds, target motion, acceleration due to gravity varying with height, and, in such problems as launching a rocket from one point on the Earth to another, the curvature of the Earth and its local speed of rotation. Detailed mathematical solutions of practical problems typically do not have closed-form solutions, and therefore require numerical methods to address.
Kinematic quantities
In projectile motion, the horizontal motion and the vertical motion are independent of each other; that is, neither motion affects the other. This is the principle of compound motion established by Galileo in 1638, and used by him to prove the parabolic form of projectile motion.
A ballistic trajectory is a parabola with homogeneous acceleration, such as in a space ship with constant acceleration in the absence of other forces. On Earth, the acceleration changes in magnitude with altitude (falling off with the inverse square of the distance from Earth's center) and in direction with latitude/longitude along the trajectory (relevant for faraway targets). This causes an elliptic trajectory, which is very close to a parabola on a small scale. However, if an object was thrown and the Earth was suddenly replaced with a black hole of equal mass, it would become obvious that the ballistic trajectory is part of an elliptic orbit around that black hole, and not a parabola that extends to infinity. At higher speeds the trajectory can also be circular, parabolic or hyperbolic (unless distorted by other objects like the Moon or the Sun).
In this article a homogeneous gravitational acceleration is assumed.
Acceleration
Since there is acceleration only in the vertical direction, the velocity in the horizontal direction is constant, being equal to $v_0\cos\theta$. The vertical motion of the projectile is the motion of a particle during its free fall. Here the acceleration is constant, being equal to g. The components of the acceleration are:
$$a_x = 0,$$
$$a_y = -g.$$
The vertical acceleration is due to the gravitational force the Earth exerts on the object(s) of interest.
Velocity
Let the projectile be launched with an initial velocity $\mathbf{v}_0$, which can be expressed as the sum of horizontal and vertical components as follows:
$$\mathbf{v}_0 = v_{0x}\,\hat{\mathbf{x}} + v_{0y}\,\hat{\mathbf{y}}.$$
The components $v_{0x}$ and $v_{0y}$ can be found if the initial launch angle, $\theta$, is known:
$$v_{0x} = v_0\cos\theta, \qquad v_{0y} = v_0\sin\theta.$$
The horizontal component of the velocity of the object remains unchanged throughout the motion. The vertical component of the velocity changes linearly, because the acceleration due to gravity is constant. The accelerations in the x and y directions can be integrated to solve for the components of velocity at any time t, as follows:
$$v_x = v_0\cos\theta,$$
$$v_y = v_0\sin\theta - g t.$$
The magnitude of the velocity (by the Pythagorean theorem):
$$v = \sqrt{v_x^2 + v_y^2}.$$
Displacement
At any time $t$, the projectile's horizontal and vertical displacement are:
$$x = v_0 t\cos\theta,$$
$$y = v_0 t\sin\theta - \tfrac{1}{2} g t^2.$$
The magnitude of the displacement is:
$$\Delta r = \sqrt{x^2 + y^2}.$$
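As a quick numerical sanity check of the kinematic relations above, the short Python sketch below evaluates the velocity components and displacement for one illustrative launch (the speed, angle, and sample time are arbitrary choices, not values from the text).

```python
import math

g = 9.81                      # m/s^2, gravitational acceleration
v0 = 20.0                     # m/s, illustrative launch speed
theta = math.radians(45.0)    # illustrative launch angle
t = 1.5                       # s, an arbitrary instant during the flight

vx = v0 * math.cos(theta)               # constant horizontal velocity
vy = v0 * math.sin(theta) - g * t       # vertical velocity decreases linearly
x = v0 * t * math.cos(theta)            # horizontal displacement
y = v0 * t * math.sin(theta) - 0.5 * g * t**2  # vertical displacement

speed = math.hypot(vx, vy)              # |v| from the Pythagorean theorem
print(f"v = ({vx:.2f}, {vy:.2f}) m/s, |v| = {speed:.2f} m/s")
print(f"r = ({x:.2f}, {y:.2f}) m, |r| = {math.hypot(x, y):.2f} m")
```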
Consider the equations,
$$x = v_0 t\cos\theta \quad\text{and}\quad y = v_0 t\sin\theta - \tfrac{1}{2} g t^2.$$
If t is eliminated between these two equations the following equation is obtained:
$$y = \tan\theta\cdot x - \frac{g}{2 v_0^2 \cos^2\theta}\cdot x^2 = x\tan\theta\left(1 - \frac{x}{R}\right).$$
Here R is the range of the projectile.
Since g, θ, and v0 are constants, the above equation is of the form
$$y = ax + bx^2,$$
in which a and b are constants. This is the equation of a parabola, so the path is parabolic. The axis of the parabola is vertical.
If the projectile's position (x, y) and launch angle (θ or α) are known, the initial velocity can be found by solving for v0 in the aforementioned parabolic equation:
$$v_0 = \sqrt{\frac{x^2\, g}{x \sin 2\theta - 2 y \cos^2\theta}}.$$
Displacement in polar coordinates
The parabolic trajectory of a projectile can also be expressed in polar coordinates instead of Cartesian coordinates. In this case, the position has the general formula
.
In this equation, the origin is the midpoint of the horizontal range of the projectile, and if the ground is flat, the parabolic arc is plotted in the range . This expression can be obtained by transforming the Cartesian equation as stated above by and .
Properties of the trajectory
Time of flight or total time of the whole journey
The total time t for which the projectile remains in the air is called the time of flight.
After the flight, the projectile returns to the horizontal axis (x-axis), so $y = 0$.
Note that we have neglected air resistance on the projectile.
If the starting point is at height y0 with respect to the point of impact, the time of flight is:
$$t = \frac{d}{v\cos\theta} = \frac{v\sin\theta + \sqrt{(v\sin\theta)^2 + 2 g y_0}}{g}.$$
As above, this expression can be reduced to
$$t = \frac{v\sqrt{2}}{g}$$
if θ is 45° and y0 is 0.
Time of flight to the target's position
As shown above in the Displacement section, the horizontal and vertical velocity of a projectile are independent of each other.
Because of this, we can find the time to reach a target using the displacement formula for the horizontal velocity:
$$t = \frac{x}{v_0\cos\theta}.$$
This equation gives the total time t the projectile must travel to reach the target's horizontal displacement, neglecting air resistance.
Maximum height of projectile
The greatest height that the object will reach is known as the peak of the object's motion.
The increase in height will last until $v_y = 0$, that is,
$$0 = v_0\sin\theta - g t_h.$$
Time to reach the maximum height (h):
$$t_h = \frac{v_0\sin\theta}{g}.$$
For the vertical displacement of the maximum height of the projectile:
$$h = v_0 t_h \sin\theta - \tfrac{1}{2} g t_h^2 = \frac{v_0^2 \sin^2\theta}{2g}.$$
The maximum reachable height is obtained for θ = 90°:
$$h_{\max} = \frac{v_0^2}{2g}.$$
If the projectile's position (x, y) and launch angle (θ) are known, the maximum height can be found by solving for h in the following equation:
$$h = \frac{(x\tan\theta)^2}{4\,(x\tan\theta - y)}.$$
The angle of elevation (φ) at the maximum height is given by:
$$\varphi = \arctan\left(\frac{\tan\theta}{2}\right).$$
Relation between horizontal range and maximum height
The relation between the range d on the horizontal plane and the maximum height h reached at $t_d/2$ is:
$$h = \frac{d\tan\theta}{4}.$$
If $h = d$, then $\theta = \arctan 4 \approx 76.0°$.
Maximum distance of projectile
The range and the maximum height of the projectile do not depend upon its mass. Hence range and maximum height are equal for all bodies that are thrown with the same velocity and direction. The horizontal range d of the projectile is the horizontal distance it has traveled when it returns to its initial height.
$$0 = v_0 t_d \sin\theta - \tfrac{1}{2} g t_d^2.$$
Time to reach ground:
$$t_d = \frac{2 v_0\sin\theta}{g}.$$
From the horizontal displacement, the maximum distance of the projectile:
$$d = v_0 t_d \cos\theta,$$
so
$$d = \frac{v_0^2}{g}\sin 2\theta.$$
Note that d has its maximum value when
$$\sin 2\theta = 1,$$
which necessarily corresponds to
$$2\theta = 90°,$$
or
$$\theta = 45°.$$
The total horizontal distance (d) traveled.
When the surface is flat (initial height of the object is zero), the distance traveled is:
$$d = \frac{2 v_0^2 \sin\theta\cos\theta}{g} = \frac{v_0^2 \sin 2\theta}{g}.$$
Thus the maximum distance is obtained when θ is 45 degrees. This distance is:
$$d_{\max} = \frac{v_0^2}{g}.$$
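The range and maximum-height formulas above are easy to tabulate. The Python sketch below does so for a few launch angles at one illustrative speed (the speed is my choice, not from the text), confirming that the flat-ground range peaks at 45°.

```python
import math

g = 9.81    # m/s^2
v0 = 30.0   # m/s, illustrative launch speed

def flat_ground_range(v0, theta_deg):
    return v0**2 * math.sin(math.radians(2 * theta_deg)) / g

def max_height(v0, theta_deg):
    return (v0 * math.sin(math.radians(theta_deg)))**2 / (2 * g)

for theta in (15, 30, 45, 60, 75):
    print(f"theta = {theta:2d} deg: range = {flat_ground_range(v0, theta):6.1f} m, "
          f"height = {max_height(v0, theta):5.1f} m")
# The 15/75 and 30/60 pairs give the same range; 45 deg gives the maximum, v0^2/g.
```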
Application of the work energy theorem
According to the work-energy theorem the vertical component of velocity is:
$$v_y^2 = (v_0\sin\theta)^2 - 2 g y.$$
These formulae ignore aerodynamic drag and also assume that the landing area is at uniform height 0.
Angle of reach
The "angle of reach" is the angle (θ) at which a projectile must be launched in order to go a distance d, given the initial velocity v.
There are two solutions:
(shallow trajectory)
and because ,
(steep trajectory)
Angle θ required to hit coordinate (x, y)
To hit a target at range x and altitude y when fired from (0,0) and with initial speed v, the required angle(s) of launch θ are:
$$\theta = \arctan\left(\frac{v^2 \pm \sqrt{v^4 - g\,(g x^2 + 2 y v^2)}}{g x}\right).$$
The two roots of the equation correspond to the two possible launch angles, so long as they aren't imaginary, in which case the initial speed is not great enough to reach the point (x, y) selected. This formula allows one to find the angle of launch needed without the restriction of $y = 0$.
One can also ask what launch angle allows the lowest possible launch velocity. This occurs when the two solutions above are equal, implying that the quantity under the square root sign is zero. This requires solving a quadratic equation for $v^2$, and we find
$$v^2 = g\left(y + \sqrt{y^2 + x^2}\right).$$
This gives
$$\theta = \arctan\left(\frac{y + \sqrt{y^2 + x^2}}{x}\right).$$
If we denote the angle whose tangent is $y/x$ by $\alpha$, then
$$\tan\theta = \frac{\sin\alpha + 1}{\cos\alpha}.$$
This implies
$$\theta = 45° + \frac{\alpha}{2}.$$
In other words, the launch should be at the angle halfway between the target and zenith (vector opposite to gravity).
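The hedged Python sketch below evaluates the two launch angles for an illustrative target and speed (numbers chosen only for demonstration), and checks the minimum-speed angle against the halfway-to-zenith rule stated above.

```python
import math

g = 9.81

def launch_angles(v, x, y):
    """Return the two launch angles (radians) that hit (x, y), or None if unreachable."""
    disc = v**4 - g * (g * x**2 + 2 * y * v**2)
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return (math.atan2(v**2 - root, g * x), math.atan2(v**2 + root, g * x))

x, y, v = 50.0, 10.0, 30.0           # illustrative target and speed
shallow, steep = launch_angles(v, x, y)
print(f"shallow: {math.degrees(shallow):.1f} deg, steep: {math.degrees(steep):.1f} deg")

# Minimum launch speed and the corresponding single angle (halfway to zenith)
v_min = math.sqrt(g * (y + math.hypot(x, y)))
alpha = math.atan2(y, x)
theta_min = math.radians(45.0) + alpha / 2.0
print(f"v_min = {v_min:.2f} m/s at theta = {math.degrees(theta_min):.1f} deg")
```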
Total Path Length of the Trajectory
The length of the parabolic arc traced by a projectile, L, given that the height of launch and landing is the same (and that there is no air resistance), is given by the formula:
$$L = \frac{v_0^2}{g}\left(\sin\theta + \cos^2\theta\,\operatorname{artanh}(\sin\theta)\right) = \frac{v_0^2}{g}\left(\sin\theta + \cos^2\theta\,\ln\frac{1 + \sin\theta}{\cos\theta}\right),$$
where $v_0$ is the initial velocity, $\theta$ is the launch angle and $g$ is the acceleration due to gravity as a positive value. The expression can be obtained by evaluating the arc length integral for the height-distance parabola between the bounds initial and final displacement (i.e. between 0 and the horizontal range of the projectile), such that:
$$L = \int_0^{d}\sqrt{1 + \left(\frac{dy}{dx}\right)^2}\,dx = \int_0^{d}\sqrt{1 + \left(\tan\theta - \frac{g\,x}{v_0^2\cos^2\theta}\right)^2}\,dx.$$
If the time of flight is t,
$$L = \int_0^{t}\sqrt{v_x^2 + v_y^2}\,\,dt' = \int_0^{t}\sqrt{(v_0\cos\theta)^2 + (v_0\sin\theta - g\,t')^2}\,\,dt'.$$
Trajectory of a projectile with air resistance
Air resistance creates a force that (for symmetric projectiles) is always directed against the direction of motion in the surrounding medium and has a magnitude that depends on the absolute speed: $\mathbf{F}_{air} = -f(v)\cdot\hat{\mathbf{v}}$. The speed-dependence of the friction force is linear at very low speeds (Stokes drag) and quadratic at large speeds (Newton drag). The transition between these behaviours is determined by the Reynolds number, which depends on speed, object size, density and dynamic viscosity of the medium. For Reynolds numbers below about 1 the dependence is linear, above 1000 (turbulent flow) it becomes quadratic. In air, which has a kinematic viscosity around $1.5\times10^{-5}\ \mathrm{m^2/s}$, this means that the drag force becomes quadratic in v when the product of speed and diameter is more than about $0.015\ \mathrm{m^2/s}$, which is typically the case for projectiles.
Stokes drag: $\mathbf{F}_{air} = -k_{\mathrm{Stokes}}\cdot\mathbf{v}$ (for $Re \lesssim 1$)
Newton drag: $\mathbf{F}_{air} = -k\,|\mathbf{v}|\cdot\mathbf{v}$ (for $Re \gtrsim 1000$)
Consider the free body diagram of a projectile that experiences air resistance and the effects of gravity. Here, air resistance is assumed to act in the direction opposite to the projectile's velocity.
Trajectory of a projectile with Stokes drag
Stokes drag, where $\mathbf{F}_{air} = -k\,\mathbf{v}$, only applies at very low speed in air, and is thus not the typical case for projectiles. However, the linear dependence of $F_{air}$ on $v$ causes a very simple differential equation of motion,
in which the two Cartesian components become completely independent, and which is thus easier to solve. Here, $v_0$, $v_x$ and $v_y$ will be used to denote the initial velocity, the velocity along the direction of x and the velocity along the direction of y, respectively. The mass of the projectile will be denoted by m, and $\mu := k/m$. For the derivation only the case where $0° \le \theta \le 180°$ is considered. Again, the projectile is fired from the origin (0,0).
The relationships that represent the motion of the particle are derived by Newton's Second Law, both in the x and y directions.
In the x direction $\frac{dv_x}{dt} = -\mu\,v_x$ and in the y direction $\frac{dv_y}{dt} = -\mu\,v_y - g$.
This implies that:
$$\frac{dv_x}{dt} = -\mu\,v_x \quad (1),$$
and
$$\frac{dv_y}{dt} = -\mu\,v_y - g \quad (2)$$
Solving (1) is an elementary differential equation, thus the steps leading to a unique solution for vx and, subsequently, x will not be enumerated. Given the initial conditions $v_x = v_{x0}$ (where vx0 is understood to be the x component of the initial velocity) and $x = 0$ for $t = 0$:
$$v_x = v_{x0}\,e^{-\mu t} \quad (1a)$$
$$x(t) = \frac{v_{x0}}{\mu}\left(1 - e^{-\mu t}\right) \quad (1b)$$
While (1) is solved much in the same way, (2) is of distinct interest because of its non-homogeneous nature. Hence, we will be extensively solving (2). Note that in this case the initial conditions $v_y = v_{y0}$ and $y = 0$ are used when $t = 0$.
$$\frac{dv_y}{dt} = -\mu\,v_y - g \quad (2)$$
$$\frac{dv_y}{dt} + \mu\,v_y = -g \quad (2a)$$
This first order, linear, non-homogeneous differential equation may be solved a number of ways; however, in this instance, it will be quicker to approach the solution via an integrating factor $e^{\mu t}$.
$$e^{\mu t}\left(\frac{dv_y}{dt} + \mu\,v_y\right) = -g\,e^{\mu t} \quad (2c)$$
$$\frac{d}{dt}\left(e^{\mu t}\,v_y\right) = -g\,e^{\mu t} \quad (2d)$$
$$\int \frac{d}{dt}\left(e^{\mu t}\,v_y\right)\,dt = -\int g\,e^{\mu t}\,dt \quad (2e)$$
$$e^{\mu t}\,v_y = -\frac{g}{\mu}\,e^{\mu t} + C \quad (2f)$$
$$v_y = -\frac{g}{\mu} + C\,e^{-\mu t} \quad (2g)$$
And by integration we find:
$$y = -\frac{g}{\mu}\,t - \frac{C}{\mu}\,e^{-\mu t} + C_1 \quad (3)$$
Solving for our initial conditions:
$$v_y(t) = -\frac{g}{\mu} + \left(v_{y0} + \frac{g}{\mu}\right)e^{-\mu t} \quad (2h)$$
$$y(t) = -\frac{g}{\mu}\,t - \frac{1}{\mu}\left(v_{y0} + \frac{g}{\mu}\right)e^{-\mu t} + \frac{1}{\mu}\left(v_{y0} + \frac{g}{\mu}\right) \quad (3a)$$
With a bit of algebra to simplify (3a):
$$y(t) = -\frac{g}{\mu}\,t + \frac{1}{\mu}\left(v_{y0} + \frac{g}{\mu}\right)\left(1 - e^{-\mu t}\right) \quad (3b)$$
The total time of the journey in the presence of air resistance (more specifically, when $F_{air} = -k\,v$) can be calculated by the same strategy as above, namely, we solve the equation $y(t) = 0$. While in the case of zero air resistance this equation can be solved elementarily, here we shall need the Lambert W function. The equation
$$y(t) = -\frac{g}{\mu}\,t + \frac{1}{\mu}\left(v_{y0} + \frac{g}{\mu}\right)\left(1 - e^{-\mu t}\right) = 0$$
is of the form $c_1 t + c_2 + c_3\,e^{c_4 t} = 0$, and such an equation can be transformed into an equation solvable by the $W$ function. Some algebra shows that the total time of flight, in closed form, is given as
$$t = \frac{1}{\mu}\left(1 + \frac{\mu\,v_{y0}}{g} + W\!\left(-\left(1 + \frac{\mu\,v_{y0}}{g}\right)e^{-\left(1 + \frac{\mu\,v_{y0}}{g}\right)}\right)\right).$$
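To see the closed form in action, the hedged Python sketch below evaluates the Lambert-W expression for one illustrative set of parameters (the mass-normalized drag μ and launch conditions are my own example values) and cross-checks it by bisection on the analytic y(t) of equation (3b); scipy's lambertw is used for the W function.

```python
import math
from scipy.special import lambertw

g = 9.81
mu = 0.1                       # 1/s, illustrative drag parameter (k/m)
v0, theta = 30.0, math.radians(60.0)
vy0 = v0 * math.sin(theta)

def y(t):
    # Vertical position under linear (Stokes) drag, equation (3b)
    return -g / mu * t + (vy0 + g / mu) / mu * (1.0 - math.exp(-mu * t))

# Closed form via the Lambert W function (principal branch)
a = 1.0 + mu * vy0 / g
t_flight = (a + lambertw(-a * math.exp(-a)).real) / mu
print(f"closed form: t = {t_flight:.4f} s, y(t) = {y(t_flight):.2e} m")

# Cross-check: bisection between the apex time (y > 0) and a generous upper bound (y < 0)
lo, hi = math.log1p(mu * vy0 / g) / mu, 10.0 * (2.0 * vy0 / g + 1.0)
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if y(mid) > 0 else (lo, mid)
print(f"bisection  : t = {0.5 * (lo + hi):.4f} s")
```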
Trajectory of a projectile with Newton drag
The most typical case of air resistance, for Reynolds numbers above about 1000, is Newton drag with a drag force proportional to the speed squared, $\mathbf{F}_{air} = -k\,|\mathbf{v}|\cdot\mathbf{v}$. In air, which has a kinematic viscosity around $1.5\times10^{-5}\ \mathrm{m^2/s}$, this means that the product of speed and diameter must be more than about $0.015\ \mathrm{m^2/s}$.
Unfortunately, the equations of motion can not be easily solved analytically for this case. Therefore, a numerical solution will be examined.
The following assumptions are made:
Constant gravitational acceleration
Air resistance is given by the following drag formula,
$$F_D = \tfrac{1}{2}\,c\,\rho\,A\,v^2$$
Where:
FD is the drag force
c is the drag coefficient
ρ is the air density
A is the cross sectional area of the projectile. Compare this with the theory and practice of the ballistic coefficient.
Special cases
Even though the general case of a projectile with Newton drag cannot be solved analytically, some special cases can. Here we denote the terminal velocity in free-fall as $v_\infty = \sqrt{g/\mu}$ and the characteristic settling time constant $t_f = 1/\sqrt{g\,\mu} = v_\infty/g$ (dimensions of $g$ [m/s2] and $\mu = k/m$ [1/m]).
Near-horizontal motion: In case the motion is almost horizontal, $|v_x| \gg |v_y|$, such as a flying bullet, the vertical velocity component has very little influence on the horizontal motion. In this case:
$$\dot v_x(t) = -\mu\,v_x^2(t), \qquad v_x(t) = \frac{v_{x0}}{1 + \mu\,v_{x0}\,t}, \qquad x(t) = \frac{1}{\mu}\ln\left(1 + \mu\,v_{x0}\,t\right).$$
The same pattern applies for motion with friction along a line in any direction, when gravity is negligible (relatively small $g$). It also applies when vertical motion is prevented, such as for a moving car with its engine off.
Vertical motion upward:
Here
$$v_y(t) = v_\infty \tan\frac{t_{peak} - t}{t_f}$$
and
$$y(t) = y_{peak} + v_\infty\,t_f \ln\cos\frac{t_{peak} - t}{t_f},$$
where $v_{y0}$ is the initial upward velocity at $t = 0$, the initial position is $y(0) = 0$, $t_{peak} = t_f \arctan\frac{v_{y0}}{v_\infty}$ is the time to reach the peak, and $y_{peak} = -v_\infty\,t_f \ln\cos\frac{t_{peak}}{t_f}$ is the peak height.
A projectile cannot rise longer than $\frac{\pi}{2}\,t_f$ in the vertical direction before it reaches the peak.
Vertical motion downward:
After a time $t_f$, the projectile reaches almost terminal velocity $-v_\infty$.
Numerical solution
A projectile motion with drag can be computed generically by numerical integration of the ordinary differential equation, for instance by applying a reduction to a first-order system. The equation to be solved is
$$m\,\frac{d^2\mathbf{r}}{dt^2} = -m\,g\,\hat{\mathbf{y}} - \tfrac{1}{2}\,c\,\rho\,A\,\left\|\frac{d\mathbf{r}}{dt}\right\|\,\frac{d\mathbf{r}}{dt}.$$
This approach also allows one to add the effects of a speed-dependent drag coefficient, altitude-dependent air density (in the product $c\,\rho\,A$) and a position-dependent gravity field.
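A minimal numerical sketch of this reduction is shown below in Python; the mass, drag parameters, and launch conditions are illustrative values of my choosing, and a fixed-step forward-Euler integrator is used purely for brevity (so the step size must be kept small).

```python
import math

# Illustrative parameters (not from the text)
m = 0.145                           # kg, projectile mass
c, rho, A = 0.47, 1.225, 0.0042     # drag coefficient, air density (kg/m^3), area (m^2)
g = 9.81
k = 0.5 * c * rho * A               # quadratic drag prefactor, F_D = k * |v| * v

def simulate(v0, theta_deg, dt=1e-3):
    """Forward-Euler integration of the first-order system (r, v) until landing."""
    vx = v0 * math.cos(math.radians(theta_deg))
    vy = v0 * math.sin(math.radians(theta_deg))
    x, y, t = 0.0, 0.0, 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        ax = -(k / m) * speed * vx          # horizontal: drag only
        ay = -g - (k / m) * speed * vy      # vertical: gravity plus drag
        x, y = x + vx * dt, y + vy * dt
        vx, vy = vx + ax * dt, vy + ay * dt
        t += dt
    return x, t

range_m, t_flight = simulate(40.0, 35.0)
print(f"range ~ {range_m:.1f} m, time of flight ~ {t_flight:.2f} s")
# With k = 0 this reduces to the drag-free result v0^2 * sin(2*theta) / g.
```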
Lofted trajectory
A special case of a ballistic trajectory for a rocket is a lofted trajectory, a trajectory with an apogee greater than the minimum-energy trajectory to the same range. In other words, the rocket travels higher and by doing so it uses more energy to get to the same landing point. This may be done for various reasons such as increasing distance to the horizon to give greater viewing/communication range or for changing the angle with which a missile will impact on landing. Lofted trajectories are sometimes used in both missile rocketry and in spaceflight.
Projectile motion on a planetary scale
When a projectile travels a range that is significant compared to the Earth's radius (above ≈100 km), the curvature of the Earth and the non-uniformity of the Earth's gravity have to be considered. This is, for example, the case with spacecraft and intercontinental missiles. The trajectory then generalizes (without air resistance) from a parabola to a Kepler ellipse with one focus at the center of the Earth. The projectile motion then follows Kepler's laws of planetary motion.
The trajectories' parameters have to be adapted from the values of a uniform gravity field stated above. The Earth radius is taken as R, and g as the standard surface gravity. Let $\tilde{v} = v/v_1$ be the launch velocity relative to the first cosmic velocity $v_1$.
Total range d between launch and impact:
Maximum range of a projectile for optimum launch angle:
with $v_1 = \sqrt{g\,R}$, the first cosmic velocity
Maximum height of a projectile above the planetary surface:
Maximum height of a projectile for vertical launch:
with $v_2 = \sqrt{2\,g\,R}$, the second cosmic velocity
Time of flight:
See also
Equations of motion
Phugoid
Notes
References
Mechanics
Paradigm shift
A paradigm shift is a fundamental change in the basic concepts and experimental practices of a scientific discipline. It is a concept in the philosophy of science that was introduced and brought into the common lexicon by the American physicist and philosopher Thomas Kuhn. Even though Kuhn restricted the use of the term to the natural sciences, the concept of a paradigm shift has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events.
Kuhn presented his notion of a paradigm shift in his influential book The Structure of Scientific Revolutions (1962).
Kuhn contrasts paradigm shifts, which characterize a Scientific Revolution, to the activity of normal science, which he describes as scientific work done within a prevailing framework or paradigm. Paradigm shifts arise when the dominant paradigm under which normal science operates is rendered incompatible with new phenomena, facilitating the adoption of a new theory or paradigm.
As one commentator summarizes:
History
The nature of scientific revolutions has been studied by modern philosophy since Immanuel Kant used the phrase in the preface to the second edition of his Critique of Pure Reason (1787). Kant used the phrase "revolution of the way of thinking" to refer to Greek mathematics and Newtonian physics. In the 20th century, new developments in the basic concepts of mathematics, physics, and biology revitalized interest in the question among scholars.
Original usage
In his 1962 book The Structure of Scientific Revolutions, Kuhn explains the development of paradigm shifts in science into four stages:
Normal science – In this stage, which Kuhn sees as most prominent in science, a dominant paradigm is active. This paradigm is characterized by a set of theories and ideas that define what is possible and rational to do, giving scientists a clear set of tools to approach certain problems. Some examples of dominant paradigms that Kuhn gives are: Newtonian physics, caloric theory, and the theory of electromagnetism. Insofar as paradigms are useful, they expand both the scope and the tools with which scientists do research. Kuhn stresses that, rather than being monolithic, the paradigms that define normal science can be particular to different people. A chemist and a physicist might operate with different paradigms of what a helium atom is. Under normal science, scientists encounter anomalies that cannot be explained by the universally accepted paradigm within which scientific progress has thereto been made.
Extraordinary research – When enough significant anomalies have accrued against a current paradigm, the scientific discipline is thrown into a state of crisis. To address the crisis, scientists push the boundaries of normal science in what Kuhn calls “extraordinary research”, which is characterized by its exploratory nature. Without the structures of the dominant paradigm to depend on, scientists engaging in extraordinary research must produce new theories, thought experiments, and experiments to explain the anomalies. Kuhn sees the practice of this stage – “the proliferation of competing articulations, the willingness to try anything, the expression of explicit discontent, the recourse to philosophy and to debate over fundamentals” – as even more important to science than paradigm shifts.
Adoption of a new paradigm – Eventually a new paradigm is formed, which gains its own new followers. For Kuhn, this stage entails both resistance to the new paradigm, and reasons for why individual scientists adopt it. According to Max Planck, "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Because scientists are committed to the dominant paradigm, and paradigm shifts involve gestalt-like changes, Kuhn stresses that paradigms are difficult to change. However, paradigms can gain influence by explaining or predicting phenomena much better than before (i.e., Bohr's model of the atom) or by being more subjectively pleasing. During this phase, proponents for competing paradigms address what Kuhn considers the core of a paradigm debate: whether a given paradigm will be a good guide for problems – things that neither the proposed paradigm nor the dominant paradigm are capable of solving currently.
Aftermath of the scientific revolution – In the long run, the new paradigm becomes institutionalized as the dominant one. Textbooks are written, obscuring the revolutionary process.
Features
Paradigm shifts and progress
A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) are a case for relativism: the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is always better, not just different.
Incommensurability
These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another—that they are incommensurable. This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes—so different that whether or not one was better, they could not be understood by one another. However, the philosopher Donald Davidson published the highly regarded essay "On the Very Idea of a Conceptual Scheme" in 1974 arguing that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are. Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous, with the wide application of multi-paradigmatic approaches in order to understand complex human behaviour.
Gradualism vs. sudden change
Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system.
In The Structure of Scientific Revolutions, Kuhn wrote, "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science" (p. 12). Kuhn's idea was itself revolutionary in its time as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognise such a paradigm shift. In the social sciences, people can still use earlier ideas to discuss the history of science.
Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it.
Examples
Natural sciences
Some of the "classical cases" of Kuhnian paradigm shifts in science are:
1543 – The transition in cosmology from a Ptolemaic cosmology to a Copernican one.
1543 – The acceptance of the work of Andreas Vesalius, whose work De humani corporis fabrica corrected the numerous errors in the previously held system of human anatomy created by Galen.
1687 – The transition in mechanics from Aristotelian mechanics to classical mechanics.
1783 – The acceptance of Lavoisier's theory of chemical reactions and combustion in place of phlogiston theory, known as the chemical revolution.
The transition in optics from geometrical optics to physical optics with Augustin-Jean Fresnel's wave theory.
1826 – The discovery of hyperbolic geometry.
1830 to 1833 – Geologist Charles Lyell published Principles of Geology, which not only put forth the concept of uniformitarianism, which was in direct contrast to the popular geological theory, at the time, catastrophism, but also utilized geological proof to determine that the age of the Earth was older than 6,000 years, which was previously held to be true.
1859 – The revolution in evolution from goal-directed change to Charles Darwin's natural selection.
1880 – The germ theory of disease began overtaking Galen's miasma theory.
1905 – The development of quantum mechanics, which replaced classical mechanics at microscopic scales.
1887 to 1905 – The transition from the luminiferous aether present in space to electromagnetic radiation in spacetime.
1919 – The transition between the worldview of Newtonian gravity and general relativity.
1920 – The emergence of the modern view of the Milky Way as just one of countless galaxies within an immeasurably vast universe following the results of the Smithsonian's Great Debate between astronomers Harlow Shapley and Heber Curtis.
1952 – Chemists Stanley Miller and Harold Urey perform an experiment which simulated the conditions on the early Earth that favored chemical reactions that synthesized more complex organic compounds from simpler inorganic precursors, kickstarting decades of research into the chemical origins of life.
1964 – The discovery of cosmic microwave background radiation leads to the big bang theory being accepted over the steady state theory in cosmology.
1965 – The acceptance of plate tectonics as the explanation for large-scale geologic changes.
1969 – Astronomer Victor Safronov, in his book Evolution of the protoplanetary cloud and formation of the Earth and the planets, developed the early version of the current accepted theory of planetary formation.
1974 – The November Revolution, with the discovery of the J/psi meson, and the acceptance of the existence of quarks and the Standard Model of particle physics.
1960 to 1985 – The acceptance of the ubiquity of nonlinear dynamical systems as promoted by chaos theory, instead of a Laplacian world-view of deterministic predictability.
Social sciences
In Kuhn's view, the existence of a single reigning paradigm is characteristic of the natural sciences, while philosophy and much of social science were characterized by a "tradition of claims, counterclaims, and debates over fundamentals." Others have applied Kuhn's concept of paradigm shift to the social sciences.
The movement known as the cognitive revolution moved away from behaviourist approaches to psychology and the acceptance of cognition as central to studying human behavior.
Anthropologist Franz Boas published The Mind of Primitive Man, which integrated his theories concerning the history and development of cultures and established a program that would dominate American anthropology in the following years. His research, along with that of his other colleagues, combatted and debunked the claims being made by scholars at the time, given scientific racism and eugenics were dominant in many universities and institutions that were dedicated to studying humans and society. Eventually anthropology would apply a holistic approach, utilizing four subcategories to study humans: archaeology, cultural, evolutionary, and linguistic anthropology.
At the turn of the 20th century, sociologists, along with other social scientists developed and adopted methodological antipositivism, which sought to uphold a subjective perspective when studying human activities pertaining to culture, society, and behavior. This was in stark contrast to positivism, which took its influence from the methodologies utilized within the natural sciences.
First proposed by Ferdinand de Saussure in 1879, the laryngeal theory in Indo-European linguistics postulated the existence of "laryngeal" consonants in the Proto-Indo-European language (PIE), a theory that was confirmed by the discovery of the Hittite language in the early 20th century. The theory has since been accepted by the vast majority of linguists, paving the way for the internal reconstruction of the syntax and grammatical rules of PIE and is considered one of the most significant developments in linguistics since the initial discovery of the Indo-European language family.
The adoption of radiocarbon dating by archaeologists has been proposed as a paradigm shift because of how it greatly increased the time depth the archaeologists could reliably date objects from. Similarly the use of LIDAR for remote geospatial imaging of cultural landscapes, and the shift from processual to post-processual archaeology have both been claimed as paradigm shifts by archaeologists.
The emergence of three-phase traffic theory created by Boris Kerner in vehicular traffic science as an alternative theory to classical (standard) traffic flow theories.
Applied sciences
More recently, paradigm shifts are also recognisable in applied sciences:
In medicine, the transition from "clinical judgment" to evidence-based medicine.
In Artificial Intelligence, the transition from a knowledge-based to a data-driven paradigm has been discussed from 2010.
Other uses
The term "paradigm shift" has found uses in other contexts, representing the notion of a major change in a certain thought pattern—a radical change in personal beliefs, complex systems or organizations, replacing the former way of thinking or organizing with a radically different way of thinking or organizing:
M. L. Handa, a professor of sociology in education at O.I.S.E. University of Toronto, Canada, developed the concept of a paradigm within the context of social sciences. He defines what he means by "paradigm" and introduces the idea of a "social paradigm". In addition, he identifies the basic component of any social paradigm. Like Kuhn, he addresses the issue of changing paradigms, the process popularly known as "paradigm shift". In this respect, he focuses on the social circumstances that precipitate such a shift. Relatedly, he addresses how that shift affects social institutions, including the institution of education.
The concept has been developed for technology and economics in the identification of new techno-economic paradigms as changes in technological systems that have a major influence on the behaviour of the entire economy (Carlota Perez; earlier work only on technological paradigms by Giovanni Dosi). This concept is linked to Joseph Schumpeter's idea of creative destruction. Examples include the move to mass production and the introduction of microelectronics.
Two photographs of the Earth from space, "Earthrise" (1968) and "The Blue Marble" (1972), are thought to have helped to usher in the environmentalist movement, which gained great prominence in the years immediately following distribution of those images.
Hans Küng applies Thomas Kuhn's theory of paradigm change to the entire history of Christian thought and theology. He identifies six historical "macromodels": 1) the apocalyptic paradigm of primitive Christianity, 2) the Hellenistic paradigm of the patristic period, 3) the medieval Roman Catholic paradigm, 4) the Protestant (Reformation) paradigm, 5) the modern Enlightenment paradigm, and 6) the emerging ecumenical paradigm. He also discusses five analogies between natural science and theology in relation to paradigm shifts. Küng addresses paradigm change in his books, Paradigm Change in Theology and Theology for the Third Millennium: An Ecumenical View.
In the later part of the 1990s, 'paradigm shift' emerged as a buzzword, popularized as marketing speak and appearing more frequently in print and publication. In his book Mind The Gaffe, author Larry Trask advises readers to refrain from using it, and to use caution when reading anything that contains the phrase. It is referred to in several articles and books as abused and overused to the point of becoming meaningless.
The concept of technological paradigms has been advanced, particularly by Giovanni Dosi.
Criticism
In a 2015 retrospective on Kuhn, the philosopher Martin Cohen describes the notion of the paradigm shift as a kind of intellectual virus – spreading from hard science to social science and on to the arts and even everyday political rhetoric today. Cohen claims that Kuhn had only a very hazy idea of what it might mean and, in line with the Austrian philosopher of science Paul Feyerabend, accuses Kuhn of retreating from the more radical implications of his theory, which are that scientific facts are never really more than opinions whose popularity is transitory and far from conclusive. Cohen says scientific knowledge is less certain than it is usually portrayed, and that science and knowledge generally is not the 'very sensible and reassuringly solid sort of affair' that Kuhn describes, in which progress involves periodic paradigm shifts in which much of the old certainties are abandoned in order to open up new approaches to understanding that scientists would never have considered valid before. He argues that information cascades can distort rational, scientific debate. He has focused on health issues, including the example of highly mediatised 'pandemic' alarms, and why they have turned out eventually to be little more than scares.
See also
(author of Paradigm Shift)
References
Citations
Sources
External links
MIT 6.933J – The Structure of Engineering Revolutions. From MIT OpenCourseWare, course materials (graduate level) for a course on the history of technology through a Kuhnian lens.
Change
Cognition
Concepts in epistemology
Concepts in the philosophy of science
Consensus reality
Critical thinking
Epistemology of science
Historiography of science
Innovation
Philosophical theories
Reasoning
Scientific Revolution
Thomas Kuhn
Physics of roller coasters
The physics of roller coasters comprises the mechanics that affect the design and operation of roller coasters, a machine that uses gravity and inertia to send a train of cars along a winding track. Gravity, inertia, g-forces, and centripetal acceleration give riders constantly changing forces which create certain sensations as the coaster travels around the track.
Introduction
A roller coaster is a machine that uses gravity and inertia to send a train of cars along a winding track. The combination of gravity and inertia, along with g-forces and centripetal acceleration give the body certain sensations as the coaster moves up, down, and around the track. The forces experienced by the rider are constantly changing, leading to feelings of joy in some riders and nausea in others.
Energy
Initially, the car is pulled to the top of the first hill and released, at which point it rolls freely along the track without any external mechanical assistance for the remainder of the ride. The purpose of the ascent of the first hill is to build up potential energy that will then be converted to kinetic energy as the ride progresses. The initial hill, or the lift hill, is the highest in the entire ride. As the train is pulled to the top, it gains potential energy, as explained by the equation for potential energy below:
$$U_g = m\,g\,h,$$
where Ug is potential energy, m is mass, g is acceleration due to gravity and h is height above the ground. Two trains of identical mass at different heights will therefore have different potential energies: the train at a greater height will have more potential energy than a train at a lower height. This means that the potential energy for the roller coaster system is greatest at the highest point on the track, or the top of the lift hill. As the roller coaster train begins its descent from the lift hill, the stored potential energy converts to kinetic energy, or energy of motion. The faster the train moves, the more kinetic energy the train gains, as shown by the equation for kinetic energy:
$$K = \tfrac{1}{2}\,m\,v^2,$$
where K is kinetic energy, m is mass, and v is velocity. Because the mass of a roller coaster car remains constant, if the speed is increased, the kinetic energy must also increase. This means that the kinetic energy for the roller coaster system is greatest at the bottom of the largest downhill slope on the track, typically at the bottom of the lift hill. When the train begins to climb the next hill on the track, the train's kinetic energy is converted back into potential energy, decreasing the train's velocity. This process of converting kinetic energy to potential energy and back to kinetic energy continues with each hill. The energy is never destroyed, but some is lost to friction between the car and the track, which is what eventually brings the ride to a complete stop.
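A minimal Python sketch of this energy bookkeeping is shown below; the lift-hill height, train mass, and friction fraction are illustrative values, not figures from the text.

```python
import math

g = 9.81
mass = 5000.0        # kg, illustrative train mass
lift_hill = 60.0     # m, illustrative lift-hill height
friction_loss = 0.10 # assumed fraction of energy lost between the crest and the valley

potential_top = mass * g * lift_hill                    # U = m*g*h at the crest
kinetic_bottom = (1.0 - friction_loss) * potential_top  # energy remaining in the valley
speed_bottom = math.sqrt(2.0 * kinetic_bottom / mass)   # from K = 1/2 m v^2

print(f"U at top       : {potential_top / 1e6:.2f} MJ")
print(f"K at bottom    : {kinetic_bottom / 1e6:.2f} MJ")
print(f"speed at bottom: {speed_bottom:.1f} m/s ({speed_bottom * 3.6:.0f} km/h)")
```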
Inertia and gravity
When going around a roller coaster's vertical loop, the inertia, that produces a thrilling acceleration force, also keeps passengers in their seats. As the car approaches a loop, the direction of a passenger's inertial velocity points straight ahead at the same angle as the track leading up to the loop. As the car enters the loop, the track guides the car up, moving the passenger up as well. This change in direction creates a feeling of extra gravity as the passenger is pushed down into the seat.
At the top of the loop, the force of the car's acceleration pushes the passenger off the seat toward the center of the loop, while inertia pushes the passenger back into the seat. Gravity and acceleration forces push the passenger in opposite directions with nearly equal force, creating a sensation of weightlessness.
At the bottom of the loop, gravity and the change in direction of the passenger's inertia from a downward vertical direction to one that is horizontal push the passenger into the seat, causing the passenger to once again feel very heavy. Most roller coasters utilize restraint systems, but the forces exerted by most inverting coasters would keep passengers from falling out.
G-forces
G-forces (gravitational forces) create the so-called "butterfly" sensation felt as a car goes down a gradient. An acceleration of 1 g is the usual force of Earth's gravitational pull exerted on a person while standing still. The measurement of a person's normal weight incorporates this gravitational acceleration. When a person feels weightless at the top of a loop or while going down a hill, they are in free fall. However, if the top of a hill is curved more narrowly than a parabola, riders will experience negative Gs and be lifted out of their seats, experiencing the so-called "butterfly" sensation.
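The g-force a rider feels can be estimated from the centripetal acceleration v²/r of the track's local curvature. The Python sketch below does this for an illustrative valley and crest (the speeds and radii are my own example numbers).

```python
g = 9.81

def vertical_g_force(speed_ms, radius_m, at_valley=True):
    """Apparent vertical load factor felt by the rider, in multiples of g.

    At the bottom of a dip the seat must supply gravity plus the centripetal
    acceleration (1 + v^2/(g*r)); over a crest it supplies less (1 - v^2/(g*r)),
    going negative when the crest is tighter than the free-fall parabola.
    """
    centripetal = speed_ms**2 / (g * radius_m)
    return 1.0 + centripetal if at_valley else 1.0 - centripetal

print(f"valley, 30 m/s, r = 25 m: {vertical_g_force(30.0, 25.0):.1f} g")
print(f"crest,  20 m/s, r = 30 m: {vertical_g_force(20.0, 30.0, at_valley=False):.1f} g")
```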
Difference between wood and steel coasters
A wooden coaster has a track consisting of thin laminates of wood stacked together, with a flat steel rail fixed to the top laminate. Steel coasters use tubular steel, I beam, or box section running rails. The supporting structure of both types may be steel or wood. Traditionally, steel coasters employed inversions to thrill riders, whereas wooden coasters relied on steep drops and sharp changes in direction to deliver their thrills. However, recent advances in coaster technology have seen the rise of hybrid steel coasters with wooden structures, an example being New Texas Giant at Six Flags Over Texas, and wooden coasters that feature inversions, such as Outlaw Run at Silver Dollar City.
History
The basic principles of roller coaster mechanics have been known since 1865, and since then roller coasters have become a popular diversion.
As better technology became available, engineers began to use computerized design tools to calculate the forces and stresses that the ride would subject passengers to. Computers are now used to design safe coasters with specially designed restraints and lightweight and durable materials. Today, tubular steel tracks and polyurethane wheels allow coasters to reach speeds that were previously unattainable, while even taller, faster, and more complex roller coasters continue to be built.
See also
Euthanasia Coaster
Jerk, Jounce, Crackle and Pop
References
Roller coasters
Gravitational field
In physics, a gravitational field or gravitational acceleration field is a vector field used to explain the influences that a body extends into the space around itself. A gravitational field is used to explain gravitational phenomena, such as the gravitational force field exerted on another massive body. It has dimension of acceleration (L/T2) and it is measured in units of newtons per kilogram (N/kg) or, equivalently, in meters per second squared (m/s2).
In its original concept, gravity was a force between point masses. Following Isaac Newton, Pierre-Simon Laplace attempted to model gravity as some kind of radiation field or fluid, and since the 19th century, explanations for gravity in classical mechanics have usually been taught in terms of a field model, rather than a point attraction. It results from the spatial gradient of the gravitational potential field.
In general relativity, rather than two particles attracting each other, the particles distort spacetime via their mass, and this distortion is what is perceived and measured as a "force". In such a model one states that matter moves in certain ways in response to the curvature of spacetime, and that there is either no gravitational force, or that gravity is a fictitious force.
Gravity is distinguished from other forces by its obedience to the equivalence principle.
Classical mechanics
In classical mechanics, a gravitational field is a physical quantity. A gravitational field can be defined using Newton's law of universal gravitation. Determined in this way, the gravitational field around a single particle of mass $M$ is a vector field consisting at every point of a vector pointing directly towards the particle. The magnitude of the field at every point is calculated by applying the universal law, and represents the force per unit mass on any object at that point in space. Because the force field is conservative, there is a scalar potential energy per unit mass, $\Phi$, at each point in space associated with the force fields; this is called gravitational potential. The gravitational field equation is
$$\mathbf{g} = \frac{\mathbf{F}}{m} = \frac{d^2\mathbf{R}}{dt^2} = -\frac{G M}{|\mathbf{R}|^2}\,\hat{\mathbf{R}} = -\nabla\Phi,$$
where $\mathbf{F}$ is the gravitational force, $m$ is the mass of the test particle, $\mathbf{R}$ is the radial vector of the test particle relative to the mass (or for Newton's second law of motion, which is a time dependent function, a set of positions of test particles each occupying a particular point in space for the start of testing), $t$ is time, $G$ is the gravitational constant, and $\nabla$ is the del operator.
This includes Newton's law of universal gravitation, and the relation between gravitational potential and field acceleration. $\mathbf{F}/m$ and $-\nabla\Phi$ are both equal to the gravitational acceleration $\mathbf{g}$ (equivalent to the inertial acceleration, so same mathematical form, but also defined as gravitational force per unit mass). The negative signs are inserted since the force acts antiparallel to the displacement. The equivalent field equation in terms of mass density $\rho$ of the attracting mass is:
$$\nabla\cdot\mathbf{g} = -\nabla^2\Phi = -4\pi G\rho,$$
which contains Gauss's law for gravity, and Poisson's equation for gravity. Newton's law implies Gauss's law, but not vice versa; see Relation between Gauss's and Newton's laws.
These classical equations are differential equations of motion for a test particle in the presence of a gravitational field, i.e. setting up and solving these equations allows the motion of a test mass to be determined and described.
The field around multiple particles is simply the vector sum of the fields around each individual particle. A test particle in such a field will experience a force that equals the vector sum of the forces that it would experience in these individual fields. This is
$$\mathbf{g}_j = \sum_{i \ne j} \mathbf{g}_i = \frac{1}{m_j}\sum_{i \ne j}\mathbf{F}_i = -G\sum_{i \ne j}\frac{m_i\left(\mathbf{R}_j - \mathbf{R}_i\right)}{\left|\mathbf{R}_j - \mathbf{R}_i\right|^3} = -\nabla\Phi_j,$$
i.e. the gravitational field on mass $m_j$ is the sum of all gravitational fields due to all other masses $m_i$, except the mass $m_j$ itself. $\mathbf{R}_i$ is the position vector of the gravitating particle $i$, and $\mathbf{R}_j$ is that of the test particle.
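The superposition rule above translates directly into code. The Python sketch below sums the inverse-square contributions of a few point masses at a field point; the masses and positions are arbitrary illustrative values.

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2, Newtonian constant of gravitation

def field_at(point, sources):
    """Vector sum of -G*m*(r - r_i)/|r - r_i|^3 over all point masses."""
    gx = gy = gz = 0.0
    for mass, (sx, sy, sz) in sources:
        dx, dy, dz = point[0] - sx, point[1] - sy, point[2] - sz
        dist3 = math.sqrt(dx * dx + dy * dy + dz * dz) ** 3
        gx -= G * mass * dx / dist3
        gy -= G * mass * dy / dist3
        gz -= G * mass * dz / dist3
    return gx, gy, gz

# Two illustrative point masses (kg) and their positions (m)
sources = [(5.0e10, (0.0, 0.0, 0.0)), (2.0e10, (100.0, 0.0, 0.0))]
print(field_at((50.0, 20.0, 0.0), sources))  # field vector in m/s^2
```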
General relativity
In general relativity, the Christoffel symbols play the role of the gravitational force field and the metric tensor plays the role of the gravitational potential.
In general relativity, the gravitational field is determined by solving the Einstein field equations
$$G_{\mu\nu} = \kappa\,T_{\mu\nu},$$
where $T_{\mu\nu}$ is the stress–energy tensor, $G_{\mu\nu}$ is the Einstein tensor, and $\kappa$ is the Einstein gravitational constant. The latter is defined as $\kappa = 8\pi G/c^4$, where $G$ is the Newtonian constant of gravitation and $c$ is the speed of light.
These equations are dependent on the distribution of matter, stress and momentum in a region of space, unlike Newtonian gravity, which depends only on the distribution of matter. The fields themselves in general relativity represent the curvature of spacetime. General relativity states that being in a region of curved space is equivalent to accelerating up the gradient of the field. By Newton's second law, this will cause an object to experience a fictitious force if it is held still with respect to the field. This is why a person will feel himself pulled down by the force of gravity while standing still on the Earth's surface. In general the gravitational fields predicted by general relativity differ in their effects only slightly from those predicted by classical mechanics, but there are a number of easily verifiable differences, one of the most well known being the deflection of light in such fields.
Embedding diagram
Embedding diagrams are three-dimensional graphs commonly used for educational purposes to illustrate gravitational potential, by drawing gravitational potential fields as a gravitational topography and depicting the potentials as so-called gravitational wells (compare the notion of a sphere of influence).
See also
Classical mechanics
Entropic gravity
Gravitation
Gravitational energy
Gravitational potential
Gravitational wave
Gravity map
Newton's law of universal gravitation
Newton's laws of motion
Potential energy
Specific force
Speed of gravity
Tests of general relativity
References
Theories of gravity
Geodesy
General relativity
Bioenergetics
Bioenergetics is a field in biochemistry and cell biology that concerns energy flow through living systems. This is an active area of biological research that includes the study of the transformation of energy in living organisms and the study of thousands of different cellular processes such as cellular respiration and the many other metabolic and enzymatic processes that lead to production and utilization of energy in forms such as adenosine triphosphate (ATP) molecules. That is, the goal of bioenergetics is to describe how living organisms acquire and transform energy in order to perform biological work. The study of metabolic pathways is thus essential to bioenergetics.
Overview
Bioenergetics is the part of biochemistry concerned with the energy involved in making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network—by eating autotrophs (plants), heterotrophs harness energy that was initially transformed by the plants during photosynthesis.
In a living organism, chemical bonds are broken and made as part of the exchange and transformation of energy. Energy is available for work (such as mechanical work) or for other processes (such as chemical synthesis and anabolic processes in growth), when weak bonds are broken and stronger bonds are made. The production of stronger bonds allows release of usable energy.
Adenosine triphosphate (ATP) is the main "energy currency" for organisms; the goal of metabolic and catabolic processes is to synthesize ATP from available starting materials (from the environment), and to break down ATP (into adenosine diphosphate (ADP) and inorganic phosphate) by utilizing it in biological processes. In a cell, the ratio of ATP to ADP concentrations is known as the "energy charge" of the cell. A cell can use this energy charge to relay information about cellular needs; if there is more ATP than ADP available, the cell can use ATP to do work, but if there is more ADP than ATP available, the cell must synthesize ATP via oxidative phosphorylation.
Living organisms produce ATP from energy sources via oxidative phosphorylation. The terminal phosphate bonds of ATP are relatively weak compared with the stronger bonds formed when ATP is hydrolyzed (broken down by water) to adenosine diphosphate and inorganic phosphate. Here it is the thermodynamically favorable free energy of hydrolysis that results in energy release; the phosphoanhydride bond between the terminal phosphate group and the rest of the ATP molecule does not itself contain this energy. An organism's stockpile of ATP is used as a battery to store energy in cells. Utilization of chemical energy from such molecular bond rearrangement powers biological processes in every biological organism.
Living organisms obtain energy from organic and inorganic materials; i.e. ATP can be synthesized from a variety of biochemical precursors. For example, lithotrophs can oxidize minerals such as nitrates or forms of sulfur, such as elemental sulfur, sulfites, and hydrogen sulfide to produce ATP. In photosynthesis, autotrophs produce ATP using light energy, whereas heterotrophs must consume organic compounds, mostly including carbohydrates, fats, and proteins. The amount of energy actually obtained by the organism is lower than the amount present in the food; there are losses in digestion, metabolism, and thermogenesis.
Environmental materials that an organism intakes are generally combined with oxygen to release energy, although some nutrients can also be oxidized anaerobically by various organisms. The utilization of these materials is a form of slow combustion because the nutrients are reacted with oxygen (the materials are oxidized slowly enough that the organisms do not produce fire). The oxidation releases energy, which may evolve as heat or be used by the organism for other purposes, such as breaking chemical bonds.
Types of reactions
An exergonic reaction is a spontaneous chemical reaction that releases energy. It is thermodynamically favored, indexed by a negative value of ΔG (Gibbs free energy). Even so, some energy must first be put in: this activation energy drives the reactants from a stable state, through a highly energetically unstable transition state, to a more stable state that is lower in energy (see: reaction coordinate). The reactants are usually complex molecules that are broken into simpler products. The entire reaction is usually catabolic. The change in Gibbs free energy is negative (i.e. −ΔG) because energy is released as the reactants are converted into products.
An endergonic reaction is a chemical reaction that consumes energy. It is the opposite of an exergonic reaction. It has a positive ΔG because more energy is required to break the reactants' bonds than is released when the products' bonds form; that is, the products have weaker bonds than the reactants. Thus, endergonic reactions are thermodynamically unfavorable. Additionally, endergonic reactions are usually anabolic.
The free energy (ΔG) gained or lost in a reaction can be calculated as follows: ΔG = ΔH − TΔS
where ∆G = Gibbs free energy, ∆H = enthalpy, T = temperature (in kelvins), and ∆S = entropy.
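As a small worked example of this relation (illustrative only; the numerical values below are assumed for demonstration, not measured data):

```python
def gibbs_free_energy(delta_h, temperature, delta_s):
    """Return ΔG = ΔH − TΔS, with ΔH in kJ/mol, T in kelvins, ΔS in kJ/(mol·K)."""
    return delta_h - temperature * delta_s

# Assumed values: ΔH = -20 kJ/mol, T = 310 K (roughly body temperature),
# ΔS = +0.05 kJ/(mol·K)  =>  ΔG = -20 - 310*0.05 = -35.5 kJ/mol (exergonic).
print(gibbs_free_energy(-20.0, 310.0, 0.05))
```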
Examples of major bioenergetic processes
Glycolysis is the process of breaking down glucose into pyruvate, producing two molecules of ATP (per 1 molecule of glucose) in the process. When a cell has a higher concentration of ATP than ADP (i.e. has a high energy charge), glycolysis, which releases energy from available glucose to perform biological work, is suppressed. Pyruvate is one product of glycolysis, and can be shuttled into other metabolic pathways (gluconeogenesis, etc.) as needed by the cell. Additionally, glycolysis produces reducing equivalents in the form of NADH (nicotinamide adenine dinucleotide), which will ultimately be used to donate electrons to the electron transport chain.
Gluconeogenesis is the opposite of glycolysis; when the cell's energy charge is low (the concentration of ADP is higher than that of ATP), the cell must synthesize glucose from carbon-containing biomolecules such as proteins, amino acids, fats, pyruvate, etc. For example, proteins can be broken down into amino acids, and these simpler carbon skeletons are used to build/synthesize glucose.
The citric acid cycle is a process of cellular respiration in which acetyl coenzyme A, synthesized from pyruvate by the pyruvate dehydrogenase complex, is first reacted with oxaloacetate to yield citrate. The remaining eight reactions produce other carbon-containing metabolites. These metabolites are successively oxidized, and the free energy of oxidation is conserved in the form of the reduced coenzymes FADH2 and NADH. These reduced electron carriers can then be re-oxidized when they transfer electrons to the electron transport chain.
Ketosis is a metabolic process where the body prioritizes ketone bodies, produced from fat, as its primary fuel source instead of glucose. This shift occurs when glucose availability is low, such as during prolonged fasting, strenuous exercise, or specialized diets like ketogenic plans; in these situations the body adopts ketosis as an efficient alternative for energy production. This metabolic adaptation allows the body to conserve precious glucose for organs that depend on it, like the brain, while utilizing readily available fat stores for fuel.
Oxidative phosphorylation via the electron transport chain is the process in which reducing equivalents such as NADPH, FADH2 and NADH donate electrons to a series of redox reactions that take place in electron transport chain complexes. These redox reactions take place in enzyme complexes situated within the inner mitochondrial membrane. They transfer electrons "down" the electron transport chain, and this transfer is coupled to the pumping of protons that builds up the proton motive force. The resulting difference in proton concentration between the mitochondrial matrix and the intermembrane space is used to drive ATP synthesis via ATP synthase.
Photosynthesis, another major bioenergetic process, is the metabolic pathway used by plants in which solar energy is used to synthesize glucose from carbon dioxide and water. This reaction takes place in the chloroplast. After glucose is synthesized, the plant cell can undergo photophosphorylation to produce ATP.
Additional information
During energy transformations in living systems, the creation of order and organization must be compensated by releasing energy, which increases the entropy of the surroundings.
Organisms are open systems that exchange materials and energy with the environment. They are never at equilibrium with their surroundings.
Energy is spent to create and maintain order in the cells, and surplus energy and other simpler by-products are released to create disorder, so that there is an increase in the entropy of the surroundings.
In a reversible process, entropy remains constant, whereas in an irreversible process (more common in real-world scenarios), entropy tends to increase.
During phase changes (from solid to liquid, or to gas), entropy increases because the number of possible arrangements of particles increases.
If ∆G<0, the chemical reaction is spontaneous and favourable in that direction.
If ∆G=0, the reactants and products of chemical reaction are at equilibrium.
If ∆G>0, the chemical reaction is non-spontaneous and unfavorable in that direction.
∆G is not an indicator of the velocity or rate at which a chemical reaction reaches equilibrium; the rate depends on the amount of enzyme and the activation energy. A minimal sketch applying these three criteria is given below.
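A minimal sketch of the spontaneity criteria (illustrative; the ΔG values in the loop are made up for the example):

```python
def reaction_direction(delta_g):
    """Classify a reaction by the sign of ΔG (any consistent energy unit)."""
    if delta_g < 0:
        return "spontaneous (favourable) in the forward direction"
    if delta_g > 0:
        return "non-spontaneous (unfavourable) in the forward direction"
    return "at equilibrium"

for dg in (-30.5, 0.0, 14.2):   # illustrative ΔG values in kJ/mol
    print(dg, "->", reaction_direction(dg))
```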
Reaction coupling
Reaction coupling is the linkage of chemical reactions in such a way that the product of one reaction becomes the substrate of another reaction.
This allows organisms to utilize energy and resources efficiently. For example, in cellular respiration, the energy released by the breakdown of glucose is coupled to the synthesis of ATP.
Cotransport
In August 1960, Robert K. Crane presented for the first time his discovery of the sodium-glucose cotransport as the mechanism for intestinal glucose absorption. Crane's discovery of cotransport was the first ever proposal of flux coupling in biology and was the most important event concerning carbohydrate absorption in the 20th century.
Chemiosmotic theory
One of the major triumphs of bioenergetics is Peter D. Mitchell's chemiosmotic theory of how protons in aqueous solution function in the production of ATP in cell organelles such as mitochondria. This work earned Mitchell the 1978 Nobel Prize for Chemistry. Other cellular sources of ATP such as glycolysis were understood first, but such processes for direct coupling of enzyme activity to ATP production are not the major source of useful chemical energy in most cells. Chemiosmotic coupling is the major energy producing process in most cells, being utilized in chloroplasts and several single celled organisms in addition to mitochondria.
Binding Change Mechanism
The binding change mechanism, proposed by Paul Boyer and John E. Walker, who were awarded the Nobel Prize in Chemistry in 1997, suggests that ATP synthesis is linked to a conformational change in ATP synthase. This change is triggered by the rotation of the gamma subunit. ATP synthesis can be achieved through several mechanisms. The first mechanism postulates that the free energy of the proton gradient is utilized to alter the conformation of polypeptide molecules in the ATP synthesis active centers. The second mechanism suggests that the change in the conformational state is also produced by the transformation of mechanical energy into chemical energy using biological mechanoemission.
Energy balance
Energy homeostasis is the homeostatic control of energy balance – the difference between energy obtained through food consumption and energy expenditure – in living systems.
See also
Bioenergetic systems
Cellular respiration
Photosynthesis
ATP synthase
Active transport
Myosin
Exercise physiology
Table of standard Gibbs free energies
References
Further reading
Juretic, D., 2021. Bioenergetics: a bridge across life and universe. CRC Press.
External links
The Molecular & Cellular Bioenergetics Gordon Research Conference.
American Society of Exercise Physiologists
Biochemistry
Biophysics
Cell biology
Energy (physics)
Linear motion
Linear motion, also called rectilinear motion, is one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. The linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration); and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position , which varies with (time). An example of linear motion is an athlete running a 100-meter dash along a straight track.
Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear.
One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude.
Background
Displacement
The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motion: rectilinear motion and curvilinear motion. Since linear motion is a motion in a single dimension, the distance traveled by an object in a particular direction is the same as its displacement. The SI unit of displacement is the metre. If x_1 is the initial position of an object and x_2 is the final position, then mathematically the displacement is given by:
$$\Delta x = x_2 - x_1.$$
The equivalent of displacement in rotational motion is the angular displacement measured in radians.
The magnitude of the displacement of an object can never exceed the distance travelled, because the displacement is itself a distance, namely the shortest one between the initial and final positions. Consider a person travelling to work daily. The overall displacement when he returns home is zero, since the person ends up back where he started, but the distance travelled is clearly not zero.
Velocity
Velocity refers to a displacement in one direction with respect to an interval of time. It is defined as the rate of change of displacement over change in time. Velocity is a vector quantity, representing a direction and a magnitude of movement. The magnitude of a velocity is called speed. The SI unit of speed is m/s, that is, the metre per second.
Average velocity
The average velocity of a moving body is its total displacement divided by the total time needed to travel from the initial point to the final point. It is an estimate of the velocity over the whole distance travelled. Mathematically, it is given by:
$$v_{\text{avg}} = \frac{\Delta x}{\Delta t} = \frac{x_2 - x_1}{t_2 - t_1},$$
where:
t_1 is the time at which the object was at position x_1, and
t_2 is the time at which the object was at position x_2.
The magnitude of the average velocity is called an average speed.
Instantaneous velocity
In contrast to an average velocity, referring to the overall motion in a finite time interval, the instantaneous velocity of an object describes the state of motion at a specific point in time. It is defined by letting the length of the time interval Δt tend to zero, that is, the velocity is the time derivative of the displacement as a function of time:
$$v = \lim_{\Delta t \to 0}\frac{\Delta x}{\Delta t} = \frac{\mathrm{d}x}{\mathrm{d}t}.$$
The magnitude of the instantaneous velocity is called the instantaneous speed. The instantaneous velocity equation comes from taking the limit of the average velocity as Δt approaches 0; it is the derivative of the position function with respect to time. From the instantaneous velocity, the instantaneous speed is obtained by taking the magnitude of the instantaneous velocity.
Acceleration
Acceleration is defined as the rate of change of velocity with respect to time. Acceleration is the second derivative of displacement, i.e. acceleration can be found by differentiating position with respect to time twice or differentiating velocity with respect to time once. The SI unit of acceleration is m/s², or metre per second squared.
If a_avg is the average acceleration and Δv is the change in velocity over the time interval Δt, then mathematically,
$$a_{\text{avg}} = \frac{\Delta v}{\Delta t}.$$
The instantaneous acceleration is the limit, as Δt approaches zero, of the ratio Δv/Δt, i.e.,
$$a = \lim_{\Delta t \to 0}\frac{\Delta v}{\Delta t} = \frac{\mathrm{d}v}{\mathrm{d}t} = \frac{\mathrm{d}^2 x}{\mathrm{d}t^2}.$$
Jerk
The rate of change of acceleration, the third derivative of displacement, is known as jerk. The SI unit of jerk is m/s³. In the UK jerk is also referred to as jolt.
Jounce
The rate of change of jerk, the fourth derivative of displacement, is known as jounce. The SI unit of jounce is m/s⁴, which can be pronounced as metres per quartic second.
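To make the chain of derivatives concrete, here is a small symbolic sketch in Python (an illustration; the position function is assumed for the example):

```python
import sympy as sp

t = sp.symbols('t')
x = 3*t**4 + 2*t**3 - 5*t**2 + 7*t + 1   # assumed position function x(t), in metres

v = sp.diff(x, t)       # velocity      dx/dt
a = sp.diff(v, t)       # acceleration  d^2x/dt^2
j = sp.diff(a, t)       # jerk          d^3x/dt^3
s = sp.diff(j, t)       # jounce        d^4x/dt^4

print(v, a, j, s, sep='\n')
```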
Formulation
In the case of constant acceleration, the four physical quantities acceleration, velocity, time and displacement can be related by using the equations of motion:
$$v = u + at$$
$$s = ut + \tfrac{1}{2}at^2$$
$$v^2 = u^2 + 2as$$
$$s = \tfrac{1}{2}(u + v)\,t$$
Here,
u is the initial velocity
v is the final velocity
a is the acceleration
s is the displacement
t is the time
A short numerical sketch applying the first two of these relations is given below.
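A minimal numerical sketch of the constant-acceleration relations (illustrative; the initial speed, acceleration, and time are assumed values):

```python
def uniform_acceleration(u, a, t):
    """Final velocity and displacement after time t under constant acceleration a.

    v = u + a*t,  s = u*t + 0.5*a*t**2  (SI units assumed).
    """
    v = u + a * t
    s = u * t + 0.5 * a * t**2
    return v, s

# Example: starting at 5 m/s and accelerating at 2 m/s^2 for 3 s.
v, s = uniform_acceleration(u=5.0, a=2.0, t=3.0)
print(v, s)   # 11.0 m/s, 24.0 m
```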
These relationships can be demonstrated graphically. The gradient of a line on a displacement time graph represents the velocity. The gradient of the velocity time graph gives the acceleration while the area under the velocity time graph gives the displacement. The area under a graph of acceleration versus time is equal to the change in velocity.
Comparison to circular motion
The following refers to rotation of a rigid body about a fixed axis: s is arc length, r is the distance from the axis to any point, and a_t is the tangential acceleration, which is the component of the acceleration that is parallel to the motion. In contrast, the centripetal acceleration, a_c = v²/r = ω²r, is perpendicular to the motion. The component of the force parallel to the motion, or equivalently, perpendicular to the line connecting the point of application to the axis, is F_⊥. The sum is over j = 1 to N particles and/or points of application.
The corresponding linear and angular quantities, with their derived SI units, are analogous as follows:
displacement x (m) ↔ angular displacement θ (rad), with s = rθ
velocity v = dx/dt (m/s) ↔ angular velocity ω = dθ/dt (rad/s), with v = rω
acceleration a (m/s²) ↔ angular acceleration α (rad/s²), with a_t = rα
mass m (kg) ↔ moment of inertia I (kg·m²)
force F = ma (N) ↔ torque τ = Iα (N·m)
momentum p = mv (kg·m/s) ↔ angular momentum L = Iω (kg·m²/s)
kinetic energy ½mv² (J) ↔ rotational kinetic energy ½Iω² (J)
See also
Angular motion
Centripetal force
Inertial frame of reference
Linear actuator
Linear bearing
Linear motor
Mechanics of planar particle motion
Motion graphs and derivatives
Reciprocating motion
Rectilinear propagation
Uniformly accelerated linear motion
References
Further reading
Resnick, Robert and Halliday, David (1966), Physics, Chapter 3 (Vol I and II, Combined edition), Wiley International Edition, Library of Congress Catalog Card No. 66-11527
Tipler P.A., Mosca G., "Physics for Scientists and Engineers", Chapter 2 (5th edition), W. H. Freeman and company: New York and Basingstoke, 2003.
External links
Classical mechanics
Inertial frame of reference
In classical physics and special relativity, an inertial frame of reference (also called inertial space, or Galilean reference frame) is a stationary or uniformly moving frame of reference. Observed relative to such a frame, objects exhibit inertia, i.e., remain at rest until acted upon by external forces, and the laws of nature can be observed without the need for acceleration correction.
All frames of reference with zero acceleration are in a state of constant rectilinear motion (straight-line motion) with respect to one another. In such a frame, an object with zero net force acting on it is perceived to move with a constant velocity, or, equivalently, Newton's first law of motion holds. Such frames are known as inertial. Some physicists, like Isaac Newton, originally thought that one of these frames was absolute — the one approximated by the fixed stars. However, this is not required for the definition, and it is now known that those stars are in fact moving.
According to the principle of special relativity, all physical laws look the same in all inertial reference frames, and no inertial frame is privileged over another. Measurements of objects in one inertial frame can be converted to measurements in another by a simple transformation — the Galilean transformation in Newtonian physics or the Lorentz transformation (combined with a translation) in special relativity; these approximately match when the relative speed of the frames is low, but differ as it approaches the speed of light.
By contrast, a non-inertial reference frame has non-zero acceleration. In such a frame, the interactions between physical objects vary depending on the acceleration of that frame with respect to an inertial frame. Viewed from the perspective of classical mechanics and special relativity, the usual physical forces caused by the interaction of objects have to be supplemented by fictitious forces caused by inertia.
Viewed from the perspective of general relativity theory, the fictitious (i.e. inertial) forces are attributed to geodesic motion in spacetime.
Due to Earth's rotation, its surface is not an inertial frame of reference. The Coriolis effect can deflect certain forms of motion as seen from Earth, and the centrifugal force will reduce the effective gravity at the equator. Nevertheless, for many applications the Earth is an adequate approximation of an inertial reference frame.
Introduction
The motion of a body can only be described relative to something else—other bodies, observers, or a set of spacetime coordinates. These are called frames of reference. According to the first postulate of special relativity, all physical laws take their simplest form in an inertial frame, and there exist multiple inertial frames interrelated by uniform translation:
This simplicity manifests itself in that inertial frames have self-contained physics without the need for external causes, while physics in non-inertial frames has external causes. The principle of simplicity can be used within Newtonian physics as well as in special relativity:
However, this definition of inertial frames is understood to apply in the Newtonian realm and ignores relativistic effects.
In practical terms, the equivalence of inertial reference frames means that scientists within a box moving with a constant absolute velocity cannot determine this velocity by any experiment. Otherwise, the differences would set up an absolute standard reference frame. According to this definition, supplemented with the constancy of the speed of light, inertial frames of reference transform among themselves according to the Poincaré group of symmetry transformations, of which the Lorentz transformations are a subgroup. In Newtonian mechanics, inertial frames of reference are related by the Galilean group of symmetries.
Newton's inertial frame of reference
Absolute space
Newton posited an absolute space considered well-approximated by a frame of reference stationary relative to the fixed stars. An inertial frame was then one in uniform translation relative to absolute space. However, some "relativists", even at the time of Newton, felt that absolute space was a defect of the formulation, and should be replaced.
The expression inertial frame of reference was coined by Ludwig Lange in 1885, to replace Newton's definitions of "absolute space and time" with a more operational definition:
The inadequacy of the notion of "absolute space" in Newtonian mechanics is spelled out by Blagojevich:
The utility of operational definitions was carried much further in the special theory of relativity. Some historical background including Lange's definition is provided by DiSalle, who says in summary:
Newtonian mechanics
Classical theories that use the Galilean transformation postulate the equivalence of all inertial reference frames. The Galilean transformation transforms coordinates from one inertial reference frame, S, to another, S′, by simple addition or subtraction of coordinates:
$$\mathbf{r}' = \mathbf{r} - \mathbf{r}_0 - \mathbf{v}\,t$$
$$t' = t - t_0$$
where r_0 and t_0 represent shifts in the origin of space and time, and v is the relative velocity of the two inertial reference frames. Under Galilean transformations, the time t_2 − t_1 between two events is the same for all reference frames and the distance between two simultaneous events (or, equivalently, the length of any object, |r_2 − r_1|) is also the same.
Within the realm of Newtonian mechanics, an inertial frame of reference, or inertial reference frame, is one in which Newton's first law of motion is valid. However, the principle of special relativity generalizes the notion of an inertial frame to include all physical laws, not simply Newton's first law.
Newton viewed the first law as valid in any reference frame that is in uniform motion (neither rotating nor accelerating) relative to absolute space; as a practical matter, "absolute space" was considered to be the fixed stars. In the theory of relativity the notion of absolute space or a privileged frame is abandoned, and an inertial frame in the field of classical mechanics is defined as:
Hence, with respect to an inertial frame, an object or body accelerates only when a physical force is applied, and (following Newton's first law of motion), in the absence of a net force, a body at rest will remain at rest and a body in motion will continue to move uniformly—that is, in a straight line and at constant speed. Newtonian inertial frames transform among each other according to the Galilean group of symmetries.
If this rule is interpreted as saying that straight-line motion is an indication of zero net force, the rule does not identify inertial reference frames because straight-line motion can be observed in a variety of frames. If the rule is interpreted as defining an inertial frame, then being able to determine when zero net force is applied is crucial. The problem was summarized by Einstein:
There are several approaches to this issue. One approach is to argue that all real forces drop off with distance from their sources in a known manner, so it is only needed that a body is far enough away from all sources to ensure that no force is present. A possible issue with this approach is the historically long-lived view that the distant universe might affect matters (Mach's principle). Another approach is to identify all real sources for real forces and account for them. A possible issue with this approach is the possibility of missing something, or accounting inappropriately for their influence, perhaps, again, due to Mach's principle and an incomplete understanding of the universe. A third approach is to look at the way the forces transform when shifting reference frames. Fictitious forces, those that arise due to the acceleration of a frame, disappear in inertial frames and have complicated rules of transformation in general cases. Based on the universality of physical law and the request for frames where the laws are most simply expressed, inertial frames are distinguished by the absence of such fictitious forces.
Newton enunciated a principle of relativity himself in one of his corollaries to the laws of motion:
This principle differs from the special principle in two ways: first, it is restricted to mechanics, and second, it makes no mention of simplicity. It shares the special principle of the invariance of the form of the description among mutually translating reference frames. The role of fictitious forces in classifying reference frames is pursued further below.
Special relativity
Einstein's theory of special relativity, like Newtonian mechanics, postulates the equivalence of all inertial reference frames. However, because special relativity postulates that the speed of light in free space is invariant, the transformation between inertial frames is the Lorentz transformation, not the Galilean transformation which is used in Newtonian mechanics.
The invariance of the speed of light leads to counter-intuitive phenomena, such as time dilation, length contraction, and the relativity of simultaneity. The predictions of special relativity have been extensively verified experimentally. The Lorentz transformation reduces to the Galilean transformation as the speed of light approaches infinity or as the relative velocity between frames approaches zero.
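The reduction of the Lorentz transformation to the Galilean one at low speeds can be seen in a short numerical sketch (illustrative; the event coordinates and boost speeds are assumed values):

```python
from math import sqrt

c = 299_792_458.0  # speed of light, m/s

def galilean(x, t, v):
    """Galilean boost along x: x' = x - v*t, t' = t."""
    return x - v * t, t

def lorentz(x, t, v):
    """Lorentz boost along x: x' = gamma*(x - v*t), t' = gamma*(t - v*x/c^2)."""
    gamma = 1.0 / sqrt(1.0 - (v / c) ** 2)
    return gamma * (x - v * t), gamma * (t - v * x / c**2)

# At everyday speeds (30 m/s) the two transformations agree almost exactly;
# at 0.8c they differ substantially.
for v in (30.0, 0.8 * c):
    print(v, galilean(1000.0, 1e-3, v), lorentz(1000.0, 1e-3, v))
```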
Examples
Simple example
Consider a situation common in everyday life. Two cars travel along a road, both moving at constant velocities. See Figure 1. At some particular moment, they are separated by 200 meters. The car in front is traveling at 22 meters per second and the car behind is traveling at 30 meters per second. If we want to find out how long it will take the second car to catch up with the first, there are three obvious "frames of reference" that we could choose.
First, we could observe the two cars from the side of the road. We define our "frame of reference" S as follows. We stand on the side of the road and start a stop-clock at the exact moment that the second car passes us, which happens to be when they are a distance d = 200 m apart. Since neither of the cars is accelerating, we can determine their positions by the following formulas, where x_1(t) is the position in meters of car one after time t in seconds and x_2(t) is the position of car two after time t:
$$x_1(t) = 200\ \mathrm{m} + (22\ \mathrm{m/s})\,t, \qquad x_2(t) = (30\ \mathrm{m/s})\,t.$$
Notice that these formulas predict at t = 0 s the first car is 200 m down the road and the second car is right beside us, as expected. We want to find the time at which x_1 = x_2. Therefore, we set x_1(t) = x_2(t) and solve for t, that is:
$$200 + 22t = 30t \quad\Rightarrow\quad 8t = 200 \quad\Rightarrow\quad t = 25\ \mathrm{s}.$$
Alternatively, we could choose a frame of reference S′ situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of 30 − 22 = 8 m/s. To catch up to the first car, it will take a time of 200 m / (8 m/s), that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at 8 m/s.
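The same catch-up problem can be solved programmatically in either frame (a sketch using the numbers from the example above):

```python
d, v1, v2 = 200.0, 22.0, 30.0   # separation (m) and speeds (m/s) from the example

# Frame S (roadside): positions x1(t) = d + v1*t and x2(t) = v2*t coincide when
#   d + v1*t = v2*t  =>  t = d / (v2 - v1)
t_catch_road = d / (v2 - v1)

# Frame S' (riding in the first car): the first car is at rest and the second
# approaches at the relative speed v2 - v1, so the answer is the same.
v_rel = v2 - v1
t_catch_car = d / v_rel

print(t_catch_road, t_catch_car)   # 25.0 s in both frames
```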
It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. It is also necessary to note that one can convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you can deduct five minutes from the time displayed on your watch to obtain the correct time. The measurements that an observer makes about a system depend therefore on the observer's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three).
Additional example
For a simple example involving only the orientation of two observers, consider two people standing, facing each other on either side of a north-south street. See Figure 2. A car drives past them heading south. For the person facing east, the car was moving to the right. However, for the person facing west, the car was moving to the left. This discrepancy is because the two people used two different frames of reference from which to investigate this system.
For a more complex example involving observers in relative motion, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the x-axis, and the direction in front of him as the positive y-axis. To him, the car moves along the x-axis with some velocity v in the positive x-direction. Alfred's frame of reference is considered an inertial frame because he is not accelerating, ignoring effects such as Earth's rotation and gravity.
Now consider Betsy, the person driving the car. Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive x-axis, and the direction in front of her as the positive y-axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving – for instance, as she drives past Alfred, she observes him moving with velocity v in the negative y-direction. If she is driving north, then north is the positive y-direction; if she turns east, east becomes the positive y-direction.
Finally, as an example of non-inertial observers, assume Candace is accelerating her car. As she passes by him, Alfred measures her acceleration and finds it to be a in the negative x-direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity v is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same as Alfred in her frame of reference, a in the negative y-direction. However, if she is accelerating at rate A in the negative y-direction (in other words, slowing down), she will find Candace's acceleration to be a′ = a − A in the negative y-direction—a smaller value than Alfred has measured. Similarly, if she is accelerating at rate A in the positive y-direction (speeding up), she will observe Candace's acceleration as a′ = a + A in the negative y-direction—a larger value than Alfred's measurement.
Non-inertial frames
Here the relation between inertial and non-inertial observational frames of reference is considered. The basic difference between these frames is the need in non-inertial frames for fictitious forces, as described below.
General relativity
General relativity is based upon the principle of equivalence:
This idea was introduced in Einstein's 1907 article "Principle of Relativity and Gravitation" and later developed in 1911. Support for this principle is found in the Eötvös experiment, which determines whether the ratio of inertial to gravitational mass is the same for all bodies, regardless of size or composition. To date no difference has been found to a few parts in 10^11. For some discussion of the subtleties of the Eötvös experiment, such as the local mass distribution around the experimental site (including a quip about the mass of Eötvös himself), see Franklin.
Einstein's general theory modifies the distinction between nominally "inertial" and "non-inertial" effects by replacing special relativity's "flat" Minkowski Space with a metric that produces non-zero curvature. In general relativity, the principle of inertia is replaced with the principle of geodesic motion, whereby objects move in a way dictated by the curvature of spacetime. As a consequence of this curvature, it is not a given in general relativity that inertial objects moving at a particular rate with respect to each other will continue to do so. This phenomenon of geodesic deviation means that inertial frames of reference do not exist globally as they do in Newtonian mechanics and special relativity.
However, the general theory reduces to the special theory over sufficiently small regions of spacetime, where curvature effects become less important and the earlier inertial frame arguments can come back into play. Consequently, modern special relativity is now sometimes described as only a "local theory". "Local" can encompass, for example, the entire Milky Way galaxy: The astronomer Karl Schwarzschild observed the motion of pairs of stars orbiting each other. He found that the two orbits of the stars of such a system lie in a plane, and the perihelion of the orbits of the two stars remains pointing in the same direction with respect to the Solar System. Schwarzschild pointed out that that was invariably seen: the direction of the angular momentum of all observed double star systems remains fixed with respect to the direction of the angular momentum of the Solar System. These observations allowed him to conclude that inertial frames inside the galaxy do not rotate with respect to one another, and that the space of the Milky Way is approximately Galilean or Minkowskian.
Inertial frames and rotation
In an inertial frame, Newton's first law, the law of inertia, is satisfied: Any free motion has a constant magnitude and direction. Newton's second law for a particle takes the form:
$$\mathbf{F} = m\mathbf{a},$$
with F the net force (a vector), m the mass of a particle and a the acceleration of the particle (also a vector) which would be measured by an observer at rest in the frame. The force F is the vector sum of all "real" forces on the particle, such as contact forces, electromagnetic, gravitational, and nuclear forces.
In contrast, Newton's second law in a rotating frame of reference (a non-inertial frame of reference), rotating at angular rate Ω about an axis, takes the form:
$$\mathbf{F}' = m\mathbf{a},$$
which looks the same as in an inertial frame, but now the force F′ is the resultant of not only F, but also additional terms (the paragraph following this equation presents the main points without detailed mathematics):
$$\mathbf{F}' = \mathbf{F} - 2m\,\boldsymbol{\Omega}\times\mathbf{v}_B - m\,\boldsymbol{\Omega}\times(\boldsymbol{\Omega}\times\mathbf{x}_B) - m\,\frac{\mathrm{d}\boldsymbol{\Omega}}{\mathrm{d}t}\times\mathbf{x}_B,$$
where the angular rotation of the frame is expressed by the vector Ω pointing in the direction of the axis of rotation, and with magnitude equal to the angular rate of rotation Ω, symbol × denotes the vector cross product, vector xB locates the body and vector vB is the velocity of the body according to a rotating observer (different from the velocity seen by the inertial observer).
The extra terms in the force F′ are the "fictitious" forces for this frame, whose causes are external to the system in the frame. The first extra term is the Coriolis force, the second the centrifugal force, and the third the Euler force. These terms all have these properties: they vanish when Ω = 0; that is, they are zero for an inertial frame (which, of course, does not rotate); they take on a different magnitude and direction in every rotating frame, depending upon its particular value of Ω; they are ubiquitous in the rotating frame (affect every particle, regardless of circumstance); and they have no apparent source in identifiable physical sources, in particular, matter. Also, fictitious forces do not drop off with distance (unlike, for example, nuclear forces or electrical forces). For example, the centrifugal force that appears to emanate from the axis of rotation in a rotating frame increases with distance from the axis.
All observers agree on the real forces, F; only non-inertial observers need fictitious forces. The laws of physics in the inertial frame are simpler because unnecessary forces are not present.
In Newton's time the fixed stars were invoked as a reference frame, supposedly at rest relative to absolute space. In reference frames that were either at rest with respect to the fixed stars or in uniform translation relative to these stars, Newton's laws of motion were supposed to hold. In contrast, in frames accelerating with respect to the fixed stars, an important case being frames rotating relative to the fixed stars, the laws of motion did not hold in their simplest form, but had to be supplemented by the addition of fictitious forces, for example, the Coriolis force and the centrifugal force. Two experiments were devised by Newton to demonstrate how these forces could be discovered, thereby revealing to an observer that they were not in an inertial frame: the example of the tension in the cord linking two spheres rotating about their center of gravity, and the example of the curvature of the surface of water in a rotating bucket. In both cases, application of Newton's second law would not work for the rotating observer without invoking centrifugal and Coriolis forces to account for their observations (tension in the case of the spheres; parabolic water surface in the case of the rotating bucket).
As now known, the fixed stars are not fixed. Those that reside in the Milky Way turn with the galaxy, exhibiting proper motions. Those that are outside our galaxy (such as nebulae once mistaken to be stars) participate in their own motion as well, partly due to expansion of the universe, and partly due to peculiar velocities. For instance, the Andromeda Galaxy is on collision course with the Milky Way at a speed of 117 km/s. The concept of inertial frames of reference is no longer tied to either the fixed stars or to absolute space. Rather, the identification of an inertial frame is based on the simplicity of the laws of physics in the frame.
The laws of nature take a simpler form in inertial frames of reference because in these frames one does not have to introduce inertial forces when writing down Newton's law of motion.
In practice, using a frame of reference based upon the fixed stars as though it were an inertial frame of reference introduces little discrepancy. For example, the centrifugal acceleration of the Earth because of its rotation about the Sun is about thirty million times greater than that of the Sun about the galactic center.
To illustrate further, consider the question: "Does the Universe rotate?" An answer might explain the shape of the Milky Way galaxy using the laws of physics, although other observations might be more definitive; that is, provide larger discrepancies or less measurement uncertainty, like the anisotropy of the microwave background radiation or Big Bang nucleosynthesis. The flatness of the Milky Way depends on its rate of rotation in an inertial frame of reference. If its apparent rate of rotation is attributed entirely to rotation in an inertial frame, a different "flatness" is predicted than if it is supposed that part of this rotation is actually due to rotation of the universe and should not be included in the rotation of the galaxy itself. Based upon the laws of physics, a model is set up in which one parameter is the rate of rotation of the Universe. If the laws of physics agree more accurately with observations in a model with rotation than without it, we are inclined to select the best-fit value for rotation, subject to all other pertinent experimental observations. If no value of the rotation parameter is successful and theory is not within observational error, a modification of physical law is considered, for example, dark matter is invoked to explain the galactic rotation curve. So far, observations show any rotation of the universe is very slow, no faster than once every 6×10^13 years (10^-13 rad/yr), and debate persists over whether there is any rotation. However, if rotation were found, interpretation of observations in a frame tied to the universe would have to be corrected for the fictitious forces inherent in such rotation in classical physics and special relativity, or interpreted as the curvature of spacetime and the motion of matter along the geodesics in general relativity.
When quantum effects are important, there are additional conceptual complications that arise in quantum reference frames.
Primed frames
An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. x′, y′, a′.
The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R. Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r, and the vector from the accelerated origin to the point is called r′.
From the geometry of the situation,
$$\mathbf{r} = \mathbf{R} + \mathbf{r}'.$$
Taking the first and second derivatives of this with respect to time,
$$\mathbf{v} = \mathbf{V} + \mathbf{v}', \qquad \mathbf{a} = \mathbf{A} + \mathbf{a}',$$
where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame.
These equations allow transformations between the two coordinate systems; for example, Newton's second law can be written as
$$\mathbf{F} = m\mathbf{a} = m\mathbf{A} + m\mathbf{a}'.$$
When there is accelerated motion due to a force being exerted, there is a manifestation of inertia. If an electric car that is designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical reality of the manifestation of inertia. However, the manifestation of inertia does not prevent acceleration (or deceleration), for the manifestation of inertia occurs in response to change in velocity due to a force. Seen from the perspective of a rotating frame of reference, the manifestation of inertia appears to exert a force (either in the centrifugal direction, or in a direction orthogonal to an object's motion, the Coriolis effect).
A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried).
This arrangement leads to the equation (see Fictitious force for a derivation):
$$\mathbf{a} = \mathbf{a}' + \mathbf{A} + 2\,\boldsymbol{\omega}\times\mathbf{v}' + \boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r}') + \frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t}\times\mathbf{r}',$$
or, to solve for the acceleration in the accelerated frame,
$$\mathbf{a}' = \mathbf{a} - \mathbf{A} - 2\,\boldsymbol{\omega}\times\mathbf{v}' - \boldsymbol{\omega}\times(\boldsymbol{\omega}\times\mathbf{r}') - \frac{\mathrm{d}\boldsymbol{\omega}}{\mathrm{d}t}\times\mathbf{r}'.$$
Multiplying through by the mass m gives
$$\mathbf{F}' = \mathbf{F}_{\text{physical}} - m\mathbf{A} + \mathbf{F}'_{\text{Euler}} + \mathbf{F}'_{\text{Coriolis}} + \mathbf{F}'_{\text{centrifugal}},$$
where
F′_Euler = −m (dω/dt) × r′ (Euler force),
F′_Coriolis = −2m ω × v′ (Coriolis force),
F′_centrifugal = −m ω × (ω × r′) (centrifugal force).
A numerical sketch of these fictitious-force terms is given below.
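Here is a minimal numerical sketch of the three terms (illustrative; the mass, rotation rate, and rotating-frame position and velocity are assumed values):

```python
import numpy as np

def fictitious_forces(m, omega, domega_dt, r_p, v_p):
    """Fictitious forces on a mass m in a rotating frame.

    r_p and v_p are the position and velocity measured in the rotating frame,
    omega is the frame's angular velocity and domega_dt its rate of change.
    """
    euler       = -m * np.cross(domega_dt, r_p)
    coriolis    = -2 * m * np.cross(omega, v_p)
    centrifugal = -m * np.cross(omega, np.cross(omega, r_p))
    return euler, coriolis, centrifugal

# Example: a 1 kg mass 2 m from the axis of a frame rotating at 3 rad/s about z,
# moving radially outward at 1 m/s; the rotation rate is momentarily constant.
e, cor, cen = fictitious_forces(
    m=1.0,
    omega=np.array([0.0, 0.0, 3.0]),
    domega_dt=np.array([0.0, 0.0, 0.0]),
    r_p=np.array([2.0, 0.0, 0.0]),
    v_p=np.array([1.0, 0.0, 0.0]),
)
print(e, cor, cen)   # Euler = 0, Coriolis = (0, -6, 0) N, centrifugal = (18, 0, 0) N
```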
Separating non-inertial from inertial reference frames
Theory
Inertial and non-inertial reference frames can be distinguished by the absence or presence of fictitious forces.
The presence of fictitious forces indicates that the physical laws are not the simplest laws available; in terms of the special principle of relativity, a frame where fictitious forces are present is not an inertial frame:
Bodies in non-inertial reference frames are subject to so-called fictitious forces (pseudo-forces); that is, forces that result from the acceleration of the reference frame itself and not from any physical force acting on the body. Examples of fictitious forces are the centrifugal force and the Coriolis force in rotating reference frames.
To apply the Newtonian definition of an inertial frame, the understanding of separation between "fictitious" forces and "real" forces must be made clear.
For example, consider a stationary object in an inertial frame. Being at rest, no net force is applied. But in a frame rotating about a fixed axis, the object appears to move in a circle, and is subject to centripetal force. How can it be decided that the rotating frame is a non-inertial frame? There are two approaches to this resolution: one approach is to look for the origin of the fictitious forces (the Coriolis force and the centrifugal force). It will be found there are no sources for these forces, no associated force carriers, no originating bodies. A second approach is to look at a variety of frames of reference. For any inertial frame, the Coriolis force and the centrifugal force disappear, so application of the principle of special relativity would identify these frames where the forces disappear as sharing the same and the simplest physical laws, and hence rule that the rotating frame is not an inertial frame.
Newton examined this problem himself using rotating spheres, as shown in Figure 2 and Figure 3. He pointed out that if the spheres are not rotating, the tension in the tying string is measured as zero in every frame of reference. If the spheres only appear to rotate (that is, we are watching stationary spheres from a rotating frame), the zero tension in the string is accounted for by observing that the centripetal force is supplied by the centrifugal and Coriolis forces in combination, so no tension is needed. If the spheres really are rotating, the tension observed is exactly the centripetal force required by the circular motion. Thus, measurement of the tension in the string identifies the inertial frame: it is the one where the tension in the string provides exactly the centripetal force demanded by the motion as it is observed in that frame, and not a different value. That is, the inertial frame is the one where the fictitious forces vanish.
For linear acceleration, Newton expressed the idea of undetectability of straight-line accelerations held in common:
This principle generalizes the notion of an inertial frame. For example, an observer confined in a free-falling lift will assert that he himself is a valid inertial frame, even if he is accelerating under gravity, so long as he has no knowledge about anything outside the lift. So, strictly speaking, inertial frame is a relative concept. With this in mind, inertial frames can collectively be defined as a set of frames which are stationary or moving at constant velocity with respect to each other, so that a single inertial frame is defined as an element of this set.
For these ideas to apply, everything observed in the frame has to be subject to a base-line, common acceleration shared by the frame itself. That situation would apply, for example, to the elevator example, where all objects are subject to the same gravitational acceleration, and the elevator itself accelerates at the same rate.
Applications
Inertial navigation systems use a cluster of gyroscopes and accelerometers to determine accelerations relative to inertial space. After a gyroscope is spun up in a particular orientation in inertial space, the law of conservation of angular momentum requires that it retain that orientation as long as no external forces are applied to it. Three orthogonal gyroscopes establish an inertial reference frame, and the accelerometers measure acceleration relative to that frame. The accelerations, along with a clock, can then be used to calculate the change in position. Thus, inertial navigation is a form of dead reckoning that requires no external input, and therefore cannot be jammed by any external or internal signal source.
A gyrocompass, employed for navigation of seagoing vessels, finds true (geographic) north. It does so, not by sensing the Earth's magnetic field, but by using inertial space as its reference. The outer casing of the gyrocompass device is held in such a way that it remains aligned with the local plumb line. When the gyroscope wheel inside the gyrocompass device is spun up, the way the gyroscope wheel is suspended causes the gyroscope wheel to gradually align its spinning axis with the Earth's axis. Alignment with the Earth's axis is the only direction for which the gyroscope's spinning axis can be stationary with respect to the Earth and not be required to change direction with respect to inertial space. After being spun up, a gyrocompass can reach the direction of alignment with the Earth's axis in as little as a quarter of an hour.
See also
Absolute rotation
Diffeomorphism
Galilean invariance
General covariance
Local reference frame
Lorentz covariance
Newton's first law
Quantum reference frame
References
Further reading
Edwin F. Taylor and John Archibald Wheeler, Spacetime Physics, 2nd ed. (Freeman, NY, 1992)
Albert Einstein, Relativity, the special and the general theories, 15th ed. (1954)
Albert Einstein, On the Electrodynamics of Moving Bodies, included in The Principle of Relativity, page 38. Dover 1923
Rotation of the Universe
B Ciobanu, I Radinchi Modeling the electric and magnetic fields in a rotating universe Rom. Journ. Phys., Vol. 53, Nos. 1–2, P. 405–415, Bucharest, 2008
Yuri N. Obukhov, Thoralf Chrobok, Mike Scherfner Shear-free rotating inflation Phys. Rev. D 66, 043518 (2002) [5 pages]
Yuri N. Obukhov On physical foundations and observational effects of cosmic rotation (2000)
Li-Xin Li Effect of the Global Rotation of the Universe on the Formation of Galaxies General Relativity and Gravitation, 30 (1998)
P Birch Is the Universe rotating? Nature 298, 451 – 454 (29 July 1982)
Kurt Gödel An example of a new type of cosmological solutions of Einstein's field equations of gravitation Rev. Mod. Phys., Vol. 21, p. 447, 1949.
External links
Stanford Encyclopedia of Philosophy entry
showing scenes as viewed from both an inertial frame and a rotating frame of reference, visualizing the Coriolis and centrifugal forces.
Classical mechanics
Frames of reference
Theory of relativity
Orbits
Statistical mechanics
In physics, statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities. Sometimes called statistical physics or statistical thermodynamics, its applications include many problems in the fields of physics, biology, chemistry, neuroscience, computer science, information theory and sociology. Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion.
Statistical mechanics arose out of the development of classical thermodynamics, a field for which it was successful in explaining macroscopic physical properties—such as temperature, pressure, and heat capacity—in terms of microscopic parameters that fluctuate about average values and are characterized by probability distributions.
While classical thermodynamics is primarily concerned with thermodynamic equilibrium, statistical mechanics has been applied in non-equilibrium statistical mechanics to the issues of microscopically modeling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions and flows of particles and heat. The fluctuation–dissipation theorem is the basic knowledge obtained from applying non-equilibrium statistical mechanics to study the simplest non-equilibrium situation of a steady state current flow in a system of many particles.
History
In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion.
The founding of the field of statistical mechanics is generally credited to three physicists:
Ludwig Boltzmann, who developed the fundamental interpretation of entropy in terms of a collection of microstates
James Clerk Maxwell, who developed models of probability distribution of such states
Josiah Willard Gibbs, who coined the name of the field in 1884
In 1859, after reading a paper on the diffusion of molecules by Rudolf Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. Five years later, in 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell's paper and spent much of his life developing the subject further.
Statistical mechanics was initiated in the 1870s with the work of Boltzmann, much of which was collectively published in his 1896 Lectures on Gas Theory. Boltzmann's original papers on the statistical interpretation of thermodynamics, the H-theorem, transport theory, thermal equilibrium, the equation of state of gases, and similar subjects, occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time non-equilibrium statistical mechanics, with his H-theorem.
The term "statistical mechanics" was coined by the American mathematical physicist J. Willard Gibbs in 1884. According to Gibbs, the term "statistical", in the context of mechanics, i.e. statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871:
"Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched. Shortly before his death, Gibbs published in 1902 Elementary Principles in Statistical Mechanics, a book which formalized statistical mechanics as a fully general approach to address all mechanical systems—macroscopic or microscopic, gaseous or non-gaseous. Gibbs' methods were initially derived in the framework classical mechanics, however they were of such generality that they were found to adapt easily to the later quantum mechanics, and still form the foundation of statistical mechanics to this day.
Principles: mechanics and ensembles
In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two concepts:
The complete state of the mechanical system at a given time, mathematically encoded as a phase point (classical mechanics) or a pure quantum state vector (quantum mechanics).
An equation of motion which carries the state forward in time: Hamilton's equations (classical mechanics) or the Schrödinger equation (quantum mechanics)
Using these two concepts, the state at any other time, past or future, can in principle be calculated.
There is however a disconnect between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics fills this disconnection between the laws of mechanics and the practical experience of incomplete knowledge, by adding some uncertainty about which state the system is in.
Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble, which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinate axes. In quantum statistical mechanics, the ensemble is a probability distribution over pure states and can be compactly summarized as a density matrix.
As is usual for probabilities, the ensemble can be interpreted in different ways:
an ensemble can be taken to represent the various possible states that a single system could be in (epistemic probability, a form of knowledge), or
the members of the ensemble can be understood as the states of the systems in experiments repeated on independent systems which have been prepared in a similar but imperfectly controlled manner (empirical probability), in the limit of an infinite number of trials.
These two meanings are equivalent for many purposes, and will be used interchangeably in this article.
However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state.
One special class of ensemble is those ensembles that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. (By contrast, mechanical equilibrium is a state with a balance of forces that has ceased to evolve.) The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.
Statistical thermodynamics
The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium, and the microscopic behaviours and motions occurring inside the material.
Whereas statistical mechanics proper involves dynamics, here the attention is focussed on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium), rather, only that the ensemble is not evolving.
Fundamental postulate
A sufficient (but not necessary) condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.).
There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics. Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another.
A common approach found in many textbooks is to take the equal a priori probability postulate. This postulate states that
For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.
The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:
Ergodic hypothesis: An ergodic system is one that evolves over time to explore "all accessible" states: all those with the same energy and composition. In an ergodic system, the microcanonical ensemble is the only possible equilibrium ensemble with fixed energy. This approach has limited applicability, since most systems are not ergodic.
Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.
Maximum information entropy: A more elaborate version of the principle of indifference states that the correct ensemble is the ensemble that is compatible with the known information and that has the largest Gibbs entropy (information entropy).
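As a minimal illustration of the maximum-entropy argument, the sketch below scans every probability distribution over three hypothetical energy levels that has a prescribed mean energy and reports the entropy-maximizing one. The energy levels and the target mean energy are arbitrary assumptions chosen for the example; the closing check (equal successive probability ratios) is the signature of Boltzmann weights for equally spaced levels.

```python
import math

# Toy system: three energy levels and a prescribed mean energy.
# (Illustrative values only -- not taken from the article.)
E = [0.0, 1.0, 2.0]
U = 0.8  # target mean energy

def entropy(p):
    """Gibbs/Shannon entropy, in units of k_B."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# All normalized distributions with mean energy U form a one-parameter
# family here: p2 = t, p1 = U - 2t, p0 = 1 - p1 - p2.
best_t, best_S = None, -1.0
steps = 100000
for k in range(1, steps):
    t = 0.4 * k / steps              # keep every probability positive
    p = [1.0 - (U - 2 * t) - t, U - 2 * t, t]
    if min(p) <= 0:
        continue
    S = entropy(p)
    if S > best_S:
        best_t, best_S = t, S

p = [1.0 - (U - 2 * best_t) - best_t, U - 2 * best_t, best_t]
print("max-entropy distribution:", [round(x, 4) for x in p])
# A Boltzmann distribution over equally spaced levels satisfies
# p1/p0 = p2/p1 = exp(-beta), i.e. p1^2 = p0*p2.
print("p1^2 =", round(p[1] ** 2, 6), " p0*p2 =", round(p[0] * p[2], 6))
print("implied beta =", round(-math.log(p[1] / p[0]), 4))
```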
Other fundamental postulates for statistical mechanics have also been proposed. For example, recent studies show that the theory of statistical mechanics can be built without the equal a priori probability postulate. One such formalism is based on the fundamental thermodynamic relation together with the following set of postulates:
where the third postulate can be replaced by the following:
Three thermodynamic ensembles
There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume. These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.
Microcanonical ensemble
describes a system with a precisely given energy and fixed composition (precise number of particles). The microcanonical ensemble contains with equal probability each possible state that is consistent with that energy and composition.
Canonical ensemble
describes a system of fixed composition that is in thermal equilibrium with a heat bath of a precise temperature. The canonical ensemble contains states of varying energy but identical composition; the different states in the ensemble are accorded different probabilities depending on their total energy.
Grand canonical ensemble
describes a system with non-fixed composition (uncertain particle numbers) that is in thermal and chemical equilibrium with a thermodynamic reservoir. The reservoir has a precise temperature, and precise chemical potentials for various types of particle. The grand canonical ensemble contains states of varying energy and varying numbers of particles; the different states in the ensemble are accorded different probabilities depending on their total energy and total particle numbers.
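As a small numerical illustration of the canonical ensemble described above, the sketch below (assuming a single two-level system with an arbitrary level spacing, and with Boltzmann's constant set to 1) computes the Boltzmann weights, the ensemble-average energy, and the heat capacity from energy fluctuations at a few temperatures.

```python
import math

# Canonical ensemble of a single two-level system (energies 0 and eps)
# in thermal contact with a bath at temperature T.  Illustrative sketch;
# eps and the temperature grid are arbitrary choices, k_B = 1.
eps = 1.0

def canonical_stats(T):
    beta = 1.0 / T
    Z = 1.0 + math.exp(-beta * eps)          # partition function
    p_excited = math.exp(-beta * eps) / Z    # Boltzmann weight of upper level
    E_avg = p_excited * eps                  # ensemble-average energy
    # Heat capacity from energy fluctuations: C = (<E^2> - <E>^2) / T^2
    E2_avg = p_excited * eps**2
    C = (E2_avg - E_avg**2) / T**2
    return p_excited, E_avg, C

for T in (0.2, 0.5, 1.0, 2.0, 5.0):
    p, E, C = canonical_stats(T)
    print(f"T={T:4.1f}  p(excited)={p:.3f}  <E>={E:.3f}  C={C:.3f}")
```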
For systems containing many particles (the thermodynamic limit), all three of the ensembles listed above tend to give identical behaviour. It is then simply a matter of mathematical convenience which ensemble is used. The Gibbs theorem about equivalence of ensembles was developed into the theory of concentration of measure phenomenon, which has applications in many areas of science, from functional analysis to methods of artificial intelligence and big data technology.
Important cases where the thermodynamic ensembles do not give identical results include:
Microscopic systems.
Large systems at a phase transition.
Large systems with long-range interactions.
In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system.
Calculation methods
Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for an exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities.
Exact
There are some cases which allow exact solutions.
For very small microscopic systems, the ensembles can be directly computed by simply enumerating over all possible states of the system (using exact diagonalization in quantum mechanics, or integration over all of phase space in classical mechanics); a brute-force enumeration of this kind is sketched after this list.
Some large systems consist of many separable microscopic systems, and each of the subsystems can be analysed independently. Notably, idealized gases of non-interacting particles have this property, allowing exact derivations of Maxwell–Boltzmann statistics, Fermi–Dirac statistics, and Bose–Einstein statistics.
A few large systems with interaction have been solved. By the use of subtle mathematical techniques, exact solutions have been found for a few toy models. Some examples include the Bethe ansatz, square-lattice Ising model in zero field, hard hexagon model.
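The brute-force enumeration mentioned above can be made concrete with a short sketch: it sums the Boltzmann weights of every configuration of a small one-dimensional Ising ring (a toy model with illustrative parameter values) to obtain the partition function and the average energy, and compares the result with the known thermodynamic-limit value.

```python
import math
from itertools import product

# Direct enumeration of every microstate of a small 1D Ising ring
# (N spins, periodic boundary, coupling J, no field).  The brute-force
# sum is only feasible because the system is tiny (2^N states).
# Parameter values here are arbitrary illustrations.
N, J, T = 10, 1.0, 2.0
beta = 1.0 / T

def energy(spins):
    return -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))

Z = 0.0
E_sum = 0.0
for spins in product((-1, +1), repeat=N):    # all 2^N configurations
    w = math.exp(-beta * energy(spins))      # Boltzmann weight
    Z += w
    E_sum += energy(spins) * w

print("partition function Z    =", Z)
print("average energy per spin =", E_sum / Z / N)
# Thermodynamic-limit value of the 1D Ising energy per spin, for comparison.
print("-J*tanh(beta*J)         =", -J * math.tanh(beta * J))
```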
Monte Carlo
Although some problems in statistical physics can be solved analytically using approximations and expansions, most current research utilizes the large processing power of modern computers to simulate or approximate solutions. A common approach to statistical problems is to use a Monte Carlo simulation to yield insight into the properties of a complex system. Monte Carlo methods are important in computational physics, physical chemistry, and related fields, and have diverse applications including medical physics, where they are used to model radiation transport for radiation dosimetry calculations.
The Monte Carlo method examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level.
The Metropolis–Hastings algorithm is a classic Monte Carlo method which was initially used to sample the canonical ensemble.
Path integral Monte Carlo, also used to sample the canonical ensemble.
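A minimal sketch of the Metropolis–Hastings approach for the same kind of toy model is shown below; the lattice size, coupling, temperature and sweep counts are illustrative assumptions, not prescriptions.

```python
import math, random

# Metropolis Monte Carlo sampling of a small Ising ring, drawing states
# from the canonical ensemble instead of enumerating them.
N, J, T = 10, 1.0, 2.0
beta = 1.0 / T
random.seed(0)                              # reproducible run

spins = [random.choice((-1, 1)) for _ in range(N)]

def local_energy_change(i):
    # Energy change if spin i is flipped (nearest neighbours on a ring).
    return 2 * J * spins[i] * (spins[(i - 1) % N] + spins[(i + 1) % N])

samples = []
for sweep in range(20000):
    for i in range(N):
        dE = local_energy_change(i)
        # Metropolis rule: accept downhill moves always,
        # uphill moves with probability exp(-beta*dE).
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    if sweep >= 2000:                       # discard burn-in sweeps
        E = -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        samples.append(E)

print("Monte Carlo <E> per spin ≈", sum(samples) / len(samples) / N)
```

For this toy size the estimate can be checked directly against the exact enumeration sketched earlier; for realistically large systems only the sampled estimate remains feasible.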
Other
For rarefied non-ideal gases, approaches such as the cluster expansion use perturbation theory to include the effect of weak interactions, leading to a virial expansion.
For dense fluids, another approximate approach is based on reduced distribution functions, in particular the radial distribution function.
Molecular dynamics computer simulations can be used to calculate microcanonical ensemble averages, in ergodic systems. With the inclusion of a connection to a stochastic heat bath, they can also model canonical and grand canonical conditions.
Mixed methods involving non-equilibrium statistical mechanical results (see below) may be useful.
Non-equilibrium statistical mechanics
Many physical phenomena involve quasi-thermodynamic processes out of equilibrium, for example:
heat transport by the internal motions in a material, driven by a temperature imbalance,
electric currents carried by the motion of charges in a conductor, driven by a voltage imbalance,
spontaneous chemical reactions driven by a decrease in free energy,
friction, dissipation, quantum decoherence,
systems being pumped by external forces (optical pumping, etc.),
and irreversible processes in general.
All of these processes occur over time with characteristic rates. These rates are important in engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.)
In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville's equation or its quantum equivalent, the von Neumann equation. These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. These ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble's Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to consider additional factors besides probability and reversible mechanics.
Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections.
Stochastic methods
One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes, a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier.
Near-equilibrium methods
Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory. A remarkable result, as formalized by the fluctuation–dissipation theorem, is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium. Essentially, a system that is slightly away from equilibrium—whether put there by external forces or by fluctuations—relaxes towards equilibrium in the same way, since the system cannot tell the difference or "know" how it came to be away from equilibrium.
This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation–dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics.
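As a hedged illustration of this shortcut, the sketch below simulates an assumed Langevin (Ornstein–Uhlenbeck) velocity process in equilibrium, estimates the diffusion coefficient from the Green–Kubo integral of the velocity autocorrelation, and compares it with the Einstein relation D = k_B T/γ; all parameter values are arbitrary.

```python
import numpy as np

# Green-Kubo sketch: estimate a transport coefficient (the diffusion
# constant of a Langevin particle) from equilibrium velocity fluctuations,
# D = integral of <v(0) v(t)> dt, and compare with D = k_B*T/gamma.
# All parameter values are illustrative; k_B = 1.
m, gamma, T = 1.0, 0.5, 1.2
dt, n_steps = 0.01, 400_000
rng = np.random.default_rng(1)

# Underdamped Langevin dynamics for the velocity:
#   m dv = -gamma*v dt + sqrt(2*gamma*k_B*T) dW
v = np.empty(n_steps)
v[0] = 0.0
noise = rng.normal(size=n_steps - 1) * np.sqrt(2 * gamma * T * dt) / m
for i in range(n_steps - 1):
    v[i + 1] = v[i] - (gamma / m) * v[i] * dt + noise[i]

v = v[n_steps // 10:]                  # discard equilibration
v -= v.mean()
max_lag = int(10 * m / gamma / dt)     # correlations decay on time scale m/gamma
acf = np.array([np.mean(v[:-lag or None] * v[lag:]) for lag in range(max_lag)])

D_green_kubo = acf.sum() * dt          # simple Riemann-sum quadrature
print("Green-Kubo estimate of D :", D_green_kubo)
print("Einstein relation k_B*T/gamma:", T / gamma)
```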
A few of the theoretical tools used to make this connection include:
Fluctuation–dissipation theorem
Onsager reciprocal relations
Green–Kubo relations
Landauer–Büttiker formalism
Mori–Zwanzig formalism
GENERIC formalism
Hybrid methods
An advanced approach uses a combination of stochastic methods and linear response theory. As an example, one approach to compute quantum coherence effects (weak localization, conductance fluctuations) in the conductance of an electronic system is the use of the Green–Kubo relations, with the inclusion of stochastic dephasing by interactions between various electrons by use of the Keldysh method.
Applications
The ensemble formalism can be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in:
propagation of uncertainty over time,
regression analysis of gravitational orbits,
ensemble forecasting of weather,
dynamics of neural networks,
bounded-rational potential games in game theory and economics.
Statistical physics explains and quantitatively describes superconductivity, superfluidity, turbulence, collective phenomena in solids and plasma, and the structural features of liquids. It underlies modern astrophysics. In solid state physics, statistical physics aids the study of liquid crystals, phase transitions, and critical phenomena. Many experimental studies of matter are entirely based on the statistical description of a system. These include the scattering of cold neutrons, X-rays, visible light, and more. Statistical physics also plays a role in materials science, nuclear physics, astrophysics, chemistry, biology and medicine (e.g. the study of the spread of infectious diseases).
Analytical and computational techniques derived from the statistical physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks. Statistical physics is thus finding applications in the area of medical diagnostics.
Quantum statistical mechanics
Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems. In quantum mechanics, a statistical ensemble (probability distribution over possible quantum states) is described by a density operator S, which is a non-negative, self-adjoint, trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics. One such formalism is provided by quantum logic.
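A concrete (though schematic) example of such a density operator is the thermal state ρ = exp(−H/k_B T)/Tr[exp(−H/k_B T)] of a single spin-1/2; the sketch below builds it for an arbitrary two-level Hamiltonian and checks the trace, self-adjointness and non-negativity properties mentioned above.

```python
import numpy as np

# Sketch of a quantum statistical ensemble: the thermal (Gibbs) density
# operator rho = exp(-H/(k_B*T)) / Tr[exp(-H/(k_B*T))] for a spin-1/2.
# The Hamiltonian and temperature are arbitrary illustrations; k_B = 1.
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])

H = -0.5 * sigma_z + 0.2 * sigma_x      # assumed two-level Hamiltonian
T = 0.7

# Build exp(-H/T) via the eigendecomposition of H.
evals, evecs = np.linalg.eigh(H)
rho = evecs @ np.diag(np.exp(-evals / T)) @ evecs.T
rho /= np.trace(rho)                    # normalize to trace 1

print("trace(rho)    =", np.trace(rho))                  # 1 by construction
print("self-adjoint? =", np.allclose(rho, rho.conj().T))
print("eigenvalues   =", np.linalg.eigvalsh(rho))         # non-negative
print("<sigma_z>     =", np.trace(rho @ sigma_z))         # ensemble average
```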
See also
Quantum statistical mechanics
List of textbooks in thermodynamics and statistical mechanics
Laplace transform
References
Further reading
External links
Philosophy of Statistical Mechanics article by Lawrence Sklar for the Stanford Encyclopedia of Philosophy.
Sklogwiki - Thermodynamics, statistical mechanics, and the computer simulation of materials. SklogWiki is particularly orientated towards liquids and soft condensed matter.
Thermodynamics and Statistical Mechanics by Richard Fitzpatrick
taught by Leonard Susskind.
Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. this wiki site is down; see this article in the web archive on 2012 April 28.
Statistical mechanics
Thermodynamics
Engine
An engine or motor is a machine designed to convert one or more forms of energy into mechanical energy.
Available energy sources include potential energy (e.g. energy of the Earth's gravitational field as exploited in hydroelectric power generation), heat energy (e.g. geothermal), chemical energy, electric potential and nuclear energy (from nuclear fission or nuclear fusion). Many of these processes generate heat as an intermediate energy form; thus heat engines have special importance. Some natural processes, such as atmospheric convection cells, convert environmental heat into motion (e.g. in the form of rising air currents). Mechanical energy is of particular importance in transportation, but also plays a role in many industrial processes such as cutting, grinding, crushing, and mixing.
Mechanical heat engines convert heat into work via various thermodynamic processes. The internal combustion engine is perhaps the most common example of a mechanical heat engine in which heat from the combustion of a fuel causes rapid pressurisation of the gaseous combustion products in the combustion chamber, causing them to expand and drive a piston, which turns a crankshaft. Unlike internal combustion engines, a reaction engine (such as a jet engine) produces thrust by expelling reaction mass, in accordance with Newton's third law of motion.
Apart from heat engines, electric motors convert electrical energy into mechanical motion, pneumatic motors use compressed air, and clockwork motors in wind-up toys use elastic energy. In biological systems, molecular motors, like myosins in muscles, use chemical energy to create forces and ultimately motion (a chemical engine, but not a heat engine).
Chemical heat engines which employ air (ambient atmospheric gas) as a part of the fuel reaction are regarded as airbreathing engines. Chemical heat engines designed to operate outside of Earth's atmosphere (e.g. rockets, deeply submerged submarines) need to carry an additional fuel component called the oxidizer (although there exist super-oxidizers suitable for use in rockets, such as fluorine, a more powerful oxidant than oxygen itself); or the application needs to obtain heat by non-chemical means, such as by means of nuclear reactions.
Emission/Byproducts
All chemically fueled heat engines emit exhaust gases. The cleanest engines emit water only. Strict zero-emissions generally means zero emissions other than water and water vapour. Only heat engines which combust pure hydrogen (fuel) and pure oxygen (oxidizer) achieve zero-emission by a strict definition (in practice, one type of rocket engine). If hydrogen is burnt in combination with air (all airbreathing engines), a side reaction occurs between atmospheric oxygen and atmospheric nitrogen, resulting in small emissions of NOx. If a hydrocarbon (such as alcohol or gasoline) is burnt as fuel, CO2, a greenhouse gas, is emitted. Hydrogen and oxygen from air can be reacted into water by a fuel cell without side production of CO2, but this is an electrochemical engine, not a heat engine.
Terminology
The word engine derives from Old French engin, from the Latin ingenium–the root of the word ingenious. Pre-industrial weapons of war, such as catapults, trebuchets and battering rams, were called siege engines, and knowledge of how to construct them was often treated as a military secret. The word gin, as in cotton gin, is short for engine. Most mechanical devices invented during the Industrial Revolution were described as engines—the steam engine being a notable example. However, the original steam engines, such as those by Thomas Savery, were not mechanical engines but pumps. In this manner, a fire engine in its original form was merely a water pump, with the engine being transported to the fire by horses.
In modern usage, the term engine typically describes devices, like steam engines and internal combustion engines, that burn or otherwise consume fuel to perform mechanical work by exerting a torque or linear force (usually in the form of thrust). Devices converting heat energy into motion are commonly referred to simply as engines. Examples of engines which exert a torque include the familiar automobile gasoline and diesel engines, as well as turboshafts. Examples of engines which produce thrust include turbofans and rockets.
When the internal combustion engine was invented, the term motor was initially used to distinguish it from the steam engine—which was in wide use at the time, powering locomotives and other vehicles such as steam rollers. The term motor derives from the Latin verb which means 'to set in motion', or 'maintain motion'. Thus a motor is a device that imparts motion.
Motor and engine are interchangeable in standard English. In some engineering jargons, the two words have different meanings, in which engine is a device that burns or otherwise consumes fuel, changing its chemical composition, and a motor is a device driven by electricity, air, or hydraulic pressure, which does not change the chemical composition of its energy source. However, rocketry uses the term rocket motor, even though they consume fuel.
A heat engine may also serve as a prime mover—a component that transforms the flow or changes in pressure of a fluid into mechanical energy. An automobile powered by an internal combustion engine may make use of various motors and pumps, but ultimately all such devices derive their power from the engine. Another way of looking at it is that a motor receives power from an external source, and then converts it into mechanical energy, while an engine creates power from pressure (derived directly from the explosive force of combustion or other chemical reaction, or secondarily from the action of some such force on other substances such as air, water, or steam).
History
Antiquity
Simple machines, such as the club and oar (examples of the lever), are prehistoric. More complex engines using human power, animal power, water power, wind power and even steam power date back to antiquity. Human power was focused by the use of simple engines, such as the capstan, windlass or treadmill, and with ropes, pulleys, and block and tackle arrangements; this power was transmitted usually with the forces multiplied and the speed reduced. These were used in cranes and aboard ships in Ancient Greece, as well as in mines, water pumps and siege engines in Ancient Rome. The writers of those times, including Vitruvius, Frontinus and Pliny the Elder, treat these engines as commonplace, so their invention may be more ancient. By the 1st century AD, cattle and horses were used in mills, driving machines similar to those powered by humans in earlier times.
According to Strabo, a water-powered mill was built in Kaberia of the kingdom of Mithridates during the 1st century BC. Use of water wheels in mills spread throughout the Roman Empire over the next few centuries. Some were quite complex, with aqueducts, dams, and sluices to maintain and channel the water, along with systems of gears, or toothed-wheels made of wood and metal to regulate the speed of rotation. More sophisticated small devices, such as the Antikythera Mechanism used complex trains of gears and dials to act as calendars or predict astronomical events. In a poem by Ausonius in the 4th century AD, he mentions a stone-cutting saw powered by water. Hero of Alexandria is credited with many such wind and steam powered machines in the 1st century AD, including the Aeolipile and the vending machine, often these machines were associated with worship, such as animated altars and automated temple doors.
Medieval
Medieval Muslim engineers employed gears in mills and water-raising machines, and used dams as a source of water power to provide additional power to watermills and water-raising machines. In the medieval Islamic world, such advances made it possible to mechanize many industrial tasks previously carried out by manual labour.
In 1206, al-Jazari employed a crank-conrod system for two of his water-raising machines. A rudimentary steam turbine device was described by Taqi al-Din in 1551 and by Giovanni Branca in 1629.
In the 13th century, the solid rocket motor was invented in China. Driven by gunpowder, this simplest form of internal combustion engine was unable to deliver sustained power, but was useful for propelling weaponry at high speeds towards enemies in battle and for fireworks. After invention, this innovation spread throughout Europe.
Industrial Revolution
The Watt steam engine was the first type of steam engine to make use of steam at a pressure just above atmospheric to drive the piston helped by a partial vacuum. Improving on the design of the 1712 Newcomen steam engine, the Watt steam engine, developed sporadically from 1763 to 1775, was a great step in the development of the steam engine. Offering a dramatic increase in fuel efficiency, James Watt's design became synonymous with steam engines, due in no small part to his business partner, Matthew Boulton. It enabled rapid development of efficient semi-automated factories on a previously unimaginable scale in places where waterpower was not available. Later development led to steam locomotives and great expansion of railway transportation.
As for internal combustion piston engines, these were tested in France in 1807 by de Rivaz and independently, by the Niépce brothers. They were theoretically advanced by Carnot in 1824. In 1853–57 Eugenio Barsanti and Felice Matteucci invented and patented an engine using the free-piston principle that was possibly the first 4-cycle engine.
The invention of an internal combustion engine which was later commercially successful was made during 1860 by Etienne Lenoir.
In 1877, the Otto cycle was capable of giving a far higher power-to-weight ratio than steam engines and worked much better for many transportation applications such as cars and aircraft.
Automobiles
The first commercially successful automobile, created by Karl Benz, added to the interest in light and powerful engines. The lightweight gasoline internal combustion engine, operating on a four-stroke Otto cycle, has been the most successful for light automobiles, while the thermally more-efficient Diesel engine is used for trucks and buses. However, in recent years, turbocharged Diesel engines have become increasingly popular in automobiles, especially outside of the United States, even for quite small cars.
Horizontally-opposed pistons
In 1896, Karl Benz was granted a patent for his design of the first engine with horizontally opposed pistons. His design created an engine in which the corresponding pistons move in horizontal cylinders and reach top dead center simultaneously, thus automatically balancing each other with respect to their individual momentum. Engines of this design are often referred to as “flat” or “boxer” engines due to their shape and low profile. They were used in the Volkswagen Beetle, the Citroën 2CV, some Porsche and Subaru cars, many BMW and Honda motorcycles. Opposed four- and six-cylinder engines continue to be used as a power source in small, propeller-driven aircraft.
Advancement
The continued use of internal combustion engines in automobiles is partly due to the improvement of engine control systems, such as on-board computers providing engine management processes, and electronically controlled fuel injection. Forced air induction by turbocharging and supercharging has increased the power output of smaller displacement engines that are lighter in weight and more fuel-efficient at normal cruise power. Similar changes have been applied to smaller Diesel engines, giving them almost the same performance characteristics as gasoline engines. This is especially evident with the popularity of smaller diesel engine-propelled cars in Europe. Diesel engines produce lower hydrocarbon and CO2 emissions, but greater particulate and NOx pollution, than gasoline engines. Diesel engines are also 40% more fuel efficient than comparable gasoline engines.
Increasing power
In the first half of the 20th century, a trend of increasing engine power occurred, particularly in the U.S. models. Design changes incorporated all known methods of increasing engine capacity, including increasing the pressure in the cylinders to improve efficiency, increasing the size of the engine, and increasing the rate at which the engine produces work. The higher forces and pressures created by these changes created engine vibration and size problems that led to stiffer, more compact engines with V and opposed cylinder layouts replacing longer straight-line arrangements.
Combustion efficiency
Optimal combustion efficiency in passenger vehicles is reached with a coolant temperature of around .
Engine configuration
Earlier automobile engine development produced a much larger range of engines than is in common use today. Engines have ranged from 1- to 16-cylinder designs with corresponding differences in overall size, weight, engine displacement, and cylinder bores. A majority of the models used four cylinders, with power ratings from 19 to 120 hp (14 to 90 kW). Several three-cylinder, two-stroke-cycle models were built, while most engines had straight or in-line cylinders. There were several V-type models and horizontally opposed two- and four-cylinder makes too. Overhead camshafts were frequently employed. The smaller engines were commonly air-cooled and located at the rear of the vehicle; compression ratios were relatively low. The 1970s and 1980s saw an increased interest in improved fuel economy, which caused a return to smaller V-6 and four-cylinder layouts, with as many as five valves per cylinder to improve efficiency. The Bugatti Veyron 16.4 operates with a W16 engine, meaning that two V8 cylinder layouts are positioned next to each other to create the W shape sharing the same crankshaft.
The largest internal combustion engine ever built is the Wärtsilä-Sulzer RTA96-C, a 14-cylinder, 2-stroke turbocharged diesel engine that was designed to power the Emma Mærsk, the largest container ship in the world when launched in 2006. This engine has a mass of 2,300 tonnes, and when running at 102 rpm (1.7 Hz) produces over 80 MW, and can use up to 250 tonnes of fuel per day.
Types
An engine can be put into a category according to two criteria: the form of energy it accepts in order to create motion, and the type of motion it outputs.
Heat engine
Combustion engine
Combustion engines are heat engines driven by the heat of a combustion process.
Internal combustion engine
The internal combustion engine is an engine in which the combustion of a fuel (generally, fossil fuel) occurs with an oxidizer (usually air) in a combustion chamber. In an internal combustion engine the expansion of the high temperature and high pressure gases, which are produced by the combustion, directly applies force to components of the engine, such as the pistons or turbine blades or a nozzle, and by moving it over a distance, generates mechanical work.
External combustion engine
An external combustion engine (EC engine) is a heat engine where an internal working fluid is heated by combustion of an external source, through the engine wall or a heat exchanger. The fluid then, by expanding and acting on the mechanism of the engine, produces motion and usable work. The fluid is then cooled, compressed and reused (closed cycle), or (less commonly) dumped, and cool fluid pulled in (open cycle air engine).
"Combustion" refers to burning fuel with an oxidizer, to supply the heat. Engines of similar (or even identical) configuration and operation may use a supply of heat from other sources such as nuclear, solar, geothermal or exothermic reactions not involving combustion; but are not then strictly classed as external combustion engines, but as external thermal engines.
The working fluid can be a gas as in a Stirling engine, or steam as in a steam engine or an organic liquid such as n-pentane in an Organic Rankine cycle. The fluid can be of any composition; gas is by far the most common, although even single-phase liquid is sometimes used. In the case of the steam engine, the fluid changes phases between liquid and gas.
Air-breathing combustion engines
Air-breathing combustion engines are combustion engines that use the oxygen in atmospheric air to oxidise ('burn') the fuel, rather than carrying an oxidiser, as in a rocket. Theoretically, this should result in a better specific impulse than for rocket engines.
A continuous stream of air flows through the air-breathing engine. This air is compressed, mixed with fuel, ignited and expelled as the exhaust gas. In reaction engines, the majority of the combustion energy (heat) exits the engine as exhaust gas, which provides thrust directly.
Examples
Typical air-breathing engines include:
Reciprocating engine
Steam engine
Gas turbine
Airbreathing jet engine
Turbo-propeller engine
Pulse detonation engine
Pulse jet
Ramjet
Scramjet
Liquid air cycle engine/Reaction Engines SABRE.
Environmental effects
The operation of engines typically has a negative impact upon air quality and ambient sound levels. There has been a growing emphasis on the pollution producing features of automotive power systems. This has created new interest in alternate power sources and internal-combustion engine refinements. Though a few limited-production battery-powered electric vehicles have appeared, they have not proved competitive owing to costs and operating characteristics. In the 21st century the diesel engine has been increasing in popularity with automobile owners. However, the gasoline engine and the Diesel engine, with their new emission-control devices to improve emission performance, have not yet been significantly challenged. A number of manufacturers have introduced hybrid engines, mainly involving a small gasoline engine coupled with an electric motor and a large battery bank; these are starting to become a popular option because of growing environmental awareness.
Air quality
Exhaust gas from a spark ignition engine consists of the following: nitrogen 70 to 75% (by volume), water vapor 10 to 12%, carbon dioxide 10 to 13.5%, hydrogen 0.5 to 2%, oxygen 0.2 to 2%, carbon monoxide: 0.1 to 6%, unburnt hydrocarbons and partial oxidation products (e.g. aldehydes) 0.5 to 1%, nitrogen monoxide 0.01 to 0.4%, nitrous oxide <100 ppm, sulfur dioxide 15 to 60 ppm, traces of other compounds such as fuel additives and lubricants, also halogen and metallic compounds, and other particles. Carbon monoxide is highly toxic, and can cause carbon monoxide poisoning, so it is important to avoid any build-up of the gas in a confined space. Catalytic converters can reduce toxic emissions, but not eliminate them. Also, resulting greenhouse gas emissions, chiefly carbon dioxide, from the widespread use of engines in the modern industrialized world is contributing to the global greenhouse effect – a primary concern regarding global warming.
Non-combusting heat engines
Some engines convert heat from noncombustive processes into mechanical work, for example a nuclear power plant uses the heat from the nuclear reaction to produce steam and drive a steam engine, or a gas turbine in a rocket engine may be driven by decomposing hydrogen peroxide. Apart from the different energy source, the engine is often engineered much the same as an internal or external combustion engine.
Another group of noncombustive engines includes thermoacoustic heat engines (sometimes called "TA engines") which are thermoacoustic devices that use high-amplitude sound waves to pump heat from one place to another, or conversely use a heat difference to induce high-amplitude sound waves. In general, thermoacoustic engines can be divided into standing wave and travelling wave devices.
Stirling engines can be another form of non-combustive heat engine. They use the Stirling thermodynamic cycle to convert heat into work. An example is the alpha type Stirling engine, whereby gas flows, via a recuperator, between a hot cylinder and a cold cylinder, which are attached to reciprocating pistons 90° out of phase. The gas receives heat at the hot cylinder and expands, driving the piston that turns the crankshaft. After expanding and flowing through the recuperator, the gas rejects heat at the cold cylinder and the ensuing pressure drop leads to its compression by the other (displacement) piston, which forces it back to the hot cylinder.
Non-thermal chemically powered motor
Non-thermal motors usually are powered by a chemical reaction, but are not heat engines. Examples include:
Molecular motor – motors found in living things
Synthetic molecular motor.
Electric motor
An electric motor uses electrical energy to produce mechanical energy, usually through the interaction of magnetic fields and current-carrying conductors. The reverse process, producing electrical energy from mechanical energy, is accomplished by a generator or dynamo. Traction motors used on vehicles often perform both tasks. Electric motors can be run as generators and vice versa, although this is not always practical.
Electric motors are ubiquitous, being found in applications as diverse as industrial fans, blowers and pumps, machine tools, household appliances, power tools, and disk drives. They may be powered by direct current (for example a battery powered portable device or motor vehicle), or by alternating current from a central electrical distribution grid. The smallest motors may be found in electric wristwatches. Medium-size motors of highly standardized dimensions and characteristics provide convenient mechanical power for industrial uses. The very largest electric motors are used for propulsion of large ships, and for such purposes as pipeline compressors, with ratings in the thousands of kilowatts. Electric motors may be classified by the source of electric power, by their internal construction, and by their application.
The physical principle of production of mechanical force by the interactions of an electric current and a magnetic field was known as early as 1821. Electric motors of increasing efficiency were constructed throughout the 19th century, but commercial exploitation of electric motors on a large scale required efficient electrical generators and electrical distribution networks.
To reduce the electric energy consumption from motors and their associated carbon footprints, various regulatory authorities in many countries have introduced and implemented legislation to encourage the manufacture and use of higher efficiency electric motors. A well-designed motor can convert over 90% of its input energy into useful power for decades. When the efficiency of a motor is raised by even a few percentage points, the savings, in kilowatt hours (and therefore in cost), are enormous. The electrical energy efficiency of a typical industrial induction motor can be improved by: 1) reducing the electrical losses in the stator windings (e.g., by increasing the cross-sectional area of the conductor, improving the winding technique, and using materials with higher electrical conductivities, such as copper), 2) reducing the electrical losses in the rotor coil or casting (e.g., by using materials with higher electrical conductivities, such as copper), 3) reducing magnetic losses by using better quality magnetic steel, 4) improving the aerodynamics of motors to reduce mechanical windage losses, 5) improving bearings to reduce friction losses, and 6) minimizing manufacturing tolerances. For further discussion on this subject, see Premium efficiency.
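To make the scale of these savings concrete, the following back-of-the-envelope sketch compares the annual electricity use of a continuously loaded motor at two assumed efficiency levels; the power, duty hours and electricity price are illustrative assumptions, not data from any source.

```python
# Back-of-the-envelope sketch of why small efficiency gains matter for a
# continuously running industrial motor.  All numbers are illustrative.
shaft_power_kw = 75.0        # mechanical output the process needs
hours_per_year = 8000.0      # near-continuous duty
price_per_kwh = 0.12         # assumed electricity price

def annual_energy_kwh(efficiency):
    # Electrical input needed to deliver the same shaft power all year.
    return shaft_power_kw / efficiency * hours_per_year

for eta_old, eta_new in [(0.90, 0.93), (0.93, 0.95)]:
    saved_kwh = annual_energy_kwh(eta_old) - annual_energy_kwh(eta_new)
    print(f"{eta_old:.0%} -> {eta_new:.0%}: saves {saved_kwh:,.0f} kWh/year "
          f"(~{saved_kwh * price_per_kwh:,.0f} per year at the assumed price)")
```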
By convention, electric engine refers to a railroad electric locomotive, rather than an electric motor.
Physically powered motor
Some motors are powered by potential or kinetic energy, for example some funiculars, gravity plane and ropeway conveyors have used the energy from moving water or rocks, and some clocks have a weight that falls under gravity. Other forms of potential energy include compressed gases (such as pneumatic motors), springs (clockwork motors) and elastic bands.
Historic military siege engines, including large catapults, trebuchets, and (to some extent) battering rams, were powered by potential energy.
Pneumatic motor
A pneumatic motor is a machine that converts potential energy in the form of compressed air into mechanical work. Pneumatic motors generally convert the compressed air to mechanical work through either linear or rotary motion. Linear motion can come from either a diaphragm or piston actuator, while rotary motion is supplied by either a vane type air motor or piston air motor. Pneumatic motors have found widespread success in the hand-held tool industry and continual attempts are being made to expand their use to the transportation industry. However, pneumatic motors must overcome efficiency deficiencies before being seen as a viable option in the transportation industry.
Hydraulic motor
A hydraulic motor derives its power from a pressurized liquid. This type of engine is used to move heavy loads and drive machinery.
Hybrid
Some motor units can have multiple sources of energy. For example, a plug-in hybrid electric vehicle's electric motor could source electricity from either a battery or from fossil fuel inputs via an internal combustion engine and a generator.
Performance
The following are used in the assessment of the performance of an engine.
Speed
Speed refers to crankshaft rotation in piston engines and the speed of compressor/turbine rotors and electric motor rotors. It is measured in revolutions per minute (rpm).
Thrust
Thrust is the force exerted on an airplane as a consequence of its propeller or jet engine accelerating the air passing through it. It is also the force exerted on a ship as a consequence of its propeller accelerating the water passing through it.
Torque
Torque is a turning moment on a shaft and is calculated by multiplying the force causing the moment by its distance from the shaft.
Power
Power is the measure of how fast work is done.
Efficiency
Efficiency is a proportion of useful energy output compared to total input.
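The quantities defined above can be tied together with two short calculations, sketched below with assumed values: shaft power obtained from torque and rotational speed (P = τω), and the efficiency of the idealized air-standard Otto cycle as a function of compression ratio.

```python
import math

# Two small performance calculations based on the definitions above.
# The torque, speed and compression ratios are illustrative assumptions.

# 1) Power from torque and rotational speed: P = torque * angular speed.
torque_nm = 250.0
rpm = 4000.0
omega = 2.0 * math.pi * rpm / 60.0            # rad/s
power_w = torque_nm * omega
print(f"{torque_nm} N·m at {rpm:.0f} rpm -> {power_w/1000:.1f} kW "
      f"({power_w/745.7:.0f} hp)")

# 2) Efficiency of the idealized air-standard Otto cycle,
#    eta = 1 - r**(1 - gamma), with compression ratio r.
gamma = 1.4                                    # ideal diatomic gas
for r in (8, 10, 12):
    eta = 1.0 - r ** (1.0 - gamma)
    print(f"compression ratio {r}: ideal Otto efficiency {eta:.1%}")
```

Real engines fall well short of the air-standard figure because of heat losses, finite combustion speed and friction.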
Sound levels
Vehicle noise is predominantly from the engine at low vehicle speeds and from tires and the air flowing past the vehicle at higher speeds. Electric motors are quieter than internal combustion engines. Thrust-producing engines, such as turbofans, turbojets and rockets emit the greatest amount of noise due to the way their thrust-producing, high-velocity exhaust streams interact with the surrounding stationary air.
Noise reduction technology includes intake and exhaust system mufflers (silencers) on gasoline and diesel engines and noise attenuation liners in turbofan inlets.
Engines by use
Particularly notable kinds of engines include:
Aircraft engine
Automobile engine
Model engine
Motorcycle engine
Marine propulsion engines such as Outboard motor
Non-road engine is the term used to define engines that are not used by vehicles on roadways.
Railway locomotive engine
Spacecraft propulsion engines such as Rocket engine
Traction engine
See also
Aircraft engine
Automobile engine replacement
Electric motor
Engine cooling
Engine swap
Gasoline engine
HCCI engine
Hesselman engine
Hot bulb engine
IRIS engine
Micromotor
Flagella – biological motor used by some microorganisms
Nanomotor
Molecular motor
Synthetic molecular motor
Adiabatic quantum motor
Multifuel
Reaction engine
Solid-state engine
Timeline of heat engine technology
Timeline of motor and engine technology
References
Citations
Sources
J.G. Landels, Engineering in the Ancient World,
External links
Detailed Engine Animations
Working 4-Stroke Engine – Animation
Animated illustrations of various engines
5 Ways to Redesign the Internal Combustion Engine
Article on Small SI Engines.
Article on Compact Diesel Engines.
Types Of Engines
Motors (1915) by James Slough Zerbe.
Euler's laws of motion
In classical mechanics, Euler's laws of motion are equations of motion which extend Newton's laws of motion for a point particle to rigid body motion. They were formulated by Leonhard Euler about 50 years after Isaac Newton formulated his laws.
Overview
Euler's first law
Euler's first law states that the rate of change of linear momentum p of a rigid body is equal to the resultant of all the external forces F acting on the body:
F = dp/dt.
Internal forces between the particles that make up a body do not contribute to changing the momentum of the body as there is an equal and opposite force resulting in no net effect.
The linear momentum of a rigid body is the product of the mass m of the body and the velocity v_cm of its center of mass: p = m v_cm.
Euler's second law
Euler's second law states that the rate of change of angular momentum L about a point that is fixed in an inertial reference frame (often the center of mass of the body), is equal to the sum of the external moments of force (torques) M acting on that body about that point:
M = dL/dt.
Note that the above formula holds only if both M and L are computed with respect to a fixed inertial frame or a frame parallel to the inertial frame but fixed on the center of mass.
For rigid bodies translating and rotating in only two dimensions, this can be expressed as:
ΣM_P = I_cm α + r_cm/P × (m a_cm),
where:
r_cm/P is the position vector of the center of mass of the body with respect to the point about which moments are summed,
a_cm is the linear acceleration of the center of mass of the body,
m is the mass of the body,
α is the angular acceleration of the body, and
I_cm is the moment of inertia of the body about its center of mass.
See also Euler's equations (rigid body dynamics).
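A short numerical sketch can make the two-dimensional form concrete: for an assumed uniform rod acted on by a single force, it computes the accelerations from Newton's and Euler's laws about the center of mass and then checks that the moment about an arbitrary other point satisfies ΣM_P = I_cm α + r_cm/P × (m a_cm). The geometry, mass, force and the point P are illustrative choices.

```python
import numpy as np

# Consistency check of the planar (2D) form of Euler's second law about an
# arbitrary point P:  sum(M_P) = I_cm*alpha + r_cm/P x (m*a_cm).
m, length = 2.0, 1.5                    # uniform rod: mass and length
I_cm = m * length**2 / 12.0             # moment of inertia about its centre

cm = np.array([0.0, 0.0])               # centre of mass position
end = np.array([length / 2.0, 0.0])     # point where the force is applied
F = np.array([3.0, 4.0])                # single external force on the rod

def cross2d(a, b):
    """z-component of the cross product of two planar vectors."""
    return a[0] * b[1] - a[1] * b[0]

# Newton's and Euler's laws about the centre of mass give the accelerations.
a_cm = F / m
alpha = cross2d(end - cm, F) / I_cm

# Take moments about some other fixed point P and compare both sides.
P = np.array([-2.0, 1.0])
lhs = cross2d(end - P, F)                          # sum of moments about P
rhs = I_cm * alpha + cross2d(cm - P, m * a_cm)     # 2D Euler formula
print("sum of moments about P    :", lhs)
print("I_cm*alpha + r x (m*a_cm) :", rhs)
```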
Explanation and derivation
The distribution of internal forces in a deformable body is not necessarily uniform, i.e. the stresses vary from one point to the next. This variation of internal forces throughout the body is governed by Newton's second law of motion for the conservation of linear momentum and angular momentum, which for their simplest use are applied to a mass particle but are extended in continuum mechanics to a body of continuously distributed mass. For continuous bodies these laws are called Euler's laws of motion.
The total body force applied to a continuous body with mass m, mass density ρ, and volume V, is the volume integral integrated over the volume of the body:
F_B = ∫_V b dm = ∫_V ρ b dV,
where b is the force acting on the body per unit mass (dimensions of acceleration, misleadingly called the "body force"), and dm = ρ dV is an infinitesimal mass element of the body.
Body forces and contact forces acting on the body lead to corresponding moments (torques) of those forces relative to a given point. Thus, the total applied torque M about the origin is given by
M = M_B + M_C,
where M_B and M_C respectively indicate the moments caused by the body and contact forces.
Thus, the sum of all applied forces and torques (with respect to the origin of the coordinate system) acting on the body can be given as the sum of a volume and surface integral:
F = ∫_V ρ b dV + ∫_S t dS,
M = ∫_V r × ρ b dV + ∫_S r × t dS,
where t = t(n) is called the surface traction, integrated over the surface S of the body, and n in turn denotes a unit vector normal and directed outwards to the surface S.
Let the coordinate system be an inertial frame of reference, r be the position vector of a point particle in the continuous body with respect to the origin of the coordinate system, and v be the velocity vector of that point.
Euler's first axiom or law (law of balance of linear momentum or balance of forces) states that in an inertial frame the time rate of change of linear momentum p of an arbitrary portion of a continuous body is equal to the total applied force F acting on that portion, and it is expressed as
dp/dt = d/dt ∫_V ρ v dV = F.
Euler's second axiom or law (law of balance of angular momentum or balance of torques) states that in an inertial frame the time rate of change of angular momentum L of an arbitrary portion of a continuous body is equal to the total applied torque M acting on that portion, and it is expressed as
dL/dt = d/dt ∫_V r × ρ v dV = M,
where v is the velocity, V the volume, and the derivatives of p and L are material derivatives.
See also
List of topics named after Leonhard Euler
Euler's laws of rigid body rotations
Newton–Euler equations of motion with 6 components, combining Euler's two laws into one equation.
References
Equations of physics
Scientific observation
Rigid bodies
Precession
Precession is a change in the orientation of the rotational axis of a rotating body. In an appropriate reference frame it can be defined as a change in the first Euler angle, whereas the third Euler angle defines the rotation itself. In other words, if the axis of rotation of a body is itself rotating about a second axis, that body is said to be precessing about the second axis. A motion in which the second Euler angle changes is called nutation. In physics, there are two types of precession: torque-free and torque-induced.
In astronomy, precession refers to any of several slow changes in an astronomical body's rotational or orbital parameters. An important example is the steady change in the orientation of the axis of rotation of the Earth, known as the precession of the equinoxes.
Torque-free or torque neglected
Torque-free precession implies that no external moment (torque) is applied to the body. In torque-free precession, the angular momentum is a constant, but the angular velocity vector changes orientation with time. What makes this possible is a time-varying moment of inertia, or more precisely, a time-varying inertia matrix. The inertia matrix is composed of the moments of inertia of a body calculated with respect to separate coordinate axes (e.g. I_xx, I_yy, I_zz). If an object is asymmetric about its principal axis of rotation, the moment of inertia with respect to each coordinate direction will change with time, while preserving angular momentum. The result is that the component of the angular velocity of the body about each axis will vary inversely with each axis' moment of inertia.
The torque-free precession rate of an object with an axis of symmetry, such as a disk, spinning about an axis not aligned with that axis of symmetry can be calculated as follows:
ω_p = (I_s ω_s) / (I_p cos α),
where ω_p is the precession rate, ω_s is the spin rate about the axis of symmetry, I_s is the moment of inertia about the axis of symmetry, I_p is the moment of inertia about either of the other two equal perpendicular principal axes, and α is the angle between the moment of inertia direction and the symmetry axis.
When an object is not perfectly rigid, inelastic dissipation will tend to damp torque-free precession, and the rotation axis will align itself with one of the inertia axes of the body.
For a generic solid object without any axis of symmetry, the evolution of the object's orientation, represented (for example) by a rotation matrix R that transforms internal to external coordinates, may be numerically simulated. Given the object's fixed internal moment of inertia tensor I_0 and fixed external angular momentum L, the instantaneous angular velocity is
ω = R I_0^(−1) R^T L.
Precession occurs by repeatedly recalculating ω and applying a small rotation vector ω dt for the short time dt; e.g.:
R_new = exp([ω dt]_×) R
for the skew-symmetric matrix [ω dt]_×. The errors induced by finite time steps tend to increase the rotational kinetic energy, E = ½ ω · L; this unphysical tendency can be counteracted by repeatedly applying a small correcting rotation vector perpendicular to both ω and L.
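The scheme described above can be sketched numerically as follows; the inertia tensor, angular momentum and time step are arbitrary assumptions, the orientation update uses Rodrigues' rotation formula, and the small energy drift mentioned in the text is printed at the end (the energy-correcting step is omitted).

```python
import numpy as np

# Torque-free rotation of a generic rigid body with a fixed body-frame
# inertia tensor I0 and a fixed angular momentum L in external coordinates.
# R maps internal (body) coordinates to external (space) coordinates.
I0 = np.diag([1.0, 2.0, 3.0])          # assumed principal moments of inertia
I0_inv = np.linalg.inv(I0)
L = np.array([0.3, 2.0, 0.3])          # constant external angular momentum
R = np.eye(3)
dt, n_steps = 1e-3, 20000

def rotation_from_vector(phi):
    """Rodrigues' formula: rotation matrix for the rotation vector phi."""
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.eye(3)
    k = phi / angle
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])    # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def kinetic_energy(R):
    omega = R @ I0_inv @ R.T @ L        # instantaneous angular velocity
    return 0.5 * omega @ L

E0 = kinetic_energy(R)
for _ in range(n_steps):
    omega = R @ I0_inv @ R.T @ L        # recalculate the angular velocity
    R = rotation_from_vector(omega * dt) @ R   # apply the small rotation
print("relative kinetic-energy drift after", n_steps, "steps:",
      (kinetic_energy(R) - E0) / E0)
```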
Torque-induced
Torque-induced precession (gyroscopic precession) is the phenomenon in which the axis of a spinning object (e.g., a gyroscope) describes a cone in space when an external torque is applied to it. The phenomenon is commonly seen in a spinning toy top, but all rotating objects can undergo precession. If the speed of the rotation and the magnitude of the external torque are constant, the spin axis will move at right angles to the direction that would intuitively result from the external torque. In the case of a toy top, its weight is acting downwards from its center of mass and the normal force (reaction) of the ground is pushing up on it at the point of contact with the support. These two opposite forces produce a torque which causes the top to precess.
The device depicted on the right is gimbal mounted. From inside to outside there are three axes of rotation: the hub of the wheel, the gimbal axis, and the vertical pivot.
To distinguish between the two horizontal axes, rotation around the wheel hub will be called spinning, and rotation around the gimbal axis will be called pitching. Rotation around the vertical pivot axis is called rotation.
First, imagine that the entire device is rotating around the (vertical) pivot axis. Then, spinning of the wheel (around the wheel hub) is added. Imagine the gimbal axis to be locked, so that the wheel cannot pitch. The gimbal axis has sensors that measure whether there is a torque around the gimbal axis.
In the picture, one section of the wheel is singled out. At the depicted moment in time, this section is at the perimeter of the rotating motion around the (vertical) pivot axis. It therefore has a large rotating velocity with respect to the rotation around the pivot axis, and as it is forced closer to the pivot axis (by the wheel spinning further), the Coriolis effect, taken with respect to the vertical pivot axis, makes it tend to move in the direction of the top-left arrow in the diagram (shown at 45°), that is, in the direction of rotation around the pivot axis. Another section of the wheel, which is moving away from the pivot axis, experiences a force (again, a Coriolis force) acting in the same direction. Note that both arrows point in the same direction.
The same reasoning applies for the bottom half of the wheel, but there the arrows point in the opposite direction to that of the top arrows. Combined over the entire wheel, there is a torque around the gimbal axis when some spinning is added to rotation around a vertical axis.
It is important to note that the torque around the gimbal axis arises without any delay; the response is instantaneous.
In the discussion above, the setup was kept unchanging by preventing pitching around the gimbal axis. In the case of a spinning toy top, when the spinning top starts tilting, gravity exerts a torque. However, instead of rolling over, the spinning top just pitches a little. This pitching motion reorients the spinning top with respect to the torque that is being exerted. The result is that the torque exerted by gravity – via the pitching motion – elicits gyroscopic precession (which in turn yields a counter torque against the gravity torque) rather than causing the spinning top to fall to its side.
Precession or gyroscopic considerations have an effect on bicycle performance at high speed. Precession is also the mechanism behind gyrocompasses.
Classical (Newtonian)
Precession is the change of angular velocity and angular momentum produced by a torque. The general equation that relates the torque to the rate of change of angular momentum is:

$$\boldsymbol{\tau} = \frac{d\mathbf{L}}{dt}$$

where $\boldsymbol{\tau}$ and $\mathbf{L}$ are the torque and angular momentum vectors respectively.
Due to the way the torque vector is defined, it is perpendicular to the plane of the forces that create it. Thus it may be seen that the angular momentum vector will change perpendicular to those forces. Depending on how the forces are created, they will often rotate with the angular momentum vector, and then circular precession is created.
Under these circumstances the angular velocity of precession is given by:

$$\omega_\mathrm{p} = \frac{m g r \sin\theta}{I_\mathrm{s}\,\omega_\mathrm{s}\sin\theta} = \frac{m g r}{I_\mathrm{s}\,\omega_\mathrm{s}}$$

where $I_\mathrm{s}$ is the moment of inertia, $\omega_\mathrm{s}$ is the angular velocity of spin about the spin axis, $m$ is the mass, $g$ is the acceleration due to gravity, $\theta$ is the angle between the spin axis and the axis of precession and $r$ is the distance between the center of mass and the pivot. The torque vector originates at the center of mass. Using $\omega = 2\pi/T$, we find that the period of precession is given by:

$$T_\mathrm{p} = \frac{4\pi^2 I_\mathrm{s}}{m g r\, T_\mathrm{s}} = \frac{4\pi^2 I_\mathrm{s}\sin\theta}{\tau\, T_\mathrm{s}}$$

where $I_\mathrm{s}$ is the moment of inertia, $T_\mathrm{s}$ is the period of spin about the spin axis, and $\tau$ is the torque. In general, the problem is more complicated than this, however.
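For orientation, a short worked example (all numbers assumed, not from the source) applying the precession-rate formula above to a toy top modeled as a uniform spinning disk:

```python
# Gyroscopic precession of a toy top: omega_p = m g r / (I_s * omega_s).
# Mass, radius, pivot distance and spin rate are illustrative assumptions.
import math

m = 0.10           # kg
R_disk = 0.03      # m, disk radius
r = 0.02           # m, pivot-to-center-of-mass distance
g = 9.81           # m/s^2
spin_rps = 30.0    # revolutions per second

I_s = 0.5 * m * R_disk**2            # uniform disk, about its symmetry axis
omega_s = 2.0 * math.pi * spin_rps   # rad/s

omega_p = m * g * r / (I_s * omega_s)   # note that sin(theta) cancels
T_p = 2.0 * math.pi / omega_p
print(f"precession: {omega_p:.2f} rad/s, period {T_p:.2f} s")
```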
Relativistic (Einsteinian)
The special and general theories of relativity give three types of corrections to the Newtonian precession, of a gyroscope near a large mass such as Earth, described above. They are:
Thomas precession, a special-relativistic correction accounting for an object (such as a gyroscope) being accelerated along a curved path.
de Sitter precession, a general-relativistic correction accounting for the Schwarzschild metric of curved space near a large non-rotating mass.
Lense–Thirring precession, a general-relativistic correction accounting for the frame dragging by the Kerr metric of curved space near a large rotating mass.
Schwarzschild geodesics (sometimes called Schwarzschild precession) are used in the prediction of the anomalous perihelion precession of the planets, most notably for the accurate prediction of the apsidal precession of Mercury.
Astronomy
In astronomy, precession refers to any of several gravity-induced, slow and continuous changes in an astronomical body's rotational axis or orbital path. Precession of the equinoxes, perihelion precession, changes in the tilt of Earth's axis to its orbit, and the eccentricity of its orbit over tens of thousands of years are all important parts of the astronomical theory of ice ages. (See Milankovitch cycles.)
Axial precession (precession of the equinoxes)
Axial precession is the movement of the rotational axis of an astronomical body, whereby the axis slowly traces out a cone. In the case of Earth, this type of precession is also known as the precession of the equinoxes, lunisolar precession, or precession of the equator. Earth goes through one such complete precessional cycle in a period of approximately 26,000 years or 1° every 72 years, during which the positions of stars will slowly change in both equatorial coordinates and ecliptic longitude. Over this cycle, Earth's north axial pole moves from where it is now, within 1° of Polaris, in a circle around the ecliptic pole, with an angular radius of about 23.5°.
The ancient Greek astronomer Hipparchus (c. 190–120 BC) is generally accepted to be the earliest known astronomer to recognize and assess the precession of the equinoxes at about 1° per century (which is not far from the actual value for antiquity, 1.38°), although there is some minor dispute about whether he was the first. In ancient China, the Jin-dynasty scholar-official Yu Xi (fl. 307–345 AD) made a similar discovery centuries later, noting that the position of the Sun during the winter solstice had drifted roughly one degree over the course of fifty years relative to the position of the stars. The precession of Earth's axis was later explained by Newtonian physics. Being an oblate spheroid, Earth has a non-spherical shape, bulging outward at the equator. The gravitational tidal forces of the Moon and Sun apply torque to the equator, attempting to pull the equatorial bulge into the plane of the ecliptic, but instead causing it to precess. The torque exerted by the planets, particularly Jupiter, also plays a role.
Apsidal precession
The orbits of planets around the Sun do not really follow an identical ellipse each time, but actually trace out a flower-petal shape because the major axis of each planet's elliptical orbit also precesses within its orbital plane, partly in response to perturbations in the form of the changing gravitational forces exerted by other planets. This is called perihelion precession or apsidal precession.
In the adjunct image, Earth's apsidal precession is illustrated. As the Earth travels around the Sun, its elliptical orbit rotates gradually over time. The eccentricity of its ellipse and the precession rate of its orbit are exaggerated for visualization. Most orbits in the Solar System have a much smaller eccentricity and precess at a much slower rate, making them nearly circular and nearly stationary.
Discrepancies between the observed perihelion precession rate of the planet Mercury and that predicted by classical mechanics were prominent among the forms of experimental evidence leading to the acceptance of Einstein's Theory of Relativity (in particular, his General Theory of Relativity), which accurately predicted the anomalies. Deviating from Newton's law, Einstein's theory of gravitation predicts an extra attractive term proportional to $1/r^4$, which accurately gives the observed excess turning rate of 43 arcseconds every 100 years.
Nodal precession
Orbital nodes also precess over time.
See also
Larmor precession
Nutation
Polar motion
Precession (mechanical)
Precession as a form of parallel transport
References
External links
Explanation and derivation of formula for precession of a top
Precession and the Milankovich theory From Stargazers to Starships
Earth
Dynamics (mechanics)
Quintessence (physics)
In physics, quintessence is a hypothetical form of dark energy, more precisely a scalar field, postulated as an explanation of the observation of an accelerating rate of expansion of the universe. The first example of this scenario was proposed by Ratra and Peebles (1988) and Wetterich (1988). The concept was expanded to more general types of time-varying dark energy, and the term "quintessence" was first introduced in a 1998 paper by Robert R. Caldwell, Rahul Dave and Paul Steinhardt. It has been proposed by some physicists to be a fifth fundamental force. Quintessence differs from the cosmological constant explanation of dark energy in that it is dynamic; that is, it changes over time, unlike the cosmological constant which, by definition, does not change. Quintessence can be either attractive or repulsive depending on the ratio of its kinetic and potential energy. Those working with this postulate believe that quintessence became repulsive about ten billion years ago, about 3.5 billion years after the Big Bang.
A group of researchers argued in 2021 that observations of the Hubble tension may imply that only quintessence models with a nonzero coupling constant are viable.
Terminology
The name comes from quinta essentia (fifth element). So called in Latin starting from the Middle Ages, this was the (first) element added by Aristotle to the other four ancient classical elements because he thought it was the essence of the celestial world. Aristotle posited it to be a pure, fine, and primigenial element. Later scholars identified this element with aether. Similarly, modern quintessence would be the fifth known "dynamical, time-dependent, and spatially inhomogeneous" contribution to the overall mass–energy content of the universe.
Of course, the other four components are not the ancient Greek classical elements, but rather "baryons, neutrinos, dark matter, [and] radiation." Although neutrinos are sometimes considered radiation, the term "radiation" in this context is only used to refer to massless photons. Spatial curvature of the cosmos (which has not been detected) is excluded because it is non-dynamical and homogeneous; the cosmological constant would not be considered a fifth component in this sense, because it is non-dynamical, homogeneous, and time-independent.
Scalar field
Quintessence (Q) is a scalar field with an equation of state where $w_q$, the ratio of its pressure $p_q$ and density $\rho_q$, is given by the potential energy $V(Q)$ and a kinetic term:

$$w_q = \frac{p_q}{\rho_q} = \frac{\tfrac{1}{2}\dot{Q}^2 - V(Q)}{\tfrac{1}{2}\dot{Q}^2 + V(Q)}$$

Hence, quintessence is dynamic, and generally has a density and $w_q$ parameter that vary with time. By contrast, a cosmological constant is static, with a fixed energy density and $w_q = -1$.
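The limiting behaviors implied by this equation of state are easy to check numerically; the following sketch (purely illustrative, with made-up field values) evaluates the equation-of-state parameter for a potential-dominated and a kinetic-dominated field:

```python
# w_q = (kinetic - potential) / (kinetic + potential) for a homogeneous scalar field.
def quintessence_w(phi_dot, V):
    kinetic = 0.5 * phi_dot**2
    return (kinetic - V) / (kinetic + V)

print(quintessence_w(phi_dot=1e-3, V=1.0))   # potential-dominated: w close to -1
print(quintessence_w(phi_dot=2.0, V=1e-3))   # kinetic-dominated:  w close to +1
```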
Tracker behavior
Many models of quintessence have a tracker behavior, which according to Ratra and Peebles (1988) and Paul Steinhardt et al. (1999) partly solves the cosmological constant problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter-radiation equality, which triggers quintessence to start having characteristics similar to dark energy, eventually dominating the universe. This naturally sets the low scale of the dark energy. When comparing the predicted expansion rate of the universe as given by the tracker solutions with cosmological data, a main feature of tracker solutions is that one needs four parameters to properly describe the behavior of their equation of state, whereas it has been shown that at most a two-parameter model can optimally be constrained by mid-term future data (horizon 2015–2020).
Specific models
Some special cases of quintessence are phantom energy, in which $w_q < -1$, and k-essence (short for kinetic quintessence), which has a non-standard form of kinetic energy. If phantom energy were to exist, it would cause a big rip in the universe: the ever-growing energy density of dark energy would cause the expansion of the universe to increase at a faster-than-exponential rate.
Holographic dark energy
Holographic dark energy models, compared with cosmological constant models, imply a high degeneracy. It has been suggested that dark energy might originate from quantum fluctuations of spacetime, and is limited by the event horizon of the universe.
Studies with quintessence dark energy found that it dominates gravitational collapse in a spacetime simulation, based on the holographic thermalization. These results show that the smaller the state parameter of quintessence is, the harder it is for the plasma to thermalize.
Quintom scenario
In 2004, when scientists fitted the evolution of dark energy with the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A proven no-go theorem indicates that this situation, called the Quintom scenario, requires at least two degrees of freedom for dark energy models involving ideal gases or scalar fields.
See also
Aether (classical element)
References
Further reading
Dark energy
Angular acceleration
In physics, angular acceleration (symbol α, alpha) is the time rate of change of angular velocity. Following the two types of angular velocity, spin angular velocity and orbital angular velocity, the respective types of angular acceleration are: spin angular acceleration, involving a rigid body about an axis of rotation intersecting the body's centroid; and orbital angular acceleration, involving a point particle and an external axis.
Angular acceleration has physical dimensions of angle per time squared, measured in SI units of radians per second squared (rad/s²). In two dimensions, angular acceleration is a pseudoscalar whose sign is taken to be positive if the angular speed increases counterclockwise or decreases clockwise, and is taken to be negative if the angular speed increases clockwise or decreases counterclockwise. In three dimensions, angular acceleration is a pseudovector.
For rigid bodies, angular acceleration must be caused by a net external torque. However, this is not so for non-rigid bodies: For example, a figure skater can speed up their rotation (thereby obtaining an angular acceleration) simply by contracting their arms and legs inwards, which involves no external torque.
Orbital angular acceleration of a point particle
Particle in two dimensions
In two dimensions, the orbital angular acceleration is the rate at which the two-dimensional orbital angular velocity of the particle about the origin changes. The instantaneous angular velocity ω at any point in time is given by

$$\omega = \frac{v_\perp}{r},$$

where $r$ is the distance from the origin and $v_\perp$ is the cross-radial component of the instantaneous velocity (i.e. the component perpendicular to the position vector), which by convention is positive for counter-clockwise motion and negative for clockwise motion.

Therefore, the instantaneous angular acceleration α of the particle is given by

$$\alpha = \frac{d}{dt}\left(\frac{v_\perp}{r}\right).$$

Expanding the right-hand side using the product rule from differential calculus, this becomes

$$\alpha = \frac{1}{r}\frac{dv_\perp}{dt} - \frac{v_\perp}{r^2}\frac{dr}{dt}.$$

In the special case where the particle undergoes circular motion about the origin, $\frac{dv_\perp}{dt}$ becomes just the tangential acceleration $a_t$, and $\frac{dr}{dt}$ vanishes (since the distance from the origin stays constant), so the above equation simplifies to

$$\alpha = \frac{a_t}{r}.$$
In two dimensions, angular acceleration is a number with plus or minus sign indicating orientation, but not pointing in a direction. The sign is conventionally taken to be positive if the angular speed increases in the counter-clockwise direction or decreases in the clockwise direction, and the sign is taken negative if the angular speed increases in the clockwise direction or decreases in the counter-clockwise direction. Angular acceleration then may be termed a pseudoscalar, a numerical quantity which changes sign under a parity inversion, such as inverting one axis or switching the two axes.
Particle in three dimensions
In three dimensions, the orbital angular acceleration is the rate at which the three-dimensional orbital angular velocity vector changes with time. The instantaneous angular velocity vector $\boldsymbol\omega$ at any point in time is given by

$$\boldsymbol\omega = \frac{\mathbf{r}\times\mathbf{v}}{r^2},$$

where $\mathbf{r}$ is the particle's position vector, $r$ its distance from the origin, and $\mathbf{v}$ its velocity vector.

Therefore, the orbital angular acceleration is the vector $\boldsymbol\alpha$ defined by

$$\boldsymbol\alpha = \frac{d}{dt}\left(\frac{\mathbf{r}\times\mathbf{v}}{r^2}\right).$$

Expanding this derivative using the product rule for cross-products and the ordinary quotient rule, one gets:

$$\boldsymbol\alpha = \frac{\mathbf{r}\times\mathbf{a}}{r^2} - \frac{2}{r^3}\frac{dr}{dt}\,(\mathbf{r}\times\mathbf{v}).$$

Since $\mathbf{r}\times\mathbf{v}$ is just $r^2\boldsymbol\omega$, the second term may be rewritten as $-\frac{2}{r}\frac{dr}{dt}\,\boldsymbol\omega$. In the case where the distance $r$ of the particle from the origin does not change with time (which includes circular motion as a subcase), the second term vanishes and the above formula simplifies to

$$\boldsymbol\alpha = \frac{\mathbf{r}\times\mathbf{a}}{r^2}.$$

From the above equation, one can recover the cross-radial acceleration in this special case as:

$$\mathbf{a}_\perp = \boldsymbol\alpha\times\mathbf{r}.$$
Unlike in two dimensions, the angular acceleration in three dimensions need not be associated with a change in the angular speed: if the particle's position vector "twists" in space, changing its instantaneous plane of angular displacement, the change in the direction of the angular velocity will still produce a nonzero angular acceleration. This cannot happen if the position vector is restricted to a fixed plane, in which case the angular velocity has a fixed direction perpendicular to the plane.
The angular acceleration vector is more properly called a pseudovector: It has three components which transform under rotations in the same way as the Cartesian coordinates of a point do, but which do not transform like Cartesian coordinates under reflections.
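The three-dimensional relations above are easy to verify numerically. The following sketch (an illustration, not from the source) compares the formula for α against a finite-difference derivative of ω for a simple polynomial trajectory:

```python
# Check that alpha = (r x a)/|r|^2 - (2/|r|)(d|r|/dt) * omega matches d(omega)/dt.
import numpy as np

def pos(t):
    return np.array([t, t**2, t**3])

def vel(t):
    return np.array([1.0, 2.0 * t, 3.0 * t**2])

def acc(t):
    return np.array([0.0, 2.0, 6.0 * t])

def omega(t):
    r = pos(t)
    return np.cross(r, vel(t)) / np.dot(r, r)

t, h = 1.3, 1e-6
r, v, a = pos(t), vel(t), acc(t)
rnorm = np.linalg.norm(r)
rdot = np.dot(r, v) / rnorm                       # d|r|/dt

alpha_formula = np.cross(r, a) / rnorm**2 - (2.0 / rnorm) * rdot * omega(t)
alpha_numeric = (omega(t + h) - omega(t - h)) / (2.0 * h)
print(alpha_formula)
print(alpha_numeric)    # the two agree to numerical precision
```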
Relation to torque
The net torque on a point particle is defined to be the pseudovector

$$\boldsymbol{\tau} = \mathbf{r}\times\mathbf{F},$$

where $\mathbf{F}$ is the net force on the particle.

Torque is the rotational analogue of force: it induces change in the rotational state of a system, just as force induces change in the translational state of a system. As force on a particle is connected to acceleration by the equation $\mathbf{F} = m\mathbf{a}$, one may write a similar equation connecting torque on a particle to angular acceleration, though this relation is necessarily more complicated.

First, substituting $\mathbf{F} = m\mathbf{a}$ into the above equation for torque, one gets

$$\boldsymbol{\tau} = m\,(\mathbf{r}\times\mathbf{a}).$$

From the previous section:

$$\mathbf{r}\times\mathbf{a} = r^2\boldsymbol{\alpha} + 2 r \frac{dr}{dt}\,\boldsymbol{\omega},$$

where $\boldsymbol{\alpha}$ is orbital angular acceleration and $\boldsymbol{\omega}$ is orbital angular velocity. Therefore:

$$\boldsymbol{\tau} = m r^2 \boldsymbol{\alpha} + 2 m r \frac{dr}{dt}\,\boldsymbol{\omega}.$$

In the special case of constant distance $r$ of the particle from the origin, the second term in the above equation vanishes and the above equation simplifies to

$$\boldsymbol{\tau} = m r^2 \boldsymbol{\alpha},$$

which can be interpreted as a "rotational analogue" to $\mathbf{F} = m\mathbf{a}$, where the quantity $m r^2$ (known as the moment of inertia of the particle) plays the role of the mass $m$. However, unlike $\mathbf{F} = m\mathbf{a}$, this equation does not apply to an arbitrary trajectory, only to a trajectory contained within a spherical shell about the origin.
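As a quick illustration of this rotational analogue (with assumed numbers, e.g. a ball whirled on a string of fixed length):

```python
# tau = m r^2 alpha for a point mass at fixed radius.
m = 0.25    # kg (assumed)
r = 0.80    # m, fixed radius (assumed)
F_t = 2.0   # N, tangential force (assumed)

tau = r * F_t          # torque magnitude (force perpendicular to the radius)
I = m * r**2           # moment of inertia of the point particle
alpha = tau / I        # angular acceleration, rad/s^2
print(tau, I, alpha)
```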
See also
Angular momentum
Angular frequency
Angular velocity
Chirpyness
Rotational acceleration
Torque
References
Acceleration
Kinematic properties
Rotation
Torque
Temporal rates
Time and motion study
A time and motion study (or time-motion study) is a business efficiency technique combining the Time Study work of Frederick Winslow Taylor with the Motion Study work of Frank and Lillian Gilbreth (the couple best known from the 1950 biographical film and book Cheaper by the Dozen). It is a major part of scientific management (Taylorism). After its first introduction, time study developed in the direction of establishing standard times, while motion study evolved into a technique for improving work methods. The two techniques became integrated and refined into a widely accepted method applicable to the improvement and upgrading of work systems. This integrated approach to work system improvement is known as methods engineering and it is applied today to industrial as well as service organizations, including banks, schools and hospitals.
Time studies
Time study is a direct and continuous observation of a task, using a timekeeping device (e.g., decimal minute stopwatch, computer-assisted electronic stopwatch, and videotape camera) to record the time taken to accomplish a task and it is often used if at least one of the following applies:
There are repetitive work cycles of short to long duration.
A wide variety of dissimilar work is performed.
Process control elements constitute a part of the cycle.
The Industrial Engineering Terminology Standard defines time study as "a work measurement technique consisting of careful time measurement of the task with a time measuring instrument, adjusted for any observed variance from normal effort or pace and to allow adequate time for such items as foreign elements, unavoidable or machine delays, rest to overcome fatigue, and personal needs."
The systems of time and motion studies are frequently assumed to be interchangeable terms that are descriptive of equivalent theories. However, the underlying principles and the rationale for the establishment of each respective method are dissimilar, despite originating within the same school of thought.
The application of science to business problems and the use of time-study methods in standard setting and the planning of work were pioneered by Frederick Winslow Taylor. Taylor liaised with factory managers and from the success of these discussions wrote several papers proposing the use of wage-contingent performance standards based on scientific time study. At its most basic level time studies involved breaking down each job into component parts, timing each part and rearranging the parts into the most efficient method of working. By counting and calculating, Taylor wanted to transform management, which was essentially an oral tradition, into a set of calculated and written techniques.
Taylor and his colleagues placed emphasis on the content of a fair day's work and sought to maximize productivity irrespective of the physiological cost to the worker. For example, Taylor thought unproductive time usage (soldiering) to be the deliberate attempt of workers to promote their best interests and to keep employers ignorant of how fast work could be carried out. This instrumental view of human behavior by Taylor prepared the path for human relations to supersede scientific management in terms of literary success and managerial application.
Direct time study procedure
Following is the procedure developed by Mikell Groover for a direct time study:
Define and document the standard method.
Divide the task into work elements.
These first two steps are conducted prior to the actual timing. They familiarize the analyst with the task and allow the analyst to attempt to improve the work procedure before defining the standard time.
Time the work elements to obtain the observed time for the task.
Evaluate the worker's pace relative to standard performance (performance rating), to determine the normal time.
Note that steps 3 and 4 are accomplished simultaneously. During these steps, several different work cycles are timed, and each cycle performance is rated independently. Finally, the values collected at these steps are averaged to get the normalized time.
Apply an allowance to the normal time to compute the standard time. The allowance factors that are needed in the work are then added to compute the standard time for the task.
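A minimal sketch of this arithmetic, with made-up observations, ratings and allowance, might look as follows:

```python
# Direct time study: observed time -> normal time (performance rating)
#                    -> standard time (allowance).  All data are assumed.
observed_cycles = [2.10, 1.95, 2.25, 2.05, 2.15]   # minutes per cycle
ratings = [1.05, 0.95, 1.10, 1.00, 1.05]           # worker pace vs. standard
allowance = 0.15                                   # fatigue, delays, personal needs

# Each cycle is rated independently, then the normalized times are averaged.
normal_time = sum(t * r for t, r in zip(observed_cycles, ratings)) / len(observed_cycles)
standard_time = normal_time * (1.0 + allowance)

print(f"normal time:   {normal_time:.2f} min/cycle")
print(f"standard time: {standard_time:.2f} min/cycle")
```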
Conducting time studies
According to good practice guidelines for production studies a comprehensive time study consists of:
Study goal setting;
Experimental design;
Time data collection;
Data analysis;
Reporting.
Easy analysis of working areas
The collection of time data can be done in several ways, depending on study goal and environmental conditions. Time and motion data can be captured with a common stopwatch, a handheld computer or a video recorder. There are a number of dedicated software packages used to turn a palmtop or a handheld PC into a time study device. As an alternative, time and motion data can be collected automatically from the memory of computer-control machines (i.e. automated time studies).
Criticisms
In response to Taylor's time studies and view of human nature, many strong criticisms and reactions were recorded. Unions, for example, regarded time study as a disguised tool of management designed to standardize and intensify the pace of production. Similarly, individuals such as Gilbreth (1909), Cadbury and Marshall heavily criticized Taylor, arguing that his work was pervaded by subjectivity. For example, Cadbury in reply to Thompson stated that under scientific management employee skills and initiatives are passed from the individual to management, a view reiterated by Nyland. In addition, Taylor's critics condemned the lack of scientific substance in his time studies, in the sense that they relied heavily on individual interpretations of what workers actually do. However, the value in rationalizing production is indisputable and supported by academics such as Gantt, Ford and Munsterberg, and Taylor society members Mr C.G. Renold, Mr W.H. Jackson and Mr C.B. Thompson.
Proper time studies are based on repeated observation, so that motions performed on the same part differently by one or many workers can be recorded, to determine those values that are truly repetitive and measurable.
Motion studies
In contrast to, and motivated by, Taylor's time study methods, the Gilbreths proposed a technical language, allowing for the analysis of the labor process in a scientific context. The Gilbreths made use of scientific insights to develop a study method based upon the analysis of "work motions", consisting in part of filming the details of a worker's activities and their body posture while recording the time. The films served two main purposes. One was the visual record of how work had been done, emphasizing areas for improvement. Secondly, the films also served the purpose of training workers about the best way to perform their work. This method allowed the Gilbreths to build on the best elements of these workflows and to create a standardized best practice.
Taylor vs. Gilbreths
Although for Taylor, motion studies remained subordinate to time studies, the attention he paid to the motion study technique demonstrated the seriousness with which he considered the Gilbreths' method. The split with Taylor in 1914, on the basis of attitudes to workers, meant the Gilbreths had to argue contrary to the trade unionists, government commissions and Robert F. Hoxie who believed scientific management was unstoppable. The Gilbreths were charged with the task of proving that motion study particularly, and scientific management generally, increased industrial output in ways which improved and did not detract from workers' mental and physical strength. This was no simple task given the propaganda fuelling the Hoxie report and the consequent union opposition to scientific management. In addition, the Gilbreths' credibility and academic success continued to be hampered by Taylor, who held the view that motion studies were nothing more than a continuation of his work.
Both Taylor and the Gilbreths continue to be criticized for their respective work, but it should be remembered that they were writing at a time of industrial reorganization and the emergence of large, complex organizations with new forms of technology. Furthermore, to equate scientific management merely with time and motion study and consequently labor control not only misconceives the scope of scientific management but also misinterprets Taylor's incentives for proposing a different style of managerial thought.
Health care time and motion study
A health care time and motion study is used to research and track the efficiency and quality of health care workers. In the case of nurses, numerous programs have been initiated to increase the percent of a shift nurses spend providing direct care to patients. Prior to interventions nurses were found to spend ~20% of their time doing direct care. After focused intervention, some hospitals doubled that number, with some even exceeding 70% of shift time with patients, resulting in reduced errors, codes, and falls.
Methods
External observer: Someone visually follows the person being observed, either contemporaneously or via video recording. This method presents additional expense as it usually requires a 1 to 1 ratio of research time to subject time. An advantage is the data can be more consistent, complete, and accurate than with self-reporting.
Self-reporting: Self-reported studies require the target to record time and activity data. This can be done contemporaneously by having subjects stop and start a timer when completing a task, through work sampling where the subject records what they are doing at determined or random intervals, or by having the subject journal activities at the end of the day. Self-reporting introduces errors that may not be present through other methods, including errors in temporal perception and memory, as well as the motivation to manipulate the data.
Automation: Motion can be tracked with GPS. Documentation activities can be tracked through monitoring software embedded in the applications used to create documentation. Badge scans can also create a log of activity.
See also
Ergonomics
Human factors
Methods-time measurement
Memo motion
Predetermined motion time system
Standard time
Industrial Engineering
Evolutionary economics
References
Economic efficiency
Industrial engineering
Articles containing video clips
Physical chemistry
Physical chemistry is the study of macroscopic and microscopic phenomena in chemical systems in terms of the principles, practices, and concepts of physics such as motion, energy, force, time, thermodynamics, quantum chemistry, statistical mechanics, analytical dynamics and chemical equilibria.
Physical chemistry, in contrast to chemical physics, is predominantly (but not always) a supra-molecular science, as the majority of the principles on which it was founded relate to the bulk rather than the molecular or atomic structure alone (for example, chemical equilibrium and colloids).
Some of the relationships that physical chemistry strives to understand include the effects of:
Intermolecular forces that act upon the physical properties of materials (plasticity, tensile strength, surface tension in liquids).
Reaction kinetics on the rate of a reaction.
The identity of ions and the electrical conductivity of materials.
Surface science and electrochemistry of cell membranes.
Interaction of one body with another in terms of quantities of heat and work, called thermodynamics.
Transfer of heat between a chemical system and its surroundings during a change of phase or a chemical reaction, called thermochemistry.
Study of the colligative properties of the number of species present in solution.
Number of phases, number of components and degrees of freedom (or variance), which can be correlated with one another with the help of the phase rule.
Reactions of electrochemical cells.
Behaviour of microscopic systems using quantum mechanics and macroscopic systems using statistical thermodynamics.
Calculation of the energy of electron movement in molecules and metal complexes.
Key concepts
The key concepts of physical chemistry are the ways in which pure physics is applied to chemical problems.
One of the key concepts in classical chemistry is that all chemical compounds can be described as groups of atoms bonded together and chemical reactions can be described as the making and breaking of those bonds. Predicting the properties of chemical compounds from a description of atoms and how they bond is one of the major goals of physical chemistry. To describe the atoms and bonds precisely, it is necessary to know both where the nuclei of the atoms are, and how electrons are distributed around them.
Disciplines
Quantum chemistry, a subfield of physical chemistry especially concerned with the application of quantum mechanics to chemical problems, provides tools to determine how strong and what shape bonds are, how nuclei move, and how light can be absorbed or emitted by a chemical compound. Spectroscopy is the related sub-discipline of physical chemistry which is specifically concerned with the interaction of electromagnetic radiation with matter.
Another set of important questions in chemistry concerns what kind of reactions can happen spontaneously and which properties are possible for a given chemical mixture. This is studied in chemical thermodynamics, which sets limits on quantities like how far a reaction can proceed, or how much energy can be converted into work in an internal combustion engine, and which provides links between properties like the thermal expansion coefficient and rate of change of entropy with pressure for a gas or a liquid. It can frequently be used to assess whether a reactor or engine design is feasible, or to check the validity of experimental data. To a limited extent, quasi-equilibrium and non-equilibrium thermodynamics can describe irreversible changes. However, classical thermodynamics is mostly concerned with systems in equilibrium and reversible changes and not what actually does happen, or how fast, away from equilibrium.
Which reactions do occur and how fast is the subject of chemical kinetics, another branch of physical chemistry. A key idea in chemical kinetics is that for reactants to react and form products, most chemical species must go through transition states which are higher in energy than either the reactants or the products and serve as a barrier to reaction. In general, the higher the barrier, the slower the reaction. A second is that most chemical reactions occur as a sequence of elementary reactions, each with its own transition state. Key questions in kinetics include how the rate of reaction depends on temperature and on the concentrations of reactants and catalysts in the reaction mixture, as well as how catalysts and reaction conditions can be engineered to optimize the reaction rate.
The fact that how fast reactions occur can often be specified with just a few concentrations and a temperature, instead of needing to know all the positions and speeds of every molecule in a mixture, is a special case of another key concept in physical chemistry, which is that to the extent an engineer needs to know, everything going on in a mixture of very large numbers (perhaps of the order of the Avogadro constant, 6 × 10²³) of particles can often be described by just a few variables like pressure, temperature, and concentration. The precise reasons for this are described in statistical mechanics, a specialty within physical chemistry which is also shared with physics. Statistical mechanics also provides ways to predict the properties we see in everyday life from molecular properties without relying on empirical correlations based on chemical similarities.
History
The term "physical chemistry" was coined by Mikhail Lomonosov in 1752, when he presented a lecture course entitled "A Course in True Physical Chemistry" before the students of Petersburg University. In the preamble to these lectures he gives the definition: "Physical chemistry is the science that must explain under provisions of physical experiments the reason for what is happening in complex bodies through chemical operations".
Modern physical chemistry originated in the 1860s to 1880s with work on chemical thermodynamics, electrolytes in solutions, chemical kinetics and other subjects. One milestone was the publication in 1876 by Josiah Willard Gibbs of his paper, On the Equilibrium of Heterogeneous Substances. This paper introduced several of the cornerstones of physical chemistry, such as Gibbs energy, chemical potentials, and Gibbs' phase rule.
The first scientific journal specifically in the field of physical chemistry was the German journal, Zeitschrift für Physikalische Chemie, founded in 1887 by Wilhelm Ostwald and Jacobus Henricus van 't Hoff. Together with Svante August Arrhenius, these were the leading figures in physical chemistry in the late 19th century and early 20th century. All three were awarded the Nobel Prize in Chemistry between 1901 and 1909.
Developments in the following decades include the application of statistical mechanics to chemical systems and work on colloids and surface chemistry, where Irving Langmuir made many contributions. Another important step was the development of quantum mechanics into quantum chemistry from the 1930s, where Linus Pauling was one of the leading names. Theoretical developments have gone hand in hand with developments in experimental methods, where the use of different forms of spectroscopy, such as infrared spectroscopy, microwave spectroscopy, electron paramagnetic resonance and nuclear magnetic resonance spectroscopy, is probably the most important 20th century development.
Further development in physical chemistry may be attributed to discoveries in nuclear chemistry, especially in isotope separation (before and during World War II), more recent discoveries in astrochemistry, as well as the development of calculation algorithms in the field of "additive physicochemical properties" (practically all physicochemical properties, such as boiling point, critical point, surface tension, vapor pressure, etc.—more than 20 in all—can be precisely calculated from chemical structure alone, even if the chemical molecule remains unsynthesized), and herein lies the practical importance of contemporary physical chemistry.
See Group contribution method, Lydersen method, Joback method, Benson group increment theory, quantitative structure–activity relationship
Journals
Some journals that deal with physical chemistry include
Zeitschrift für Physikalische Chemie (1887)
Journal of Physical Chemistry A (from 1896 as Journal of Physical Chemistry, renamed in 1997)
Physical Chemistry Chemical Physics (from 1999, formerly Faraday Transactions with a history dating back to 1905)
Macromolecular Chemistry and Physics (1947)
Annual Review of Physical Chemistry (1950)
Molecular Physics (1957)
Journal of Physical Organic Chemistry (1988)
Journal of Physical Chemistry B (1997)
ChemPhysChem (2000)
Journal of Physical Chemistry C (2007)
Journal of Physical Chemistry Letters (from 2010, combined letters previously published in the separate journals)
Historical journals that covered both chemistry and physics include Annales de chimie et de physique (started in 1789, published under the name given here from 1815 to 1914).
Branches and related topics
Chemical thermodynamics
Chemical kinetics
Statistical mechanics
Quantum chemistry
Electrochemistry
Photochemistry
Surface chemistry
Solid-state chemistry
Spectroscopy
Biophysical chemistry
Materials science
Physical organic chemistry
Micromeritics
See also
List of important publications in chemistry#Physical chemistry
List of unsolved problems in chemistry#Physical chemistry problems
Physical biochemistry
:Category:Physical chemists
References
External links
The World of Physical Chemistry (Keith J. Laidler, 1993)
Physical Chemistry from Ostwald to Pauling (John W. Servos, 1996)
Physical Chemistry: neither Fish nor Fowl? (Joachim Schummer, The Autonomy of Chemistry, Würzburg, Königshausen & Neumann, 1998, pp. 135–148)
The Cambridge History of Science: The modern physical and mathematical sciences (Mary Jo Nye, 2003)
Kinetic theory of gases
The kinetic theory of gases is a simple classical model of the thermodynamic behavior of gases. It treats a gas as composed of numerous particles, too small to see with a microscope, which are constantly in random motion. Their collisions with each other and with the walls of their container are used to explain physical properties of the gas—for example, the relationship between its temperature, pressure, and volume. The particles are now known to be the atoms or molecules of the gas.
The basic version of the model describes an ideal gas. It treats the collisions as perfectly elastic and as the only interaction between the particles, which are additionally assumed to be much smaller than their average distance apart.
The theory's introduction allowed many principal concepts of thermodynamics to be established. It explains the macroscopic properties of gases, such as volume, pressure, and temperature, as well as transport properties such as viscosity, thermal conductivity and mass diffusivity. Due to the time reversibility of microscopic dynamics (microscopic reversibility), the kinetic theory is also connected to the principle of detailed balance, in terms of the fluctuation-dissipation theorem (for Brownian motion) and the Onsager reciprocal relations.
The theory was historically significant as the first explicit exercise of the ideas of statistical mechanics.
History
Kinetic theory of matter
Antiquity
In about 50 BCE, the Roman philosopher Lucretius proposed that apparently static macroscopic bodies were composed on a small scale of rapidly moving atoms all bouncing off each other. This Epicurean atomistic point of view was rarely considered in the subsequent centuries, when Aristotelian ideas were dominant.
Modern era
An early scientific reflection on the microscopic and kinetic nature of matter and heat is found in a work by Mikhail Lomonosov, in which he wrote:
Kinetic theory of gases
In 1738 Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the pressure of the gas, and that their average kinetic energy determines the temperature of the gas. The theory was not immediately accepted, in part because conservation of energy had not yet been established, and it was not obvious to physicists how the collisions between molecules could be perfectly elastic.
Pioneers of the kinetic theory, whose work was also largely neglected by their contemporaries, were Mikhail Lomonosov (1747), Georges-Louis Le Sage (ca. 1780, published 1818), John Herapath (1816) and John James Waterston (1843), which connected their research with the development of mechanical explanations of gravitation.
In 1856 August Krönig created a simple gas-kinetic model, which only considered the translational motion of the particles. In 1857 Rudolf Clausius developed a similar, but more sophisticated version of the theory, which included translational and, contrary to Krönig, also rotational and vibrational molecular motions. In this same work he introduced the concept of mean free path of a particle. In 1859, after reading a paper about the diffusion of molecules by Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics. Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. In his 1873 thirteen page article 'Molecules', Maxwell states: "we are told that an 'atom' is a material point, invested and surrounded by 'potential forces' and that when 'flying molecules' strike against a solid body in constant succession it causes what is called pressure of air and other gases."
In 1871, Ludwig Boltzmann generalized Maxwell's achievement and formulated the Maxwell–Boltzmann distribution. The logarithmic connection between entropy and probability was also first stated by Boltzmann.
At the beginning of the 20th century, atoms were considered by many physicists to be purely hypothetical constructs, rather than real objects. An important turning point was Albert Einstein's (1905) and Marian Smoluchowski's (1906) papers on Brownian motion, which succeeded in making certain accurate quantitative predictions based on the kinetic theory.
Following the development of the Boltzmann equation, a framework for its use in developing transport equations was developed independently by David Enskog and Sydney Chapman in 1917 and 1916. The framework provided a route to prediction of the transport properties of dilute gases, and became known as Chapman–Enskog theory. The framework was gradually expanded throughout the following century, eventually becoming a route to prediction of transport properties in real, dense gases.
Assumptions
The application of kinetic theory to ideal gases makes the following assumptions:
The gas consists of very small particles. This smallness of their size is such that the sum of the volume of the individual gas molecules is negligible compared to the volume of the container of the gas. This is equivalent to stating that the average distance separating the gas particles is large compared to their size, and that the elapsed time during a collision between particles and the container's wall is negligible when compared to the time between successive collisions.
The number of particles is so large that a statistical treatment of the problem is well justified. This assumption is sometimes referred to as the thermodynamic limit.
The rapidly moving particles constantly collide among themselves and with the walls of the container, and all these collisions are perfectly elastic.
Interactions (i.e. collisions) between particles are strictly binary and uncorrelated, meaning that there are no three-body (or higher) interactions, and the particles have no memory.
Except during collisions, the interactions among molecules are negligible. They exert no other forces on one another.
Thus, the dynamics of particle motion can be treated classically, and the equations of motion are time-reversible.
As a simplifying assumption, the particles are usually assumed to have the same mass as one another; however, the theory can be generalized to a mass distribution, with each mass type contributing to the gas properties independently of one another in agreement with Dalton's Law of partial pressures. Many of the model's predictions are the same whether or not collisions between particles are included, so they are often neglected as a simplifying assumption in derivations (see below).
More modern developments, such as Revised Enskog Theory and the Extended BGK model, relax one or more of the above assumptions. These can accurately describe the properties of dense gases, and gases with internal degrees of freedom, because they include the volume of the particles as well as contributions from intermolecular and intramolecular forces as well as quantized molecular rotations, quantum rotational-vibrational symmetry effects, and electronic excitation. While theories relaxing the assumptions that the gas particles occupy negligible volume and that collisions are strictly elastic have been successful, it has been shown that relaxing the requirement of interactions being binary and uncorrelated will eventually lead to divergent results.
Equilibrium properties
Pressure and kinetic energy
In the kinetic theory of gases, the pressure is assumed to be equal to the force (per unit area) exerted by the individual gas atoms or molecules hitting and rebounding from the gas container's surface.
Consider a gas particle traveling at velocity $v_x$ along the $x$-direction in an enclosed volume with characteristic length $L$, cross-sectional area $A$, and volume $V = AL$. The gas particle encounters a boundary after the characteristic time

$$t = \frac{L}{v_x}.$$

The momentum of the gas particle along $x$ is $p_x = m v_x$, and each rebound from a boundary reverses this component, transferring a momentum of magnitude

$$\Delta p_x = 2 m v_x$$

to the wall. We combine the above with Newton's second law, which states that the force experienced by a particle is related to the time rate of change of its momentum, so that the average force exerted by one particle on the pair of walls perpendicular to $x$ is

$$F = \frac{\Delta p_x}{t} = \frac{2 m v_x^2}{L}.$$

Now consider a large number, $N$, of gas particles with random orientation in a three-dimensional volume. Because the orientation is random, the mean squared velocity component in every direction is identical:

$$\overline{v_x^2} = \overline{v_y^2} = \overline{v_z^2} = \frac{\overline{v^2}}{3}.$$

Further, assume that the volume is symmetrical about its three dimensions, $\hat{x}, \hat{y}, \hat{z}$, such that

$$L = L_x = L_y = L_z, \qquad V = L^3.$$

The total surface area on which the gas particles act is therefore

$$A_\text{total} = 6 L^2.$$

The pressure exerted by the collisions of the $N$ gas particles with the surface can then be found by adding the force contribution of every particle (over all three directions) and dividing by the interior surface area of the volume,

$$P = \frac{\sum_i F_i}{A_\text{total}} = \frac{2 N m\,\overline{v^2}/L}{6 L^2} = \frac{N m\,\overline{v^2}}{3V}.$$

The total translational kinetic energy of the gas is defined as

$$K_\mathrm{t} = \frac{1}{2} N m\,\overline{v^2},$$

providing the result

$$PV = \frac{2}{3} K_\mathrm{t}.$$
This is an important, non-trivial result of the kinetic theory because it relates pressure, a macroscopic property, to the translational kinetic energy of the molecules, which is a microscopic property.
Temperature and kinetic energy
Rewriting the above result for the pressure as $PV = \tfrac{1}{3} N m \overline{v^2}$, we may combine it with the ideal gas law

$$PV = N k_\mathrm{B} T, \qquad (1)$$

where $k_\mathrm{B}$ is the Boltzmann constant and $T$ the absolute temperature defined by the ideal gas law, to obtain

$$k_\mathrm{B} T = \frac{1}{3} m \overline{v^2},$$

which leads to a simplified expression of the average translational kinetic energy per molecule,

$$\frac{1}{2} m \overline{v^2} = \frac{3}{2} k_\mathrm{B} T. \qquad (2)$$

The translational kinetic energy of the system is $N$ times that of a molecule, namely $K_\mathrm{t} = \tfrac{3}{2} N k_\mathrm{B} T$. The temperature $T$ is related to the translational kinetic energy by the description above, resulting in

$$T = \frac{2}{3} \frac{K_\mathrm{t}}{N k_\mathrm{B}}, \qquad (3)$$

which becomes

$$T = \frac{m \overline{v^2}}{3 k_\mathrm{B}}. \qquad (4)$$

Equation (2) is one important result of the kinetic theory:
The average molecular kinetic energy is proportional to the ideal gas law's absolute temperature.
From equations (1) and (3), we have

$$PV = \frac{2}{3} K_\mathrm{t}. \qquad (5)$$

Thus, the product of pressure and volume per mole is proportional to the average translational molecular kinetic energy.
Equations (2) and (5) are called the "classical results", which could also be derived from statistical mechanics.
The equipartition theorem requires that kinetic energy is partitioned equally between all kinetic degrees of freedom, D. A monatomic gas is axially symmetric about each spatial axis, so that D = 3 comprising translational motion along each axis. A diatomic gas is axially symmetric about only one axis, so that D = 5, comprising translational motion along three axes and rotational motion along two axes. A polyatomic gas, like water, is not radially symmetric about any axis, resulting in D = 6, comprising 3 translational and 3 rotational degrees of freedom.
Because the equipartition theorem requires that kinetic energy is partitioned equally, the total kinetic energy is

$$K = \frac{D}{2} N k_\mathrm{B} T.$$

Thus, the energy added to the system per gas particle kinetic degree of freedom is

$$\frac{K}{N D} = \frac{1}{2} k_\mathrm{B} T.$$

Therefore, the kinetic energy per kelvin of one mole of monatomic ideal gas (D = 3) is

$$\frac{K}{T} = \frac{3}{2} N_\mathrm{A} k_\mathrm{B} = \frac{3}{2} R,$$

where $N_\mathrm{A}$ is the Avogadro constant, and $R$ is the ideal gas constant.
Thus, the kinetic energy per unit kelvin of an ideal monatomic gas can be calculated easily:
per mole: 12.47 J / K
per molecule: 20.7 yJ / K = 129 μeV / K
At standard temperature (273.15 K), the kinetic energy can also be obtained:
per mole: 3406 J
per molecule: 5.65 zJ = 35.2 meV.
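These figures follow directly from (3/2)kB and (3/2)R; a short check:

```python
# Verify the quoted kinetic-energy-per-kelvin values for a monatomic ideal gas (D = 3).
k_B = 1.380649e-23       # J/K
N_A = 6.02214076e23      # 1/mol
R = k_B * N_A            # J/(mol K)
T0 = 273.15              # standard temperature, K

print(f"per mole:     {1.5 * R:.2f} J/K   -> {1.5 * R * T0:.0f} J at {T0} K")
print(f"per molecule: {1.5 * k_B:.3e} J/K -> {1.5 * k_B * T0:.3e} J at {T0} K")
```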
At higher temperatures (typically thousands of kelvins), vibrational modes become active to provide additional degrees of freedom, creating a temperature-dependence on D and the total molecular energy. Quantum statistical mechanics is needed to accurately compute these contributions.
Collisions with container wall
For an ideal gas in equilibrium, the rate of collisions with the container wall and velocity distribution of particles hitting the container wall can be calculated based on naive kinetic theory, and the results can be used for analyzing effusive flow rates, which is useful in applications such as the gaseous diffusion method for isotope separation.
Assume that in the container, the number density (number per unit volume) is $n$ and that the particles obey Maxwell's velocity distribution:

$$f(v_x,v_y,v_z) = \left(\frac{m}{2\pi k_\mathrm{B}T}\right)^{3/2} \exp\!\left(-\frac{m(v_x^2+v_y^2+v_z^2)}{2 k_\mathrm{B}T}\right).$$

Then, for a small area $dA$ on the container wall, a particle with speed $v$ at angle $\theta$ from the normal of the area will collide with the area within the time interval $dt$ if it is within the distance $v\,dt$ of the area. Therefore, all the particles with speed $v$ at angle $\theta$ from the normal that can reach the area within the time interval $dt$ are contained in a tilted pipe with a height of $v\cos\theta\,dt$ and a volume of $v\cos\theta\,dA\,dt$.

The total number of particles that reach the area within the time interval also depends on the velocity distribution; all in all, it calculates to be:

$$dN = n\, v\cos\theta\, f(\mathbf{v})\, d^3v\, dA\, dt.$$

Integrating this over all appropriate velocities within the constraint $v > 0$, $0 < \theta < \pi/2$ yields the number of atomic or molecular collisions with a wall of a container per unit area per unit time:

$$J_\text{impingement} = \frac{n\,\bar{v}}{4} = \frac{n}{4}\sqrt{\frac{8 k_\mathrm{B}T}{\pi m}}.$$

This quantity is also known as the "impingement rate" in vacuum physics. Note that to calculate the average speed $\bar{v}$ of the Maxwell's velocity distribution, one has to integrate over the full distribution, $\bar{v} = \int v\, f(\mathbf{v})\, d^3v$.
The momentum transfer to the container wall from particles hitting the area $dA$ with speed $v$ at angle $\theta$ from the normal, in the time interval $dt$, is:

$$dp = 2 m v\cos\theta \cdot n\, v\cos\theta\, f(\mathbf{v})\, d^3v\, dA\, dt.$$

Integrating this over all appropriate velocities within the constraint $v > 0$, $0 < \theta < \pi/2$ yields the pressure (consistent with the ideal gas law):

$$P = n k_\mathrm{B} T.$$

If this small area is punched to become a small hole, the effusive flow rate will be:

$$\Phi_\text{eff} = \frac{n\bar{v}}{4}\, A = n A \sqrt{\frac{k_\mathrm{B}T}{2\pi m}}.$$

Combined with the ideal gas law, this yields

$$\Phi_\text{eff} = \frac{P A}{\sqrt{2\pi m k_\mathrm{B} T}}.$$

The above expression is consistent with Graham's law.
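The following sketch (hole area, gas choices and conditions are assumptions) evaluates the effusive flow expression and the Graham's-law ratio it implies:

```python
# Effusion through a small hole: Phi = P A / sqrt(2 pi m k_B T).
import math

k_B = 1.380649e-23
N_A = 6.02214076e23

def effusion_rate(P, T, M_molar, area):
    """Molecules per second escaping through a hole of the given area."""
    m = M_molar / N_A
    return P * area / math.sqrt(2.0 * math.pi * m * k_B * T)

P, T, A = 101325.0, 300.0, 1e-12           # Pa, K, m^2 (a ~1 um^2 hole, assumed)
rate_He = effusion_rate(P, T, 4.0e-3, A)   # helium
rate_O2 = effusion_rate(P, T, 32.0e-3, A)  # oxygen
print(rate_He, rate_O2)
print(rate_He / rate_O2, math.sqrt(32.0 / 4.0))   # Graham's law: sqrt(M_O2 / M_He)
```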
To calculate the velocity distribution of particles hitting this small area, we must take into account that all the particles with $(v, \theta)$ that hit the area within the time interval $dt$ are contained in the tilted pipe with a height of $v\cos\theta\,dt$ and a volume of $v\cos\theta\,dA\,dt$; therefore, compared to the Maxwell distribution, the velocity distribution of the impinging particles will have an extra factor of $v\cos\theta$:

$$f'(\mathbf{v}) = C\, v\cos\theta\, f(\mathbf{v}),$$

with the constraint $v\cos\theta > 0$. The constant $C$ can be determined by the normalization condition to be $C = 4/\bar{v}$, and overall:

$$f'(\mathbf{v}) = \frac{4}{\bar{v}}\, v\cos\theta\, f(\mathbf{v}).$$
Speed of molecules
From the kinetic energy formula it can be shown that

$$v_\text{rms} = \sqrt{\frac{3 k_\mathrm{B} T}{m}},$$

where $v$ is in m/s, $T$ is in kelvin, and $m$ is the mass of one molecule of gas in kg. The most probable (or mode) speed $v_\mathrm{p}$ is 81.6% of the root-mean-square speed $v_\text{rms}$, and the mean (arithmetic mean, or average) speed $\bar{v}$ is 92.1% of the rms speed (isotropic distribution of speeds).
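Evaluating these speeds for a specific gas makes the ratios concrete; the example below uses nitrogen at 300 K (an arbitrary choice):

```python
# Characteristic molecular speeds of N2 at 300 K from the relations above.
import math

k_B = 1.380649e-23                      # J/K
m = 28.0134e-3 / 6.02214076e23          # kg per N2 molecule
T = 300.0                               # K (assumed)

v_rms = math.sqrt(3.0 * k_B * T / m)
v_p = math.sqrt(2.0 * k_B * T / m)                  # most probable speed
v_mean = math.sqrt(8.0 * k_B * T / (math.pi * m))   # arithmetic-mean speed

print(f"v_rms  = {v_rms:6.1f} m/s")
print(f"v_p    = {v_p:6.1f} m/s  ({v_p / v_rms:.1%} of v_rms)")
print(f"v_mean = {v_mean:6.1f} m/s  ({v_mean / v_rms:.1%} of v_rms)")
```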
See:
Average,
Root-mean-square speed
Arithmetic mean
Mean
Mode (statistics)
Mean free path
In kinetic theory of gases, the mean free path is the average distance traveled by a molecule, or a number of molecules per volume, before they make their first collision. Let $\sigma$ be the collision cross section of one molecule colliding with another. As in the previous section, the number density $n$ is defined as the number of molecules per (extensive) volume, or $n = N/V$. The collision cross section per volume or collision cross section density is $n\sigma$, and it is related to the mean free path $\ell$ by

$$\ell = \frac{1}{\sqrt{2}\, n\sigma}.$$

Notice that the unit of the collision cross section per volume $n\sigma$ is the reciprocal of length.
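As an order-of-magnitude illustration (the kinetic diameter used for N2 is an assumed typical value):

```python
# Mean free path l = 1 / (sqrt(2) n sigma) for N2 at 0 degC and 1 atm.
import math

k_B = 1.380649e-23
P, T = 101325.0, 273.15        # Pa, K
d = 3.7e-10                    # m, assumed kinetic diameter of N2

n = P / (k_B * T)              # number density, 1/m^3
sigma = math.pi * d**2         # collision cross section, m^2
mfp = 1.0 / (math.sqrt(2.0) * n * sigma)
print(f"n = {n:.3e} m^-3, mean free path = {mfp * 1e9:.0f} nm")
```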
Transport properties
The kinetic theory of gases deals not only with gases in thermodynamic equilibrium, but also very importantly with gases not in thermodynamic equilibrium. This means using Kinetic Theory to consider what are known as "transport properties", such as viscosity, thermal conductivity, mass diffusivity and thermal diffusion.
In its most basic form, Kinetic gas theory is only applicable to dilute gases. The extension of Kinetic gas theory to dense gas mixtures, Revised Enskog Theory, was developed in 1983-1987 by E. G. D. Cohen, J. M. Kincaid and M. Lòpez de Haro, building on work by H. van Beijeren and M. H. Ernst.
Viscosity and kinetic momentum
In books on elementary kinetic theory one can find results for dilute gas modeling that are used in many fields. Derivation of the kinetic model for shear viscosity usually starts by considering a Couette flow where two parallel plates are separated by a gas layer. The upper plate is moving at a constant velocity to the right due to a force F. The lower plate is stationary, and an equal and opposite force must therefore be acting on it to keep it at rest. The molecules in the gas layer have a forward velocity component which increase uniformly with distance above the lower plate. The non-equilibrium flow is superimposed on a Maxwell-Boltzmann equilibrium distribution of molecular motions.
Inside a dilute gas in a Couette flow setup, let be the forward velocity of the gas at a horizontal flat layer (labeled as ); is along the horizontal direction. The number of molecules arriving at the area on one side of the gas layer, with speed at angle from the normal, in time interval is
These molecules made their last collision at , where is the mean free path. Each molecule will contribute a forward momentum of
where plus sign applies to molecules from above, and minus sign below. Note that the forward velocity gradient can be considered to be constant over a distance of mean free path.
Integrating over all appropriate velocities within the constraint
yields the forward momentum transfer per unit time per unit area (also known as shear stress):
The net rate of momentum per unit area that is transported across the imaginary surface is thus
Combining the above kinetic equation with Newton's law of viscosity
gives the equation for shear viscosity, which is usually denoted when it is a dilute gas:
Combining this equation with the equation for mean free path gives
Maxwell-Boltzmann distribution gives the average (equilibrium) molecular speed as
where is the most probable speed. We note that
and insert the velocity in the viscosity equation above. This gives the well known equation (with subsequently estimated below) for shear viscosity for dilute gases:
and is the molar mass. The equation above presupposes that the gas density is low (i.e. the pressure is low). This implies that the transport of momentum through the gas due to the translational motion of molecules is much larger than the transport due to momentum being transferred between molecules during collisions. The transfer of momentum between molecules is explicitly accounted for in Revised Enskog theory, which relaxes the requirement of a gas being dilute. The viscosity equation further presupposes that there is only one type of gas molecules, and that the gas molecules are perfect elastic and hard core particles of spherical shape. This assumption of elastic, hard core spherical molecules, like billiard balls, implies that the collision cross section of one molecule can be estimated by
The radius r is called the collision cross section radius or kinetic radius, and the diameter d = 2r is called the collision cross section diameter or kinetic diameter of a molecule in a monomolecular gas. There is no simple general relation between the collision cross section and the hard core size of the (fairly spherical) molecule. The relation depends on the shape of the potential energy of the molecule. For a real spherical molecule (i.e. a noble gas atom or a reasonably spherical molecule) the interaction potential is more like the Lennard-Jones potential or Morse potential, which have a negative part that attracts the other molecule from distances longer than the hard core radius. The radius for zero Lennard-Jones potential may then be used as a rough estimate for the kinetic radius. However, using this estimate will typically lead to an erroneous temperature dependency of the viscosity. For such interaction potentials, significantly more accurate results are obtained by numerical evaluation of the required collision integrals.
The expression for viscosity obtained from Revised Enskog Theory reduces to the above expression in the limit of infinite dilution, and can be written as
where is a term that tends to zero in the limit of infinite dilution and accounts for excluded volume, and is a term accounting for the transfer of momentum over a non-zero distance between particles during a collision.
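As a rough numerical illustration of the dilute-gas result, the sketch below (not part of the original text) evaluates the elementary hard-sphere estimate η ≈ (1/2) m v̄ / (√2 π d²), in which the number density cancels. The 1/2 prefactor, the nitrogen kinetic diameter, and the function names are illustrative assumptions; elementary derivations give prefactors between 1/3 and 1/2 depending on the angular averaging, and, as noted above, accurate values require collision integrals.

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K

def dilute_viscosity(T, m, d, prefactor=0.5):
    """Elementary hard-sphere estimate of the dilute-gas shear viscosity.

    eta ~ prefactor * m * v_bar / (sqrt(2) * pi * d**2); the number density
    cancels because the mean free path scales as 1/n.
    """
    v_bar = math.sqrt(8.0 * k_B * T / (math.pi * m))   # mean molecular speed, m/s
    return prefactor * m * v_bar / (math.sqrt(2.0) * math.pi * d**2)

# Nitrogen at 300 K, with an assumed kinetic diameter of about 3.7e-10 m.
m_N2 = 28.0 * 1.66054e-27   # molecular mass, kg
d_N2 = 3.7e-10              # kinetic diameter, m (assumed)

eta = dilute_viscosity(300.0, m_N2, d_N2)
print(f"estimated eta(N2, 300 K) = {eta:.2e} Pa*s")
# ~1.8e-5 Pa*s, close to the measured value of roughly 1.8e-5 Pa*s
```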
Thermal conductivity and heat flux
Following a similar logic as above, one can derive the kinetic model for thermal conductivity of a dilute gas:
Consider two parallel plates separated by a gas layer. Both plates have uniform temperatures, and are so massive compared to the gas layer that they can be treated as thermal reservoirs. The upper plate has a higher temperature than the lower plate. The molecules in the gas layer have a molecular kinetic energy which increases uniformly with distance above the lower plate. The non-equilibrium energy flow is superimposed on a Maxwell-Boltzmann equilibrium distribution of molecular motions.
Let be the molecular kinetic energy of the gas at an imaginary horizontal surface inside the gas layer. The number of molecules arriving at an area on one side of the gas layer, with speed at angle from the normal, in time interval is
These molecules made their last collision at a distance above and below the gas layer, and each will contribute a molecular kinetic energy of
where is the specific heat capacity. Again, the plus sign applies to molecules from above, and the minus sign to molecules from below. Note that the temperature gradient can be considered to be constant over a distance of mean free path.
Integrating over all appropriate velocities within the constraint
yields the energy transfer per unit time per unit area (also known as heat flux):
Note that the energy transfer from above is in the downward direction, and therefore the overall minus sign in the equation. The net heat flux across the imaginary surface is thus
Combining the above kinetic equation with Fourier's law
gives the equation for thermal conductivity, which is usually denoted when it is a dilute gas:
Similarly to viscosity, Revised Enskog Theory yields an expression for thermal conductivity that reduces to the above expression in the limit of infinite dilution, and which can be written as
where is a term that tends to unity in the limit of infinite dilution, accounting for excluded volume, and is a term accounting for the transfer of energy across a non-zero distance between particles during a collision.
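A corresponding numerical sketch for the dilute-gas thermal conductivity, using the same elementary hard-sphere picture with an assumed per-molecule heat capacity of (5/2) k_B for a diatomic gas. The prefactor, kinetic diameter, and function names are assumptions for illustration; the crude estimate is only within a factor of about two of the measured value for nitrogen, which is the known limitation of the elementary model.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def dilute_conductivity(T, m, d, c_v_per_molecule, prefactor=0.5):
    """Elementary hard-sphere estimate: kappa ~ prefactor * n * v_bar * lam * c_v.

    With the mean free path lam = 1/(sqrt(2) n pi d^2), the number density n cancels.
    """
    v_bar = math.sqrt(8.0 * k_B * T / (math.pi * m))   # mean molecular speed
    return prefactor * v_bar * c_v_per_molecule / (math.sqrt(2.0) * math.pi * d**2)

m_N2 = 28.0 * 1.66054e-27   # molecular mass of N2, kg
d_N2 = 3.7e-10              # assumed kinetic diameter, m
c_v = 2.5 * k_B             # diatomic gas: (5/2) k_B per molecule (translation + rotation)

kappa = dilute_conductivity(300.0, m_N2, d_N2, c_v)
print(f"estimated kappa(N2, 300 K) = {kappa:.3f} W/(m*K)")
# ~0.013 W/(m K); the measured value is ~0.026 W/(m K)
```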
Diffusion coefficient and diffusion flux
Following a similar logic as above, one can derive the kinetic model for mass diffusivity of a dilute gas:
Consider a steady diffusion between two regions of the same gas with perfectly flat and parallel boundaries separated by a layer of the same gas. Both regions have uniform number densities, but the upper region has a higher number density than the lower region. In the steady state, the number density at any point is constant (that is, independent of time). However, the number density in the layer increases uniformly with distance above the lower plate. The non-equilibrium molecular flow is superimposed on a Maxwell-Boltzmann equilibrium distribution of molecular motions.
Let be the number density of the gas at an imaginary horizontal surface inside the layer. The number of molecules arriving at an area on one side of the gas layer, with speed at angle from the normal, in time interval is
These molecules made their last collision at a distance above and below the gas layer, where the local number density is
Again, the plus sign applies to molecules from above, and the minus sign to molecules from below. Note that the number density gradient can be considered to be constant over a distance of mean free path.
Integrating over all appropriate velocities within the constraint
yields the molecular transfer per unit time per unit area (also known as diffusion flux):
Note that the molecular transfer from above is in the downward direction, and therefore the overall minus sign in the equation. The net diffusion flux across the imaginary surface is thus
Combining the above kinetic equation with Fick's first law of diffusion
gives the equation for mass diffusivity, which is usually denoted when it is a dilute gas:
The corresponding expression obtained from Revised Enskog Theory may be written as
where is a factor that tends to unity in the limit of infinite dilution, which accounts for excluded volume and the variation of chemical potentials with density.
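In the same spirit, the sketch below (an illustrative example, not from the original) evaluates the elementary estimate D ≈ (1/2) v̄ λ for the self-diffusivity of a dilute gas; the prefactor, the kinetic diameter, and the example conditions are assumptions.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def self_diffusivity(T, P, m, d, prefactor=0.5):
    """Elementary estimate D ~ prefactor * v_bar * lam for a dilute gas."""
    n = P / (k_B * T)                                   # number density, 1/m^3
    lam = 1.0 / (math.sqrt(2.0) * n * math.pi * d**2)   # mean free path, m
    v_bar = math.sqrt(8.0 * k_B * T / (math.pi * m))    # mean molecular speed, m/s
    return prefactor * v_bar * lam

m_N2 = 28.0 * 1.66054e-27   # molecular mass of N2, kg
d_N2 = 3.7e-10              # assumed kinetic diameter, m

D = self_diffusivity(300.0, 101325.0, m_N2, d_N2)
print(f"estimated D(N2, 300 K, 1 atm) = {D:.2e} m^2/s")
# ~1.6e-5 m^2/s; the measured self-diffusivity is of order 2e-5 m^2/s
```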
Detailed balance
Fluctuation and dissipation
The kinetic theory of gases entails that, due to the microscopic reversibility of the gas particles' detailed dynamics, the system must obey the principle of detailed balance. Specifically, the fluctuation-dissipation theorem applies to the Brownian motion (or diffusion) and the drag force, which leads to the Einstein–Smoluchowski equation D = μ k_B T, where
D is the mass diffusivity;
μ is the "mobility", or the ratio of the particle's terminal drift velocity to an applied force, μ = v_d / F;
k_B is the Boltzmann constant;
T is the absolute temperature.
Note that the mobility μ can be calculated based on the viscosity of the gas; therefore, the Einstein–Smoluchowski equation also provides a relation between the mass diffusivity and the viscosity of the gas.
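To make the relation concrete, here is a minimal sketch that assumes Stokes drag for the mobility of a small sphere suspended in air; the particle size, air viscosity, and function names are illustrative assumptions rather than anything prescribed by the text.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_mobility(radius, eta):
    """Mobility of a sphere from Stokes drag: mu = 1 / (6 pi eta r)."""
    return 1.0 / (6.0 * math.pi * eta * radius)

def einstein_smoluchowski_D(T, mu):
    """Einstein-Smoluchowski relation: D = mu * k_B * T."""
    return mu * k_B * T

eta_air = 1.8e-5   # Pa*s, viscosity of air near room temperature (assumed)
r = 0.5e-6         # radius of a 1-micron-diameter particle (assumed example)

mu = stokes_mobility(r, eta_air)
D = einstein_smoluchowski_D(300.0, mu)
print(f"mobility = {mu:.2e} s/kg, D = {D:.2e} m^2/s")   # D ~ 2.4e-11 m^2/s
```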
Onsager reciprocal relations
The mathematical similarities between the expressions for shear viscosity, thermal conductivity and diffusion coefficient of the ideal (dilute) gas are not a coincidence; they are a direct result of the Onsager reciprocal relations (i.e. the detailed balance of the reversible dynamics of the particles), when applied to the convection (matter flow due to temperature gradient, and heat flow due to pressure gradient) and advection (matter flow due to the velocity of particles, and momentum transfer due to pressure gradient) of the ideal (dilute) gas.
See also
Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy of equations
Boltzmann equation
Chapman–Enskog theory
Collision theory
Critical temperature
Gas laws
Heat
Interatomic potential
Magnetohydrodynamics
Maxwell–Boltzmann distribution
Mixmaster universe
Thermodynamics
Vicsek model
Vlasov equation
Notes
References
de Groot, S. R., W. A. van Leeuwen and Ch. G. van Weert (1980), Relativistic Kinetic Theory, North-Holland, Amsterdam.
Liboff, R. L. (1990), Kinetic Theory, Prentice-Hall, Englewood Cliffs, N. J.
Further reading
Sydney Chapman and Thomas George Cowling (1939/1970), The Mathematical Theory of Non-uniform Gases: An Account of the Kinetic Theory of Viscosity, Thermal Conduction and Diffusion in Gases, (first edition 1939, second edition 1952), third edition 1970 prepared in co-operation with D. Burnett, Cambridge University Press, London
Joseph Oakland Hirschfelder, Charles Francis Curtiss, and Robert Byron Bird (1964), Molecular Theory of Gases and Liquids, revised edition (Wiley-Interscience), ISBN 978-0471400653
Richard Lawrence Liboff (2003), Kinetic Theory: Classical, Quantum, and Relativistic Descriptions, third edition (Springer), ISBN 978-0-387-21775-8
Behnam Rahimi and Henning Struchtrup (2016), "Macroscopic and kinetic modelling of rarefied polyatomic gases", Journal of Fluid Mechanics, 806, 437–505, DOI 10.1017/jfm.2016.604
External links
Early Theories of Gases
Thermodynamics - a chapter from an online textbook
Temperature and Pressure of an Ideal Gas: The Equation of State on Project PHYSNET.
Introduction to the kinetic molecular theory of gases, from The Upper Canada District School Board
Java animation illustrating the kinetic theory from University of Arkansas
Flowchart linking together kinetic theory concepts, from HyperPhysics
Interactive Java Applets allowing high school students to experiment and discover how various factors affect rates of chemical reactions.
https://www.youtube.com/watch?v=47bF13o8pb8&list=UUXrJjdDeqLgGjJbP1sMnH8A A demonstration apparatus for the thermal agitation in gases.
Gases
Thermodynamics
Classical mechanics
Work (thermodynamics) | Thermodynamic work is one of the principal processes by which a thermodynamic system can interact with its surroundings and exchange energy. This exchange results in externally measurable macroscopic forces on the system's surroundings, which can cause mechanical work, to lift a weight, for example, or cause changes in electromagnetic, or gravitational variables. The surroundings also can perform work on a thermodynamic system, which is measured by an opposite sign convention.
For thermodynamic work, appropriately chosen externally measured quantities are exactly matched by values of or contributions to changes in macroscopic internal state variables of the system, which always occur in conjugate pairs, for example pressure and volume or magnetic flux density and magnetization.
In the International System of Units (SI), work is measured in joules (symbol J). The rate at which work is performed is power, measured in joules per second, and denoted with the unit watt (W).
History
1824
Work, i.e. "weight lifted through a height", was originally defined in 1824 by Sadi Carnot in his famous paper Reflections on the Motive Power of Fire, where he used the term motive power for work. Specifically, according to Carnot:
We use here motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.
1845
In 1845, the English physicist James Joule wrote a paper On the mechanical equivalent of heat for the British Association meeting in Cambridge. In this paper, he reported his best-known experiment, in which the mechanical power released through the action of a "weight falling through a height" was used to turn a paddle-wheel in an insulated barrel of water.
In this experiment, the motion of the paddle wheel, through agitation and friction, heated the body of water, so as to increase its temperature. Both the temperature change of the water and the height of the fall of the weight were recorded. Using these values, Joule was able to determine the mechanical equivalent of heat. Joule estimated a mechanical equivalent of heat to be 819 ft•lbf/Btu (4.41 J/cal). The modern day definitions of heat, work, temperature, and energy all have connection to this experiment. In this arrangement of apparatus, it never happens that the process runs in reverse, with the water driving the paddles so as to raise the weight, not even slightly. Mechanical work was done by the apparatus of falling weight, pulley, and paddles, which lay in the surroundings of the water. Their motion scarcely affected the volume of the water. A quantity of mechanical work, measured as force × distance in the surroundings, that does not change the volume of the water, is said to be isochoric. Such work reaches the system only as friction, through microscopic modes, and is irreversible. It does not count as thermodynamic work. The energy supplied by the fall of the weight passed into the water as heat.
Overview
Conservation of energy
A fundamental guiding principle of thermodynamics is the conservation of energy. The total energy of a system is the sum of its internal energy, of its potential energy as a whole system in an external force field, such as gravity, and of its kinetic energy as a whole system in motion. Thermodynamics has special concern with transfers of energy, from a body of matter, such as, for example a cylinder of steam, to the surroundings of the body, by mechanisms through which the body exerts macroscopic forces on its surroundings so as to lift a weight there; such mechanisms are the ones that are said to mediate thermodynamic work.
Besides transfer of energy as work, thermodynamics admits transfer of energy as heat. For a process in a closed (no transfer of matter) thermodynamic system, the first law of thermodynamics relates changes in the internal energy (or other cardinal energy function, depending on the conditions of the transfer) of the system to those two modes of energy transfer, as work, and as heat. Adiabatic work is done without matter transfer and without heat transfer. In principle, in thermodynamics, for a process in a closed system, the quantity of heat transferred is defined by the amount of adiabatic work that would be needed to effect the change in the system that is occasioned by the heat transfer. In experimental practice, heat transfer is often estimated calorimetrically, through change of temperature of a known quantity of calorimetric material substance.
Energy can also be transferred to or from a system through transfer of matter. The possibility of such transfer defines the system as an open system, as opposed to a closed system. By definition, such transfer is neither as work nor as heat.
Changes in the potential energy of a body as a whole with respect to forces in its surroundings, and in the kinetic energy of the body moving as a whole with respect to its surroundings, are by definition excluded from the body's cardinal energy (examples are internal energy and enthalpy).
Nearly reversible transfer of energy by work in the surroundings
In the surroundings of a thermodynamic system, external to it, all the various mechanical and non-mechanical macroscopic forms of work can be converted into each other with no limitation in principle due to the laws of thermodynamics, so that the energy conversion efficiency can approach 100% in some cases; such conversion is required to be frictionless, and consequently adiabatic. In particular, in principle, all macroscopic forms of work can be converted into the mechanical work of lifting a weight, which was the original form of thermodynamic work considered by Carnot and Joule (see History section above). Some authors have considered this equivalence to the lifting of a weight as a defining characteristic of work. For example, with the apparatus of Joule's experiment in which, through pulleys, a weight descending in the surroundings drives the stirring of a thermodynamic system, the descent of the weight can be diverted by a re-arrangement of pulleys, so that it lifts another weight in the surroundings, instead of stirring the thermodynamic system.
Such conversion may be idealized as nearly frictionless, though it occurs relatively quickly. It usually comes about through devices that are not simple thermodynamic systems (a simple thermodynamic system is a homogeneous body of material substances). For example, the descent of the weight in Joule's stirring experiment reduces the weight's total energy. It is described as loss of gravitational potential energy by the weight, due to change of its macroscopic position in the gravity field, in contrast to, for example, loss of the weight's internal energy due to changes in its entropy, volume, and chemical composition. Though it occurs relatively rapidly, because the energy remains nearly fully available as work in one way or another, such diversion of work in the surroundings may be idealized as nearly reversible, or nearly perfectly efficient.
In contrast, the conversion of heat into work in a heat engine can never exceed the Carnot efficiency, as a consequence of the second law of thermodynamics. Such energy conversion, through work done relatively rapidly, in a practical heat engine, by a thermodynamic system on its surroundings, cannot be idealized, not even nearly, as reversible.
Thermodynamic work done by a thermodynamic system on its surroundings is defined so as to comply with this principle. Historically, thermodynamics was about how a thermodynamic system could do work on its surroundings.
Work done by and on a simple thermodynamic system
Work done on, and work done by, a thermodynamic system need to be distinguished, through consideration of their precise mechanisms. Work done on a thermodynamic system, by devices or systems in the surroundings, is performed by actions such as compression, and includes shaft work, stirring, and rubbing. Such work done by compression is thermodynamic work as here defined. But shaft work, stirring, and rubbing are not thermodynamic work as here defined, in that they do not change the volume of the system against its resisting pressure. Work without change of volume is known as isochoric work, for example when an agency, in the surroundings of the system, drives a frictional action on the surface or in the interior of the system.
In a process of transfer of energy from or to a thermodynamic system, the change of internal energy of the system is defined in theory by the amount of adiabatic work that would have been necessary to reach the final from the initial state, such adiabatic work being measurable only through the externally measurable mechanical or deformation variables of the system, that provide full information about the forces exerted by the surroundings on the system during the process. In the case of some of Joule's measurements, the process was so arranged that some heating that occurred outside the system (in the substance of the paddles) by the frictional process also led to heat transfer from the paddles into the system during the process, so that the quantity of work done by the surroundings on the system could be calculated as shaft work, an external mechanical variable.
The amount of energy transferred as work is measured through quantities defined externally to the system of interest, and thus belonging to its surroundings. In an important sign convention, preferred in chemistry, work that adds to the internal energy of the system is counted as positive. On the other hand, for historical reasons, an oft-encountered sign convention, preferred in physics, is to consider work done by the system on its surroundings as positive.
Processes not described by macroscopic work
Transfer of thermal energy through direct contact between a closed system and its surroundings is by the microscopic thermal motions of particles and their associated inter-molecular potential energies. The microscopic description of such processes is the province of statistical mechanics, not of macroscopic thermodynamics. Another kind of energy transfer is by radiation, performing work on the system. Radiative transfer of energy is irreversible in the sense that it occurs only from a hotter to a colder system. There are several forms of dissipative transduction of energy that can occur internally within a system at a microscopic level, such as friction (including bulk and shear viscosity), chemical reaction, unconstrained expansion as in Joule expansion and in diffusion, and phase change.
Open systems
For an open system, the first law of thermodynamics admits three forms of energy transfer, as work, as heat, and as energy associated with matter that is transferred. The latter cannot be split uniquely into heat and work components.
One-way convection of internal energy is a form of transport of energy but is not, as sometimes mistakenly supposed (a relic of the caloric theory of heat), transfer of energy as heat, because one-way convection is transfer of matter; nor is it transfer of energy as work. Nevertheless, if the wall between the system and its surroundings is thick and contains fluid, in the presence of a gravitational field, convective circulation within the wall can be considered as indirectly mediating transfer of energy as heat between the system and its surroundings, though the source and destination of the transferred energy are not in direct contact.
Fictively imagined reversible thermodynamic "processes"
For purposes of theoretical calculations about a thermodynamic system, one can imagine fictive idealized thermodynamic "processes" that occur so slowly that they do not incur friction within or on the surface of system; they can then be regarded as virtually reversible. These fictive processes proceed along paths on geometrical surfaces that are described exactly by a characteristic equation of the thermodynamic system. Those geometrical surfaces are the loci of possible states of thermodynamic equilibrium for the system. Really possible thermodynamic processes, occurring at practical rates, even when they occur only by work assessed in the surroundings as adiabatic, without heat transfer, always incur friction within the system, and so are always irreversible. The paths of such really possible processes always depart from those geometrical characteristic surfaces. Even when they occur only by work assessed in the surroundings as adiabatic, without heat transfer, such departures always entail entropy production.
Joule heating and rubbing
The definition of thermodynamic work is in terms of the changes of the system's extensive deformation (and chemical constitutive and certain other) state variables, such as volume, molar chemical constitution, or electric polarisation. Examples of state variables that are not extensive deformation or other such variables are temperature T and entropy S, as for example in the expression T dS. Changes of such variables are not actually physically measurable by use of a single simple adiabatic thermodynamic process; they are processes that occur neither by thermodynamic work nor by transfer of matter, and therefore are said to occur by heat transfer. The quantity of thermodynamic work is defined as work done by the system on its surroundings. According to the second law of thermodynamics, such work is irreversible. To get an actual and precise physical measurement of a quantity of thermodynamic work, it is necessary to take account of the irreversibility by restoring the system to its initial condition by running a cycle, for example a Carnot cycle, that includes the target work as a step. The work done by the system on its surroundings is calculated from the quantities that constitute the whole cycle. A different cycle would be needed to actually measure the work done by the surroundings on the system. This is a reminder that rubbing the surface of a system appears to the rubbing agent in the surroundings as mechanical, though not thermodynamic, work done on the system, not as heat, but appears to the system as heat transferred to the system, not as thermodynamic work. The production of heat by rubbing is irreversible; historically, it was a piece of evidence for the rejection of the caloric theory of heat as a conserved substance. The irreversible process known as Joule heating also occurs through a change of a non-deformation extensive state variable.
Accordingly, in the opinion of Lavenda, work is not as primitive a concept as heat, which can be measured by calorimetry. This opinion does not negate the now customary thermodynamic definition of heat in terms of adiabatic work.
Known as a thermodynamic operation, the initiating factor of a thermodynamic process is, in many cases, a change in the permeability of a wall between the system and the surroundings. Rubbing is not a change in wall permeability. Kelvin's statement of the second law of thermodynamics uses the notion of an "inanimate material agency"; this notion is sometimes regarded as puzzling. The triggering of a process of rubbing can occur only in the surroundings, not in a thermodynamic system in its own state of internal thermodynamic equilibrium. Such triggering may be described as a thermodynamic operation.
Formal definition
In thermodynamics, the quantity of work done by a closed system on its surroundings is defined by factors strictly confined to the interface of the surroundings with the system and to the surroundings of the system, for example, an extended gravitational field in which the system sits, that is to say, to things external to the system.
A main concern of thermodynamics is the properties of materials. Thermodynamic work is defined for the purposes of thermodynamic calculations about bodies of material, known as thermodynamic systems. Consequently, thermodynamic work is defined in terms of quantities that describe the states of materials, which appear as the usual thermodynamic state variables, such as volume, pressure, temperature, chemical composition, and electric polarization. For example, to measure the pressure inside a system from outside it, the observer needs the system to have a wall that can move by a measurable amount in response to pressure differences between the interior of the system and the surroundings. In this sense, part of the definition of a thermodynamic system is the nature of the walls that confine it.
Several kinds of thermodynamic work are especially important. One simple example is pressure–volume work. The pressure of concern is that exerted by the surroundings on the surface of the system, and the volume of interest is the negative of the increment of volume gained by the system from the surroundings. It is usually arranged that the pressure exerted by the surroundings on the surface of the system is well defined and equal to the pressure exerted by the system on the surroundings. This arrangement for transfer of energy as work can be varied in a particular way that depends on the strictly mechanical nature of pressure–volume work. The variation consists in letting the coupling between the system and surroundings be through a rigid rod that links pistons of different areas for the system and surroundings. Then for a given amount of work transferred, the exchange of volumes involves different pressures, inversely with the piston areas, for mechanical equilibrium. This cannot be done for the transfer of energy as heat because of its non-mechanical nature.
Another important kind of work is isochoric work, i.e., work that involves no eventual overall change of volume of the system between the initial and the final states of the process. Examples are friction on the surface of the system as in Rumford's experiment; shaft work such as in Joule's experiments; stirring of the system by a magnetic paddle inside it, driven by a moving magnetic field from the surroundings; and vibrational action on the system that leaves its eventual volume unchanged, but involves friction within the system. Isochoric mechanical work for a body in its own state of internal thermodynamic equilibrium is done only by the surroundings on the body, not by the body on the surroundings, so that the sign of isochoric mechanical work with the physics sign convention is always negative.
When work, for example pressure–volume work, is done on its surroundings by a closed system that cannot pass heat in or out because it is confined by an adiabatic wall, the work is said to be adiabatic for the system as well as for the surroundings. When mechanical work is done on such an adiabatically enclosed system by the surroundings, it can happen that friction in the surroundings is negligible, for example in the Joule experiment with the falling weight driving paddles that stir the system. Such work is adiabatic for the surroundings, even though it is associated with friction within the system. Such work may or may not be isochoric for the system, depending on the system and its confining walls. If it happens to be isochoric for the system (and does not eventually change other system state variables such as magnetization), it appears as a heat transfer to the system, and does not appear to be adiabatic for the system.
Sign convention
In the early history of thermodynamics, a positive amount of work done by the system on the surroundings was taken to correspond to energy being lost from the system. This historical sign convention has been used in many physics textbooks and is used in the present article.
According to the first law of thermodynamics for a closed system, any net change in the internal energy U must be fully accounted for, in terms of heat Q entering the system and work W done by the system: ΔU = Q − W.
An alternate sign convention is to consider the work performed on the system by its surroundings as positive. This leads to a change in sign of the work, so that ΔU = Q + W. This convention has historically been used in chemistry, and has been adopted by several modern physics textbooks.
This equation reflects the fact that the heat transferred and the work done are not properties of the state of the system. Given only the initial state and the final state of the system, one can only say what the total change in internal energy was, not how much of the energy went out as heat, and how much as work. This can be summarized by saying that heat and work are not state functions of the system. This is in contrast to classical mechanics, where net work exerted by a particle is a state function.
Pressure–volume work
Pressure–volume work (or PV or P-V work) occurs when the volume of a system changes. PV work is often measured in units of litre-atmospheres, where 1 L·atm = 101.325 J. However, the litre-atmosphere is not a recognized unit in the SI system of units, which measures P in pascals (Pa), V in m3, and PV in joules (J), where 1 J = 1 Pa·m3. PV work is an important topic in chemical thermodynamics.
For a process in a closed system, occurring slowly enough for accurate definition of the pressure on the inside of the system's wall that moves and transmits force to the surroundings, described as quasi-static, work is represented by the following equation between differentials:
δW = P dV
where
δW (inexact differential) denotes an infinitesimal increment of work done by the system, transferring energy to the surroundings;
P denotes the pressure inside the system, that it exerts on the moving wall that transmits force to the surroundings. In the alternative sign convention the right-hand side has a negative sign.
dV (exact differential) denotes an infinitesimal increment of the volume of the system.
Moreover,
W = ∫ P dV, integrated from the initial volume V1 to the final volume V2,
where W denotes the work done by the system during the whole of the reversible process.
The first law of thermodynamics can then be expressed as
dU = δQ − δW = δQ − P dV.
(In the alternative sign convention where W is the work done on the system, dU = δQ + δW. However, δQ is unchanged.)
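As an illustration of quasi-static pressure–volume work, the sketch below (an assumption-laden example, not from the original text) numerically integrates P dV for one mole of an ideal gas expanding isothermally and compares the result with the analytic expression n R T ln(V2/V1).

```python
import math

R = 8.314462618  # molar gas constant, J/(mol*K)

def isothermal_work(n_mol, T, V1, V2, steps=100000):
    """Quasi-static PV work W = integral of P dV for an ideal gas at constant T,
    evaluated with a midpoint rule using P(V) = n R T / V."""
    dV = (V2 - V1) / steps
    W = 0.0
    for i in range(steps):
        V = V1 + (i + 0.5) * dV          # midpoint of each volume slice
        W += (n_mol * R * T / V) * dV    # P(V) dV
    return W

n, T, V1, V2 = 1.0, 300.0, 0.010, 0.020   # 1 mol expanding from 10 L to 20 L
W_num = isothermal_work(n, T, V1, V2)
W_exact = n * R * T * math.log(V2 / V1)
print(f"W (numeric) = {W_num:.1f} J, W (analytic) = {W_exact:.1f} J")  # both ~1729 J
```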
Path dependence
PV work is path-dependent and is, therefore, a thermodynamic process function. In general, the term P dV is not an exact differential. The statement that a process is quasi-static gives important information about the process but does not determine the P–V path uniquely, because the path can include several slow goings backwards and forwards in volume, slowly enough to exclude friction within the system occasioned by departure from the quasi-static requirement. An adiabatic wall is one that does not permit passage of energy by conduction or radiation.
The first law of thermodynamics states that dU = δQ − δW.
For a quasi-static adiabatic process, δQ = 0, so that dU = −δW.
Also, δW = P dV, so that dU = −P dV.
It follows that W = ∫ P dV = −ΔU, so that the work done over the process is fixed by the change in internal energy.
Internal energy is a state function so its change depends only on the initial and final states of a process. For a quasi-static adiabatic process, the change in internal energy is equal to minus the integral amount of work done by the system, so the work also depends only on the initial and final states of the process and is one and the same for every intermediate path. As a result, the work done by the system also depends on the initial and final states.
If the process path is other than quasi-static and adiabatic, there are indefinitely many different paths, with significantly different work amounts, between the initial and final states. (Again the internal energy change depends only on the initial and final states as it is a state function).
In the current mathematical notation, the differential δW is an inexact differential.
In another notation, δW is written đW (with a horizontal line through the d). This notation indicates that đW is not an exact one-form. The line-through is merely a flag to warn us that there is actually no function (0-form) W which is the potential of đW. If there were, indeed, this function W, we should be able to just use Stokes' theorem to evaluate this putative function, the potential of đW, at the boundary of the path, that is, the initial and final points, and therefore the work would be a state function. This impossibility is consistent with the fact that it does not make sense to refer to the work on a point in the PV diagram; work presupposes a path.
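A small numeric illustration of path dependence, using two assumed quasi-static paths between the same pair of end states; the specific pressures and volumes below are arbitrary example values.

```python
# Two quasi-static paths between the same end states (P1, V1) -> (P2, V2):
#   path A: isobaric expansion at P1, then isochoric pressure drop to P2
#   path B: isochoric pressure drop to P2, then isobaric expansion at P2
# Isochoric legs do no PV work, so only the isobaric legs contribute.

P1, V1 = 2.0e5, 0.010     # 200 kPa, 10 L
P2, V2 = 1.0e5, 0.020     # 100 kPa, 20 L

W_A = P1 * (V2 - V1)      # work done by the system along path A: 2000 J
W_B = P2 * (V2 - V1)      # work done by the system along path B: 1000 J

# For an ideal gas, U depends only on T ~ PV/(nR); here P1*V1 == P2*V2,
# so the internal-energy change is identical (zero) for both paths even
# though the work done differs.
print(f"W_A = {W_A:.0f} J, W_B = {W_B:.0f} J; delta_U is the same for both paths")
```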
Other mechanical types of work
There are several ways of doing mechanical work, each in some way related to a force acting through a distance. In basic mechanics, the work done by a constant force F on a body displaced a distance s in the direction of the force is given by W = F s.
If the force is not constant, the work done is obtained by integrating the differential amount of work, W = ∫ F ds.
Rotational work
Energy transmission with a rotating shaft is very common in engineering practice. Often the torque T applied to the shaft is constant, which means that the force F applied is constant. For a specified constant torque, the work done during n revolutions is determined as follows: A force F acting through a moment arm r generates a torque T = F r.
This force acts through a distance s, which is related to the radius r by s = (2πr) n.
The shaft work is then determined from W_sh = F s = (T/r)(2πr n) = 2π n T.
The power transmitted through the shaft is the shaft work done per unit time, which is expressed as Ẇ_sh = 2π ṅ T, where ṅ is the number of revolutions per unit time.
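A short sketch of the shaft-work relations above, with an assumed torque and rotational speed chosen purely for illustration.

```python
import math

def shaft_work(torque, revolutions):
    """Shaft work W_sh = 2 * pi * n * T for constant torque T over n revolutions."""
    return 2.0 * math.pi * revolutions * torque

def shaft_power(torque, rpm):
    """Transmitted power = 2 * pi * n_dot * T, with n_dot in revolutions per second."""
    return 2.0 * math.pi * (rpm / 60.0) * torque

T_shaft = 200.0   # N*m (assumed example torque)
print(f"work over 100 revolutions: {shaft_work(T_shaft, 100):.0f} J")   # ~125,664 J
print(f"power at 3000 rpm: {shaft_power(T_shaft, 3000)/1000:.1f} kW")   # ~62.8 kW
```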
Spring work
When a force F is applied on a spring, and the length of the spring changes by a differential amount dx, the work done is δW = F dx.
For linear elastic springs, the displacement x is proportional to the force applied: F = K x,
where K is the spring constant and has the unit of N/m. The displacement x is measured from the undisturbed position of the spring (that is, x = 0 when F = 0). Substituting the two equations gives
W = (1/2) K (x2² − x1²),
where x1 and x2 are the initial and the final displacement of the spring respectively, measured from the undisturbed position of the spring.
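A minimal sketch of the spring-work formula; the spring constant and displacements are assumed example values.

```python
def spring_work(K, x1, x2):
    """Work done on a linear spring moved from displacement x1 to x2:
    W = 0.5 * K * (x2**2 - x1**2)."""
    return 0.5 * K * (x2**2 - x1**2)

K = 5000.0   # spring constant, N/m (assumed example value)
# Compressing 5 cm from the undisturbed position stores 6.25 J.
print(f"W = {spring_work(K, 0.0, 0.05):.2f} J")
```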
Work done on elastic solid bars
Solids are often modeled as linear springs because under the action of a force they contract or elongate, and when the force is lifted, they return to their original lengths, like a spring. This is true as long as the force is in the elastic range, that is, not large enough to cause permanent or plastic deformation. Therefore, the equations given for a linear spring can also be used for elastic solid bars. Alternately, we can determine the work associated with the expansion or contraction of an elastic solid bar by replacing the pressure P by its counterpart in solids, the normal stress σ = F/A, in the work expansion W = ∫ σ A dx,
where A is the cross sectional area of the bar.
Work associated with the stretching of liquid film
Consider a liquid film such as a soap film suspended on a wire frame. Some force is required to stretch this film by the movable portion of the wire frame. This force is used to overcome the microscopic forces between molecules at the liquid-air interface. These microscopic forces are perpendicular to any line in the surface and the force generated by these forces per unit length is called the surface tension σ, whose unit is N/m. Therefore, the work associated with the stretching of a film is called surface tension work, and is determined from W = ∫ σ dA,
where dA = 2b dx is the change in the surface area of the film and b is the width of the movable wire. The factor 2 is due to the fact that the film has two surfaces in contact with air. The force acting on the movable wire as a result of surface tension effects is F = 2bσ, where σ is the surface tension force per unit length.
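A short sketch of the film-stretching work; the soap-solution surface tension and the frame dimensions are assumed example values.

```python
def film_stretch_work(sigma, width, dx):
    """Work to stretch a two-sided liquid film on a wire frame:
    W = sigma * dA, with dA = 2 * width * dx (both faces of the film)."""
    dA = 2.0 * width * dx
    return sigma * dA

sigma_soap = 0.025   # N/m, typical soap-solution surface tension (assumed)
# A 0.10 m wide movable wire pulled 2 cm requires about 0.1 mJ of work.
print(f"W = {film_stretch_work(sigma_soap, 0.10, 0.02) * 1e3:.3f} mJ")
```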
Free energy and exergy
The amount of useful work which may be extracted from a thermodynamic system is determined by the second law of thermodynamics. Under many practical situations this can be represented by the thermodynamic availability, or Exergy, function. Two important cases are: in thermodynamic systems where the temperature and volume are held constant, the measure of useful work attainable is the Helmholtz free energy function; and in systems where the temperature and pressure are held constant, the measure of useful work attainable is the Gibbs free energy.
Non-mechanical forms of work
Non-mechanical work in thermodynamics is work caused by external force fields that a system is exposed to. The action of such forces can be initiated by events in the surroundings of the system, or by thermodynamic operations on the shielding walls of the system.
The non-mechanical work of force fields can have either positive or negative sign, work being done by the system on the surroundings, or vice versa. Work done by force fields can be done indefinitely slowly, so as to approach the fictive reversible quasi-static ideal, in which entropy is not created in the system by the process.
In thermodynamics, non-mechanical work is to be contrasted with mechanical work that is done by forces in immediate contact between the system and its surroundings. If the putative 'work' of a process cannot be defined as either long-range work or else as contact work, then sometimes it cannot be described by the thermodynamic formalism as work at all. Nevertheless, the thermodynamic formalism allows that energy can be transferred between an open system and its surroundings by processes for which work is not defined. An example is when the wall between the system and its surroundings is not considered as idealized and vanishingly thin, so that processes can occur within the wall, such as friction affecting the transfer of matter across the wall; in this case, the forces of transfer are neither strictly long-range nor strictly due to contact between the system and its surroundings; the transfer of energy can then be considered as convection, and assessed in sum just as transfer of internal energy. This is conceptually different from transfer of energy as heat through a thick fluid-filled wall in the presence of a gravitational field, between a closed system and its surroundings; in this case there may be convective circulation within the wall but the process may still be considered as transfer of energy as heat between the system and its surroundings; if the whole wall is moved by the application of force from the surroundings, without change of volume of the wall, so as to change the volume of the system, then it is also at the same time transferring energy as work. A chemical reaction within a system can lead to electrical long-range forces and to electric current flow, which transfer energy as work between system and surroundings, though the system's chemical reactions themselves (except for the special limiting case in which they are driven through devices in the surroundings so as to occur along a line of thermodynamic equilibrium) are always irreversible and do not directly interact with the surroundings of the system.
Non-mechanical work contrasts with pressure–volume work. Pressure–volume work is one of the two mainly considered kinds of mechanical contact work. A force acts on the interfacing wall between system and surroundings. The force is due to the pressure exerted on the interfacing wall by the material inside the system; that pressure is an internal state variable of the system, but is properly measured by external devices at the wall. The work is due to change of system volume by expansion or contraction of the system. If the system expands, in the present article it is said to do positive work on the surroundings. If the system contracts, in the present article it is said to do negative work on the surroundings. Pressure–volume work is a kind of contact work, because it occurs through direct material contact with the surrounding wall or matter at the boundary of the system. It is accurately described by changes in state variables of the system, such as the time courses of changes in the pressure and volume of the system. The volume of the system is classified as a "deformation variable", and is properly measured externally to the system, in the surroundings. Pressure–volume work can have either positive or negative sign. Pressure–volume work, performed slowly enough, can be made to approach the fictive reversible quasi-static ideal.
Non-mechanical work also contrasts with shaft work. Shaft work is the other of the two mainly considered kinds of mechanical contact work. It transfers energy by rotation, but it does not eventually change the shape or volume of the system. Because it does not change the volume of the system it is not measured as pressure–volume work, and it is called isochoric work. Considered solely in terms of the eventual difference between initial and final shapes and volumes of the system, shaft work does not make a change. During the process of shaft work, for example the rotation of a paddle, the shape of the system changes cyclically, but this does not make an eventual change in the shape or volume of the system. Shaft work is a kind of contact work, because it occurs through direct material contact with the surrounding matter at the boundary of the system. A system that is initially in a state of thermodynamic equilibrium cannot initiate any change in its internal energy. In particular, it cannot initiate shaft work. This explains the curious use of the phrase "inanimate material agency" by Kelvin in one of his statements of the second law of thermodynamics. Thermodynamic operations or changes in the surroundings are considered to be able to create elaborate changes such as indefinitely prolonged, varied, or ceased rotation of a driving shaft, while a system that starts in a state of thermodynamic equilibrium is inanimate and cannot spontaneously do that. Thus the sign of shaft work is always negative, work being done on the system by the surroundings. Shaft work can hardly be done indefinitely slowly; consequently it always produces entropy within the system, because it relies on friction or viscosity within the system for its transfer. The foregoing comments about shaft work apply only when one ignores that the system can store angular momentum and its related energy.
Examples of non-mechanical work modes include
Electric field work – where the force is defined by the surroundings' voltage (the electrical potential) and the generalized displacement is change of spatial distribution of electrical charge
Electrical polarization work – where the force is defined by the surroundings' electric field strength and the generalized displacement is change of the polarization of the medium (the sum of the electric dipole moments of the molecules)
Magnetic work – where the force is defined by the surroundings' magnetic field strength and the generalized displacement is change of total magnetic dipole moment
Gravitational work
Gravitational work is defined by the force on a body measured in a gravitational field. It may cause a generalized displacement in the form of change of the spatial distribution of the matter within the system. The system gains internal energy (or other relevant cardinal quantity of energy, such as enthalpy) through internal friction. As seen by the surroundings, such frictional work appears as mechanical work done on the system, but as seen by the system, it appears as transfer of energy as heat. When the system is in its own state of internal thermodynamic equilibrium, its temperature is uniform throughout. If the volume and other extensive state variables, apart from entropy, are held constant over the process, then the transferred heat must appear as increased temperature and entropy; in a uniform gravitational field, the pressure of the system will be greater at the bottom than at the top.
By definition, the relevant cardinal energy function is distinct from the gravitational potential energy of the system as a whole; the latter may also change as a result of gravitational work done by the surroundings on the system. The gravitational potential energy of the system is a component of its total energy, alongside its other components, namely its cardinal thermodynamic (e.g. internal) energy and its kinetic energy as a whole system in motion.
See also
Electrochemical hydrogen compressor
Chemical reactions
Microstate (statistical mechanics) - includes Microscopic definition of work
References
Thermodynamics
Kinetics (physics) | In physics and engineering, kinetics is the branch of classical mechanics that is concerned with the relationship between the motion and its causes, specifically, forces and torques. Since the mid-20th century, the term "dynamics" (or "analytical dynamics") has largely superseded "kinetics" in physics textbooks, though the term is still used in engineering.
In plasma physics, kinetics refers to the study of continua in velocity space. This is usually in the context of non-thermal (non-Maxwellian) velocity distributions, or processes that perturb thermal distributions. These "kinetic plasmas" cannot be adequately described with fluid equations.
The term kinetics is also used to refer to chemical kinetics, particularly in chemical physics and physical chemistry. In such uses, a qualifier is often used or implied, for example: "physical kinetics", "crystal growth kinetics", and so on.
References
Airflow | Airflow, or air flow, is the movement of air. Air behaves in a fluid manner, meaning particles naturally flow from areas of higher pressure to those where the pressure is lower. Atmospheric air pressure is directly related to altitude, temperature, and composition. In engineering, airflow is a measurement of the amount of air per unit of time that flows through a particular device.
It can be described as a volumetric flow rate (volume of air per unit time) or a mass flow rate (mass of air per unit time). What relates both forms of description is the air density, which is a function of pressure and temperature through the ideal gas law. The flow of air can be induced through mechanical means (such as by operating an electric or manual fan) or can take place passively, as a function of pressure differentials present in the environment.
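A minimal sketch of the volumetric-to-mass flow conversion described above, assuming dry air and the ideal gas law with a specific gas constant of 287.05 J/(kg·K); the example flow rate and conditions are illustrative.

```python
def air_density(pressure_pa, temperature_k, R_specific=287.05):
    """Dry-air density from the ideal gas law: rho = P / (R_specific * T)."""
    return pressure_pa / (R_specific * temperature_k)

def mass_flow(volumetric_flow_m3s, pressure_pa, temperature_k):
    """Convert a volumetric flow rate (m^3/s) into a mass flow rate (kg/s)."""
    return air_density(pressure_pa, temperature_k) * volumetric_flow_m3s

Q = 0.5                                    # m^3/s (example duct flow)
m_dot = mass_flow(Q, 101325.0, 293.15)     # sea-level pressure, 20 degC
print(f"rho = {air_density(101325.0, 293.15):.3f} kg/m^3, m_dot = {m_dot:.3f} kg/s")
# rho ~ 1.204 kg/m^3, m_dot ~ 0.602 kg/s
```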
Types of airflow
Like any fluid, air may exhibit both laminar and turbulent flow patterns. Laminar flow occurs when air can flow smoothly, and exhibits a parabolic velocity profile; turbulent flow occurs when there is an irregularity (such as a disruption in the surface across which the fluid is flowing), which alters the direction of movement. Turbulent flow exhibits a flat velocity profile. Velocity profiles of fluid movement describe the spatial distribution of instantaneous velocity vectors across a given cross section. The size and shape of the geometric configuration that the fluid is traveling through, the fluid properties (such as viscosity), physical disruptions to the flow, and engineered components (e.g. pumps) that add energy to the flow are factors that determine what the velocity profile looks like. Generally, in encased flows, instantaneous velocity vectors are larger in magnitude in the middle of the profile due to the effect of friction from the material of the pipe, duct, or channel walls on nearby layers of fluid. In tropospheric atmospheric flows, velocity increases with elevation from ground level due to friction from obstructions like trees and hills slowing down airflow near the surface. The level of friction is quantified by a parameter called the "roughness length." Streamlines connect velocities and are tangential to the instantaneous direction of multiple velocity vectors. They can be curved and do not always follow the shape of the container. Additionally, they only exist in steady flows, i.e. flows whose velocity vectors do not change over time. In a laminar flow, all particles of the fluid are traveling in parallel lines which gives rise to parallel streamlines. In a turbulent flow, particles are traveling in random and chaotic directions which gives rise to curved, spiraling, and often intersecting streamlines.
The Reynolds number, a ratio indicating the relationship between viscous and inertial forces in a fluid, can be used to predict the transition from laminar to turbulent flow. Laminar flows occur at low Reynolds numbers, where viscous forces dominate, and turbulent flows occur at high Reynolds numbers, where inertial forces dominate. The range of Reynolds numbers that defines each type of flow depends on whether the air is moving through a pipe, wide duct, open channel, or around airfoils. The Reynolds number can also characterize an object (for example, a particle under the effect of gravitational settling) moving through a fluid. This number and related concepts can be applied to studying flow in systems of all scales. Transitional flow is a mixture of turbulence in the center of the velocity profile and laminar flow near the edges. Each of these three flows has distinct mechanisms of frictional energy losses that give rise to different behavior. As a result, different equations are used to predict and quantify the behavior of each type of flow.
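The following sketch computes a Reynolds number for duct airflow and applies the commonly quoted internal-flow thresholds of roughly 2300 and 4000; the air properties, duct size, and thresholds are illustrative assumptions, since, as noted above, the actual transition range depends on geometry and disturbances.

```python
def reynolds_number(velocity, length_scale, density=1.204, viscosity=1.81e-5):
    """Re = rho * v * L / mu; the defaults are for air near 20 degC."""
    return density * velocity * length_scale / viscosity

def classify_pipe_flow(Re):
    """Commonly quoted thresholds for internal (pipe/duct) flow; the exact
    transition range depends on geometry and disturbances."""
    if Re < 2300:
        return "laminar"
    if Re < 4000:
        return "transitional"
    return "turbulent"

Re = reynolds_number(velocity=3.0, length_scale=0.2)   # 3 m/s in a 0.2 m duct
print(f"Re = {Re:.0f} -> {classify_pipe_flow(Re)}")     # ~39,900 -> turbulent
```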
The speed at which a fluid flows past an object varies with distance from the object's surface. The region surrounding an object where the air speed approaches zero is known as the boundary layer. It is here that surface friction most affects flow; irregularities in surfaces may affect boundary layer thickness, and hence act to disrupt flow.
Units
Typical units to express airflow are:
By volume
m3/min (cubic metres per minute)
m3/h (cubic metres per hour)
ft3/h (cubic feet per hour)
ft3/min (cubic feet per minute, a.k.a. CFM)
l/s (litres per second)
By mass
kg/s (kilograms per second)
Airflow can also be described in terms of air changes per hour (ACH), indicating full replacement of the volume of air filling the space in question. This unit is frequently used in the field of building science, with higher ACH values corresponding to leakier envelopes which are typical of older buildings that are less tightly sealed.
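A one-line calculation makes the ACH definition concrete; the room volume and airflow below are assumed example values.

```python
def air_changes_per_hour(volumetric_flow_m3_per_h, room_volume_m3):
    """ACH = airflow (m^3/h) divided by the volume of the space (m^3)."""
    return volumetric_flow_m3_per_h / room_volume_m3

# A 50 m^3 room ventilated at 150 m^3/h has its air fully replaced 3 times per hour.
print(f"ACH = {air_changes_per_hour(150.0, 50.0):.1f}")
```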
Measurement
The instrument that measures airflow is called an airflow meter. Anemometers are also used to measure wind speed and indoor airflow.
There are a variety of types, including straight probe anemometers, designed to measure air velocity, differential pressure, temperature, and humidity; rotating vane anemometers, used for measuring air velocity and volumetric flow; and hot-sphere anemometers.
Anemometers may use ultrasound or resistive wire to measure the energy transfer between the measurement device and the passing particles. A hot-wire anemometer, for example, registers decreases in wire temperature, which can be translated into airflow velocity by analyzing the rate of change. Convective cooling is a function of airflow rate, and the electrical resistance of most metals is dependent upon the temperature of the metal, which is affected by the convective cooling. Engineers have taken advantage of these physical phenomena in the design and use of hot-wire anemometers. Some tools are capable of calculating air flow, wet bulb temperature, dew point, and turbulence.
Simulation
Air flow can be simulated using Computational Fluid Dynamics (CFD) modeling, or observed experimentally through the operation of a wind tunnel. This may be used to predict airflow patterns around automobiles, aircraft, and marine craft, as well as air penetration of a building envelope. Because CFD models "also track the flow of solids through a system," they can be used for analysis of pollution concentrations in indoor and outdoor environments. Particulate matter generated indoors generally comes from cooking with oil and combustion activities such as burning candles or firewood. In outdoor environments, particulate matter comes from direct sources such as internal combustion engine vehicles’ (ICEVs) tailpipe emissions from burning fuel (petroleum products) and windblown soil and dust, and indirectly from atmospheric oxidation of volatile organic compounds (VOCs), sulfur dioxide (SO2), and nitrogen oxide (NOx) emissions.
Control
One type of equipment that regulates the airflow in ducts is called a damper. The damper can be used to increase, decrease or completely stop the flow of air. A more complex device that can not only regulate the airflow but also has the ability to generate and condition airflow is an air handler. Fans also generate flows by "producing air flows with high volume and low pressure (although higher than ambient pressure)." This pressure differential induced by the fan is what causes air to flow. The direction of airflow is determined by the direction of the pressure gradient. Total or static pressure rise, and therefore by extension airflow rate, is determined primarily by the fan speed measured in revolutions per minute (RPM). To modulate the airflow rate in the control of HVAC systems, one typically changes the fan speed, which often comes in a few discrete settings such as low, medium, and high.
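As a sketch of how fan speed maps to airflow, the fan affinity ("fan law") scaling below assumes a fixed fan and constant air density; the example flow, pressure rise, power, and speeds are arbitrary illustrative numbers.

```python
def scale_fan(Q1, dP1, W1, rpm1, rpm2):
    """Fan affinity (fan-law) scaling for a fixed fan and constant air density:
    flow ~ speed, pressure rise ~ speed^2, shaft power ~ speed^3."""
    r = rpm2 / rpm1
    return Q1 * r, dP1 * r**2, W1 * r**3

# Example: a fan delivering 1.0 m^3/s at 250 Pa and 300 W, slowed from a high
# setting (1200 rpm) to a low setting (600 rpm).
Q2, dP2, W2 = scale_fan(1.0, 250.0, 300.0, 1200.0, 600.0)
print(f"Q = {Q2:.2f} m^3/s, dP = {dP2:.1f} Pa, power = {W2:.1f} W")
# Q = 0.50 m^3/s, dP = 62.5 Pa, power = 37.5 W
```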
Uses
Measuring the airflow is necessary in many applications such as ventilation (to determine how much air is being replaced), pneumatic conveying (to control the air velocity and phase of transport) and engines (to control the Air–fuel ratio).
Aerodynamics is the branch of fluid dynamics (physics) that is specifically concerned with the measurement, simulation, and control of airflow. Managing airflow is of concern to many fields, including meteorology, aeronautics, medicine, mechanical engineering, civil engineering, environmental engineering and building science.
Airflow in buildings
In building science, airflow is often addressed in terms of its desirability, for example in contrasting ventilation and infiltration. Ventilation is defined as the desired flow of fresh outdoor supply air to another, typically indoor, space, along with the simultaneous expulsion of exhaust air from indoors to the outdoors. This may be achieved through mechanical means (i.e. the use of a louver or damper for air intake and a fan to induce flow through ductwork) or through passive strategies (also known as natural ventilation). While natural ventilation has economic benefits over mechanical ventilation because it typically requires far less operational energy consumption, it can only be utilized during certain times of day and under certain outdoor conditions. If there is a large temperature difference between the outdoor air and indoor conditioned air, the use of natural ventilation may cause unintentional heating or cooling loads on a space and increase HVAC energy consumption to maintain comfortable temperatures within ranges determined by the heating and cooling setpoint temperatures. Natural ventilation also has the flaw that its feasibility is dependent on outdoor conditions; if outdoor air is significantly polluted with ground-level ozone concentrations from transportation related emissions or particulate matter from wildfires for example, residential and commercial building occupants may have to keep doors and windows closed to preserve indoor environmental quality (IEQ). By contrast, air infiltration is characterized as the uncontrolled influx of air through an inadequately-sealed building envelope, usually coupled with unintentional leakage of conditioned air from the interior of a building to the exterior.
Buildings may be ventilated using mechanical systems, passive systems or strategies, or a combination of the two.
Airflow in mechanical ventilation systems (HVAC)
Mechanical ventilation uses fans to induce flow of air into and through a building. Duct configuration and assembly affect air flow rates through the system. Dampers, valves, joints and other geometrical or material changes within a duct can lead to flow pressure (energy) losses.
Passive strategies for maximizing airflow
Passive ventilation strategies take advantage of inherent characteristics of air, specifically thermal buoyancy and pressure differentials, to evacuate exhaust air from within a building. The stack effect relies on chimneys or similar tall spaces with openings near the top to passively draw exhaust air up and out of the space, because air rises as its temperature increases (its volume increases and its density decreases). Wind-driven passive ventilation relies on building configuration, orientation, and aperture distribution to take advantage of outdoor air movement. Cross-ventilation requires strategically-positioned openings aligned with local wind patterns.
Relationship of air movement to thermal comfort and overall Indoor Environmental Quality (IEQ)
Airflow is a factor of concern when designing to meet occupant thermal comfort standards (such as ASHRAE 55). Varying rates of air movement may positively or negatively impact individuals’ perception of warmth or coolness, and hence their comfort. Air velocity interacts with air temperature, relative humidity, radiant temperature of surrounding surfaces and occupants, and occupant skin conductivity, resulting in particular thermal sensations.
Sufficient, properly-controlled and designed airflow (ventilation) is important for overall Indoor Environmental Quality (IEQ) and Indoor Air Quality (IAQ), in that it provides the necessary supply of fresh air and effectively evacuates exhaust air.
See also
Air current
Volumetric flow rate
Air flow meter
Damper (flow)
Air handling unit
Fluid dynamics
Pressure gradient force
Atmosphere of Earth
Anemometer
Computational Fluid Dynamics
Ventilation (architecture)
Natural ventilation
Infiltration (HVAC)
Particle tracking velocimetry
Laminar flow
Turbulent flow
Wind
References
Heating, ventilation, and air conditioning
Mechanical engineering
Speed of gravity
In classical theories of gravitation, the changes in a gravitational field propagate. A change in the distribution of energy and momentum of matter results in subsequent alteration, at a distance, of the gravitational field which it produces. In the relativistic sense, the "speed of gravity" refers to the speed of a gravitational wave, which, as predicted by general relativity and confirmed by observation of the GW170817 neutron star merger, is equal to the speed of light (c).
Introduction
The speed of gravitational waves in the general theory of relativity is equal to the speed of light in a vacuum, c. Within the theory of special relativity, the constant c is not only about light; instead it is the highest possible speed for any interaction in nature. Formally, c is a conversion factor for changing the unit of time to the unit of space. This makes it the only speed which does not depend either on the motion of an observer or a source of light and/or gravity. Thus, the speed of "light" is also the speed of gravitational waves, and further the speed of any massless particle. Such particles include the gluon (carrier of the strong force), the photons that make up light (hence carrier of electromagnetic force), and the hypothetical gravitons (which are the presumptive field particles associated with gravity; however, an understanding of the graviton, if it exists, requires an as-yet unavailable theory of quantum gravity).
Static fields
The speed of physical changes in a gravitational or electromagnetic field should not be confused with "changes" in the behavior of static fields that are due to pure observer-effects. These changes in direction of a static field are, because of relativistic considerations, the same for an observer when a distant charge is moving, as when an observer (instead) decides to move with respect to a distant charge. Thus, constant motion of an observer with regard to a static charge and its extended static field (either a gravitational or electric field) does not change the field. For static fields, such as the electrostatic field connected with electric charge, or the gravitational field connected to a massive object, the field extends to infinity, and does not propagate. Motion of an observer does not cause the direction of such a field to change, and by symmetrical considerations, changing the observer frame so that the charge appears to be moving at a constant rate, also does not cause the direction of its field to change, but requires that it continues to "point" in the direction of the charge, at all distances from the charge.
The consequence of this is that static fields (either electric or gravitational) always point directly to the actual position of the bodies that they are connected to, without any delay that is due to any "signal" traveling (or propagating) from the charge, over a distance to an observer. This remains true if the charged bodies and their observers are made to "move" (or not), by simply changing reference frames. This fact sometimes causes confusion about the "speed" of such static fields, which sometimes appear to change infinitely quickly when the changes in the field are mere artifacts of the motion of the observer, or of observation.
In such cases, nothing actually changes infinitely quickly, save the point of view of an observer of the field. For example, when an observer begins to move with respect to a static field that already extends over light years, it appears as though "immediately" the entire field, along with its source, has begun moving at the speed of the observer. This, of course, includes the extended parts of the field. However, this "change" in the apparent behavior of the field source, along with its distant field, does not represent any sort of propagation that is faster than light.
Newtonian gravitation
Isaac Newton's formulation of a gravitational force law requires that each particle with mass respond instantaneously to every other particle with mass irrespective of the distance between them. In modern terms, Newtonian gravitation is described by the Poisson equation, according to which, when the mass distribution of a system changes, its gravitational field instantaneously adjusts. Therefore, the theory assumes the speed of gravity to be infinite. This assumption was adequate to account for all phenomena with the observational accuracy of that time. It was not until the 19th century that an anomaly in astronomical observations which could not be reconciled with the Newtonian gravitational model of instantaneous action was noted: the French astronomer Urbain Le Verrier determined in 1859 that the elliptical orbit of Mercury precesses at a significantly different rate from that predicted by Newtonian theory.
Laplace
The first attempt to combine a finite gravitational speed with Newton's theory was made by Laplace in 1805. Based on Newton's force law he considered a model in which the gravitational field is defined as a radiation field or fluid. Changes in the motion of the attracting body are transmitted by some sort of waves. Therefore, the movements of the celestial bodies should be modified in the order v/c, where v is the relative speed between the bodies and c is the speed of gravity. The effect of a finite speed of gravity goes to zero as c goes to infinity, but not as 1/c2 as it does in modern theories. This led Laplace to conclude that the speed of gravitational interactions is at least times the speed of light. This velocity was used by many in the 19th century to criticize any model based on a finite speed of gravity, like electrical or mechanical explanations of gravitation.
From a modern point of view, Laplace's analysis is incorrect. Not knowing about Lorentz invariance of static fields, Laplace assumed that when an object like the Earth is moving around the Sun, the attraction of the Earth would not be toward the instantaneous position of the Sun, but toward where the Sun had been if its position was retarded using the relative velocity (this retardation actually does happen with the optical position of the Sun, and is called annual solar aberration). Putting the Sun immobile at the origin, when the Earth is moving in an orbit of radius R with velocity v presuming that the gravitational influence moves with velocity c, moves the Sun's true position ahead of its optical position, by an amount equal to vR/c, which is the travel time of gravity from the sun to the Earth times the relative velocity of the sun and the Earth. As seen in Fig. 1, the pull of gravity (if it behaved like a wave, such as light) would then always be displaced in the direction of the Earth's velocity, so that the Earth would always be pulled toward the optical position of the Sun, rather than its actual position. This would cause a pull ahead of the Earth, which would cause the orbit of the Earth to spiral outward. Such an outspiral would be suppressed by an amount v/c compared to the force which keeps the Earth in orbit; and since the Earth's orbit is observed to be stable, Laplace's c must be very large. As is now known, it may be considered to be infinite in the limit of straight-line motion, since as a static influence it is instantaneous at distance when seen by observers at constant transverse velocity. For orbits in which velocity (direction of speed) changes slowly, it is almost infinite.
The attraction toward an object moving with a steady velocity is towards its instantaneous position with no delay, for both gravity and electric charge. In a field equation consistent with special relativity (i.e., a Lorentz invariant equation), the attraction between static charges moving with constant relative velocity is always toward the instantaneous position of the charge (in this case, the "gravitational charge" of the Sun), not the time-retarded position of the Sun. When an object is moving in orbit at a steady speed but changing velocity v, the effect on the orbit is of order v²/c², and the effect preserves energy and angular momentum, so that orbits do not decay.
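To attach numbers to Laplace's worry, the aberration angle for the Earth's orbital motion is roughly v/c, the same small angle that appears as the annual aberration of starlight. A back-of-the-envelope estimate, using the usual mean orbital speed:

```python
import math

c = 299_792_458.0     # speed of light, m/s
v_earth = 29_780.0    # mean orbital speed of the Earth, m/s

aberration_rad = v_earth / c                       # small-angle aberration, radians
aberration_arcsec = math.degrees(aberration_rad) * 3600

print(f"v/c = {aberration_rad:.2e} rad = {aberration_arcsec:.1f} arcseconds")
# v/c = 9.93e-05 rad = 20.5 arcseconds
```

If gravity aberred by this angle, the tangential component of the pull, of order v/c times the radial force, is what Laplace argued would destabilize the orbit; as described above, in a Lorentz-invariant theory the velocity-dependent terms cancel such effects up to order v²/c².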
Electrodynamical analogies
Early theories
At the end of the 19th century, many tried to combine Newton's force law with the established laws of electrodynamics, like those of Wilhelm Eduard Weber, Carl Friedrich Gauss, Bernhard Riemann and James Clerk Maxwell. Those theories are not invalidated by Laplace's critique, because although they are based on finite propagation speeds, they contain additional terms which maintain the stability of the planetary system. Those models were used to explain the perihelion advance of Mercury, but they could not provide exact values. One exception was Maurice Lévy in 1890, who succeeded in doing so by combining the laws of Weber and Riemann, whereby the speed of gravity is equal to the speed of light. However, those hypotheses were rejected.
However, a more important variation of those attempts was the theory of Paul Gerber, who in 1898 derived a formula for the perihelion advance identical to the one later derived by Einstein. Based on that formula, Gerber calculated a propagation speed for gravity of practically the speed of light. But Gerber's derivation of the formula was faulty, i.e., his conclusions did not follow from his premises, and therefore many (including Einstein) did not consider it to be a meaningful theoretical effort. Additionally, the value it predicted for the deflection of light in the gravitational field of the sun was too high by the factor 3/2.
Lorentz
In 1900, Hendrik Lorentz tried to explain gravity on the basis of his ether theory and the Maxwell equations. After proposing (and rejecting) a Le Sage type model, he assumed, like Ottaviano-Fabrizio Mossotti and Johann Karl Friedrich Zöllner, that the attraction of oppositely charged particles is stronger than the repulsion of equally charged particles. The resulting net force is exactly what is known as universal gravitation, in which the speed of gravity is that of light. This conflicts with the law of gravitation of Isaac Newton, in which it was shown by Pierre-Simon Laplace that a finite speed of gravity leads to some sort of aberration and therefore makes the orbits unstable. However, Lorentz showed that his theory is not affected by Laplace's critique, because due to the structure of the Maxwell equations only effects of order v²/c² arise. But Lorentz calculated that the value for the perihelion advance of Mercury was much too low. He wrote:
In 1908, Henri Poincaré examined the gravitational theory of Lorentz and classified it as compatible with the relativity principle, but (like Lorentz) he criticized the inaccurate indication of the perihelion advance of Mercury.
Lorentz covariant models
Henri Poincaré argued in 1904 that a propagation speed of gravity which is greater than c would contradict the concept of local time (based on synchronization by light signals) and the principle of relativity. He wrote:
However, in 1905 Poincaré calculated that changes in the gravitational field can propagate with the speed of light if it is presupposed that such a theory is based on the Lorentz transformation. He wrote:
Similar models were also proposed by Hermann Minkowski (1907) and Arnold Sommerfeld (1910). However, those attempts were quickly superseded by Einstein's theory of general relativity. Whitehead's theory of gravitation (1922) explains gravitational red shift, light bending, perihelion shift and Shapiro delay.
General relativity
Background
General relativity predicts that gravitational radiation should exist and propagate as a wave at lightspeed: a slowly evolving and weak gravitational field will produce, according to general relativity (GR), effects like those of Newtonian gravitation (it does not depend on the existence of gravitons, mentioned above, or any similar force-carrying particles).
Suddenly displacing one of two gravitoelectrically interacting particles would, after a delay corresponding to lightspeed, cause the other to feel the displaced particle's absence: accelerations due to the change in quadrupole moment of star systems, like the Hulse–Taylor binary, have removed much energy (radiated at a rate of almost 2% of our own Sun's power output) as gravitational waves, which would theoretically travel at the speed of light.
In GR, gravity is described by a rank-two tensor, which, in the weak gravity limit, can be described by the gravitoelectromagnetism approximation. In the following discussion, the diagonal components of the tensor are termed gravitoelectric components, and the other components are termed gravitomagnetic.
Two gravitoelectrically interacting particle ensembles, e.g., two planets or stars moving at constant velocity with respect to each other, each feel a force toward the instantaneous position of the other body without a speed-of-light delay because Lorentz invariance demands that what a moving body in a static field sees and what a moving body that emits that field sees be symmetrical.
Because a moving body sees no aberration in a static field emanating from a "motionless body", Lorentz invariance requires that, in the previously moving body's reference frame, the (now moving) emitting body's field lines must not be retarded or aberred at a distance. Moving charged bodies (including bodies that emit static gravitational fields) exhibit static field lines that do not bend with distance and show no speed-of-light delay effects, as seen from bodies moving relative to them.
In other words, since the gravitoelectric field is, by definition, static and continuous, it does not propagate. If such a source of a static field is accelerated (for example stopped) with regard to its formerly constant velocity frame, its distant field continues to be updated as though the charged body continued with constant velocity. This effect causes the distant fields of unaccelerated moving charges to appear to be "updated" instantly for their constant velocity motion, as seen from distant positions, in the frame where the source-object is moving at constant velocity. However, as discussed, this is an effect which can be removed at any time, by transitioning to a new reference frame in which the distant charged body is now at rest.
The static and continuous gravitoelectric component of a gravitational field is not a gravitomagnetic component (gravitational radiation); see Petrov classification. The gravitoelectric field is a static field and therefore cannot superluminally transmit quantized (discrete) information, i.e., it could not constitute a well-ordered series of impulses carrying a well-defined meaning (this is the same for gravity and electromagnetism).
Aberration of field direction in general relativity, for a weakly accelerated observer
The finite speed of gravitational interaction in general relativity does not lead to the sorts of problems with the aberration of gravity that Newton was originally concerned with, because there is no such aberration in static field effects. Because the acceleration of the Earth with regard to the Sun is small (meaning, to a good approximation, the two bodies can be regarded as traveling in straight lines past each other with unchanging velocity), the orbital results calculated by general relativity are the same as those of Newtonian gravity with instantaneous action at a distance, because they are modelled by the behavior of a static field with constant-velocity relative motion, and no aberration for the forces involved. Although the calculations are considerably more complicated, one can show that a static field in general relativity does not suffer from aberration problems as seen by an unaccelerated observer (or a weakly accelerated observer, such as the Earth). Analogously, the "static term" in the electromagnetic Liénard–Wiechert potential theory of the fields from a moving charge does not suffer from either aberration or positional-retardation. Only the term corresponding to acceleration and electromagnetic emission in the Liénard–Wiechert potential shows a direction toward the time-retarded position of the emitter.
It is in fact not very easy to construct a self-consistent gravity theory in which gravitational interaction propagates at a speed other than the speed of light, which complicates discussion of this possibility.
Formulaic conventions
In general relativity the metric tensor symbolizes the gravitational potential, and Christoffel symbols of the spacetime manifold symbolize the gravitational force field. The tidal gravitational field is associated with the curvature of spacetime.
Measurements
For the reader who desires a deeper background, a comprehensive review of the definition of the speed of gravity and its measurement with high-precision astrometric and other techniques appears in the textbook Relativistic Celestial Mechanics in the Solar System.
PSR 1913+16 orbital decay
The speed of gravity (more correctly, the speed of gravitational waves) can be calculated from observations of the orbital decay rate of binary pulsars PSR 1913+16 (the Hulse–Taylor binary system noted above) and PSR B1534+12. The orbits of these binary pulsars are decaying due to loss of energy in the form of gravitational radiation. The rate of this energy loss ("gravitational damping") can be measured, and since it depends on the speed of gravity, comparing the measured values to theory shows that the speed of gravity is equal to the speed of light to within 1%. However, according to PPN formalism setting, measuring the speed of gravity by comparing theoretical results with experimental results will depend on the theory; use of a theory other than that of general relativity could in principle show a different speed, although the existence of gravitational damping at all implies that the speed cannot be infinite.
Jovian occultation of QSO J0842+1835 (contested)
In September 2002, Sergei Kopeikin and Edward Fomalont announced that they had measured the speed of gravity indirectly, using their data from VLBI measurement of the retarded position of Jupiter on its orbit during Jupiter's transit across the line-of-sight of the bright radio source quasar QSO J0842+1835. Kopeikin and Fomalont concluded that the speed of gravity is between 0.8 and 1.2 times the speed of light, which would be fully consistent with the theoretical prediction of general relativity that the speed of gravity is exactly the same as the speed of light.
Several physicists, including Clifford M. Will and Steve Carlip, have criticized these claims on the grounds that they have allegedly misinterpreted the results of their measurements. Notably, prior to the actual transit, Hideki Asada in a paper to the Astrophysical Journal Letters theorized that the proposed experiment was essentially a roundabout confirmation of the speed of light instead of the speed of gravity.
It is important to keep in mind that none of the debaters in this controversy are claiming that general relativity is "wrong". Rather, the debated issue is whether or not Kopeikin and Fomalont have really provided yet another verification of one of its fundamental predictions.
Kopeikin and Fomalont, however, continue to argue their case vigorously and to defend the means by which they presented their result at a press conference of the American Astronomical Society (AAS), offered after the results of the Jovian experiment had been peer-reviewed by the experts of the AAS scientific organizing committee. In a later publication, which uses a bi-metric formalism that splits the space-time null cone in two — one for gravity and another one for light — Kopeikin and Fomalont claimed that Asada's argument was theoretically unsound. The two null cones overlap in general relativity, which makes tracking the speed-of-gravity effects difficult and requires a special mathematical technique of gravitational retarded potentials, which was worked out by Kopeikin and co-authors but was never properly employed by Asada or the other critics.
Stuart Samuel also showed that the experiment did not actually measure the speed of gravity because the effects were too small to have been measured. A response by Kopeikin and Fomalont challenges this opinion.
GW170817 and the demise of two neutron stars
The detection of GW170817 in 2017, the finale of a neutron star inspiral observed through both gravitational waves and gamma rays, at a distance of 130 million light years, currently provides by far the best limit on the difference between the speed of light and that of gravity. Photons were detected 1.7 seconds after peak gravitational-wave emission; assuming a delay of zero to 10 seconds, the fractional difference between the speeds of gravitational and electromagnetic waves is constrained to at most a few parts in 10¹⁵ of the speed of light.
This also excluded some alternatives to general relativity, including variants of scalar–tensor theory, instances of Horndeski's theory, and Hořava–Lifshitz gravity.
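The order of magnitude of this bound follows directly from the numbers quoted above: an arrival-time difference of at most about 10 seconds accumulated over a travel time of roughly 130 million years. The sketch below reproduces only that rough scaling, not the published analysis.

```python
SECONDS_PER_YEAR = 3.156e7                  # approximate
travel_time_s = 130e6 * SECONDS_PER_YEAR    # light travel time over ~130 million light years
allowed_delay_s = 10.0                      # assumed upper bound on the offset between signals

fractional_speed_difference = allowed_delay_s / travel_time_s
print(f"|v_gw - c| / c  <  {fractional_speed_difference:.1e}")
# |v_gw - c| / c  <  2.4e-15
```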
Notes
References
Further reading
External links
Does Gravity Travel at the Speed of Light? in The Physics FAQ.
Measuring the Speed of Gravity at MathPages
Hazel Muir, First speed of gravity measurement revealed, a New Scientist article on Kopeikin's original announcement.
Clifford M. Will, Has the Speed of Gravity Been Measured?.
Kevin Carlson, MU physicist defends Einstein's theory and 'speed of gravity' measurement.
Effects of gravity
History of physics
Displacement current
In electromagnetism, displacement current density is the quantity appearing in Maxwell's equations that is defined in terms of the rate of change of D, the electric displacement field. Displacement current density has the same units as electric current density, and it is a source of the magnetic field just as actual current is. However it is not an electric current of moving charges, but a time-varying electric field. In physical materials (as opposed to vacuum), there is also a contribution from the slight motion of charges bound in atoms, called dielectric polarization.
The idea was conceived by James Clerk Maxwell in his 1861 paper On Physical Lines of Force, Part III in connection with the displacement of electric particles in a dielectric medium. Maxwell added displacement current to the electric current term in Ampère's circuital law. In his 1865 paper A Dynamical Theory of the Electromagnetic Field Maxwell used this amended version of Ampère's circuital law to derive the electromagnetic wave equation. This derivation is now generally accepted as a historical landmark in physics by virtue of uniting electricity, magnetism and optics into one single unified theory. The displacement current term is now seen as a crucial addition that completed Maxwell's equations and is necessary to explain many phenomena, most particularly the existence of electromagnetic waves.
Explanation
The electric displacement field D is defined as
D = ε₀E + P
where:
ε₀ is the permittivity of free space;
E is the electric field intensity; and
P is the polarization of the medium.
Differentiating this equation with respect to time defines the displacement current density, which therefore has two components in a dielectric (see also the "displacement current" section of the article "current density"):
J_D = ∂D/∂t = ε₀ ∂E/∂t + ∂P/∂t
The first term on the right hand side is present in material media and in free space. It doesn't necessarily come from any actual movement of charge, but it does have an associated magnetic field, just as a current does due to charge motion. Some authors apply the name displacement current to the first term by itself.
The second term on the right hand side, called polarization current density, comes from the change in polarization of the individual molecules of the dielectric material. Polarization results when, under the influence of an applied electric field, the charges in molecules have moved from a position of exact cancellation. The positive and negative charges in molecules separate, causing an increase in the state of polarization . A changing state of polarization corresponds to charge movement and so is equivalent to a current, hence the term "polarization current". Thus,
This polarization is the displacement current as it was originally conceived by Maxwell. Maxwell made no special treatment of the vacuum, treating it as a material medium. For Maxwell, the effect of the polarization P was simply to change the relative permittivity ε_r in the relation D = ε_r ε₀ E.
The modern justification of displacement current is explained below.
Isotropic dielectric case
In the case of a very simple dielectric material the constitutive relation holds:
D = ε E
where the permittivity ε is the product of:
ε₀, the permittivity of free space, or the electric constant; and
ε_r, the relative permittivity of the dielectric.
In the equation above, the use of ε_r accounts for the polarization (if any) of the dielectric material.
The scalar value of displacement current may also be expressed in terms of electric flux:
I_D = ε dΦ_E/dt
The forms in terms of the scalar ε are correct only for linear isotropic materials. For linear non-isotropic materials, ε becomes a matrix; even more generally, ε may be replaced by a tensor, which may depend upon the electric field itself, or may exhibit frequency dependence (hence dispersion).
For a linear isotropic dielectric, the polarization is given by:
P = χ_e ε₀ E
where χ_e is known as the susceptibility of the dielectric to electric fields. Note that ε = ε_r ε₀ = (1 + χ_e) ε₀.
Necessity
Some implications of the displacement current follow, which agree with experimental observation, and with the requirements of logical consistency for the theory of electromagnetism.
Generalizing Ampère's circuital law
Current in capacitors
An example illustrating the need for the displacement current arises in connection with capacitors with no medium between the plates. Consider the charging capacitor in the figure. The capacitor is in a circuit that causes equal and opposite charges to appear on the left plate and the right plate, charging the capacitor and increasing the electric field between its plates. No actual charge is transported through the vacuum between its plates. Nonetheless, a magnetic field exists between the plates as though a current were present there as well. One explanation is that a displacement current I_D "flows" in the vacuum, and this current produces the magnetic field in the region between the plates according to Ampère's law:
∮_C B · dl = μ₀ I_D
where
∮_C is the closed line integral around some closed curve C;
B is the magnetic field measured in teslas;
· is the vector dot product;
dl is an infinitesimal vector line element along the curve C, that is, a vector with magnitude equal to the length element of C, and direction given by the tangent to the curve C;
μ₀ is the magnetic constant, also called the permeability of free space; and
I_D is the net displacement current that passes through a small surface bounded by the curve C.
The magnetic field between the plates is the same as that outside the plates, so the displacement current must be the same as the conduction current in the wires, that is, I_D = I,
which extends the notion of current beyond a mere transport of charge.
Next, this displacement current is related to the charging of the capacitor. Consider the current in the imaginary cylindrical surface shown surrounding the left plate. A current, say , passes outward through the left surface of the cylinder, but no conduction current (no transport of real charges) crosses the right surface . Notice that the electric field between the plates increases as the capacitor charges. That is, in a manner described by Gauss's law, assuming no dielectric between the plates:
where refers to the imaginary cylindrical surface. Assuming a parallel plate capacitor with uniform electric field, and neglecting fringing effects around the edges of the plates, according to charge conservation equation
where the first term has a negative sign because charge leaves surface (the charge is decreasing), the last term has a positive sign because unit vector of surface is from left to right while the direction of electric field is from right to left, is the area of the surface . The electric field at surface is zero because surface is in the outside of the capacitor. Under the assumption of a uniform electric field distribution inside the capacitor, the displacement current density D is found by dividing by the area of the surface:
where is the current leaving the cylindrical surface (which must equal D) and D is the flow of charge per unit area into the cylindrical surface through the face .
Combining these results, the magnetic field is found using the integral form of Ampère's law with an arbitrary choice of contour provided the displacement current density term is added to the conduction current density (the Ampère-Maxwell equation):
This equation says that the integral of the magnetic field around the edge of a surface is equal to the integrated current through any surface with the same edge, plus the displacement current term through whichever surface.
As depicted in the figure to the right, the current crossing surface is entirely conduction current. Applying the Ampère-Maxwell equation to surface yields:
However, the current crossing surface is entirely displacement current. Applying this law to surface , which is bounded by exactly the same curve , but lies between the plates, produces:
Any surface that intersects the wire has current passing through it so Ampère's law gives the correct magnetic field. However a second surface bounded by the same edge could be drawn passing between the capacitor plates, therefore having no current passing through it. Without the displacement current term Ampere's law would give zero magnetic field for this surface. Therefore, without the displacement current term Ampere's law gives inconsistent results, the magnetic field would depend on the surface chosen for integration. Thus the displacement current term is necessary as a second source term which gives the correct magnetic field when the surface of integration passes between the capacitor plates. Because the current is increasing the charge on the capacitor's plates, the electric field between the plates is increasing, and the rate of change of electric field gives the correct value for the field found above.
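As a numerical check on this argument, the sketch below takes a hypothetical parallel-plate capacitor charged by a constant conduction current and confirms that the displacement current ε₀A·dE/dt through a surface between the plates equals the current in the wires; the plate area and current are assumed values.

```python
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

I = 2.0e-3    # conduction current charging the capacitor, A (assumed)
A = 1.0e-2    # plate area, m^2 (assumed)

# Between the plates E = Q / (eps0 * A), so dE/dt = I / (eps0 * A).
dE_dt = I / (EPS0 * A)

# Displacement current through a surface lying between the plates:
I_displacement = EPS0 * A * dE_dt

print(f"dE/dt = {dE_dt:.3e} V/(m*s)")
print(f"displacement current = {I_displacement:.3e} A (equals the wire current {I:.3e} A)")
```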
Mathematical formulation
In a more mathematical vein, the same results can be obtained from the underlying differential equations. Consider for simplicity a non-magnetic medium where the relative magnetic permeability is unity, and the complication of magnetization current (bound current) is absent, so that the magnetization vanishes and the only current is the free current.
The current leaving a volume must equal the rate of decrease of charge in a volume. In differential form this continuity equation becomes:
where the left side is the divergence of the free current density and the right side is the rate of decrease of the free charge density. However, Ampère's law in its original form states:
which implies that the divergence of the current term vanishes, contradicting the continuity equation. (Vanishing of the divergence is a result of the mathematical identity that states the divergence of a curl is always zero.) This conflict is removed by addition of the displacement current, as then:
and
which is in agreement with the continuity equation because of Gauss's law:
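The equations elided from the passage above are presumably the standard ones; for reference, a minimal sketch of the argument in modern notation (SI units, free charges and currents, non-magnetic medium):

```latex
\begin{aligned}
&\nabla \cdot \mathbf{J} = -\frac{\partial \rho}{\partial t}
  &&\text{(continuity of free charge)}\\
&\nabla \times \mathbf{B} = \mu_0 \mathbf{J}
  &&\text{(Amp\`ere's law, original form; forces } \nabla \cdot \mathbf{J} = 0\text{)}\\
&\nabla \times \mathbf{B} = \mu_0\!\left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right)
  &&\text{(with the displacement current added)}\\
&0 = \nabla \cdot (\nabla \times \mathbf{B})
   = \mu_0 \nabla \cdot \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial}{\partial t}(\nabla \cdot \mathbf{E}),
  \qquad \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}
  &&\text{(consistent, by Gauss's law)}
\end{aligned}
```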
Wave propagation
The added displacement current also leads to wave propagation by taking the curl of the equation for magnetic field.
Substituting this form for into Ampère's law, and assuming there is no bound or free current density contributing to :
with the result:
However,
leading to the wave equation:
where use is made of the vector identity that holds for any vector field :
and the fact that the divergence of the magnetic field is zero. An identical wave equation can be found for the electric field by taking the curl:
If , , and are zero, the result is:
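For reference, and assuming the elided formulas are the usual ones, the standard form of the vacuum result sketched above is:

```latex
\begin{gathered}
\nabla \times (\nabla \times \mathbf{B}) = \nabla(\nabla \cdot \mathbf{B}) - \nabla^2 \mathbf{B} = -\nabla^2 \mathbf{B},\\
\nabla^2 \mathbf{B} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{B}}{\partial t^2},
\qquad
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2},
\qquad
c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}} \approx 3.00 \times 10^{8}\ \mathrm{m/s}.
\end{gathered}
```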
The electric field can be expressed in the general form:
E = −∇φ − ∂A/∂t
where φ is the electric potential (which can be chosen to satisfy Poisson's equation) and A is a vector potential (i.e. the magnetic vector potential, not to be confused with surface area, as A is denoted elsewhere). The −∇φ component on the right hand side is the Gauss's law component, and this is the component that is relevant to the conservation of charge argument above. The second term on the right-hand side, −∂A/∂t, is the one relevant to the electromagnetic wave equation, because it is the term that contributes to the curl of E. Because of the vector identity that says the curl of a gradient is zero, −∇φ does not contribute to ∇ × E.
History and interpretation
Maxwell's displacement current was postulated in part III of his 1861 paper On Physical Lines of Force. Few topics in modern physics have caused as much confusion and misunderstanding as that of displacement current. This is in part due to the fact that Maxwell used a sea of molecular vortices in his derivation, while modern textbooks operate on the basis that displacement current can exist in free space. Maxwell's derivation is unrelated to the modern day derivation for displacement current in the vacuum, which is based on consistency between Ampère's circuital law for the magnetic field and the continuity equation for electric charge.
Maxwell's purpose is stated by him at (Part I, p. 161):
He is careful to point out the treatment is one of analogy:
In part III, in relation to displacement current, he says
Maxwell was evidently driving at magnetization even though the same introduction clearly talks about dielectric polarization.
Maxwell compared the speed of electricity measured by Wilhelm Eduard Weber and Rudolf Kohlrausch (193,088 miles/second) and the speed of light determined by the Fizeau experiment (195,647 miles/second). Based on their same speed, he concluded that "light consists of transverse undulations in the same medium that is the cause of electric and magnetic phenomena."
But although the above quotations point towards a magnetic explanation for displacement current, for example, based upon the divergence of the above curl equation, Maxwell's explanation ultimately stressed linear polarization of dielectrics:
With some change of symbols (and units), combined with the results deduced in the preceding section, these equations take the familiar form for the displacement current given above.
When it came to deriving the electromagnetic wave equation from displacement current in his 1865 paper 'A Dynamical Theory of the Electromagnetic Field', he got around the problem of the non-zero divergence associated with Gauss's law and dielectric displacement by eliminating the Gauss term and deriving the wave equation exclusively for the solenoidal magnetic field vector.
Maxwell's emphasis on polarization diverted attention towards the electric capacitor circuit, and led to the common belief that Maxwell conceived of displacement current so as to maintain conservation of charge in an electric capacitor circuit. There are a variety of debatable notions about Maxwell's thinking, ranging from his supposed desire to perfect the symmetry of the field equations to the desire to achieve compatibility with the continuity equation.
See also
Electromagnetic wave equation
Ampère's circuital law
Capacitance
References
Maxwell's papers
On Faraday's Lines of Force, Maxwell's paper of 1855
On Physical Lines of Force, Maxwell's paper of 1861
A Dynamical Theory of the Electromagnetic Field, Maxwell's paper of 1864
Further reading
AM Bork Maxwell, Displacement Current, and Symmetry (1963)
AM Bork Maxwell and the Electromagnetic Wave Equation (1967)
External links
Electric current
Electricity concepts
Electrodynamics
Electromagnetism
Atomic physics
Atomic physics is the field of physics that studies atoms as an isolated system of electrons and an atomic nucleus. Atomic physics typically refers to the study of atomic structure and the interaction between atoms. It is primarily concerned with the way in which electrons are arranged around the nucleus and
the processes by which these arrangements change. This comprises ions as well as neutral atoms; unless otherwise stated, the term atom can be assumed to include ions.
The term atomic physics can be associated with nuclear power and nuclear weapons, due to the synonymous use of atomic and nuclear in standard English. Physicists distinguish between atomic physics—which deals with the atom as a system consisting of a nucleus and electrons—and nuclear physics, which studies nuclear reactions and special properties of atomic nuclei.
As with many scientific fields, strict delineation can be highly contrived and atomic physics is often considered in the wider context of atomic, molecular, and optical physics. Physics research groups are usually so classified.
Isolated atoms
Atomic physics primarily considers atoms in isolation. Atomic models will consist of a single nucleus that may be surrounded by one or more bound electrons. It is not concerned with the formation of molecules (although much of the physics is identical), nor does it examine atoms in a solid state as condensed matter. It is concerned with processes such as ionization and excitation by photons or collisions with atomic particles.
While modelling atoms in isolation may not seem realistic, if one considers atoms in a gas or plasma then the time-scales for atom-atom interactions are huge in comparison to the atomic processes that are generally considered. This means that the individual atoms can be treated as if each were in isolation, as the vast majority of the time they are. By this consideration, atomic physics provides the underlying theory in plasma physics and atmospheric physics, even though both deal with very large numbers of atoms.
Electronic configuration
Electrons form notional shells around the nucleus. These are normally in a ground state but can be excited by the absorption of energy from light (photons), magnetic fields, or interaction with a colliding particle (typically ions or other electrons).
Electrons that populate a shell are said to be in a bound state. The energy necessary to remove an electron from its shell (taking it to infinity) is called the binding energy. Any quantity of energy absorbed by the electron in excess of this amount is converted to kinetic energy according to the conservation of energy. The atom is said to have undergone the process of ionization.
If the electron absorbs a quantity of energy less than the binding energy, it will be transferred to an excited state. After a certain time, the electron in an excited state will "jump" (undergo a transition) to a lower state. In a neutral atom, the system will emit a photon of the difference in energy, since energy is conserved.
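As a concrete illustration of such a transition, the sketch below uses the Bohr (Rydberg) energy levels of hydrogen to estimate the photon emitted when an electron drops from the n = 2 shell to the n = 1 shell; the constants are rounded and hydrogen is chosen only as the simplest example.

```python
RYDBERG_EV = 13.606      # hydrogen ground-state binding energy, eV (rounded)
H_PLANCK = 4.1357e-15    # Planck constant, eV*s
C_LIGHT = 2.9979e8       # speed of light, m/s

def level_energy(n):
    """Bohr-model energy of hydrogen level n, in eV (negative = bound)."""
    return -RYDBERG_EV / n**2

n_upper, n_lower = 2, 1
photon_ev = level_energy(n_upper) - level_energy(n_lower)    # energy carried off by the photon
wavelength_nm = H_PLANCK * C_LIGHT / photon_ev * 1e9

print(f"photon energy = {photon_ev:.2f} eV, wavelength = {wavelength_nm:.0f} nm")
# photon energy = 10.20 eV, wavelength = 122 nm (ultraviolet, the Lyman-alpha line)
```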
If an inner electron has absorbed more than the binding energy (so that the atom ionizes), then a more outer electron may undergo a transition to fill the inner orbital. In this case, a visible photon or a characteristic X-ray is emitted, or a phenomenon known as the Auger effect may take place, where the released energy is transferred to another bound electron, causing it to go into the continuum. The Auger effect allows one to multiply ionize an atom with a single photon.
There are rather strict selection rules as to the electronic configurations that can be reached by excitation by light — however, there are no such rules for excitation by collision processes.
History and developments
One of the earliest steps towards atomic physics was the recognition that matter was composed of atoms. It forms a part of the texts written in the 6th century BC to the 2nd century BC, such as those of Democritus and other early atomists. This theory was later developed in the modern sense of the basic unit of a chemical element by the British chemist and physicist John Dalton in the early 19th century. At this stage, it wasn't clear what atoms were, although they could be described and classified by their properties (in bulk). The invention of the periodic system of elements by Dmitri Mendeleev was another great step forward.
The true beginning of atomic physics is marked by the discovery of spectral lines and attempts to describe the phenomenon, most notably by Joseph von Fraunhofer. The study of these lines led to the Bohr atom model and to the birth of quantum mechanics. In seeking to explain atomic spectra, an entirely new mathematical model of matter was revealed. As far as atoms and their electron shells were concerned, not only did this yield a better overall description, i.e. the atomic orbital model, but it also provided a new theoretical basis for chemistry
(quantum chemistry) and spectroscopy.
Since the Second World War, both theoretical and experimental fields have advanced at a rapid pace. This can be attributed to progress in computing technology, which has allowed larger and more sophisticated models of atomic structure and associated collision processes. Similar technological advances in accelerators, detectors, magnetic field generation and lasers have greatly assisted experimental work.
Significant atomic physicists
See also
Particle physics
Isomeric shift
Atomism
Bibliography
References
External links
MIT-Harvard Center for Ultracold Atoms
Stanford QFARM Initiative for Quantum Science & Engineering
Joint Quantum Institute at University of Maryland and NIST
Atomic Physics on the Internet
JILA (Atomic Physics)
ORNL Physics Division
Atomic, molecular, and optical physics
Statics
Statics is the branch of classical mechanics that is concerned with the analysis of force and torque acting on a physical system that does not experience an acceleration, but rather is in equilibrium with its environment.
If F is the total of the forces acting on the system, m is the mass of the system and a is the acceleration of the system, Newton's second law states that F = ma (the bold font indicates a vector quantity, i.e. one with both magnitude and direction). If F = 0, then a = 0. Since for a system in static equilibrium the acceleration equals zero, the system is either at rest, or its center of mass moves at constant velocity.
The application of the assumption of zero acceleration to the summation of moments acting on the system leads to M = Iα = 0, where M is the summation of all moments acting on the system, I is the moment of inertia of the mass and α is the angular acceleration of the system. For a system where α = 0, it is also true that M = 0.
Together, the equations F = ma = 0 (the 'first condition for equilibrium') and M = Iα = 0 (the 'second condition for equilibrium') can be used to solve for unknown quantities acting on the system.
History
Archimedes (c. 287–c. 212 BC) did pioneering work in statics.
Later developments in the field of statics are found in the works of Thebit (Thabit ibn Qurra).
Background
Force
Force is the action of one body on another. A force is either a push or a pull, and it tends to move a body in the direction of its action. The action of a force is characterized by its magnitude, by the direction of its action, and by its point of application (or point of contact). Thus, force is a vector quantity, because its effect depends on the direction as well as on the magnitude of the action.
Forces are classified as either contact or body forces. A contact force is produced by direct physical contact; an example is the force exerted on a body by a supporting surface. A body force is generated by virtue of the position of a body within a force field such as a gravitational, electric, or magnetic field and is independent of contact with any other body; an example of a body force is the weight of a body in the Earth's gravitational field.
Moment of a force
In addition to the tendency to move a body in the direction of its application, a force can also tend to rotate a body about an axis. The axis may be any line which neither intersects nor is parallel to the line of action of the force. This rotational tendency is known as moment of force (M). Moment is also referred to as torque.
Moment about a point
The magnitude of the moment of a force about a point O is equal to the perpendicular distance from O to the line of action of F, multiplied by the magnitude of the force: M_O = F d, where
F = the force applied
d = the perpendicular distance from the axis to the line of action of the force. This perpendicular distance is called the moment arm.
The direction of the moment is given by the right hand rule, where counter clockwise (CCW) is out of the page, and clockwise (CW) is into the page. The moment direction may be accounted for by using a stated sign convention, such as a plus sign (+) for counterclockwise moments and a minus sign (−) for clockwise moments, or vice versa. Moments can be added together as vectors.
In vector format, the moment can be defined as the cross product between the radius vector, r (the vector from point O to the line of action), and the force vector, F: M_O = r × F
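A small numerical sketch of this cross-product definition, with made-up values for the position and force vectors:

```python
import numpy as np

r = np.array([0.5, 0.2, 0.0])    # m, vector from O to a point on the line of action (assumed)
F = np.array([0.0, 100.0, 0.0])  # N, applied force (assumed)

M = np.cross(r, F)               # moment about O, N*m
print(M)   # [ 0.  0. 50.] -> 50 N*m about the +z axis, i.e. counterclockwise in the x-y plane
```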
Varignon's theorem
Varignon's theorem states that the moment of a force about any point is equal to the sum of the moments of the components of the force about the same point.
Equilibrium equations
The static equilibrium of a particle is an important concept in statics. A particle is in equilibrium only if the resultant of all forces acting on the particle is equal to zero. In a rectangular coordinate system the equilibrium equations can be represented by three scalar equations, where the sums of forces in all three directions are equal to zero. An engineering application of this concept is determining the tensions of up to three cables under load, for example the forces exerted on each cable of a hoist lifting an object or of guy wires restraining a hot air balloon to the ground.
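A minimal sketch of that kind of calculation for a particle in two dimensions: a weight hangs from two cables at assumed angles, and the two scalar equilibrium equations are solved as a linear system. The weight and angles are arbitrary example values.

```python
import numpy as np

W = 500.0                                              # weight of the suspended object, N (assumed)
theta1, theta2 = np.radians(30.0), np.radians(45.0)    # cable angles above horizontal (assumed)

# Unknowns: tensions T1 (cable running up and to the left) and T2 (up and to the right).
# Sum of forces in x and in y must each be zero:
A = np.array([[-np.cos(theta1), np.cos(theta2)],
              [ np.sin(theta1), np.sin(theta2)]])
b = np.array([0.0, W])

T1, T2 = np.linalg.solve(A, b)
print(f"T1 = {T1:.0f} N, T2 = {T2:.0f} N")
# T1 = 366 N, T2 = 448 N
```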
Moment of inertia
In classical mechanics, moment of inertia, also called mass moment, rotational inertia, polar moment of inertia of mass, or the angular mass, (SI units kg·m²) is a measure of an object's resistance to changes to its rotation. It is the inertia of a rotating body with respect to its rotation. The moment of inertia plays much the same role in rotational dynamics as mass does in linear dynamics, describing the relationship between angular momentum and angular velocity, torque and angular acceleration, and several other quantities. The symbols I and J are usually used to refer to the moment of inertia or polar moment of inertia.
While a simple scalar treatment of the moment of inertia suffices for many situations, a more advanced tensor treatment allows the analysis of such complicated systems as spinning tops and gyroscopic motion.
The concept was introduced by Leonhard Euler in his 1765 book Theoria motus corporum solidorum seu rigidorum; he discussed the moment of inertia and many related concepts, such as the principal axis of inertia.
Applications
Solids
Statics is used in the analysis of structures, for instance in architectural and structural engineering. Strength of materials is a related field of mechanics that relies heavily on the application of static equilibrium. A key concept is the center of gravity of a body at rest: it represents an imaginary point at which all the mass of a body resides. The position of the point relative to the foundations on which a body lies determines its stability in response to external forces. If the center of gravity exists outside the foundations, then the body is unstable because there is a torque acting: any small disturbance will cause the body to fall or topple. If the center of gravity exists within the foundations, the body is stable since no net torque acts on the body. If the center of gravity coincides with the foundations, then the body is said to be metastable.
Fluids
Hydrostatics, also known as fluid statics, is the study of fluids at rest (i.e. in static equilibrium). The characteristic of any fluid at rest is that the force exerted on any particle of the fluid is the same at all points at the same depth (or altitude) within the fluid. If the net force is greater than zero the fluid will move in the direction of the resulting force. This concept was first formulated in a slightly extended form by French mathematician and philosopher Blaise Pascal in 1647 and became known as Pascal's Law. It has many important applications in hydraulics. Archimedes, Abū Rayhān al-Bīrūnī, Al-Khazini and Galileo Galilei were also major figures in the development of hydrostatics.
See also
Cremona diagram
Dynamics
Solid mechanics
Notes
References
External links
Conservative force
In physics, a conservative force is a force with the property that the total work done by the force in moving a particle between two points is independent of the path taken. Equivalently, if a particle travels in a closed loop, the total work done (the sum of the force acting along the path multiplied by the displacement) by a conservative force is zero.
A conservative force depends only on the position of the object. If a force is conservative, it is possible to assign a numerical value for the potential at any point and conversely, when an object moves from one location to another, the force changes the potential energy of the object by an amount that does not depend on the path taken, contributing to the mechanical energy and the overall conservation of energy. If the force is not conservative, then defining a scalar potential is not possible, because taking different paths would lead to conflicting potential differences between the start and end points.
Gravitational force is an example of a conservative force, while frictional force is an example of a non-conservative force.
Other examples of conservative forces are: force in elastic spring, electrostatic force between two electric charges, and magnetic force between two magnetic poles. The last two forces are called central forces as they act along the line joining the centres of two charged/magnetized bodies. A central force is conservative if and only if it is spherically symmetric.
For conservative forces,
F = −∇U(x)
where F is the conservative force, U is the potential energy, and x is the position.
Informal definition
Informally, a conservative force can be thought of as a force that conserves mechanical energy. Suppose a particle starts at point A, and there is a force F acting on it. Then the particle is moved around by other forces, and eventually ends up at A again. Though the particle may still be moving, at that instant when it passes point A again, it has traveled a closed path. If the net work done by F at this point is 0, then F passes the closed path test. Any force that passes the closed path test for all possible closed paths is classified as a conservative force.
The gravitational force, spring force, magnetic force (according to some definitions, see below) and electric force (at least in a time-independent magnetic field, see Faraday's law of induction for details) are examples of conservative forces, while friction and air drag are classical examples of non-conservative forces.
For non-conservative forces, the mechanical energy that is lost (not conserved) has to go somewhere else, by conservation of energy. Usually the energy is turned into heat, for example the heat generated by friction. In addition to heat, friction also often produces some sound energy. The water drag on a moving boat converts the boat's mechanical energy into not only heat and sound energy, but also wave energy at the edges of its wake. These and other energy losses are irreversible because of the second law of thermodynamics.
Path independence
A direct consequence of the closed path test is that the work done by a conservative force on a particle moving between any two points does not depend on the path taken by the particle.
This is illustrated in the figure to the right: The work done by the gravitational force on an object depends only on its change in height because the gravitational force is conservative. The work done by a conservative force is equal to the negative of change in potential energy during that process. For a proof, imagine two paths 1 and 2, both going from point A to point B. The variation of energy for the particle, taking path 1 from A to B and then path 2 backwards from B to A, is 0; thus, the work is the same in path 1 and 2, i.e., the work is independent of the path followed, as long as it goes from A to B.
For example, if a child slides down a frictionless slide, the work done by the gravitational force on the child from the start of the slide to the end is independent of the shape of the slide; it only depends on the vertical displacement of the child.
Mathematical description
A force field F, defined everywhere in space (or within a simply-connected volume of space), is called a conservative force or conservative vector field if it meets any of these three equivalent conditions:
The curl of F is the zero vector: ∇ × F = 0, where in two dimensions this reduces to: ∂F_y/∂x − ∂F_x/∂y = 0.
There is zero net work (W) done by the force when moving a particle through a trajectory that starts and ends in the same place: W = ∮_C F · dr = 0.
The force can be written as the negative gradient of a potential, Φ: F = −∇Φ.
The term conservative force comes from the fact that when a conservative force exists, it conserves mechanical energy. The most familiar conservative forces are gravity, the electric force (in a time-independent magnetic field, see Faraday's law), and spring force.
Many forces (particularly those that depend on velocity) are not force fields. In these cases, the above three conditions are not mathematically equivalent. For example, the magnetic force satisfies condition 2 (since the work done by a magnetic field on a charged particle is always zero), but does not satisfy condition 3, and condition 1 is not even defined (the force is not a vector field, so one cannot evaluate its curl). Accordingly, some authors classify the magnetic force as conservative, while others do not. The magnetic force is an unusual case; most velocity-dependent forces, such as friction, do not satisfy any of the three conditions, and therefore are unambiguously nonconservative.
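A numerical sketch of the closed-path and path-independence tests: the work done by a uniform gravitational force is integrated along two different paths between the same endpoints and comes out identical, whereas a simple friction-like force (constant magnitude, always opposing the motion) gives path-dependent work. The paths, mass, and friction magnitude are arbitrary example values.

```python
import numpy as np

def work(force_fn, path):
    """Approximate the line integral of a force along a polyline path."""
    total = 0.0
    for p0, p1 in zip(path[:-1], path[1:]):
        step = p1 - p0
        total += np.dot(force_fn(0.5 * (p0 + p1), step), step)
    return total

m, g = 2.0, 9.81
gravity = lambda pos, step: np.array([0.0, -m * g])                  # conservative
friction = lambda pos, step: -5.0 * step / np.linalg.norm(step)      # opposes motion; non-conservative

A, B, corner = np.array([0.0, 0.0]), np.array([3.0, 4.0]), np.array([3.0, 0.0])
straight = np.linspace(A, B, 200)                                    # direct path A -> B
dogleg = np.vstack([np.linspace(A, corner, 100), np.linspace(corner, B, 100)[1:]])

for name, f in [("gravity", gravity), ("friction", friction)]:
    print(name, round(work(f, straight), 2), round(work(f, dogleg), 2))
# gravity  -78.48 -78.48   (independent of the path)
# friction -25.0  -35.0    (proportional to the path length)
```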
Non-conservative force
Despite conservation of total energy, non-conservative forces can arise in classical physics due to neglected degrees of freedom or from time-dependent potentials. Many non-conservative forces may be perceived as macroscopic effects of small-scale conservative forces. For instance, friction may be treated without violating conservation of energy by considering the motion of individual molecules; however, that means every molecule's motion must be considered rather than handling it through statistical methods. For macroscopic systems the non-conservative approximation is far easier to deal with than millions of degrees of freedom.
Examples of non-conservative forces are friction and non-elastic material stress. Friction has the effect of transferring some of the energy from the large-scale motion of the bodies to small-scale movements in their interior, and therefore appear non-conservative on a large scale. General relativity is non-conservative, as seen in the anomalous precession of Mercury's orbit. However, general relativity does conserve a stress–energy–momentum pseudotensor.
See also
Conservative vector field
Conservative system
References
Force
Applied physics
Applied physics is the application of physics to solve scientific or engineering problems. It is usually considered a bridge or a connection between physics and engineering.
"Applied" is distinguished from "pure" by a subtle combination of factors, such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems and with the application of physics in other areas of science and high technology.
Examples of research and development areas
Accelerator physics
Acoustics
Atmospheric physics
Biophysics
Brain–computer interfacing
Chemistry
Chemical physics
Differentiable programming
Artificial intelligence
Scientific computing
Engineering physics
Chemical engineering
Electrical engineering
Electronics
Sensors
Transistors
Materials science and engineering
Metamaterials
Nanotechnology
Semiconductors
Thin films
Mechanical engineering
Aerospace engineering
Astrodynamics
Electromagnetic propulsion
Fluid mechanics
Military engineering
Lidar
Radar
Sonar
Stealth technology
Nuclear engineering
Fission reactors
Fusion reactors
Optical engineering
Photonics
Cavity optomechanics
Lasers
Photonic crystals
Geophysics
Materials physics
Medical physics
Health physics
Radiation dosimetry
Medical imaging
Magnetic resonance imaging
Radiation therapy
Microscopy
Scanning probe microscopy
Atomic force microscopy
Scanning tunneling microscopy
Scanning electron microscopy
Transmission electron microscopy
Nuclear physics
Fission
Fusion
Optical physics
Nonlinear optics
Quantum optics
Plasma physics
Quantum technology
Quantum computing
Quantum cryptography
Renewable energy
Space physics
Spectroscopy
See also
Applied science
Applied mathematics
Engineering
Engineering Physics
High Technology
References
Engineering disciplines
Latent heat
Latent heat (also known as latent energy or heat of transformation) is energy released or absorbed, by a body or a thermodynamic system, during a constant-temperature process—usually a first-order phase transition, like melting or condensation.
Latent heat can be understood as hidden energy which is supplied or extracted to change the state of a substance without changing its temperature or pressure. This includes the latent heat of fusion (solid to liquid), the latent heat of vaporization (liquid to gas) and the latent heat of sublimation (solid to gas).
The term was introduced around 1762 by Scottish chemist Joseph Black. Black used the term in the context of calorimetry where a heat transfer caused a volume change in a body while its temperature was constant.
In contrast to latent heat, sensible heat is energy transferred as heat, with a resultant temperature change in a body.
Usage
The terms sensible heat and latent heat refer to energy transferred between a body and its surroundings, defined by the occurrence or non-occurrence of temperature change; they depend on the properties of the body. Sensible heat is sensed or felt in a process as a change in the body's temperature. Latent heat is energy transferred in a process without change of the body's temperature, for example, in a phase change (solid/liquid/gas).
Both sensible and latent heats are observed in many processes of transfer of energy in nature. Latent heat is associated with the change of phase of atmospheric or ocean water, vaporization, condensation, freezing or melting, whereas sensible heat is energy transferred that is evident in change of the temperature of the atmosphere or ocean, or ice, without those phase changes, though it is associated with changes of pressure and volume.
The original usage of the term, as introduced by Black, was applied to systems that were intentionally held at constant temperature. Such usage referred to latent heat of expansion and several other related latent heats. These latent heats are defined independently of the conceptual framework of thermodynamics.
When a body is heated at constant temperature by thermal radiation in a microwave field for example, it may expand by an amount described by its latent heat with respect to volume or latent heat of expansion, or increase its pressure by an amount described by its latent heat with respect to pressure.
Latent heat is energy released or absorbed by a body or a thermodynamic system during a constant-temperature process. Two common forms of latent heat are latent heat of fusion (melting) and latent heat of vaporization (boiling). These names describe the direction of energy flow when changing from one phase to the next: from solid to liquid, and liquid to gas.
In both cases the change is endothermic, meaning that the system absorbs energy. For example, when water evaporates, an input of energy is required for the water molecules to overcome the forces of attraction between them and make the transition from water to vapor.
If the vapor then condenses to a liquid on a surface, then the vapor's latent energy absorbed during evaporation is released as the liquid's sensible heat onto the surface.
The large value of the enthalpy of condensation of water vapor is the reason that steam is a far more effective heating medium than boiling water, and is more hazardous.
Meteorology
In meteorology, latent heat flux is the flux of energy from the Earth's surface to the atmosphere that is associated with evaporation or transpiration of water at the surface and subsequent condensation of water vapor in the troposphere. It is an important component of Earth's surface energy budget. Latent heat flux has been commonly measured with the Bowen ratio technique, or, more recently (since the mid-1900s), by the eddy covariance method.
History
Background
Evaporative cooling
In 1748, an account was published in The Edinburgh Physical and Literary Essays of an experiment by the Scottish physician and chemist William Cullen. Cullen had used an air pump to lower the pressure in a container with diethyl ether. No heat was withdrawn from the ether, yet the ether boiled and its temperature decreased. And in 1758, on a warm day in Cambridge, England, Benjamin Franklin and fellow scientist John Hadley experimented by continually wetting the ball of a mercury thermometer with ether and using bellows to evaporate the ether. With each subsequent evaporation, the thermometer read a lower temperature, eventually reaching 7 °F (−14 °C). Another thermometer showed that the room temperature was constant at 65 °F (18 °C). In his letter Cooling by Evaporation, Franklin noted that, "One may see the possibility of freezing a man to death on a warm summer's day."
Latent heat
The English word latent comes from Latin latēns, meaning lying hidden. The term latent heat was introduced into calorimetry around 1750 by Joseph Black, commissioned by producers of Scotch whisky in search of ideal quantities of fuel and water for their distilling process to study system changes, such as of volume and pressure, when the thermodynamic system was held at constant temperature in a thermal bath.
It was known that when the air temperature rises above freezing—air then becoming the obvious heat source—snow melts very slowly and the temperature of the melted snow is close to its freezing point. In 1757, Black started to investigate if heat, therefore, was required for the melting of a solid, independent of any rise in temperature. As far Black knew, the general view at that time was that melting was inevitably accompanied by a small increase in temperature, and that no more heat was required than what the increase in temperature would require in itself. Soon, however, Black was able to show that much more heat was required during melting than could be explained by the increase in temperature alone. He was also able to show that heat is released by a liquid during its freezing; again, much more than could be explained by the decrease of its temperature alone.
Black would compare the change in temperature of two identical quantities of water, heated by identical means, one of which was, say, melted from ice, whereas the other was heated from merely cold liquid state. By comparing the resulting temperatures, he could conclude that, for instance, the temperature of the sample melted from ice was 140 °F lower than the other sample, thus melting the ice absorbed 140 "degrees of heat" that could not be measured by the thermometer, yet needed to be supplied, thus it was "latent" (hidden). Black also deduced that as much latent heat as was supplied into boiling the distillate (thus giving the quantity of fuel needed) also had to be absorbed to condense it again (thus giving the cooling water required).
Quantifying latent heat
In 1762, Black announced the following research and results to a society of professors at the University of Glasgow. Black had placed equal masses of ice at 32 °F (0 °C) and water at 33 °F (0.6 °C) respectively in two identical, well separated containers. The water and the ice were both evenly heated to 40 °F by the air in the room, which was at a constant 47 °F (8 °C). The water had therefore received 40 – 33 = 7 “degrees of heat”. The ice had been heated for 21 times longer and had therefore received 7 × 21 = 147 “degrees of heat”. The temperature of the ice had increased by 8 °F. The ice now stored, as it were, an additional 8 “degrees of heat” in a form which Black called sensible heat, manifested as temperature, which could be felt and measured. 147 – 8 = 139 “degrees of heat” were, so to speak, stored as latent heat, not manifesting itself. (In modern thermodynamics the idea of heat contained has been abandoned, so sensible heat and latent heat have been redefined. They do not reside anywhere.)
Black next showed that a water temperature of 176 °F was needed to melt an equal mass of ice until it was all 32 °F. So now 176 – 32 = 144 “degrees of heat” seemed to be needed to melt the ice. The modern value for the heat of fusion of ice would be 143 “degrees of heat” on the same scale (79.5 “degrees of heat Celsius”).
Finally Black increased the temperature of and vaporized respectively two equal masses of water through even heating. He showed that 830 “degrees of heat” was needed for the vaporization; again based on the time required. The modern value for the heat of vaporization of water would be 967 “degrees of heat” on the same scale.
James Prescott Joule
Later, James Prescott Joule characterised latent energy as the energy of interaction in a given configuration of particles, i.e. a form of potential energy, and the sensible heat as an energy that was indicated by the thermometer, relating the latter to thermal energy.
Specific latent heat
A specific latent heat (L) expresses the amount of energy in the form of heat (Q) required to completely effect a phase change of a unit of mass (m), usually 1 kg, of a substance as an intensive property: L = Q/m.
Intensive properties are material characteristics and are not dependent on the size or extent of the sample. Commonly quoted and tabulated in the literature are the specific latent heat of fusion and the specific latent heat of vaporization for many substances.
From this definition, the latent heat for a given mass of a substance is calculated by Q = mL,
where:
Q is the amount of energy released or absorbed during the change of phase of the substance (in kJ or in BTU),
m is the mass of the substance (in kg or in lb), and
L is the specific latent heat for a particular substance (in kJ kg−1 or in BTU lb−1), either Lf for fusion, or Lv for vaporization.
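As a worked illustration of Q = mL, the sketch below assumes commonly quoted approximate values for water, Lf ≈ 334 kJ/kg and Lv ≈ 2257 kJ/kg; exact values should be taken from a reference table.

```python
# Worked example of Q = m * L using approximate textbook values for water.
L_FUSION_WATER = 334.0         # kJ/kg, approximate specific latent heat of fusion
L_VAPORIZATION_WATER = 2257.0  # kJ/kg, approximate specific latent heat of vaporization

def latent_heat(mass_kg: float, specific_latent_heat_kj_per_kg: float) -> float:
    """Energy (kJ) absorbed or released in a complete phase change, Q = m * L."""
    return mass_kg * specific_latent_heat_kj_per_kg

m = 0.5  # kg of water
print(latent_heat(m, L_FUSION_WATER))        # ~167 kJ to melt 0.5 kg of ice at 0 degC
print(latent_heat(m, L_VAPORIZATION_WATER))  # ~1128 kJ to vaporize 0.5 kg at 100 degC
```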
Table of specific latent heats
The following table shows the specific latent heats and change of phase temperatures (at standard pressure) of some common fluids and gases.
Specific latent heat for condensation of water in clouds
The specific latent heat of condensation of water in the temperature range from −25 °C to 40 °C is approximated by the following empirical cubic function: L_water(T) ≈ (2500.8 − 2.36 T + 0.0016 T² − 0.00006 T³) kJ/kg,
where the temperature is taken to be the numerical value in °C.
For sublimation and deposition from and into ice, the specific latent heat is almost constant in the temperature range from −40 °C to 0 °C and can be approximated by the following empirical quadratic function: L_ice(T) ≈ (2834.1 − 0.29 T − 0.004 T²) kJ/kg.
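Assuming the empirical coefficients quoted above, the fits can be evaluated directly; a minimal sketch:

```python
# Evaluate the empirical fits quoted above (results in kJ/kg, i.e. J/g); T is in degrees Celsius.
def L_condensation_water(T):   # valid roughly for -25 C <= T <= 40 C
    return 2500.8 - 2.36 * T + 0.0016 * T ** 2 - 0.00006 * T ** 3

def L_sublimation_ice(T):      # valid roughly for -40 C <= T <= 0 C
    return 2834.1 - 0.29 * T - 0.004 * T ** 2

print(round(L_condensation_water(25.0), 1))   # ~2441.9 kJ/kg at 25 C
print(round(L_sublimation_ice(-10.0), 1))     # ~2836.6 kJ/kg at -10 C
```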
Variation with temperature (or pressure)
As the temperature (or pressure) rises to the critical point, the latent heat of vaporization falls to zero.
See also
Bowen ratio
Eddy covariance flux (eddy correlation, eddy flux)
Sublimation (physics)
Specific heat capacity
Enthalpy of fusion
Enthalpy of vaporization
Ton of refrigeration – the power required to freeze or melt 2000 lb of water in 24 hours
Notes
References
Thermochemistry
Atmospheric thermodynamics
Thermodynamics
Physical phenomena
Introduction to Electrodynamics
Introduction to Electrodynamics is a textbook by physicist David J. Griffiths. Generally regarded as a standard undergraduate text on the subject, it began as lecture notes that have been perfected over time. Its most recent edition, the fifth, was published in 2023 by Cambridge University Press. This book uses SI units (the mks convention) exclusively. A table for converting between SI and Gaussian units is given in Appendix C.
Griffiths said he was able to reduce the price of his textbook on quantum mechanics simply by changing the publisher, from Pearson to Cambridge University Press. He has done the same with this one.
Table of contents (5th edition)
Preface
Advertisement
Chapter 1: Vector Analysis
Chapter 2: Electrostatics
Chapter 3: Potentials
Chapter 4: Electric Fields in Matter
Chapter 5: Magnetostatics
Chapter 6: Magnetic Fields in Matter
Chapter 7: Electrodynamics
Intermission
Chapter 8: Conservation Laws
Chapter 9: Electromagnetic Waves
Chapter 10: Potentials and Fields
Chapter 11: Radiation
Chapter 12: Electrodynamics and Relativity
Appendix A: Vector Calculus in Curvilinear Coordinates
Appendix B: The Helmholtz Theorem
Appendix C: Units
Index
Reception
Paul D. Scholten, a professor at Miami University (Ohio), opined that the first edition of this book offers a streamlined, though not always in-depth, coverage of the fundamental physics of electrodynamics. Special topics such as superconductivity or plasma physics are not mentioned. Breaking with tradition, Griffiths did not give solutions to all the odd-numbered questions in the book. Another unique feature of the first edition is the informal, even emotional, tone. The author sometimes referred to the reader directly. Physics received the primary focus. Equations are derived and explained, and common misconceptions are addressed.
According to Robert W. Scharstein from the Department of Electrical Engineering at the University of Alabama, the mathematics used in the third edition is just enough to convey the subject and the problems are valuable teaching tools that do not involve the "plug and chug disease." Although students of electrical engineering are not expected to encounter complicated boundary-value problems in their career, this book is useful to them as well, because of its emphasis on conceptual rather than mathematical issues. He argued that with this book, it is possible to skip the more mathematically involved sections to the more conceptually interesting topics, such as antennas. Moreover, the tone is clear and entertaining. Using this book "rejuvenated" his enthusiasm for teaching the subject.
Colin Inglefield, an associate professor of physics at Weber State University (Utah), commented that the third edition is notable for its informal and conversational style that may appeal to a large class of students. The ordering of its chapters and its contents are fairly standard and are similar to texts at the same level. The first chapter offers a valuable review of vector calculus, which is essential for understanding this subject. While most other authors, including those aimed at a more advanced audience, denote the distance from the source point to the field point by |r − r′|, Griffiths uses a script letter r for it. Unlike some comparable books, the level of mathematical sophistication is not particularly high. For example, Green's functions are not anywhere mentioned. Instead, physical intuition and conceptual understanding are emphasized. In fact, care is taken to address common misconceptions and pitfalls. It contains no computer exercises. Nevertheless, it is perfectly adequate for undergraduate instruction in physics. As of June 2005, Inglefield had taught three semesters using this book.
Physicists Yoni Kahn of Princeton University and Adam Anderson of the Fermi National Accelerator Laboratory indicated that Griffiths' Electrodynamics offers a dependable treatment of all materials in the electromagnetism section of the Physics Graduate Record Examinations (Physics GRE) except circuit analysis.
Editions
See also
Introduction to Quantum Mechanics (textbook) by the same author
Classical Electrodynamics (textbook) by John David Jackson, a commonly used graduate-level textbook.
List of textbooks in electromagnetism
List of textbooks on classical and quantum mechanics
List of textbooks in thermodynamics and statistical mechanics
List of books on general relativity
Notes
References
Further reading
Electromagnetism
Physics textbooks
Electrodynamics
1981 non-fiction books
Undergraduate education
Moment (physics)
A moment is a mathematical expression involving the product of a distance and a physical quantity such as a force or electric charge. Moments are usually defined with respect to a fixed reference point and refer to physical quantities located some distance from the reference point. For example, the moment of force, often called torque, is the product of a force on an object and the distance from the reference point to the object. In principle, any physical quantity can be multiplied by a distance to produce a moment. Commonly used quantities include forces, masses, and electric charge distributions; a list of examples is provided later.
Elaboration
In its most basic form, a moment is the product of the distance to a point, raised to a power, and a physical quantity
(such as force or electrical charge) at that point: μ_n = r^n Q,
where Q is the physical quantity such as a force applied at a point, or a point charge, or a point mass, etc. If the quantity is not concentrated solely at a single point, the moment is the integral of that quantity's density over space: μ_n = ∫ r^n ρ(r) dr,
where ρ(r) is the distribution of the density of charge, mass, or whatever quantity is being considered.
More complex forms take into account the angular relationships between the distance and the physical quantity, but the above equations capture the essential feature of a moment, namely the existence of an underlying r^n ρ(r) or equivalent term. This implies that there are multiple moments (one for each value of n) and that the moment generally depends on the reference point from which the distance is measured, although for certain moments (technically, the lowest non-zero moment) this dependence vanishes and the moment becomes independent of the reference point.
Each value of n corresponds to a different moment: the 1st moment corresponds to n = 1; the 2nd moment to n = 2, etc. The 0th moment (n = 0) is sometimes called the monopole moment; the 1st moment (n = 1) is sometimes called the dipole moment, and the 2nd moment (n = 2) is sometimes called the quadrupole moment, especially in the context of electric charge distributions.
Examples
The moment of force, or torque, is a first moment: τ = rF, or, more generally, τ = r × F.
Similarly, angular momentum is the 1st moment of momentum: L = r × p. Momentum itself is not a moment.
The electric dipole moment is also a 1st moment: p = qd for two opposite point charges, or p = ∫ r ρ(r) dr for a distributed charge with charge density ρ(r).
Moments of mass:
The total mass is the zeroth moment of mass.
The center of mass is the 1st moment of mass normalized by total mass: R = (1/m) Σ_i r_i m_i for a collection of point masses, or (1/m) ∫ r ρ(r) dr for an object with mass distribution ρ(r).
The moment of inertia is the 2nd moment of mass: I = r² m for a point mass, Σ_i r_i² m_i for a collection of point masses, or ∫ r² ρ(r) dr for an object with mass distribution ρ(r). The center of mass is often (but not always) taken as the reference point.
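As a numerical illustration, a minimal sketch of the zeroth, first, and second mass moments for a hypothetical set of point masses on a line:

```python
# Zeroth, first and second mass moments for point masses on a line:
# total mass, centre of mass and moment of inertia about the chosen reference point.
masses = [2.0, 1.0, 3.0]        # kg (hypothetical values)
positions = [0.0, 2.0, 5.0]     # m, distances from the reference point

total_mass = sum(masses)                                                      # 0th moment
center_of_mass = sum(m * x for m, x in zip(masses, positions)) / total_mass   # 1st moment / total
moment_of_inertia = sum(m * x ** 2 for m, x in zip(masses, positions))        # 2nd moment

print(total_mass, center_of_mass, moment_of_inertia)   # 6.0 kg, ~2.83 m, 79.0 kg m^2
```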
Multipole moments
Assuming a density function that is finite and localized to a particular region, outside that region a 1/r potential may be expressed as a series of spherical harmonics:
The coefficients C_ℓ^m are known as multipole moments, and take the form:
where r′, expressed in spherical coordinates, is a variable of integration. A more complete treatment may be found in pages describing multipole expansion or spherical multipole moments. (The convention in the above equations was taken from Jackson – the conventions used in the referenced pages may be slightly different.)
When ρ represents an electric charge density, the C_ℓ^m are, in a sense, projections of the moments of electric charge: C_0^0 is the monopole moment; the C_1^m are projections of the dipole moment, the C_2^m are projections of the quadrupole moment, etc.
Applications of multipole moments
The multipole expansion applies to 1/r scalar potentials, examples of which include the electric potential and the gravitational potential. For these potentials, the expression can be used to approximate the strength of a field produced by a localized distribution of charges (or mass) by calculating the first few moments. For sufficiently large r, a reasonable approximation can be obtained from just the monopole and dipole moments. Higher fidelity can be achieved by calculating higher order moments. Extensions of the technique can be used to calculate interaction energies and intermolecular forces.
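As a minimal illustration, the sketch below compares the exact 1/r potential of a physical dipole (charges +q and −q separated by d) with its leading dipole-moment approximation on the axis, using hypothetical values of q and d and units in which 1/(4πε₀) = 1:

```python
# Exact axial potential of a physical dipole versus its dipole-moment approximation p / z^2,
# in units where 1/(4*pi*eps0) = 1. The monopole moment of this charge pair is zero.
q, d = 1.0, 0.1          # hypothetical charge and separation
p = q * d                # dipole moment

def v_exact(z):
    return q / (z - d / 2) - q / (z + d / 2)

def v_dipole(z):
    return p / z ** 2

for z in (0.5, 1.0, 5.0):
    print(z, v_exact(z), v_dipole(z))   # agreement improves as z becomes large compared with d
```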
The technique can also be used to determine the properties of an unknown distribution . Measurements pertaining to multipole moments may be taken and used to infer properties of the underlying distribution. This technique applies to small objects such as molecules,
but has also been applied to the universe itself, being for example the technique employed by the WMAP and Planck experiments to analyze the cosmic microwave background radiation.
History
In works believed to stem from Ancient Greece, the concept of a moment is alluded to by the word ῥοπή (rhopḗ, "inclination") and composites like ἰσόρροπα (isorropa, "of equal inclinations"). The context of these works is mechanics and geometry involving the lever. In particular, in extant works attributed to Archimedes, the moment is pointed out in phrasings like:
"Commensurable magnitudes [A and B] are equally balanced if their distances [to the center Γ, i.e., ΑΓ and ΓΒ] are inversely proportional to their weights."
Moreover, in extant texts such as The Method of Mechanical Theorems, moments are used to infer the center of gravity, area, and volume of geometric figures.
In 1269, William of Moerbeke translates various works of Archimedes and Eutocious into Latin. The term ῥοπή is transliterated into ropen.
Around 1450, Jacobus Cremonensis translates ῥοπή in similar texts into the Latin term momentum ( "movement"). The same term is kept in a 1501 translation by Giorgio Valla, and subsequently by Francesco Maurolico, Federico Commandino, Guidobaldo del Monte, Adriaan van Roomen, Florence Rivault, Francesco Buonamici, Marin Mersenne, and Galileo Galilei. That said, why was the word momentum chosen for the translation? One clue, according to Treccani, is that momento in Medieval Italy, the place the early translators lived, in a transferred sense meant both a "moment of time" and a "moment of weight" (a small amount of weight that turns the scale).
In 1554, Francesco Maurolico clarifies the Latin term momentum in the work Prologi sive sermones. Here is a Latin to English translation as given by Marshall Clagett:
"[...] equal weights at unequal distances do not weigh equally, but unequal weights [at these unequal distances may] weigh equally. For a weight suspended at a greater distance is heavier, as is obvious in a balance. Therefore, there exists a certain third kind of power or third difference of magnitude—one that differs from both body and weight—and this they call moment. Therefore, a body acquires weight from both quantity [i.e., size] and quality [i.e., material], but a weight receives its moment from the distance at which it is suspended. Therefore, when distances are reciprocally proportional to weights, the moments [of the weights] are equal, as Archimedes demonstrated in The Book on Equal Moments. Therefore, weights or [rather] moments like other continuous quantities, are joined at some common terminus, that is, at something common to both of them like the center of weight, or at a point of equilibrium. Now the center of gravity in any weight is that point which, no matter how often or whenever the body is suspended, always inclines perpendicularly toward the universal center.
In addition to body, weight, and moment, there is a certain fourth power, which can be called impetus or force. Aristotle investigates it in On Mechanical Questions, and it is completely different from [the] three aforesaid [powers or magnitudes]. [...]"
In 1586, Simon Stevin uses the Dutch term staltwicht ("parked weight") for momentum in De Beghinselen Der Weeghconst.
In 1632, Galileo Galilei publishes Dialogue Concerning the Two Chief World Systems and uses the Italian momento with many meanings, including the one of his predecessors.
In 1643, Thomas Salusbury translates some of Galilei's works into English. Salusbury translates Latin momentum and Italian momento into the English term moment.
In 1765, the Latin term momentum inertiae (English: moment of inertia) is used by Leonhard Euler to refer to one of Christiaan Huygens's quantities in Horologium Oscillatorium. Huygens 1673 work involving finding the center of oscillation had been stimulated by Marin Mersenne, who suggested it to him in 1646.
In 1811, the French term moment d'une force (English: moment of force) with respect to a point and plane is used by Siméon Denis Poisson in Traité de mécanique. An English translation appears in 1842.
In 1884, the term torque is suggested by James Thomson in the context of measuring rotational forces of machines (with propellers and rotors). Today, a dynamometer is used to measure the torque of machines.
In 1893, Karl Pearson uses the term n-th moment in the context of curve-fitting scientific measurements. Pearson wrote in response to John Venn, who, some years earlier, observed a peculiar pattern involving meteorological data and asked for an explanation of its cause. In Pearson's response, this analogy is used: the mechanical "center of gravity" is the mean and the "distance" is the deviation from the mean. This later evolved into moments in mathematics. The analogy between the mechanical concept of a moment and the statistical function involving the sum of the nth powers of deviations was noticed by several earlier authors, including Laplace, Kramp, Gauss, Encke, Czuber, Quetelet, and De Forest.
See also
Torque (or moment of force), see also the article couple (mechanics)
Moment (mathematics)
Mechanical equilibrium, applies when an object is balanced so that the sum of the clockwise moments about a pivot is equal to the sum of the anticlockwise moments about the same pivot
Moment of inertia, analogous to mass in discussions of rotational motion. It is a measure of an object's resistance to changes in its rotation rate
Moment of momentum, the rotational analog of linear momentum.
Magnetic moment, a dipole moment measuring the strength and direction of a magnetic source.
Electric dipole moment, a dipole moment measuring the charge difference and direction between two or more charges. For example, the electric dipole moment between a charge of –q and q separated by a distance of d is p = qd, directed from the negative to the positive charge
Bending moment, a moment that results in the bending of a structural element
First moment of area, a property of an object related to its resistance to shear stress
Second moment of area, a property of an object related to its resistance to bending and deflection
Polar moment of inertia, a property of an object related to its resistance to torsion
Image moments, statistical properties of an image
Seismic moment, quantity used to measure the size of an earthquake
Plasma moments, fluid description of plasma in terms of density, velocity and pressure
List of area moments of inertia
List of moments of inertia
Multipole expansion
Spherical multipole moments
Notes
References
External links
A dictionary definition of moment.
Length
Physical quantities
Multiplication
Damping
In physical systems, damping is the loss of energy of an oscillating system by dissipation. Damping is an influence within or upon an oscillatory system that has the effect of reducing or preventing its oscillation. Examples of damping include viscous damping in a fluid (see viscous drag), surface friction, radiation, resistance in electronic oscillators, and absorption and scattering of light in optical oscillators. Damping not based on energy loss can be important in other oscillating systems such as those that occur in biological systems and bikes (e.g. Suspension (mechanics)). Damping is not to be confused with friction, which is a type of dissipative force acting on a system. Friction can cause or be a factor of damping.
The damping ratio is a dimensionless measure describing how oscillations in a system decay after a disturbance. Many systems exhibit oscillatory behavior when they are disturbed from their position of static equilibrium. A mass suspended from a spring, for example, might, if pulled and released, bounce up and down. On each bounce, the system tends to return to its equilibrium position, but overshoots it. Sometimes losses (e.g. frictional) damp the system and can cause the oscillations to gradually decay in amplitude towards zero or attenuate. The damping ratio is a measure describing how rapidly the oscillations decay from one bounce to the next.
The damping ratio is a system parameter, denoted by ζ ("zeta"), that can vary from undamped (ζ = 0), underdamped (ζ < 1) through critically damped (ζ = 1) to overdamped (ζ > 1).
The behaviour of oscillating systems is often of interest in a diverse range of disciplines that include control engineering, chemical engineering, mechanical engineering, structural engineering, and electrical engineering. The physical quantity that is oscillating varies greatly, and could be the swaying of a tall building in the wind, or the speed of an electric motor, but a normalised, or non-dimensionalised approach can be convenient in describing common aspects of behavior.
Oscillation cases
Depending on the amount of damping present, a system exhibits different oscillatory behaviors and speeds.
Where the spring–mass system is completely lossless, the mass would oscillate indefinitely, with each bounce of equal height to the last. This hypothetical case is called undamped.
If the system contained high losses, for example if the spring–mass experiment were conducted in a viscous fluid, the mass could slowly return to its rest position without ever overshooting. This case is called overdamped.
Commonly, the mass tends to overshoot its starting position, and then return, overshooting again. With each overshoot, some energy in the system is dissipated, and the oscillations die towards zero. This case is called underdamped.
Between the overdamped and underdamped cases, there exists a certain level of damping at which the system will just fail to overshoot and will not make a single oscillation. This case is called critical damping. The key difference between critical damping and overdamping is that, in critical damping, the system returns to equilibrium in the minimum amount of time.
Damped sine wave
A damped sine wave or damped sinusoid is a sinusoidal function whose amplitude approaches zero as time increases. It corresponds to the underdamped case of damped second-order systems, or underdamped second-order differential equations.
Damped sine waves are commonly seen in science and engineering, wherever a harmonic oscillator is losing energy faster than it is being supplied.
A true sine wave starting at time = 0 begins at the origin (amplitude = 0). A cosine wave begins at its maximum value due to its phase difference from the sine wave. A given sinusoidal waveform may be of intermediate phase, having both sine and cosine components. The term "damped sine wave" describes all such damped waveforms, whatever their initial phase.
The most common form of damping, which is usually assumed, is the form found in linear systems. This form is exponential damping, in which the outer envelope of the successive peaks is an exponential decay curve. That is, when you connect the maximum point of each successive curve, the result resembles an exponential decay function. The general equation for an exponentially damped sinusoid may be represented as: y(t) = A e^(−λt) cos(ωt − φ),
where:
y(t) is the instantaneous amplitude at time t;
A is the initial amplitude of the envelope;
λ is the decay rate, in the reciprocal of the time units of the independent variable t;
φ is the phase angle at t = 0;
ω is the angular frequency.
Other important parameters include:
Frequency: f = ω/2π, the number of cycles per time unit. It is expressed in inverse time units (s⁻¹), or hertz.
Time constant: τ = 1/λ, the time for the amplitude to decrease by the factor of e.
Half-life is the time it takes for the exponential amplitude envelope to decrease by a factor of 2. It is equal to ln(2)/λ, which is approximately 0.693/λ.
Damping ratio: ζ is a non-dimensional characterization of the decay rate relative to the frequency, approximately ζ ≈ λ/ω, or exactly ζ = λ/√(λ² + ω²).
Q factor: Q = 1/(2ζ) is another non-dimensional characterization of the amount of damping; high Q indicates slow damping relative to the oscillation.
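A short sketch evaluating the damped sinusoid and the derived parameters listed above, assuming hypothetical values of A, λ, ω, and φ:

```python
# Evaluate y(t) = A * exp(-lambda*t) * cos(omega*t - phi) and the derived quantities above.
import math

A, lam, omega, phi = 1.0, 0.5, 10.0, 0.0   # amplitude, decay rate, angular frequency, phase

def y(t):
    return A * math.exp(-lam * t) * math.cos(omega * t - phi)

frequency = omega / (2 * math.pi)                 # cycles per unit time
tau       = 1.0 / lam                             # time constant (envelope falls by 1/e)
half_life = math.log(2) / lam                     # envelope falls by a factor of 2
zeta      = lam / math.sqrt(lam ** 2 + omega ** 2)  # damping ratio
Q         = 1.0 / (2.0 * zeta)                    # quality factor

print(frequency, tau, half_life, zeta, Q)
print([round(y(t), 4) for t in (0.0, 0.5, 1.0, 2.0)])
```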
Damping ratio definition
The damping ratio is a parameter, usually denoted by ζ (Greek letter zeta), that characterizes the frequency response of a second-order ordinary differential equation. It is particularly important in the study of control theory. It is also important in the harmonic oscillator. In general, systems with higher damping ratios (one or greater) will demonstrate more of a damping effect. Underdamped systems have a value of less than one. Critically damped systems have a damping ratio of exactly 1, or at least very close to it.
The damping ratio provides a mathematical means of expressing the level of damping in a system relative to critical damping. For a damped harmonic oscillator with mass m, damping coefficient c, and spring constant k, it can be defined as the ratio of the damping coefficient in the system's differential equation to the critical damping coefficient: ζ = c / c_c,
where the system's equation of motion is
m d²x/dt² + c dx/dt + k x = 0,
and the corresponding critical damping coefficient is
c_c = 2√(km),
or
c_c = 2m ω_n, where
ω_n = √(k/m) is the natural frequency of the system.
The damping ratio is dimensionless, being the ratio of two coefficients of identical units.
Derivation
Using the natural frequency of a harmonic oscillator ω_n = √(k/m) and the definition of the damping ratio above, we can rewrite this as: d²x/dt² + 2ζω_n dx/dt + ω_n² x = 0.
This equation is more general than just the mass–spring system, and also applies to electrical circuits and to other domains. It can be solved with the approach x(t) = C e^(st),
where C and s are both complex constants, with s satisfying s = −ω_n (ζ ± √(ζ² − 1)).
Two such solutions, for the two values of s satisfying the equation, can be combined to make the general real solutions, with oscillatory and decaying properties in several regimes:
Undamped: The case ζ = 0 corresponds to the undamped simple harmonic oscillator, and in that case the solution looks like exp(±iω_n t), as expected. This case is extremely rare in the natural world with the closest examples being cases where friction was purposefully reduced to minimal values.
Underdamped: If s is a pair of complex values, then each complex solution term is a decaying exponential combined with an oscillatory portion that looks like exp(±i ω_n √(1 − ζ²) t). This case occurs for 0 ≤ ζ < 1, and is referred to as underdamped (e.g., bungee cable).
Overdamped: If s is a pair of real values, then the solution is simply a sum of two decaying exponentials with no oscillation. This case occurs for ζ > 1, and is referred to as overdamped. Situations where overdamping is practical tend to have tragic outcomes if overshooting occurs, usually electrical rather than mechanical. For example, landing a plane in autopilot: if the system overshoots and releases landing gear too late, the outcome would be a disaster.
Critically damped: The case ζ = 1 is the border between the overdamped and underdamped cases, and is referred to as critically damped. This turns out to be a desirable outcome in many cases where engineering design of a damped oscillator is required (e.g., a door closing mechanism).
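A minimal sketch that classifies a mass–spring–damper system by its damping ratio ζ = c/(2√(mk)), assuming hypothetical parameter values:

```python
# Classify m*x'' + c*x' + k*x = 0 by its damping ratio zeta = c / (2*sqrt(m*k)).
import math

def damping_ratio(m, c, k):
    return c / (2.0 * math.sqrt(m * k))

def classify(zeta, tol=1e-9):
    if zeta == 0:
        return "undamped"
    if zeta < 1 - tol:
        return "underdamped"
    if zeta > 1 + tol:
        return "overdamped"
    return "critically damped"

m, k = 1.0, 100.0                 # hypothetical mass (kg) and stiffness (N/m)
for c in (0.0, 2.0, 20.0, 60.0):  # damping coefficients (N s/m)
    z = damping_ratio(m, c, k)
    print(c, round(z, 3), classify(z))
# c = 20.0 gives zeta = 1 exactly here, since 2*sqrt(m*k) = 20.
```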
Q factor and decay rate
The Q factor, damping ratio ζ, and exponential decay rate α are related such that Q = 1/(2ζ) = ω_n/(2α).
When a second-order system has ζ < 1 (that is, when the system is underdamped), it has two complex conjugate poles that each have a real part of −α; that is, the decay rate parameter α represents the rate of exponential decay of the oscillations. A lower damping ratio implies a lower decay rate, and so very underdamped systems oscillate for long times. For example, a high quality tuning fork, which has a very low damping ratio, has an oscillation that lasts a long time, decaying very slowly after being struck by a hammer.
Logarithmic decrement
For underdamped vibrations, the damping ratio is also related to the logarithmic decrement δ. The damping ratio can be found for any two peaks, even if they are not adjacent. For adjacent peaks: ζ = δ / √(4π² + δ²),
where δ = ln(x0/x1),
where x0 and x1 are amplitudes of any two successive peaks.
As shown in the right figure:
where x1, x3 are amplitudes of two successive positive peaks and x2, x4 are amplitudes of two successive negative peaks.
Percentage overshoot
In control theory, overshoot refers to an output exceeding its final, steady-state value. For a step input, the percentage overshoot (PO) is the maximum value minus the step value divided by the step value. In the case of the unit step, the overshoot is just the maximum value of the step response minus one.
The percentage overshoot (PO) is related to damping ratio (ζ) by: PO = 100 exp(−ζπ / √(1 − ζ²)).
Conversely, the damping ratio (ζ) that yields a given percentage overshoot is given by: ζ = −ln(PO/100) / √(π² + ln²(PO/100)).
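A small sketch of these two standard step-response relations, round-tripping between ζ and PO for a few assumed damping ratios:

```python
# Percentage overshoot of an underdamped (0 < zeta < 1) second-order step response,
# and the inverse relation recovering zeta from a measured overshoot.
import math

def percent_overshoot(zeta):
    return 100.0 * math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))

def damping_ratio_from_po(po):
    L = math.log(po / 100.0)
    return -L / math.sqrt(math.pi ** 2 + L ** 2)

for zeta in (0.2, 0.5, 0.707):
    po = percent_overshoot(zeta)
    print(zeta, round(po, 2), round(damping_ratio_from_po(po), 3))  # round-trips to zeta
```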
Examples and applications
Viscous drag
When an object is falling through the air, the only force opposing its freefall is air resistance. An object falling through water or oil would slow down at a greater rate, until eventually reaching a steady-state velocity as the drag force comes into equilibrium with the force from gravity. This is the concept of viscous drag, which for example is applied in automatic doors or anti-slam doors.
Damping in electrical systems
Electrical systems that operate with alternating current (AC) use resistors to damp LC resonant circuits.
Magnetic damping and Magnetorheological damping
Kinetic energy that causes oscillations is dissipated as heat by electric eddy currents which are induced by passing through a magnet's poles, either by a coil or aluminum plate. Eddy currents are a key component of electromagnetic induction where they set up a magnetic flux directly opposing the oscillating movement, creating a resistive force. In other words, the resistance caused by magnetic forces slows a system down. An example of this concept being applied is the brakes on roller coasters.
Magnetorheological Dampers (MR Dampers) use Magnetorheological fluid, which changes viscosity when subjected to a magnetic field. In this case, Magnetorheological damping may be considered an interdisciplinary form of damping with both viscous and magnetic damping mechanisms.
References
"Damping". Encyclopædia Britannica.
OpenStax, College. "Physics". Lumen.
Dimensionless numbers of mechanics
Engineering ratios
Ordinary differential equations
Mathematical analysis
Classical mechanics
METRIC
METRIC (Mapping EvapoTranspiration at high Resolution with Internalized Calibration) is a computer model developed by the University of Idaho, which uses Landsat satellite data to compute and map evapotranspiration (ET). METRIC calculates ET as a residual of the surface energy balance, where ET is estimated by keeping account of total net short wave and long wave radiation at the vegetation or soil surface, the amount of heat conducted into soil, and the amount of heat convected into the air above the surface. The difference in these three terms represents the amount of energy absorbed during the conversion of liquid water to vapor, which is ET. METRIC expresses near-surface temperature gradients used in heat convection as indexed functions of radiometric surface temperature, thereby eliminating the need for absolutely accurate surface temperature and the need for air-temperature measurements.
The surface energy balance is internally calibrated using ground-based reference ET that is based on local weather or gridded weather data sets to reduce computational biases inherent to remote sensing-based energy balance. Slope and aspect functions and temperature lapsing are used for application to mountainous terrain. METRIC algorithms are designed for relatively routine application by trained engineers and other technical professionals who possess a familiarity with energy balance and basic radiation physics. The primary inputs for the model are short-wave and long-wave thermal images from a satellite e.g., Landsat and MODIS, a digital elevation model, and ground-based weather data measured within or near the area of interest. ET “maps” i.e., images via METRIC provide the means to quantify ET on a field-by-field basis in terms of both the rate and spatial distribution. The use of surface energy balance can detect reduced ET caused by water shortage.
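As a simplified illustration of the residual form of the surface energy balance described above (not the METRIC model itself, which adds internal calibration and remote-sensing inputs), with hypothetical instantaneous flux values:

```python
# Latent heat flux LE as the residual of net radiation Rn after subtracting
# soil heat flux G and sensible heat flux H. All fluxes are hypothetical, in W/m^2.
Rn = 600.0   # net short-wave + long-wave radiation at the surface
G  = 80.0    # heat conducted into the soil
H  = 170.0   # heat convected into the air above the surface

LE = Rn - G - H                      # energy used to vaporize water (W/m^2)
lambda_v = 2.45e6                    # approximate latent heat of vaporization, J/kg
et_rate_mm_per_hour = LE / lambda_v * 3600.0   # 1 kg/m^2 of water equals 1 mm of depth

print(LE, round(et_rate_mm_per_hour, 2))   # 350 W/m^2, ~0.51 mm/h
```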
In the decade since Idaho introduced METRIC, it has been adopted for use in Montana, California, New Mexico, Utah, Wyoming, Texas, Nebraska, Colorado, Nevada, and Oregon. The mapping method has enabled these states to negotiate Native American water rights; assess agriculture-to-urban water transfers; manage aquifer depletion; monitor water right compliance; and protect endangered species.
See also
SEBAL, uses the surface energy balance to estimate aspects of the hydrological cycle. SEBAL maps evapotranspiration, biomass growth, water deficit and soil moisture
BAITSSS, evapotranspiration (ET) computer model which determines water use, primarily in agriculture landscape, using remote sensing-based information
References
Allen, R.G., M. Tasumi and R. Trezza. 2007. Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC) – Model. ASCE J. Irrigation and Drainage Engineering 133(4):380-394.
Allen, R.G., M. Tasumi, A.T. Morse, R. Trezza, W. Kramber, I. Lorite and C.W. Robison. 2007. Satellite-based energy balance for mapping evapotranspiration with internalized calibration (METRIC) – Applications. ASCE J. Irrigation and Drainage Engineering 133(4):395-406.
Numerical climate and weather models
Remote sensing
Computer-aided engineering software
Hydrology models
Environmental soil science
Mechanics
Mechanics is the area of physics concerned with the relationships between force, matter, and motion among physical objects. Forces applied to objects result in displacements, which are changes of an object's position relative to its environment.
Theoretical expositions of this branch of physics have their origins in Ancient Greece, for instance, in the writings of Aristotle and Archimedes (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo Galilei, Johannes Kepler, Christiaan Huygens, and Isaac Newton laid the foundation for what is now known as classical mechanics.
As a branch of classical physics, mechanics deals with bodies that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as the physical science that deals with the motion of and forces on bodies not in the quantum realm.
History
Antiquity
The ancient Greek philosophers were among the first to propose that abstract principles govern nature. The main theory of mechanics in antiquity was Aristotelian mechanics, though an alternative theory is exposed in the pseudo-Aristotelian Mechanical Problems, often attributed to one of his successors.
There is another tradition that goes back to the ancient Greeks where mathematics is used more extensively to analyze bodies statically or dynamically, an approach that may have been stimulated by prior work of the Pythagorean Archytas. Examples of this tradition include pseudo-Euclid (On the Balance), Archimedes (On the Equilibrium of Planes, On Floating Bodies), Hero (Mechanica), and Pappus (Collection, Book VIII).
Medieval age
In the Middle Ages, Aristotle's theories were criticized and modified by a number of figures, beginning with John Philoponus in the 6th century. A central problem was that of projectile motion, which was discussed by Hipparchus and Philoponus.
Persian Islamic polymath Ibn Sīnā published his theory of motion in The Book of Healing (1020). He said that an impetus is imparted to a projectile by the thrower, and viewed it as persistent, requiring external forces such as air resistance to dissipate it. Ibn Sina made distinction between 'force' and 'inclination' (called "mayl"), and argued that an object gained mayl when the object is in opposition to its natural motion. So he concluded that continuation of motion is attributed to the inclination that is transferred to the object, and that object will be in motion until the mayl is spent. He also claimed that a projectile in a vacuum would not stop unless it is acted upon, consistent with Newton's first law of motion.
On the question of a body subject to a constant (uniform) force, the 12th-century Jewish-Arab scholar Hibat Allah Abu'l-Barakat al-Baghdaadi (born Nathanel, Iraqi, of Baghdad) stated that constant force imparts constant acceleration. According to Shlomo Pines, al-Baghdaadi's theory of motion was "the oldest negation of Aristotle's fundamental dynamic law [namely, that a constant force produces a uniform motion], [and is thus an] anticipation in a vague fashion of the fundamental law of classical mechanics [namely, that a force applied continuously produces acceleration]."
Influenced by earlier writers such as Ibn Sina and al-Baghdaadi, the 14th-century French priest Jean Buridan developed the theory of impetus, which later developed into the modern theories of inertia, velocity, acceleration and momentum. This work and others was developed in 14th-century England by the Oxford Calculators such as Thomas Bradwardine, who studied and formulated various laws regarding falling bodies. The concept that the main properties of a body are uniformly accelerated motion (as of falling bodies) was worked out by the 14th-century Oxford Calculators.
Early modern age
Two central figures in the early modern age are Galileo Galilei and Isaac Newton. Galileo's final statement of his mechanics, particularly of falling bodies, is his Two New Sciences (1638). Newton's 1687 Philosophiæ Naturalis Principia Mathematica provided a detailed mathematical account of mechanics, using the newly developed mathematics of calculus and providing the basis of Newtonian mechanics.
There is some dispute over priority of various ideas: Newton's Principia is certainly the seminal work and has been tremendously influential, and many of the mathematics results therein could not have been stated earlier without the development of the calculus. However, many of the ideas, particularly as pertain to inertia and falling bodies, had been developed by prior scholars such as Christiaan Huygens and the less-known medieval predecessors. Precise credit is at times difficult or contentious because scientific language and standards of proof changed, so whether medieval statements are equivalent to modern statements or sufficient proof, or instead similar to modern statements and hypotheses is often debatable.
Modern age
Two main modern developments in mechanics are general relativity of Einstein, and quantum mechanics, both developed in the 20th century based in part on earlier 19th-century ideas. The development in the modern continuum mechanics, particularly in the areas of elasticity, plasticity, fluid dynamics, electrodynamics, and thermodynamics of deformable media, started in the second half of the 20th century.
Types of mechanical bodies
The often-used term body needs to stand for a wide assortment of objects, including particles, projectiles, spacecraft, stars, parts of machinery, parts of solids, parts of fluids (gases and liquids), etc.
Other distinctions between the various sub-disciplines of mechanics concern the nature of the bodies being described. Particles are bodies with little (known) internal structure, treated as mathematical points in classical mechanics. Rigid bodies have size and shape, but retain a simplicity close to that of the particle, adding just a few so-called degrees of freedom, such as orientation in space.
Otherwise, bodies may be semi-rigid, i.e. elastic, or non-rigid, i.e. fluid. These subjects have both classical and quantum divisions of study.
For instance, the motion of a spacecraft, regarding its orbit and attitude (rotation), is described by the relativistic theory of classical mechanics, while the analogous movements of an atomic nucleus are described by quantum mechanics.
Sub-disciplines
The following are the three main designations consisting of various subjects that are studied in mechanics.
Note that there is also the "theory of fields" which constitutes a separate discipline in physics, formally treated as distinct from mechanics, whether it be classical fields or quantum fields. But in actual practice, subjects belonging to mechanics and fields are closely interwoven. Thus, for instance, forces that act on particles are frequently derived from fields (electromagnetic or gravitational), and particles generate fields by acting as sources. In fact, in quantum mechanics, particles themselves are fields, as described theoretically by the wave function.
Classical
The following are described as forming classical mechanics:
Newtonian mechanics, the original theory of motion (kinematics) and forces (dynamics)
Analytical mechanics is a reformulation of Newtonian mechanics with an emphasis on system energy, rather than on forces. There are two main branches of analytical mechanics:
Hamiltonian mechanics, a theoretical formalism, based on the principle of conservation of energy
Lagrangian mechanics, another theoretical formalism, based on the principle of the least action
Classical statistical mechanics generalizes ordinary classical mechanics to consider systems in an unknown state; often used to derive thermodynamic properties.
Celestial mechanics, the motion of bodies in space: planets, comets, stars, galaxies, etc.
Astrodynamics, spacecraft navigation, etc.
Solid mechanics, elasticity, plasticity, or viscoelasticity exhibited by deformable solids
Fracture mechanics
Acoustics, sound (density, variation, propagation) in solids, fluids and gases
Statics, semi-rigid bodies in mechanical equilibrium
Fluid mechanics, the motion of fluids
Soil mechanics, mechanical behavior of soils
Continuum mechanics, mechanics of continua (both solid and fluid)
Hydraulics, mechanical properties of liquids
Fluid statics, liquids in equilibrium
Applied mechanics (also known as engineering mechanics)
Biomechanics, solids, fluids, etc. in biology
Biophysics, physical processes in living organisms
Relativistic or Einsteinian mechanics
Quantum
The following are categorized as being part of quantum mechanics:
Schrödinger wave mechanics, used to describe the movements of the wavefunction of a single particle.
Matrix mechanics is an alternative formulation that allows considering systems with a finite-dimensional state space.
Quantum statistical mechanics generalizes ordinary quantum mechanics to consider systems in an unknown state; often used to derive thermodynamic properties.
Particle physics, the motion, structure, and reactions of particles
Nuclear physics, the motion, structure, and reactions of nuclei
Condensed matter physics, quantum gases, solids, liquids, etc.
Historically, classical mechanics had been around for nearly a quarter millennium before quantum mechanics developed. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica, developed over the seventeenth century. Quantum mechanics developed later, over the late nineteenth and early twentieth centuries, precipitated by Planck's postulate and Albert Einstein's explanation of the photoelectric effect. Both fields are commonly held to constitute the most certain knowledge that exists about physical nature.
Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the extensive use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them.
Quantum mechanics is of a bigger scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects, each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers, i.e. if quantum mechanics is applied to large systems (for e.g. a baseball), the result would almost be the same if classical mechanics had been applied. Quantum mechanics has superseded classical mechanics at the foundation level and is indispensable for the explanation and prediction of processes at the molecular, atomic, and sub-atomic level. However, for macroscopic processes classical mechanics is able to solve problems which are unmanageably difficult (mainly due to computational limits) in quantum mechanics and hence remains useful and well used.
Modern descriptions of such behavior begin with a careful definition of such quantities as displacement (distance moved), time, velocity, acceleration, mass, and force. Until about 400 years ago, however, motion was explained from a very different point of view. For example, following the ideas of Greek philosopher and scientist Aristotle, scientists reasoned that a cannonball falls down because its natural position is in the Earth; the Sun, the Moon, and the stars travel in circles around the Earth because it is the nature of heavenly objects to travel in perfect circles.
Often cited as father to modern science, Galileo brought together the ideas of other great thinkers of his time and began to calculate motion in terms of distance travelled from some starting position and the time that it took. He showed that the speed of falling objects increases steadily during the time of their fall. This acceleration is the same for heavy objects as for light ones, provided air friction (air resistance) is discounted. The English mathematician and physicist Isaac Newton improved this analysis by defining force and mass and relating these to acceleration. For objects traveling at speeds close to the speed of light, Newton's laws were superseded by Albert Einstein's theory of relativity. For atomic and subatomic particles, Newton's laws were superseded by quantum theory. For everyday phenomena, however, Newton's three laws of motion remain the cornerstone of dynamics, which is the study of what causes motion.
Relativistic
Akin to the distinction between quantum and classical mechanics, Albert Einstein's general and special theories of relativity have expanded the scope of Newton and Galileo's formulation of mechanics. The differences between relativistic and Newtonian mechanics become significant and even dominant as the velocity of a body approaches the speed of light. For instance, in Newtonian mechanics, the kinetic energy of a free particle is E = mv²/2, whereas in relativistic mechanics, it is E = (γ − 1)mc² (where γ is the Lorentz factor; this formula reduces to the Newtonian expression in the low energy limit).
For high-energy processes, quantum mechanics must be adjusted to account for special relativity; this has led to the development of quantum field theory.
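A brief sketch comparing the Newtonian and relativistic kinetic-energy expressions quoted above for an assumed 1 kg body:

```python
# Compare the Newtonian kinetic energy (1/2) m v^2 with the relativistic (gamma - 1) m c^2.
import math

c = 299_792_458.0   # speed of light, m/s

def ke_newton(m, v):
    return 0.5 * m * v ** 2

def ke_relativistic(m, v):
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma - 1.0) * m * c ** 2

m = 1.0
for frac in (0.01, 0.1, 0.5, 0.9):
    v = frac * c
    print(frac, ke_newton(m, v), ke_relativistic(m, v))
# The two agree closely at low speed and diverge as v approaches c.
```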
Professional organizations
Applied Mechanics Division, American Society of Mechanical Engineers
Fluid Dynamics Division, American Physical Society
Society for Experimental Mechanics
Institution of Mechanical Engineers is the United Kingdom's qualifying body for mechanical engineers and has been the home of Mechanical Engineers for over 150 years.
International Union of Theoretical and Applied Mechanics
See also
Action principles
Applied mechanics
Dynamics
Engineering
Index of engineering science and mechanics articles
Kinematics
Kinetics
Non-autonomous mechanics
Statics
Wiesen Test of Mechanical Aptitude (WTMA)
References
Further reading
Robert Stawell Ball (1871) Experimental Mechanics from Google books.
Practical Mechanics for Boys (1914) by James Slough Zerbe.
External links
Physclips: Mechanics with animations and video clips from the University of New South Wales
The Archimedes Project
Articles containing video clips
Rigid body dynamics
In the physical science of dynamics, rigid-body dynamics studies the movement of systems of interconnected bodies under the action of external forces. The assumption that the bodies are rigid (i.e. they do not deform under the action of applied forces) simplifies analysis, by reducing the parameters that describe the configuration of the system to the translation and rotation of reference frames attached to each body. This excludes bodies that display fluid, highly elastic, and plastic behavior.
The dynamics of a rigid body system is described by the laws of kinematics and by the application of Newton's second law (kinetics) or their derivative form, Lagrangian mechanics. The solution of these equations of motion provides a description of the position, velocity, and acceleration of the individual components of the system, and of the system as a whole, as a function of time. The formulation and solution of rigid body dynamics is an important tool in the computer simulation of mechanical systems.
Planar rigid body dynamics
If a system of particles moves parallel to a fixed plane, the system is said to be constrained to planar movement. In this case, Newton's laws (kinetics) for a rigid system of N particles, Pi, i = 1, ..., N, simplify because there is no movement in the k direction. Determine the resultant force and torque at a reference point R, to obtain
F = Σi mi Ai,  T = Σi (ri − R) × (mi Ai),
where ri denotes the planar trajectory of each particle and Ai its acceleration.
The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration A of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as
Ai = α × (ri − R) + ω × (ω × (ri − R)) + A.
For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along k perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors ei from the reference point R to a point ri and the unit vectors ti = k × ei, so
This yields the resultant force on the system as
and torque as
where k is the unit vector perpendicular to the plane for all of the particles Pi.
Use the center of mass C as the reference point, so these equations for Newton's laws simplify to become
F = M A,  T = I α k,
where M is the total mass, A is the acceleration of the center of mass, and I is the moment of inertia about an axis perpendicular to the movement of the rigid system and through the center of mass.
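A minimal sketch of how these simplified planar equations can be evaluated numerically is given below; the mass, inertia, force, and torque values are made-up illustrative numbers, not taken from the text.

```python
import numpy as np

def planar_accelerations(total_mass, inertia_about_com, force_xy, torque_z):
    """Return (A, alpha) from the planar equations F = M*A and T = I*alpha,
    with the center of mass as the reference point."""
    a_com = np.asarray(force_xy, dtype=float) / total_mass
    alpha = torque_z / inertia_about_com
    return a_com, alpha

# Example: a 2 kg plate with I = 0.05 kg*m^2, pushed by a 4 N force along x
# while a 0.1 N*m torque spins it about the axis perpendicular to the plane.
a_com, alpha = planar_accelerations(2.0, 0.05, [4.0, 0.0], 0.1)
print("A =", a_com, "m/s^2")        # [2. 0.]
print("alpha =", alpha, "rad/s^2")  # 2.0
```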
Rigid body in three dimensions
Orientation or attitude descriptions
Several methods to describe orientations of a rigid body in three dimensions have been developed. They are summarized in the following sections.
Euler angles
The first attempt to represent an orientation is attributed to Leonhard Euler. He imagined three reference frames that could rotate one around the other, and realized that by starting with a fixed reference frame and performing three rotations, he could get any other reference frame in the space (using two rotations to fix the vertical axis and another to fix the other two axes). The values of these three rotations are called Euler angles; the three rotations are referred to as precession, nutation, and intrinsic rotation.
Tait–Bryan angles
These are three angles, also known as yaw, pitch and roll, Navigation angles and Cardan angles. Mathematically they constitute a set of six possibilities inside the twelve possible sets of Euler angles, the ordering being the one best used for describing the orientation of a vehicle such as an airplane. In aerospace engineering they are usually referred to as Euler angles.
Orientation vector
Euler also realized that the composition of two rotations is equivalent to a single rotation about a different fixed axis (Euler's rotation theorem). Therefore, the composition of the former three angles has to be equal to only one rotation, whose axis was complicated to calculate until matrices were developed.
Based on this fact he introduced a vectorial way to describe any rotation, with a vector on the rotation axis and magnitude equal to the value of the angle. Therefore, any orientation can be represented by a rotation vector (also called Euler vector) that leads to it from the reference frame. When used to represent an orientation, the rotation vector is commonly called orientation vector, or attitude vector.
A similar method, called axis-angle representation, describes a rotation or orientation using a unit vector aligned with the rotation axis, and a separate value to indicate the angle (see figure).
Orientation matrix
With the introduction of matrices the Euler theorems were rewritten. The rotations were described by orthogonal matrices referred to as rotation matrices or direction cosine matrices. When used to represent an orientation, a rotation matrix is commonly called orientation matrix, or attitude matrix.
The above-mentioned Euler vector is the eigenvector of a rotation matrix (a rotation matrix has a unique real eigenvalue).
The product of two rotation matrices is the composition of rotations. Therefore, as before, the orientation can be given as the rotation from the initial frame to achieve the frame that we want to describe.
The configuration space of a non-symmetrical object in n-dimensional space is SO(n) × Rn. Orientation may be visualized by attaching a basis of tangent vectors to an object. The direction in which each vector points determines its orientation.
Orientation quaternion
Another way to describe rotations is using rotation quaternions, also called versors. They are equivalent to rotation matrices and rotation vectors. Compared with rotation vectors, they can be more easily converted to and from matrices. When used to represent orientations, rotation quaternions are typically called orientation quaternions or attitude quaternions.
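The sketch below illustrates, assuming NumPy is available, how one orientation can be expressed in two of these representations: an axis-angle pair converted to a rotation matrix via Rodrigues' formula and to a unit quaternion. The function names and example values are hypothetical.

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rotation matrix from a unit axis and angle (Rodrigues' formula)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    kx, ky, kz = axis
    K = np.array([[0, -kz, ky],
                  [kz, 0, -kx],
                  [-ky, kx, 0]])   # skew-symmetric cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def axis_angle_to_quaternion(axis, angle):
    """Unit quaternion (w, x, y, z) representing the same rotation."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

axis, angle = [0.0, 0.0, 1.0], np.pi / 2        # 90 degrees about z
R = axis_angle_to_matrix(axis, angle)
q = axis_angle_to_quaternion(axis, angle)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # x axis maps to y: [0. 1. 0.]
print(np.round(q, 6))                              # [0.707107 0. 0. 0.707107]
```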
Newton's second law in three dimensions
To consider rigid body dynamics in three-dimensional space, Newton's second law must be extended to define the relationship between the movement of a rigid body and the system of forces and torques that act on it.
Newton formulated his second law for a particle as, "The change of motion of an object is proportional to the force impressed and is made in the direction of the straight line in which the force is impressed." Because Newton generally referred to mass times velocity as the "motion" of a particle, the phrase "change of motion" refers to the mass times acceleration of the particle, and so this law is usually written as
F = ma,
where F is understood to be the only external force acting on the particle, m is the mass of the particle, and a is its acceleration vector. The extension of Newton's second law to rigid bodies is achieved by considering a rigid system of particles.
Rigid system of particles
If a system of N particles, Pi, i = 1, ..., N, is assembled into a rigid body, then Newton's second law can be applied to each of the particles in the body. If Fi is the external force applied to particle Pi with mass mi, then
mi ai = Fi + Σj Fij,
where ai is the acceleration of particle Pi and Fij is the internal force of particle Pj acting on particle Pi that maintains the constant distance between these particles.
An important simplification to these force equations is obtained by introducing the resultant force and torque that acts on the rigid system. This resultant force and torque is obtained by choosing one of the particles in the system as a reference point, R, where each of the external forces are applied with the addition of an associated torque. The resultant force F and torque T are given by the formulas
F = Σi Fi,  T = Σi (Ri − R) × Fi,
where Ri is the vector that defines the position of particle Pi.
Newton's second law for a particle combines with these formulas for the resultant force and torque to yield
F = Σi mi ai,  T = Σi (Ri − R) × (mi ai),
where the internal forces Fij cancel in pairs. The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration a of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as
ai = α × (Ri − R) + ω × (ω × (Ri − R)) + a.
Mass properties
The mass properties of the rigid body are represented by its center of mass and inertia matrix. Choose the reference point R so that it satisfies the condition
Σi mi (Ri − R) = 0;
then it is known as the center of mass of the system.
The inertia matrix [IR] of the system relative to the reference point R is defined by
[IR] = Σi mi (SiT Si [E3] − Si SiT),
where Si is the column vector Ri − R; SiT is its transpose, and [E3] is the 3 by 3 identity matrix.
SiT Si is the scalar product of Si with itself, while Si SiT is the tensor product of Si with itself.
Force-torque equations
Using the center of mass and inertia matrix, the force and torque equations for a single rigid body take the form
F = m a,  T = [IR] α + ω × ([IR] ω),
where a is the acceleration of the center of mass and [IR] is the inertia matrix relative to the center of mass; these are known as Newton's second law of motion for a rigid body.
The dynamics of an interconnected system of rigid bodies, Bi, i = 1, ..., M, is formulated by isolating each rigid body and introducing the interaction forces. The resultant of the external and interaction forces on each body yields the force-torque equations
Newton's formulation yields 6M equations that define the dynamics of a system of M rigid bodies.
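The following is a hedged sketch of how the force-torque equations above can be solved numerically for a single body: given the resultant force and torque about the center of mass, the inertia matrix, and the current angular velocity, it returns the translational and angular accelerations. All numbers are illustrative assumptions.

```python
import numpy as np

def rigid_body_accelerations(mass, inertia_com, omega, force, torque):
    """Solve F = m*a and T = I*alpha + omega x (I*omega) for a and alpha.
    All vectors and the inertia matrix are expressed in the same frame."""
    inertia_com = np.asarray(inertia_com, dtype=float)
    omega = np.asarray(omega, dtype=float)
    a_com = np.asarray(force, dtype=float) / mass
    gyroscopic = np.cross(omega, inertia_com @ omega)
    alpha = np.linalg.solve(inertia_com,
                            np.asarray(torque, dtype=float) - gyroscopic)
    return a_com, alpha

# Illustrative numbers: a 3 kg body with a diagonal inertia matrix,
# spinning about z while a torque is applied about x.
I_C = np.diag([0.02, 0.03, 0.04])       # kg*m^2
omega = np.array([0.0, 0.0, 10.0])      # rad/s
a, alpha = rigid_body_accelerations(3.0, I_C, omega,
                                    force=[0.0, 0.0, -29.4],
                                    torque=[0.5, 0.0, 0.0])
print("a     =", a)       # [ 0.  0. -9.8]
print("alpha =", alpha)   # [25.  0.  0.] for these numbers
```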
Rotation in three dimensions
A rotating object, whether under the influence of torques or not, may exhibit the behaviours of precession and nutation.
The fundamental equation describing the behavior of a rotating solid body is Euler's equation of motion:
τ = DL/Dt = dL/dt + ω × L = Iα + ω × (Iω),
where the pseudovectors τ and L are, respectively, the torques on the body and its angular momentum, the scalar I is its moment of inertia, the vector ω is its angular velocity, the vector α is its angular acceleration, D is the differential in an inertial reference frame and d is the differential in a relative reference frame fixed with the body.
The solution to this equation when there is no applied torque is discussed in the articles Euler's equation of motion and Poinsot's ellipsoid.
It follows from Euler's equation that a torque τ applied perpendicular to the axis of rotation, and therefore perpendicular to L, results in a rotation about an axis perpendicular to both τ and L. This motion is called precession. The angular velocity of precession ΩP is given by the cross product:
τ = ΩP × L.
Precession can be demonstrated by placing a spinning top with its axis horizontal and supported loosely (frictionless toward precession) at one end. Instead of falling, as might be expected, the top appears to defy gravity by remaining with its axis horizontal when the other end of the axis is left unsupported, while the free end of the axis slowly describes a circle in a horizontal plane; this is the precessional turning. This effect is explained by the above equations. The torque on the top is supplied by a couple of forces: gravity acting downward on the device's centre of mass, and an equal force acting upward to support one end of the device. The rotation resulting from this torque is not downward, as might be intuitively expected, causing the device to fall, but perpendicular to both the gravitational torque (horizontal and perpendicular to the axis of rotation) and the axis of rotation (horizontal and outwards from the point of support), i.e., about a vertical axis, causing the device to rotate slowly about the supporting point.
Under a constant torque of magnitude τ, the speed of precession ΩP is inversely proportional to L, the magnitude of its angular momentum:
ΩP = τ / (L sin θ),
where θ is the angle between the vectors ΩP and L. Thus, if the top's spin slows down (for example, due to friction), its angular momentum decreases and so the rate of precession increases. This continues until the device is unable to rotate fast enough to support its own weight, when it stops precessing and falls off its support, mostly because friction against the precession causes another precession that in turn causes the fall.
By convention, these three vectors - torque, spin, and precession - are all oriented with respect to each other according to the right-hand rule.
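As a hedged numerical illustration of the relation ΩP = τ/(L sin θ), the sketch below uses assumed dimensions for a small toy gyroscope with its axis horizontal; none of the values come from the text.

```python
import math

# Assumed toy-gyroscope numbers (for illustration only).
mass = 0.10          # kg
r_cm = 0.04          # m, distance from the support point to the centre of mass
I_spin = 2.0e-5      # kg*m^2, moment of inertia about the spin axis
spin_rate = 2 * math.pi * 50.0   # rad/s (50 revolutions per second)
g = 9.81             # m/s^2
theta = math.pi / 2  # axis horizontal: angle between Omega_P and L is 90 deg

torque = mass * g * r_cm                 # gravitational torque about the support
L = I_spin * spin_rate                   # spin angular momentum
omega_p = torque / (L * math.sin(theta)) # precession rate, rad/s

print(f"torque = {torque:.4f} N*m, L = {L:.4f} kg*m^2/s")
print(f"precession rate = {omega_p:.2f} rad/s "
      f"({omega_p / (2 * math.pi):.2f} rev/s)")
# As the spin rate (and hence L) decreases, omega_p increases, as stated above.
```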
Virtual work of forces acting on a rigid body
An alternate formulation of rigid body dynamics that has a number of convenient features is obtained by considering the virtual work of forces acting on a rigid body.
The virtual work of forces acting at various points on a single rigid body can be calculated using the velocities of their point of application and the resultant force and torque. To see this, let the forces F1, F2 ... Fn act on the points R1, R2 ... Rn in a rigid body.
The trajectories of Ri, are defined by the movement of the rigid body. The velocity of the points Ri along their trajectories are
where ω is the angular velocity vector of the body.
Virtual work
Work is computed from the dot product of each force with the displacement of its point of contact
If the trajectory of a rigid body is defined by a set of generalized coordinates qj, j = 1, ..., m, then the virtual displacements δri are given by
The virtual work of this system of forces acting on the body in terms of the generalized coordinates becomes
or, collecting the coefficients of δqj,
Generalized forces
For simplicity consider a trajectory of a rigid body that is specified by a single generalized coordinate q, such as a rotation angle, then the formula becomes
Introduce the resultant force F and torque T so this equation takes the form
The quantity Q defined by
is known as the generalized force associated with the virtual displacement δq. This formula generalizes to the movement of a rigid body defined by more than one generalized coordinate, that is
where
It is useful to note that conservative forces such as gravity and spring forces are derivable from a potential function V(q1, ..., qm), known as a potential energy. In this case the generalized forces are given by
Qj = −∂V/∂qj.
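A minimal sketch, assuming a simple pendulum with the rotation angle as the single generalized coordinate, is shown below: it evaluates Q = −∂V/∂θ both analytically and by a finite difference. The parameter values are hypothetical.

```python
import math

# Simple pendulum: one generalized coordinate, the angle theta from vertical.
# Assumed parameters for illustration.
m, g, length = 0.5, 9.81, 0.8   # kg, m/s^2, m

def potential(theta):
    """Gravitational potential energy V(theta) = -m*g*l*cos(theta)."""
    return -m * g * length * math.cos(theta)

def generalized_force(theta, h=1e-6):
    """Q = -dV/dtheta, evaluated with a central finite difference."""
    return -(potential(theta + h) - potential(theta - h)) / (2 * h)

theta = math.radians(30.0)
q_numeric = generalized_force(theta)
q_analytic = -m * g * length * math.sin(theta)   # -dV/dtheta done by hand
print(f"Q (finite difference) = {q_numeric:.6f} N*m")
print(f"Q (analytic)          = {q_analytic:.6f} N*m")
```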
D'Alembert's form of the principle of virtual work
The equations of motion for a mechanical system of rigid bodies can be determined using D'Alembert's form of the principle of virtual work. The principle of virtual work is used to study the static equilibrium of a system of rigid bodies; however, by introducing acceleration terms in Newton's laws this approach is generalized to define dynamic equilibrium.
Static equilibrium
The static equilibrium of a mechanical system of rigid bodies is defined by the condition that the virtual work of the applied forces is zero for any virtual displacement of the system. This is known as the principle of virtual work. This is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is Qi = 0.
Let a mechanical system be constructed from n rigid bodies, Bi, i = 1, ..., n, and let the resultant of the applied forces on each body be the force-torque pairs Fi and Ti, i = 1, ..., n. Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocity Vi and angular velocity ωi, i = 1, ..., n, of each rigid body are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have one degree of freedom.
The virtual work of the forces and torques, Fi and Ti, applied to this one degree of freedom system is given by
where
is the generalized force acting on this one degree of freedom system.
If the mechanical system is defined by m generalized coordinates, qj, j = 1, ..., m, then the system has m degrees of freedom and the virtual work is given by,
where
is the generalized force associated with the generalized coordinate qj. The principle of virtual work states that static equilibrium occurs when these generalized forces acting on the system are zero, that is
Qj = 0, j = 1, ..., m.
These equations define the static equilibrium of the system of rigid bodies.
Generalized inertia forces
Consider a single rigid body which moves under the action of a resultant force F and torque T, with one degree of freedom defined by the generalized coordinate q. Assume the reference point for the resultant force and torque is the center of mass of the body, then the generalized inertia force associated with the generalized coordinate is given by
This inertia force can be computed from the kinetic energy of the rigid body,
by using the formula
A system of rigid bodies with m generalized coordinates has the kinetic energy
which can be used to calculate the m generalized inertia forces
Dynamic equilibrium
D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires that
for any set of virtual displacements δqj. This condition yields m equations,
which can also be written as
The result is a set of m equations of motion that define the dynamics of the rigid body system.
Lagrange's equations
If the generalized forces Qj are derivable from a potential energy V(q1, ..., qm), then these equations of motion take the form
In this case, introduce the Lagrangian, L = T − V, so these equations of motion become
d/dt(∂L/∂q̇j) − ∂L/∂qj = 0, j = 1, ..., m.
These are known as Lagrange's equations of motion.
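For a simple pendulum with Lagrangian L = ½ml²θ̇² + mgl cos θ, Lagrange's equations give θ̈ = −(g/l) sin θ. The sketch below, with assumed parameters, integrates this equation and checks that the total energy stays nearly constant; it is an illustration, not part of the source.

```python
import math

g, length = 9.81, 1.0                     # assumed values
theta, omega = math.radians(20.0), 0.0    # initial angle and angular velocity
dt, steps = 1e-4, 50_000                  # integrate for 5 seconds

def energy(theta, omega, m=1.0):
    # Total energy T + V from the pendulum Lagrangian (m cancels in the dynamics).
    return 0.5 * m * length**2 * omega**2 - m * g * length * math.cos(theta)

e0 = energy(theta, omega)
for _ in range(steps):
    # Semi-implicit Euler step for theta'' = -(g/l) sin(theta)
    omega += -(g / length) * math.sin(theta) * dt
    theta += omega * dt

print(f"theta after 5 s = {math.degrees(theta):.3f} deg")
print(f"relative energy drift = {abs(energy(theta, omega) - e0) / abs(e0):.2e}")
```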
Linear and angular momentum
System of particles
The linear and angular momentum of a rigid system of particles is formulated by measuring the position and velocity of the particles relative to the center of mass. Let the system of particles Pi, i = 1, ..., n, be located at the coordinates ri with velocities vi. Select a reference point R and compute the relative position and velocity vectors,
The total linear and angular momentum vectors relative to the reference point R are
and
If R is chosen as the center of mass these equations simplify to
Rigid system of particles
To specialize these formulas to a rigid body, assume the particles are rigidly connected to each other so Pi, i = 1, ..., n, are located by the coordinates ri and velocities vi. Select a reference point R and compute the relative position and velocity vectors,
ri − R and vi = V + ω × (ri − R),
where V is the velocity of the reference point and ω is the angular velocity of the system.
The linear momentum and angular momentum of this rigid system measured relative to the center of mass R is
These equations simplify to become,
where M is the total mass of the system and [I] is the moment of inertia matrix defined by
[I] = −Σi mi [ri − R][ri − R],
where [ri − R] is the skew-symmetric matrix constructed from the vector ri − R.
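A minimal sketch of this construction for a set of point masses is given below, assuming NumPy; it builds [I] = −Σ mi [ri − R][ri − R] from the skew-symmetric matrices and evaluates it for a hypothetical square of four unit masses.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v] such that skew(v) @ x == np.cross(v, x)."""
    x, y, z = v
    return np.array([[0, -z, y],
                     [z, 0, -x],
                     [-y, x, 0]])

def inertia_matrix(masses, points, ref):
    """[I] = -sum_i m_i [r_i - R][r_i - R] about the reference point R."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, points):
        d = np.asarray(r, dtype=float) - np.asarray(ref, dtype=float)
        S = skew(d)
        I += -m * (S @ S)
    return I

# Illustrative rigid body: four 1 kg point masses at the corners of a square.
masses = [1.0, 1.0, 1.0, 1.0]
points = [[1, 1, 0], [1, -1, 0], [-1, 1, 0], [-1, -1, 0]]
com = np.mean(points, axis=0)          # centre of mass (the origin here)
I = inertia_matrix(masses, points, com)
print(I)   # diagonal: Ixx = Iyy = 4, Izz = 8 for this set of points
```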
Applications
For the analysis of robotic systems
For the biomechanical analysis of animals, humans or humanoid systems
For the analysis of space objects
For the understanding of strange motions of rigid bodies.
For the design and development of dynamics-based sensors, such as gyroscopic sensors.
For the design and development of various stability enhancement applications in automobiles.
For improving the graphics of video games which involves rigid bodies
See also
Analytical mechanics
Analytical dynamics
Calculus of variations
Classical mechanics
Dynamics (mechanics)
History of classical mechanics
Lagrangian mechanics
Lagrangian
Hamiltonian mechanics
Rigid body
Rigid transformation
Rigid rotor
Soft-body dynamics
Multibody system
Polhode
Herpolhode
Precession
Poinsot's ellipsoid
Gyroscope
Physics engine
Physics processing unit
Physics Abstraction Layer – Unified multibody simulator
RigidChips – Japanese rigid-body simulator
Euler's Equation
References
Further reading
E. Leimanis (1965). The General Problem of the Motion of Coupled Rigid Bodies about a Fixed Point. (Springer, New York).
W. B. Heard (2006). Rigid Body Mechanics: Mathematics, Physics and Applications. (Wiley-VCH).
External links
Chris Hecker's Rigid Body Dynamics Information
Physically Based Modeling: Principles and Practice
DigitalRune Knowledge Base contains a master thesis and a collection of resources about rigid body dynamics.
F. Klein, "Note on the connection between line geometry and the mechanics of rigid bodies" (English translation)
F. Klein, "On Sir Robert Ball's theory of screws" (English translation)
E. Cotton, "Application of Cayley geometry to the geometric study of the displacement of a solid around a fixed point" (English translation)
Rigid bodies
Rigid bodies mechanics
Engineering mechanics
Rotational symmetry
Elastic collision
In physics, an elastic collision is an encounter (collision) between two bodies in which the total kinetic energy of the two bodies remains the same. In an ideal, perfectly elastic collision, there is no net conversion of kinetic energy into other forms such as heat, noise, or potential energy.
During the collision of small objects, kinetic energy is first converted to potential energy associated with a repulsive or attractive force between the particles (when the particles move against this force, i.e. the angle between the force and the relative velocity is obtuse), then this potential energy is converted back to kinetic energy (when the particles move with this force, i.e. the angle between the force and the relative velocity is acute).
Collisions of atoms are elastic, for example Rutherford backscattering.
A useful special case of elastic collision is when the two bodies have equal mass, in which case they will simply exchange their momenta.
The molecules—as distinct from atoms—of a gas or liquid rarely experience perfectly elastic collisions because kinetic energy is exchanged between the molecules’ translational motion and their internal degrees of freedom with each collision. At any instant, half the collisions are, to a varying extent, inelastic collisions (the pair possesses less kinetic energy in their translational motions after the collision than before), and half could be described as “super-elastic” (possessing more kinetic energy after the collision than before). Averaged across the entire sample, molecular collisions can be regarded as essentially elastic as long as Planck's law forbids energy from being carried away by black-body photons.
In the case of macroscopic bodies, perfectly elastic collisions are an ideal never fully realized, but approximated by the interactions of objects such as billiard balls.
When considering energies, possible rotational energy before and/or after a collision may also play a role.
Equations
One-dimensional Newtonian
In any collision, momentum is conserved; but in an elastic collision, kinetic energy is also conserved. Consider particles A and B with masses mA, mB, and velocities vA1, vB1 before collision, vA2, vB2 after collision. The conservation of momentum before and after the collision is expressed by:
mA vA1 + mB vB1 = mA vA2 + mB vB2.
Likewise, the conservation of the total kinetic energy is expressed by:
½ mA vA1² + ½ mB vB1² = ½ mA vA2² + ½ mB vB2².
These equations may be solved directly to find vA2, vB2 when vA1, vB1 are known:
vA2 = ((mA − mB)/(mA + mB)) vA1 + (2 mB/(mA + mB)) vB1,
vB2 = (2 mA/(mA + mB)) vA1 + ((mB − mA)/(mA + mB)) vB1.
Alternatively the final velocity of a particle, v2 (vA2 or vB2), is expressed by:
v2 = (1 + e) vCoM − e v1,
Where:
e is the coefficient of restitution.
vCoM is the velocity of the center of mass of the system of two particles.
v1 (vA1 or vB1) is the initial velocity of the particle.
If both masses are the same, we have a trivial solution:
vA2 = vB1,  vB2 = vA1.
This simply corresponds to the bodies exchanging their initial velocities with each other.
As can be expected, the solution is invariant under adding a constant to all velocities (Galilean relativity), which is like using a frame of reference with constant translational velocity. Indeed, to derive the equations, one may first change the frame of reference so that one of the known velocities is zero, determine the unknown velocities in the new frame of reference, and convert back to the original frame of reference.
Examples
Before collision
Ball A: mass = 3 kg, velocity = 4 m/s
Ball B: mass = 5 kg, velocity = 0 m/s
After collision
Ball A: velocity = −1 m/s
Ball B: velocity = 3 m/s
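A short, hedged Python sketch of the final-velocity formulas quoted earlier is shown below; it reproduces the worked example above and checks that momentum and kinetic energy are conserved.

```python
def elastic_1d(m_a, v_a1, m_b, v_b1):
    """Final velocities for a perfectly elastic head-on collision."""
    v_a2 = (m_a - m_b) / (m_a + m_b) * v_a1 + 2 * m_b / (m_a + m_b) * v_b1
    v_b2 = 2 * m_a / (m_a + m_b) * v_a1 + (m_b - m_a) / (m_a + m_b) * v_b1
    return v_a2, v_b2

# The worked example above: a 3 kg ball at 4 m/s strikes a stationary 5 kg ball.
v_a2, v_b2 = elastic_1d(3.0, 4.0, 5.0, 0.0)
print(v_a2, v_b2)   # -1.0 3.0

# Sanity checks: momentum and kinetic energy are conserved.
assert abs(3 * 4 + 5 * 0 - (3 * v_a2 + 5 * v_b2)) < 1e-12
assert abs(0.5 * 3 * 4**2 - (0.5 * 3 * v_a2**2 + 0.5 * 5 * v_b2**2)) < 1e-12
```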
In the limiting case where mA is much larger than mB, such as a ping-pong paddle hitting a ping-pong ball or an SUV hitting a trash can, the heavier mass hardly changes velocity, while the lighter mass bounces off, reversing its velocity and gaining approximately twice the velocity of the heavy one.
In the case of a large vA1, the value of vA2 is small if the masses are approximately the same: hitting a much lighter particle does not change the velocity much, while hitting a much heavier particle causes the fast particle to bounce back with high speed. This is why a neutron moderator (a medium which slows down fast neutrons, thereby turning them into thermal neutrons capable of sustaining a chain reaction) is a material full of atoms with light nuclei which do not easily absorb neutrons: the lightest nuclei have about the same mass as a neutron.
Derivation of solution
To derive the above equations for vA2, vB2, rearrange the kinetic energy and momentum equations:
Dividing each side of the top equation by each side of the bottom equation, and using the difference of squares, gives:
vA1 + vA2 = vB1 + vB2.
That is, the relative velocity of one particle with respect to the other is reversed by the collision.
Now the above formulas follow from solving a system of linear equations for vA2, vB2, regarding vA1, vB1 as constants:
Once vA2 is determined, vB2 can be found by symmetry.
Center of mass frame
With respect to the center of mass, both velocities are reversed by the collision: a heavy particle moves slowly toward the center of mass, and bounces back with the same low speed, and a light particle moves fast toward the center of mass, and bounces back with the same high speed.
The velocity of the center of mass does not change by the collision. To see this, consider the center of mass at a time before the collision and a time after the collision:
Hence, the velocities of the center of mass before and after collision are:
The numerators of the two expressions are the total momenta before and after the collision. Since momentum is conserved, the two velocities are equal.
One-dimensional relativistic
According to special relativity,
p = mv / √(1 − v²/c²),
where p denotes momentum of any particle with mass, v denotes velocity, and c is the speed of light.
In the center of momentum frame where the total momentum equals zero,
Here m1, m2 represent the rest masses of the two colliding bodies, u1, u2 their velocities before collision, v1, v2 their velocities after collision, p1, p2 their momenta, c is the speed of light in vacuum, and E denotes the total energy, the sum of rest masses and kinetic energies of the two bodies.
Since the total energy and momentum of the system are conserved and their rest masses do not change, it is shown that the momentum of the colliding body is decided by the rest masses of the colliding bodies, total energy and the total momentum. Relative to the center of momentum frame, the momentum of each colliding body does not change magnitude after collision, but reverses its direction of movement.
Comparing with classical mechanics, which gives accurate results dealing with macroscopic objects moving much slower than the speed of light, total momentum of the two colliding bodies is frame-dependent. In the center of momentum frame, according to classical mechanics,
This agrees with the relativistic calculation despite other differences.
One of the postulates in Special Relativity states that the laws of physics, such as conservation of momentum, should be invariant in all inertial frames of reference. In a general inertial frame where the total momentum could be arbitrary,
We can look at the two moving bodies as one system whose total momentum is p1 + p2, whose total energy is E, and whose velocity vc is the velocity of its center of mass. Relative to the center of momentum frame the total momentum equals zero. It can be shown that vc is given by:
vc = (p1 + p2) c² / E.
Now the velocities before the collision in the center of momentum frame, u1′ and u2′, are:
When u1 ≪ c and u2 ≪ c, the relativistic expressions reduce to their classical counterparts.
Therefore, the classical calculation holds true when the speed of both colliding bodies is much lower than the speed of light (about 300,000 kilometres per second).
Relativistic derivation using hyperbolic functions
Using the so-called parameter of velocity s (usually called the rapidity), defined by v/c = tanh(s),
we get
Relativistic energy and momentum are expressed as follows:
The equations for the sum of energy and momentum of the colliding masses m1 and m2 (whose velocities correspond to the velocity parameters s1 and s2), after dividing by an adequate power of c, are as follows:
and the dependent equation, the sum of the above equations, is:
Subtract the squares of both sides of the "momentum" equation from those of the "energy" equation and use the identity cosh²(s) − sinh²(s) = 1; after simplifying, we get:
For non-zero mass, using the hyperbolic trigonometric identity, we get:
As the function cosh is even, we get two solutions:
From the last equation, leading to a non-trivial solution, we solve for one of the velocity parameters and substitute it into the dependent equation to obtain the other; we then have:
This is a solution to the problem, but expressed in terms of the parameters of velocity. Back-substitution gives the solution for the velocities:
Substituting the previous solutions and replacing:
and after a lengthy transformation, with the substitution:
we get:
Two-dimensional
For the case of two non-spinning colliding bodies in two dimensions, the motion of the bodies is determined by the three conservation laws of momentum, kinetic energy and angular momentum. The overall velocity of each body must be split into two perpendicular velocities: one tangent to the surfaces of the colliding bodies at the point of contact, the other along the line of collision (the common normal of the surfaces). Since the collision only imparts force along the line of collision, the velocities that are tangent to the point of collision do not change. The velocities along the line of collision can then be used in the same equations as a one-dimensional collision. The final velocities can then be calculated from the two new component velocities and will depend on the point of collision. Studies of two-dimensional collisions are conducted for many bodies in the framework of a two-dimensional gas.
In a center of momentum frame at any time the velocities of the two bodies are in opposite directions, with magnitudes inversely proportional to the masses. In an elastic collision these magnitudes do not change. The directions may change depending on the shapes of the bodies and the point of impact. For example, in the case of spheres the angle depends on the distance between the (parallel) paths of the centers of the two bodies. Any non-zero change of direction is possible: if this distance is zero the velocities are reversed in the collision; if it is close to the sum of the radii of the spheres the two bodies are only slightly deflected.
Assuming that the second particle is at rest before the collision, the angles of deflection of the two particles, and , are related to the angle of deflection in the system of the center of mass by
The magnitudes of the velocities of the particles after the collision are:
Two-dimensional collision with two moving objects
The final x and y velocity components of the first ball can be calculated as:
where v1 and v2 are the scalar sizes of the two original speeds of the objects, m1 and m2 are their masses, θ1 and θ2 are their movement angles measured from the x axis (meaning moving directly down to the right is either a −45° angle or a 315° angle), and lowercase phi (φ) is the contact angle. (To get the x and y velocities of the second ball, one needs to swap all the '1' subscripts with '2' subscripts.)
This equation is derived from the fact that the interaction between the two bodies is easily calculated along the contact angle, meaning the velocities of the objects can be calculated in one dimension by rotating the x and y axis to be parallel with the contact angle of the objects, and then rotated back to the original orientation to get the true x and y components of the velocities.
In an angle-free representation, the changed velocities are computed using the centers x1 and x2 at the time of contact as
where the angle brackets indicate the inner product (or dot product) of two vectors.
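The angle-free formulas can be implemented directly with vectors, as in the hedged sketch below (assuming NumPy); the masses, velocities, and centre positions are hypothetical test values.

```python
import numpy as np

def elastic_2d(m1, m2, v1, v2, x1, x2):
    """Angle-free two-dimensional elastic collision of two smooth balls whose
    centres are at x1 and x2 at the moment of contact."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n = x1 - x2                     # vector joining the centres
    d2 = n @ n                      # squared distance between the centres
    v1_new = v1 - (2 * m2 / (m1 + m2)) * ((v1 - v2) @ n) / d2 * n
    v2_new = v2 - (2 * m1 / (m1 + m2)) * ((v2 - v1) @ (-n)) / d2 * (-n)
    return v1_new, v2_new

# Hypothetical head-on test: equal masses approaching along x exchange velocities.
v1n, v2n = elastic_2d(1.0, 1.0, [1.0, 0.0], [-1.0, 0.0], [0.0, 0.0], [2.0, 0.0])
print(v1n, v2n)    # [-1. 0.] [1. 0.]

# Momentum and kinetic energy are conserved for any masses and geometry.
m1, m2 = 2.0, 3.0
u1, u2 = np.array([2.0, 1.0]), np.array([-1.0, 0.5])
v1n, v2n = elastic_2d(m1, m2, u1, u2, [0.0, 0.0], [1.0, 1.0])
assert np.allclose(m1 * u1 + m2 * u2, m1 * v1n + m2 * v2n)
assert np.isclose(0.5 * m1 * u1 @ u1 + 0.5 * m2 * u2 @ u2,
                  0.5 * m1 * v1n @ v1n + 0.5 * m2 * v2n @ v2n)
```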
Other conserved quantities
In the particular case of particles having equal masses, it can be verified by direct computation from the result above that the scalar product of the velocities before and after the collision is the same, that is v1 · v2 = v1′ · v2′. Although this product is not an additive invariant in the same way that momentum and kinetic energy are for elastic collisions, it seems that preservation of this quantity can nonetheless be used to derive higher-order conservation laws.
See also
Collision
Inelastic collision
Coefficient of restitution
References
General references
External links
Rigid Body Collision Resolution in three dimensions including a derivation using the conservation laws
Classical mechanics
Collision
Particle physics
Scattering
Articles containing video clips
Inertial confinement fusion
Inertial confinement fusion (ICF) is a fusion energy process that initiates nuclear fusion reactions by compressing and heating targets filled with fuel. The targets are small pellets, typically containing deuterium (2H) and tritium (3H).
Energy is deposited in the target's outer layer, which explodes outward. This produces a reaction force in the form of shock waves that travel through the target. The waves compress and heat it. Sufficiently powerful shock waves generate fusion.
ICF is one of two major branches of fusion energy research; the other is magnetic confinement fusion (MCF). When first proposed in the early 1970s, ICF appeared to be a practical approach to power production and the field flourished. Experiments demonstrated that the efficiency of these devices was much lower than expected. Throughout the 1980s and '90s, experiments were conducted in order to understand the interaction of high-intensity laser light and plasma. These led to the design of much larger machines that achieved ignition-generating energies.
The largest operational ICF experiment is the National Ignition Facility (NIF) in the US. In 2022, the NIF produced fusion, delivering 2.05 megajoules (MJ) of energy to the target which produced 3.15 MJ, the first time that an ICF device produced more energy than was delivered to the target.
Description
Fusion basics
Fusion reactions combine smaller atoms to form larger ones. This occurs when two atoms (or ions, atoms stripped of their electrons) come close enough to each other that the nuclear force dominates the electrostatic force that otherwise keeps them apart. Overcoming electrostatic repulsion requires kinetic energy sufficient to overcome the Coulomb barrier or fusion barrier.
Less energy is needed to cause lighter nuclei to fuse, as they have less electrical charge and thus a lower barrier energy. Thus the barrier is lowest for hydrogen. Conversely, the nuclear force increases with the number of nucleons, so isotopes of hydrogen that contain additional neutrons reduce the required energy. The easiest fuel is a mixture of 2H, and 3H, known as D-T.
The odds of fusion occurring are a function of the fuel density and temperature and the length of time that the density and temperature are maintained. Even under ideal conditions, the chance that a D and T pair fuse is very small. Higher density and longer times allow more encounters among the atoms. The fusion cross section is further dependent on individual ion energies. This combination, the fusion triple product, must satisfy the Lawson criterion in order to reach ignition.
Thermonuclear devices
The first ICF devices were the hydrogen bombs invented in the early 1950s. A hydrogen bomb consists of two bombs in a single case. The first, the primary stage, is a fission-powered device normally using plutonium. When it explodes it gives off a burst of thermal X-rays that fill the interior of the specially designed bomb casing. These X-rays are absorbed by a special material surrounding the secondary stage, which consists mostly of the fusion fuel. The X-rays heat this material and cause it to explode. Due to Newton's Third Law, this causes the fuel inside to be driven inward, compressing and heating it. This causes the fusion fuel to reach the temperature and density where fusion reactions begin.
In the case of D-T fuel, most of the energy is released in the form of alpha particles and neutrons. Under normal conditions, an alpha can travel about 10 mm through the fuel, but in the ultra-dense conditions in the compressed fuel, they can travel about 0.01 mm before their electrical charge, interacting with the surrounding plasma, causes them to lose velocity. This means the majority of the energy released by the alphas is redeposited in the fuel. This transfer of kinetic energy heats the surrounding particles to the energies they need to undergo fusion. This process causes the fusion fuel to burn outward from the center. The electrically neutral neutrons travel longer distances in the fuel mass and do not contribute to this self-heating process. In a bomb, they are instead used to either breed tritium through reactions in a lithium-deuteride fuel, or are used to split additional fissionable fuel surrounding the secondary stage, often part of the bomb casing.
The requirement that the reaction has to be sparked by a fission bomb makes this method impractical for power generation. Not only would the fission triggers be expensive to produce, but the minimum size of such a bomb is large, defined roughly by the critical mass of the plutonium fuel used. Generally, it seems difficult to build efficient nuclear fusion devices much smaller than about 1 kiloton in yield, and the fusion secondary would add to this yield. This makes it a difficult engineering problem to extract power from the resulting explosions. Project PACER studied solutions to the engineering issues, but also demonstrated that it was not economically feasible. The cost of the bombs was far greater than the value of the resulting electricity.
Mechanism of action
The energy needed to overcome the Coulomb barrier corresponds to the energy of the average particle in a gas heated to 100 million K. The specific heat of hydrogen is about 14 Joule per gram-K, so considering a 1 milligram fuel pellet, the energy needed to raise the mass as a whole to this temperature is 1.4 megajoules (MJ).
In the more widely developed magnetic fusion energy (MFE) approach, confinement times are on the order of one second; however, plasmas can be sustained for minutes. In this case the confinement time represents the amount of time it takes for the energy from the reaction to be lost to the environment through a variety of mechanisms. For a one second confinement, the density needed to meet the Lawson criterion is about 10^14 particles per cubic centimetre (cc). For comparison, air at sea level has about 2.7 × 10^19 particles/cc, so the MFE approach has been described as "a good vacuum".
Considering a 1 milligram drop of D-T fuel in liquid form, the size is about 1 mm and the density is about 4 × 10^20/cc. Nothing holds the fuel together. Heat created by fusion events causes it to expand at the speed of sound, which leads to a confinement time around 2 × 10^−10 seconds. At liquid density the required confinement time is about 2 × 10^−7 s. In this case only about 0.1 percent of the fuel fuses before the drop blows apart.
The rate of fusion reactions is a function of density, and density can be improved through compression. If the drop is compressed from 1 mm to 0.1 mm in diameter, the confinement time drops by the same factor of 10, because the particles have less distance to travel before they escape. However, the density, which scales as the inverse cube of the linear dimension, increases by 1,000 times. This means the overall rate of fusion increases 1,000 times while the confinement drops by 10 times, a 100-fold improvement. In this case 10% of the fuel undergoes fusion; 10% of 1 mg of fuel produces about 30 MJ of energy, 30 times the amount needed to compress it to that density.
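The scaling argument can be restated as a few lines of arithmetic; the sketch below simply re-derives the figures quoted in this paragraph, with the D-T specific energy (roughly 337 MJ per milligram) supplied as an outside assumption rather than taken from the text.

```python
# Scaling argument from the text (illustrative arithmetic only).
linear_compression = 10                           # 1 mm drop compressed to 0.1 mm
confinement_factor = 1 / linear_compression       # less distance for particles to escape
density_factor = linear_compression ** 3          # density ~ 1 / length^3
net_gain = density_factor * confinement_factor    # net change in fusion yield
print(net_gain)          # 100.0 -> the "100-fold improvement" quoted above

# Burn-up fraction and energy figure quoted in the text.
fuel_mass_mg = 1.0
burn_fraction = 0.10                  # ~10% of the fuel fuses
dt_energy_mj_per_mg = 337             # approximate D-T specific energy (assumption)
yield_mj = fuel_mass_mg * burn_fraction * dt_energy_mj_per_mg
print(round(yield_mj))   # ~34 MJ, consistent with the "about 30 MJ" in the text
```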
The other key concept in ICF is that the entire fuel mass does not have to be raised to 100 million K. In a fusion bomb the reaction continues because the alpha particles released in the interior heat the fuel around it. At liquid density the alphas travel about 10 mm and thus their energy escapes the fuel. In the 0.1 mm compressed fuel, the alphas have a range of about 0.016 mm, meaning that they will stop within the fuel and heat it. In this case a "propagating burn" can be caused by heating only the center of the fuel to the needed temperature. This requires far less energy; calculations suggested 1 kJ is enough to reach the compression goal.
Some method is needed to heat the interior to fusion temperatures, and to do so while the fuel is compressed and the density is high enough. In modern ICF devices, the density of the compressed fuel mixture is as much as one-thousand times the density of water, or one-hundred times that of lead, around 1,000 g/cm³. Much of the work since the 1970s has been on ways to create the central hot-spot that starts off the burning, and dealing with the many practical problems in reaching the desired density.
Heating concepts
Early calculations suggested that the amount of energy needed to ignite the fuel was very small, but this does not match subsequent experience.
Hot spot ignition
The initial solution to the heating problem involved deliberate "shaping" of the energy delivery. The idea was to use an initial lower-energy pulse to vaporize the capsule and cause compression, and then a very short, very powerful pulse near the end of the compression cycle. The goal is to launch shock waves into the compressed fuel that travel inward to the center. When they reach the center they meet the waves coming in from other sides. This causes a brief period where the density in the center reaches much higher values, over 800 g/cm3.
The central hot spot ignition concept was the first to suggest ICF was not only a practical route to fusion, but relatively simple. This led to numerous efforts to build working systems in the early 1970s. These experiments revealed unexpected loss mechanisms. Early calculations suggested about 4.5 × 10^7 J/g would be needed, but modern calculations place it closer to 10^8 J/g. Greater understanding led to complex shaping of the pulse into multiple time intervals.
Fast ignition
The fast ignition approach employs a separate laser to supply additional energy directly to the center of the fuel. This can be done mechanically, often using a small metal cone to puncture the outer fuel pellet wall to inject the energy into the center. In tests, this approach failed because the laser pulse had to reach the center at a precise moment, while the center is obscured by debris and free electrons from the compression pulse. It also has the disadvantage of requiring a second laser pulse, which generally involves a completely separate laser.
Shock ignition
Shock ignition is similar in concept to the hot-spot technique, but instead of achieving ignition via compression heating, a powerful shock wave is sent into the fuel at a later time through a combination of compression and shock heating. This increases the efficiency of the process while lowering the overall amount of power required.
Direct vs. indirect drive
In the simplest method of inertial confinement, the fuel is arranged as a sphere. This allows it to be compressed uniformly from all sides. To produce the inward force, the fuel is placed within a thin capsule that absorbs energy from the driver beams, causing the capsule shell to explode outward. The capsule shell is usually made of a lightweight plastic, and the fuel is deposited as a layer on the inside by injecting and freezing the gaseous fuel into the shell.
Shining the driver beams directly onto the fuel capsule is known as "direct drive". The implosion process must be extremely uniform in order to avoid asymmetry due to Rayleigh–Taylor instability and similar effects. For a beam energy of 1 MJ, the fuel capsule cannot be larger than about 2 mm before these effects disrupt the implosion symmetry. This limits the size of the laser beams to a diameter so narrow that it is difficult to achieve in practice.
Alternatively "indirect drive" illuminates a small cylinder of heavy metal, often gold or lead, known as a hohlraum. The beam energy heats the hohlraum until it emits X-rays. These X-rays fill the interior of the hohlraum and heat the capsule. The advantage of indirect drive is that the beams can be larger and less accurate. The disadvantage is that much of the delivered energy is used to heat the hohlraum until it is "X-ray hot", so the end-to-end energy efficiency is much lower than the direct drive method.
Challenges
The primary challenges with increasing ICF performance are:
Improving the energy delivered to the target
Controlling symmetry of the imploding fuel
Delaying fuel heating until sufficient density is achieved
Preventing premature mixing of hot and cool fuel by hydrodynamic instabilities
Achieving shockwave convergence at the fuel center
In order to focus the shock wave on the center of the target, the target must be made with great precision and sphericity, with tolerances of no more than a few micrometres over its (inner and outer) surface. The lasers must be precisely targeted in space and time. Beam timing is relatively simple and is solved by using delay lines in the beams' optical path to achieve picosecond accuracy. The other major issue is so-called "beam-beam" imbalance and beam anisotropy. These problems are, respectively, that the energy delivered by one beam may be higher or lower than that delivered by the other beams impinging on the target, and that "hot spots" within a beam diameter induce uneven compression on the target surface, thereby forming Rayleigh-Taylor instabilities in the fuel, prematurely mixing it and reducing heating efficacy at the instant of maximum compression. The Richtmyer-Meshkov instability is also formed during the process due to shock waves.
These problems have been mitigated by beam smoothing techniques and beam energy diagnostics; however, RT instability remains a major issue. Modern cryogenic hydrogen ice targets tend to freeze a thin layer of deuterium on the inside of the shell while irradiating it with a low-power infrared laser to smooth its inner surface, and monitor it with a microscope-equipped camera, allowing the layer to be closely controlled. Cryogenic targets filled with D-T are "self-smoothing" due to the small amount of heat created by tritium decay. This is referred to as "beta-layering".
In the indirect drive approach, the absorption of thermal x-rays by the target is more efficient than the direct absorption of laser light. However, the hohlraums take up considerable energy to heat, significantly reducing energy transfer efficiency. Most often, indirect drive hohlraum targets are used to simulate thermonuclear weapons tests due to the fact that the fusion fuel in weapons is also imploded mainly by X-ray radiation.
ICF drivers are evolving. Lasers have scaled up from a few joules and kilowatts to megajoules and hundreds of terawatts, using mostly frequency doubled or tripled light from neodymium glass amplifiers.
Heavy ion beams are particularly interesting for commercial generation, as they are easy to create, control, and focus. However, it is difficult to achieve the energy densities required to implode a target efficiently, and most ion-beam systems require the use of a hohlraum surrounding the target to smooth out the irradiation.
History
Conception
United States
ICF history began as part of the "Atoms For Peace" conference in 1957. This was an international, UN-sponsored conference between the US and the Soviet Union. Some thought was given to using a hydrogen bomb to heat a water-filled cavern. The resulting steam could then be used to power conventional generators, and thereby provide electrical power.
This meeting led to Operation Plowshare, formed in June 1957 and formally named in 1961. It included three primary concepts; energy generation under Project PACER, the use of nuclear explosions for excavation, and for fracking in the natural gas industry. PACER was directly tested in December 1961 when the 3 kt Project Gnome device was detonated in bedded salt in New Mexico. While the press looked on, radioactive steam was released from the drill shaft, at some distance from the test site. Further studies designed engineered cavities to replace natural ones, but Plowshare turned from bad to worse, especially after the failure of 1962's Sedan which produced significant fallout. PACER continued to receive funding until 1975, when a 3rd party study demonstrated that the cost of electricity from PACER would be ten times the cost of conventional nuclear plants.
Another outcome of Atoms For Peace was to prompt John Nuckolls to consider what happens on the fusion side of the bomb as fuel mass is reduced. This work suggested that at sizes on the order of milligrams, little energy would be needed to ignite the fuel, much less than a fission primary. He proposed building, in effect, tiny all-fusion explosives using a tiny drop of D-T fuel suspended in the center of a hohlraum. The shell provided the same effect as the bomb casing in an H-bomb, trapping x-rays inside to irradiate the fuel. The main difference is that the X-rays would be supplied by an external device that heated the shell from the outside until it was glowing in the x-ray region. The power would be delivered by a then-unidentified pulsed power source he referred to, using bomb terminology, as the "primary".
The main advantage to this scheme is the fusion efficiency at high densities. According to the Lawson criterion, the amount of energy needed to heat the D-T fuel to break-even conditions at ambient pressure is perhaps 100 times greater than the energy needed to compress it to a pressure that would deliver the same rate of fusion. So, in theory, the ICF approach could offer dramatically more gain. This can be understood by considering the energy losses in a conventional scenario where the fuel is slowly heated, as in the case of magnetic fusion energy; the rate of energy loss to the environment is based on the temperature difference between the fuel and its surroundings, which continues to increase as the fuel temperature increases. In the ICF case, the entire hohlraum is filled with high-temperature radiation, limiting losses.
Germany
In 1956 a meeting was organized at the Max Planck Institute in Germany by fusion pioneer Carl Friedrich von Weizsäcker. At this meeting Friedwardt Winterberg proposed the non-fission ignition of a thermonuclear micro-explosion by a convergent shock wave driven with high explosives. Further reference to Winterberg's work in Germany on nuclear micro explosions (mininukes) is contained in a declassified report of the former East German Stasi (Staatssicherheitsdienst).
In 1964 Winterberg proposed that ignition could be achieved by an intense beam of microparticles accelerated to a velocity of 1000 km/s. In 1968, he proposed to use intense electron and ion beams generated by Marx generators for the same purpose. The advantage of this proposal is that charged particle beams are not only less expensive than laser beams, but can entrap the charged fusion reaction products due to the strong self-magnetic beam field, drastically reducing the compression requirements for beam ignited cylindrical targets.
USSR
In 1967, research fellow Gurgen Askaryan published an article proposing the use of focused laser beams in the fusion of lithium deuteride or deuterium.
Early research
Through the late 1950s, John Nuckolls and collaborators at Lawrence Livermore National Laboratory (LLNL) completed computer simulations of the ICF concept. In early 1960, they performed a full simulation of the implosion of 1 mg of D-T fuel inside a dense shell. The simulation suggested that a 5 MJ power input to the hohlraum would produce 50 MJ of fusion output, a gain of 10x. This was before the laser and a variety of other possible drivers were considered, including pulsed power machines, charged particle accelerators, plasma guns, and hypervelocity pellet guns.
Two theoretical developments advanced the field. One came from new simulations that considered the timing of the energy delivered in the pulse, known as "pulse shaping", leading to better implosion. The second was to make the shell much larger and thinner, forming a thin shell as opposed to an almost solid ball. These two changes dramatically increased implosion efficiency and thereby greatly lowered the required compression energy. Using these improvements, it was calculated that a driver of about 1 MJ would be needed, a five-fold reduction. Over the next two years, other theoretical advancements were proposed, notably Ray Kidder's development of an implosion system without a hohlraum, the so-called "direct drive" approach, and Stirling Colgate and Ron Zabawski's work on systems with as little as 1 μg of D-T fuel.
The introduction of the laser in 1960 at Hughes Research Laboratories in California appeared to present a perfect driver mechanism. Starting in 1962, Livermore's director John S. Foster, Jr. and Edward Teller began a small ICF laser study. Even at this early stage the suitability of ICF for weapons research was well understood and was the primary reason for its funding. Over the next decade, LLNL made small experimental devices for basic laser-plasma interaction studies.
Development begins
In 1967 Kip Siegel started KMS Industries. In the early 1970s he formed KMS Fusion to begin development of a laser-based ICF system. This development led to considerable opposition from the weapons labs, including LLNL, who put forth a variety of reasons that KMS should not be allowed to develop ICF in public. This opposition was funnelled through the Atomic Energy Commission, which demanded funding. Adding to the background noise were rumours of an aggressive Soviet ICF program, new higher-powered CO2 and glass lasers, the electron beam driver concept, and the energy crisis which added impetus to many energy projects.
In 1972 John Nuckolls wrote a paper introducing ICF and suggesting that testbed systems could be made to generate fusion with drivers in the kJ range, and high-gain systems with MJ drivers.
In spite of limited resources and business problems, KMS Fusion successfully demonstrated ICF fusion on 1 May 1974. This success was soon followed by Siegel's death and the end of KMS Fusion a year later. By this point several weapons labs and universities had started their own programs, notably the solid-state lasers (Nd:glass lasers) at LLNL and the University of Rochester, and krypton fluoride excimer laser systems at Los Alamos and the Naval Research Laboratory.
"High-energy" ICF
High-energy ICF experiments (multi-hundred joules per shot) began in the early 1970s, when better lasers appeared. Funding for fusion research, stimulated by the energy crises, produced rapid gains in performance, and inertial designs were soon reaching the same sort of "below break-even" conditions of the best MCF systems.
LLNL was, in particular, well funded and started a laser fusion development program. Their Janus laser started operation in 1974, and validated the approach of using Nd:glass lasers for high power devices. Focusing problems were explored in the Long path and Cyclops lasers, which led to the larger Argus laser. None of these were intended to be practical devices, but they increased confidence that the approach was valid. It was then believed that a much larger device of the Cyclops type could both compress and heat targets, leading to ignition. This misconception was based on extrapolation of the fusion yields seen from experiments utilizing the so-called "exploding pusher" fuel capsule. During the late 1970s and early 1980s the estimates for laser energy on target needed to achieve ignition doubled almost yearly as plasma instabilities and laser-plasma energy coupling loss modes were increasingly understood. The realization that exploding pusher target designs and single-digit kilojoule (kJ) laser irradiation intensities would never scale to high yields led to the effort to increase laser energies to the 100 kJ level in the ultraviolet band and to the production of advanced ablator and cryogenic DT ice target designs.
Shiva and Nova
One of the earliest large scale attempts at an ICF driver design was the Shiva laser, a 20-beam neodymium doped glass laser system at LLNL that started operation in 1978. Shiva was a "proof of concept" design intended to demonstrate compression of fusion fuel capsules to many times the liquid density of hydrogen. In this, Shiva succeeded, reaching 100 times the liquid density of deuterium. However, due to the laser's coupling with hot electrons, premature heating of the dense plasma was problematic and fusion yields were low. This failure to efficiently heat the compressed plasma pointed to the use of optical frequency multipliers as a solution that would frequency triple the infrared light from the laser into the ultraviolet at 351 nm. Schemes to efficiently triple the frequency of laser light, discovered at the Laboratory for Laser Energetics in 1980, were experimented with in the 24-beam OMEGA laser and the NOVETTE laser, which was followed by the Nova laser design with 10 times Shiva's energy, the first design with the specific goal of reaching ignition.
Nova also failed, this time due to severe variation in laser intensity in its beams (and differences in intensity between beams) caused by filamentation that resulted in large non-uniformity in irradiation smoothness at the target and asymmetric implosion. The techniques pioneered earlier could not address these new issues. This failure led to a much greater understanding of the process of implosion, and the way forward again seemed clear, namely to increase the uniformity of irradiation, reduce hot-spots in the laser beams through beam smoothing techniques to reduce Rayleigh–Taylor instabilities and increase laser energy on target by at least an order of magnitude. Funding was constrained in the 1980s.
National Ignition Facility
The resulting 192-beam design, dubbed the National Ignition Facility, started construction at LLNL in 1997. NIF's main objective is to operate as the flagship experimental device of the so-called nuclear stewardship program, supporting LLNL's traditional bomb-making role. Completed in March 2009, NIF experiments set new records for power delivery by a laser. As of September 27, 2013, for the first time fusion energy generated was greater than the energy absorbed into the deuterium–tritium fuel. In June 2018, NIF announced record production of 54 kJ of fusion energy output. On August 8, 2021, the NIF produced 1.3 MJ of output, about 25 times the 2018 result, reaching 70% of the break-even definition of ignition, the point at which energy out equals energy in.
As of December 2022, the NIF claims to have become the first fusion experiment to achieve scientific breakeven on December 5, 2022, with an experiment producing 3.15 megajoules of energy from a 2.05 megajoule input of laser light (somewhat less than the energy needed to boil 1 kg of water) for an energy gain of about 1.5.
Fast ignition
Fast ignition may offer a way to directly heat fuel after compression, thus decoupling the heating and compression phases. In this approach, the target is first compressed "normally" using a laser system. When the implosion reaches maximum density (at the stagnation point or "bang time"), a second short, high-power petawatt (PW) laser delivers a single pulse to one side of the core, dramatically heating it and starting ignition.
The two types of fast ignition are the "plasma bore-through" method and the "cone-in-shell" method. In plasma bore-through, the second laser bores through the outer plasma of an imploding capsule, impinges on and heats the core. In the cone-in-shell method, the capsule is mounted on the end of a small high-z (high atomic number) cone such that the tip of the cone projects into the core. In this second method, when the capsule is imploded, the laser has a clear view of the core and does not use energy to bore through a 'corona' plasma. However, the presence of the cone affects the implosion process in significant ways that are not fully understood. Several projects are currently underway to explore fast ignition, including upgrades to the OMEGA laser at the University of Rochester and the GEKKO XII device in Japan.
HiPER is a proposed £500 million facility in the European Union. Compared to NIF's 2 MJ UV beams, HiPER's driver was planned to be 200 kJ and its heater 70 kJ, although the predicted fusion gains are higher than NIF's. It was to employ diode lasers, which convert electricity into laser light with much higher efficiency and run cooler. This allows them to operate at much higher repetition rates. HiPER proposed to operate at 1 MJ at 1 Hz, or alternately 100 kJ at 10 Hz. The project's final update was in 2014. It was expected to offer a higher Q with a roughly tenfold reduction in construction costs.
Other projects
The French Laser Mégajoule achieved its first experimental line in 2002, and its first target shots were conducted in 2014. The machine was roughly 75% complete as of 2016.
Using a different approach entirely is the z-pinch device. Z-pinch uses massive electric currents switched into a cylinder comprising extremely fine wires. The wires vaporize to form an electrically conductive, high current plasma. The resulting circumferential magnetic field squeezes the plasma cylinder, imploding it, generating a high-power x-ray pulse that can be used to implode a fuel capsule. Challenges to this approach include relatively low drive temperatures, resulting in slow implosion velocities and potentially large instability growth, and preheat caused by high-energy x-rays.
Shock ignition was proposed to address problems with fast ignition. Japan developed the KOYO-F design and laser inertial fusion test (LIFT) experimental reactor. In April 2017, clean energy startup Apollo Fusion began to develop a hybrid fusion-fission reactor technology.
In Germany, technology company Marvel Fusion is working on laser-initiated inertial confinement fusion. The startup adopted a short-pulsed, high-energy laser and the aneutronic fuel pB11. It was founded in Munich in 2019. It works with Siemens Energy, TRUMPF, and Thales. The company partnered with Ludwig Maximilian University of Munich in July 2022.
In March 2022, Australian company HB11 announced fusion using non-thermal laser pB11, with a higher than predicted rate of alpha particle creation. Other companies include the NIF-like Longview Fusion and the fast-ignition-oriented Focused Energy.
Applications
Electricity generation
Inertial fusion energy (IFE) power plants have been studied since the late 1970s. These devices were to deliver multiple targets per second into the reaction chamber, using the resulting energy to drive a conventional steam turbine.
Technical challenges
Even if the many technical challenges in reaching ignition were all to be solved, practical problems abound. Given the 1 to 1.5% efficiency of the laser amplification process and that steam-driven turbine systems are typically about 35% efficient, fusion gains would have to be on the order of 125-fold just to energetically break even.
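As a rough check on this arithmetic, the break-even target gain is simply the reciprocal of the product of the driver (laser) efficiency and the thermal-to-electric conversion efficiency, so the exact figure depends on the efficiencies assumed. A minimal sketch in Python; the specific efficiency values below are illustrative assumptions drawn from the ranges quoted above, not measured figures:

```python
def breakeven_gain(driver_efficiency: float, thermal_efficiency: float) -> float:
    """Target energy gain needed so that electricity out equals electricity in.

    driver_efficiency  : fraction of wall-plug energy delivered by the driver (laser)
    thermal_efficiency : fraction of fusion heat converted back to electricity
    """
    return 1.0 / (driver_efficiency * thermal_efficiency)

# Illustrative values: 1-1.5% laser efficiency, ~35% steam-turbine efficiency.
for laser_eff in (0.010, 0.015):
    gain = breakeven_gain(laser_eff, 0.35)
    print(f"laser efficiency {laser_eff:.1%} -> required gain ~{gain:.0f}")
```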
An order of magnitude improvement in laser efficiency may be possible through the use of designs that replace flash lamps with laser diodes that are tuned to produce most of their energy in a frequency range that is strongly absorbed. Initial experimental devices offer efficiencies of about 10%, and it is suggested that 20% is possible.
NIF uses about 330 MJ to produce the driver beams, producing an expected yield of about 20 MJ, with maximum credible yield of 45 MJ.
Power extraction
ICF systems face some of the same secondary power extraction problems as MCF systems. One of the primary concerns is how to successfully remove heat from the reaction chamber without interfering with the targets and driver beams. Another concern is that the released neutrons react with the reactor structure, mechanically weakening it and turning it intensely radioactive. Conventional metals such as steel would have a short lifetime and require frequent replacement of the core containment walls. Another concern is fusion afterdamp (debris left in the reaction chamber), which could interfere with subsequent shots, including helium ash produced by fusion, along with unburned hydrogen and other elements used in the fuel pellet. This problem is most troublesome with indirect drive systems. If the driver energy misses the fuel pellet completely and strikes the containment chamber, material could foul the interaction region, or the lenses or focusing elements.
One concept, as shown in the HYLIFE-II design, is to use a "waterfall" of FLiBe, a molten mix of fluoride salts of lithium and beryllium, which both protect the chamber from neutrons and carry away heat. The FLiBe is passed into a heat exchanger where it heats water for the turbines. The tritium produced by splitting lithium nuclei can be extracted in order to close the power plant's thermonuclear fuel cycle, a necessity for perpetual operation because tritium is rare and otherwise must be manufactured. Another concept, Sombrero, uses a reaction chamber built of carbon-fiber-reinforced polymer which has a low neutron cross section. Cooling is provided by a molten ceramic, chosen because of its ability to absorb the neutrons and its efficiency as a heat transfer agent.
Economic viability
Another factor working against IFE is the cost of the fuel. Even as Nuckolls was developing his earliest calculations, co-workers pointed out that if an IFE machine produces 50 MJ of fusion energy, a shot could produce perhaps 10 MJ (2.8 kWh) of energy. Wholesale rates for electrical power on the grid were about 0.3 cents/kWh at the time, which meant the monetary value of the shot was perhaps one cent. In the intervening 50 years the real price of power has remained about even, and the rate in 2012 in Ontario, Canada was about 2.8 cents/kWh. Thus, in order for an IFE plant to be economically viable, fuel shots would have to cost considerably less than ten cents in 2012 dollars.
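The same back-of-the-envelope estimate can be sketched in a few lines, assuming the figures quoted above (10 MJ of electricity per shot and the wholesale prices mentioned); the numbers are illustrative, not a cost model:

```python
MJ_PER_KWH = 3.6  # 1 kWh = 3.6 MJ

def shot_value_cents(electric_output_mj: float, price_cents_per_kwh: float) -> float:
    """Value in cents of one shot's electrical output at a given wholesale price."""
    return (electric_output_mj / MJ_PER_KWH) * price_cents_per_kwh

print(f"{shot_value_cents(10, 0.3):.2f} cents")  # ~0.83 cents at 0.3 cents/kWh (historical)
print(f"{shot_value_cents(10, 2.8):.2f} cents")  # ~7.78 cents at 2.8 cents/kWh (2012 Ontario)
```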
Direct-drive systems avoid the use of a hohlraum and thereby may be less expensive in fuel terms. However, these systems still require an ablator, and the accuracy and geometrical considerations are critical. The direct-drive approach still may not be less expensive to operate.
Nuclear weapons
The hot and dense conditions encountered during an ICF experiment are similar to those in a thermonuclear weapon, and have applications to nuclear weapons programs. ICF experiments might be used, for example, to help determine how warhead performance degrades as it ages, or as part of a weapons design program. Retaining knowledge and expertise inside the nuclear weapons program is another motivation for pursuing ICF. Funding for the NIF in the United States is sourced from the Nuclear Weapons Stockpile Stewardship program, whose goals are oriented accordingly. It has been argued that some aspects of ICF research violate the Comprehensive Test Ban Treaty or the Nuclear Non-Proliferation Treaty. In the long term, despite the formidable technical hurdles, ICF research could lead to the creation of a "pure fusion weapon".
Neutron source
ICF has the potential to produce orders of magnitude more neutrons than spallation. Neutrons are capable of locating hydrogen atoms in molecules, resolving atomic thermal motion and studying collective excitations of phonons more effectively than X-rays. Neutron scattering studies of molecular structures could resolve problems associated with protein folding, diffusion through membranes, proton transfer mechanisms, dynamics of molecular motors, etc. by modulating thermal neutrons into beams of slow neutrons. In combination with fissile materials, neutrons produced by ICF can potentially be used in Hybrid Nuclear Fusion designs to produce electric power.
See also
Antimatter catalyzed nuclear pulse propulsion
Bubble fusion, a phenomenon claimed – controversially – to provide an acoustic form of inertial confinement fusion.
Dense plasma focus
Laboratory for Laser Energetics
Laser Mégajoule
Leonardo Mascheroni, who proposed using hydrogen fluoride lasers to achieve fusion.
List of laser articles
Magnetic confinement fusion
Magnetized target fusion (MTF)
Magneto-inertial fusion
Proton-boron fusion
Pulsed power
Notes
References
Bibliography
External links
National Ignition Facility Project
Zpinch Home Page
Europe plans laser-fusion facility (Physicsweb)
Lasers point the way to clean energy (The Guardian)
National Laser Fusion Energy Development Plan
Institute of Laser Engineering Osaka University
Laser Inertial-Confinement Fusion-Fission Energy
Heavy Ion Fusion
Lawrence Livermore National Laboratory
Electromagnetic radiation
In physics, electromagnetic radiation (EMR) consists of waves of the electromagnetic (EM) field, which propagate through space and carry momentum and electromagnetic radiant energy.
Classically, electromagnetic radiation consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields. In a vacuum, electromagnetic waves travel at the speed of light, commonly denoted c. There, depending on the frequency of oscillation, different wavelengths of electromagnetic spectrum are produced. In homogeneous, isotropic media, the oscillations of the two fields are on average perpendicular to each other and perpendicular to the direction of energy and wave propagation, forming a transverse wave.
Electromagnetic radiation is commonly referred to as "light", EM, EMR, or electromagnetic waves.
The position of an electromagnetic wave within the electromagnetic spectrum can be characterized by either its frequency of oscillation or its wavelength. Electromagnetic waves of different frequency are called by different names since they have different sources and effects on matter. In order of increasing frequency and decreasing wavelength, the electromagnetic spectrum includes: radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays.
Electromagnetic waves are emitted by electrically charged particles undergoing acceleration, and these waves can subsequently interact with other charged particles, exerting force on them. EM waves carry energy, momentum, and angular momentum away from their source particle and can impart those quantities to matter with which they interact. Electromagnetic radiation is associated with those EM waves that are free to propagate themselves ("radiate") without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field, while the near field refers to EM fields near the charges and current that directly produced them, specifically electromagnetic induction and electrostatic induction phenomena.
In quantum mechanics, an alternate way of viewing EMR is that it consists of photons, uncharged elementary particles with zero rest mass which are the quanta of the electromagnetic field, responsible for all electromagnetic interactions. Quantum electrodynamics is the theory of how EMR interacts with matter on an atomic level. Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation. The energy of an individual photon is quantized and proportional to frequency according to Planck's equation E = hf, where E is the energy per photon, f is the frequency of the photon, and h is the Planck constant. Thus, higher frequency photons have more energy. For example, a gamma ray photon carries many orders of magnitude more energy than an extremely low frequency radio wave photon.
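To make the frequency-energy proportionality concrete, a short Python sketch using the Planck relation E = hf; the example frequencies (a ~10²⁰ Hz gamma ray and a 30 Hz extremely-low-frequency radio wave) are illustrative choices, not values from the text:

```python
h = 6.62607015e-34  # Planck constant, J*s

def photon_energy(frequency_hz: float) -> float:
    """Energy of a single photon (joules) from the Planck relation E = h*f."""
    return h * frequency_hz

gamma = photon_energy(1e20)   # a hard gamma-ray photon
elf   = photon_energy(30.0)   # an extremely low frequency radio photon
print(f"gamma-ray photon: {gamma:.3e} J")
print(f"ELF radio photon: {elf:.3e} J")
print(f"ratio: {gamma / elf:.1e}")   # roughly 18-19 orders of magnitude
```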
The effects of EMR upon chemical compounds and biological organisms depend both upon the radiation's power and its frequency. EMR of lower energy ultraviolet or lower frequencies (i.e., near ultraviolet, visible light, infrared, microwaves, and radio waves) is non-ionizing because its photons do not individually have enough energy to ionize atoms or molecules or to break chemical bonds. The effect of non-ionizing radiation on chemical systems and living tissue is primarily simply heating, through the combined energy transfer of many photons. In contrast, high frequency ultraviolet, X-rays and gamma rays are ionizing – individual photons of such high frequency have enough energy to ionize molecules or break chemical bonds. Ionizing radiation can cause chemical reactions and damage living cells beyond simply heating, and can be a health hazard and dangerous.
Physics
Theory
Maxwell's equations
James Clerk Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave. Maxwell's equations were confirmed by Heinrich Hertz through experiments with radio waves.
Near and far fields
Maxwell's equations established that some charges and currents (sources) produce local electromagnetic fields near them that do not radiate. Currents directly produce magnetic fields, but such fields are of a magnetic-dipole type that dies out with distance from the current. In a similar manner, moving charges pushed apart in a conductor by a changing electrical potential (such as in an antenna) produce an electric-dipole–type electrical field, but this also declines with distance. These fields make up the near field. Neither of these behaviours is responsible for EM radiation. Instead, they only efficiently transfer energy to a receiver very close to the source, such as inside a transformer. The near field has strong effects on its source, with any energy withdrawn by a receiver causing increased load (decreased electrical reactance) on the source. The near field does not propagate freely into space, carrying energy away without a distance limit, but rather oscillates, returning its energy to the transmitter if it is not absorbed by a receiver.
By contrast, the far field is composed of radiation that is free of the transmitter, in the sense that the transmitter requires the same power to send changes in the field out regardless of whether anything absorbs the signal, e.g. a radio station does not need to increase its power when more receivers use the signal. This far part of the electromagnetic field is electromagnetic radiation. The far fields propagate (radiate) without allowing the transmitter to affect them. This causes them to be independent in the sense that their existence and their energy, after they have left the transmitter, is completely independent of both transmitter and receiver. Due to conservation of energy, the amount of power passing through any spherical surface drawn around the source is the same. Because such a surface has an area proportional to the square of its distance from the source, the power density of EM radiation from an isotropic source decreases with the inverse square of the distance from the source; this is called the inverse-square law. This is in contrast to dipole parts of the EM field, the near field, which varies in intensity according to an inverse cube power law, and thus does not transport a conserved amount of energy over distances but instead fades with distance, with its energy (as noted) rapidly returning to the transmitter or absorbed by a nearby receiver (such as a transformer secondary coil).
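The scaling argument in the previous paragraph can be illustrated numerically: radiated power spread over a sphere falls off as 1/r², whereas a near-field term falling as 1/r³ cannot carry a fixed amount of power out to arbitrary distance. A minimal sketch with a purely illustrative power level:

```python
import math

def far_field_power_density(total_power_w: float, distance_m: float) -> float:
    """Power per unit area from an isotropic radiator: P / (4*pi*r^2)."""
    return total_power_w / (4.0 * math.pi * distance_m ** 2)

P = 100.0  # watts radiated, illustrative
for r in (1.0, 2.0, 10.0):
    s = far_field_power_density(P, r)
    print(f"r = {r:5.1f} m -> {s:.3f} W/m^2")  # doubling r quarters the density

# By contrast, a near-field term scaling as 1/r^3 drops 8x when r doubles,
# so the integrated outward power vanishes at large distance.
```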
In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion of a single particle (according to Maxwell's equations), the terms associated with acceleration of the particle are those that are responsible for the part of the field that is regarded as electromagnetic radiation. By contrast, the term associated with the changing static electric field of the particle and the magnetic term that results from the particle's uniform velocity are both associated with the near field, and do not comprise electromagnetic radiation.
Properties
Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes. Further, as they are vector fields, all magnetic and electric field vectors add together according to vector addition. For example, in optics two or more coherent light waves may interact and by constructive or destructive interference yield a resultant irradiance deviating from the sum of the component irradiances of the individual light waves.
The electromagnetic fields of light are not affected by traveling through static electric or magnetic fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields—these interactions include the Faraday effect and the Kerr effect.
In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. Light of composite wavelengths (natural sunlight) disperses into a visible spectrum passing through a prism, because of the wavelength-dependent refractive index of the prism material (dispersion); that is, each component wave within the composite light is bent a different amount.
EM radiation exhibits both wave properties and particle properties at the same time (see wave-particle duality). Both wave and particle characteristics have been confirmed in many experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in the cube of the relevant wavelength is much smaller than 1. It is not so difficult to experimentally observe non-uniform deposition of energy when light is absorbed, however this alone is not evidence of "particulate" behavior. Rather, it reflects the quantum nature of matter. Demonstrating that the light itself is quantized, not merely its interaction with matter, is a more subtle affair.
Some experiments display both the wave and particle natures of electromagnetic waves, such as the self-interference of a single photon. When a single photon is sent through an interferometer, it passes through both paths, interfering with itself, as waves do, yet is detected by a photomultiplier or other sensitive detector only once.
A quantum theory of the interaction between electromagnetic radiation and matter such as electrons is described by the theory of quantum electrodynamics.
Electromagnetic waves can be polarized, reflected, refracted, or diffracted, and can interfere with each other.
Wave model
In homogeneous, isotropic media, electromagnetic radiation is a transverse wave, meaning that its oscillations are perpendicular to the direction of energy transfer and travel. This follows from the source-free divergence equations, ∇ ⋅ E = 0 and ∇ ⋅ B = 0. These equations predicate that any electromagnetic wave must be a transverse wave, where the electric field E and the magnetic field B are both perpendicular to the direction of wave propagation.
The electric and magnetic parts of the field in an electromagnetic wave stand in a fixed ratio of strengths to satisfy the two Maxwell equations that specify how one is produced from the other. In dissipation-less (lossless) media, these E and B fields are also in phase, with both reaching maxima and minima at the same points in space (see illustrations). In the far-field EM radiation which is described by the two source-free Maxwell curl operator equations, a time-change in one type of field is proportional to the curl of the other. These derivatives require that the E and B fields in EMR are in-phase (see mathematics section below).
An important aspect of light's nature is its frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has multiple frequencies that sum to form the resultant wave. Different frequencies undergo different angles of refraction, a phenomenon known as dispersion.
A monochromatic wave (a wave of a single frequency) consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves longer than a continent to very short gamma rays smaller than atom nuclei. Frequency is inversely proportional to wavelength, according to the equation:
v = fλ
where v is the speed of the wave (c in a vacuum or less in other media), f is the frequency and λ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant.
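A short sketch of this relation, including the fact that frequency stays fixed while the wavelength shrinks by the refractive index when the wave enters a denser medium (the refractive index and frequency values below are illustrative assumptions):

```python
c = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength(frequency_hz: float, refractive_index: float = 1.0) -> float:
    """Wavelength in a medium: lambda = c / (n * f); the frequency is unchanged."""
    return c / (refractive_index * frequency_hz)

f = 5.0e14  # ~600 nm orange light in vacuum
print(wavelength(f))          # ~6.0e-7 m in vacuum
print(wavelength(f, 1.33))    # ~4.5e-7 m in water: shorter wavelength, same frequency
```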
Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave equation. Two main classes of solutions are known, namely plane waves and spherical waves. The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally infinite) distance from the source. Both types of waves can have a waveform which is an arbitrary time function (so long as it is sufficiently differentiable to conform to the wave equation). As with any time function, this can be decomposed by means of Fourier analysis into its frequency spectrum, or individual sinusoidal components, each of which contains a single frequency, amplitude and phase. Such a component wave is said to be monochromatic. A monochromatic electromagnetic wave can be characterized by its frequency or wavelength, its peak amplitude, its phase relative to some reference phase, its direction of propagation, and its polarization.
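The decomposition into monochromatic components mentioned above can be demonstrated numerically with a discrete Fourier transform; a brief sketch using NumPy, where the two component frequencies are arbitrary illustrative choices:

```python
import numpy as np

# Build a waveform from two monochromatic components (3 Hz and 8 Hz), then
# recover their frequencies and amplitudes with a discrete Fourier transform.
fs = 100.0                                  # samples per second
t = np.arange(0, 2.0, 1.0 / fs)             # 2 seconds of samples
signal = 1.0 * np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 8 * t)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
amplitudes = 2.0 * np.abs(spectrum) / len(signal)

for f, a in zip(freqs, amplitudes):
    if a > 0.1:                              # print only the significant components
        print(f"{f:.1f} Hz, amplitude {a:.2f}")
```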
Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. Additionally, multiple polarization signals can be combined (i.e. interfered) to form new states of polarization, which is known as parallel polarization state generation.
The energy in electromagnetic waves is sometimes called radiant energy.
Particle model and quantum theory
An anomaly arose in the late 19th century involving a contradiction between the wave theory of light and measurements of the electromagnetic spectra that were being emitted by thermal radiators known as black bodies. Physicists struggled with this problem unsuccessfully for many years, and it later became known as the ultraviolet catastrophe. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum. Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta. In 1905, Albert Einstein proposed that light quanta be regarded as real particles. Later the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, by
E = hf = hc/λ,
where h is the Planck constant, λ is the wavelength and c is the speed of light. This is sometimes known as the Planck–Einstein equation. In quantum theory (see first quantization) the energy of the photons is thus directly proportional to the frequency of the EMR wave.
Likewise, the momentum p of a photon is also proportional to its frequency and inversely proportional to its wavelength:
p = E/c = hf/c = h/λ
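A brief sketch computing the energy and momentum of a single photon from its wavelength; the 532 nm green-laser wavelength is an illustrative choice:

```python
h = 6.62607015e-34   # Planck constant, J*s
c = 299_792_458.0    # speed of light, m/s

def photon_energy_momentum(wavelength_m: float) -> tuple[float, float]:
    """Return (energy in joules, momentum in kg*m/s) for a photon of given wavelength."""
    energy = h * c / wavelength_m     # E = h*f = h*c/lambda
    momentum = h / wavelength_m       # p = h/lambda = E/c
    return energy, momentum

E, p = photon_energy_momentum(532e-9)   # green light
print(f"E = {E:.3e} J  (~{E / 1.602176634e-19:.2f} eV)")
print(f"p = {p:.3e} kg m/s")
```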
The source of Einstein's proposal that light was composed of particles (or could act as particles in some circumstances) was an experimental anomaly not explained by the wave theory: the photoelectric effect, in which light striking a metal surface ejected electrons from the surface, causing an electric current to flow across an applied voltage. Experimental measurements demonstrated that the energy of individual ejected electrons was proportional to the frequency, rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which depended on the particular metal, no current would flow regardless of the intensity. These observations appeared to contradict the wave theory, and for years physicists tried in vain to find an explanation. In 1905, Einstein explained this puzzle by resurrecting the particle theory of light to explain the observed effect. Because of the preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially with great skepticism among established physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light was observed, such as the Compton effect.
As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus). When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies. Immediate photon emission is called fluorescence, a type of photoluminescence. An example is visible light emitted from fluorescent paints, in response to ultraviolet (blacklight). Many other fluorescent emissions are known in spectral bands other than visible light. Delayed emission is called phosphorescence.
Wave–particle duality
The modern theory that explains the nature of light includes the notion of wave–particle duality.
Wave and particle effects of electromagnetic radiation
Together, wave and particle effects fully explain the emission and absorption spectra of EM radiation. The matter-composition of the medium through which the light travels determines the nature of the absorption and emission spectrum. These bands correspond to the allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms in an intervening medium between source and observer. The atoms absorb certain frequencies of the light between emitter and detector/eye, then emit them in all directions. A dark band appears to the detector, due to the radiation scattered out of the light beam. For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere. A similar phenomenon occurs for emission, which is seen when an emitting gas glows due to excitation of the atoms from any mechanism, including heat. As electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons, but lines are seen because again emission happens only at particular energies after excitation. An example is the emission spectrum of nebulae. Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature.
These phenomena can aid various chemical determinations for the composition of gases lit from behind (absorption spectra) and for glowing gases (emission spectra). Spectroscopy (for example) determines what chemical elements comprise a particular star. Spectroscopy is also used in the determination of the distance of a star, using the red shift.
Propagation speed
When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the current.
As a wave, light is characterized by a velocity (the speed of light), wavelength, and frequency. As particles, light is a stream of photons. Each has an energy related to the frequency of the wave given by Planck's relation E = hf, where E is the energy of the photon, h is the Planck constant, 6.626 × 10⁻³⁴ J·s, and f is the frequency of the wave.
In a medium (other than vacuum), velocity factor or refractive index are considered, depending on frequency and application. Both of these are ratios of the speed in a medium to speed in a vacuum.
History of discovery
Electromagnetic radiation of wavelengths other than those of visible light were discovered in the early 19th century. The discovery of infrared radiation is ascribed to astronomer William Herschel, who published his results in 1800 before the Royal Society of London. Herschel used a glass prism to refract light from the Sun and detected invisible rays that caused heating beyond the red part of the spectrum, through an increase in the temperature recorded with a thermometer. These "calorific rays" were later termed infrared.
In 1801, German physicist Johann Wilhelm Ritter discovered ultraviolet in an experiment similar to Herschel's, using sunlight and a glass prism. Ritter noted that invisible rays near the violet edge of a solar spectrum dispersed by a triangular prism darkened silver chloride preparations more quickly than did the nearby violet light. Ritter's experiments were an early precursor to what would become photography. Ritter noted that the ultraviolet rays (which at first were called "chemical rays") were capable of causing chemical reactions.
In 1862–64 James Clerk Maxwell developed equations for the electromagnetic field which suggested that waves in the field would travel with a speed that was very close to the known speed of light. Maxwell therefore suggested that visible light (as well as invisible infrared and ultraviolet rays by inference) all consisted of propagating disturbances (or radiation) in the electromagnetic field. Radio waves were first produced deliberately by Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations at a much lower frequency than that of visible light, following recipes for producing oscillating charges and currents suggested by Maxwell's equations. Hertz also developed ways to detect these waves, and produced and characterized what were later termed radio waves and microwaves.
Wilhelm Röntgen discovered and named X-rays. After experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. In one month, he discovered X-rays' main properties.
The last portion of the EM spectrum to be discovered was associated with radioactivity. Henri Becquerel found that uranium salts caused fogging of an unexposed photographic plate through a covering paper in a manner similar to X-rays, and Marie Curie discovered that only certain elements gave off these rays of energy, soon discovering the intense radiation of radium. The radiation from pitchblende was differentiated into alpha rays (alpha particles) and beta rays (beta particles) by Ernest Rutherford through simple experimentation in 1899, but these proved to be charged particulate types of radiation. However, in 1900 the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford realized it must be yet a third type of radiation, which in 1903 Rutherford named gamma rays. In 1910 British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914 Rutherford and Edward Andrade measured their wavelengths, finding that they were similar to X-rays but with shorter wavelengths and higher frequency, although a 'cross-over' between X and gamma rays makes it possible to have X-rays with a higher energy (and hence shorter wavelength) than gamma rays and vice versa. The origin of the ray differentiates them, gamma rays tend to be natural phenomena originating from the unstable nucleus of an atom and X-rays are electrically generated (and hence man-made) unless they are as a result of bremsstrahlung X-radiation caused by the interaction of fast moving particles (such as beta particles) colliding with certain materials, usually of higher atomic numbers.
Electromagnetic spectrum
EM radiation (the designation 'radiation' excludes static electric and magnetic and near fields) is classified by wavelength into radio, microwave, infrared, visible, ultraviolet, X-rays and gamma rays. Arbitrary electromagnetic waves can be expressed by Fourier analysis in terms of sinusoidal waves (monochromatic radiation), which in turn can each be classified into these regions of the EMR spectrum.
For certain classes of EM waves, the waveform is most usefully treated as random, and then spectral analysis must be done by slightly different mathematical techniques appropriate to random or stochastic processes. In such cases, the individual frequency components are represented in terms of their power content, and the phase information is not preserved. Such a representation is called the power spectral density of the random process. Random electromagnetic radiation requiring this kind of analysis is, for example, encountered in the interior of stars, and in certain other very wideband forms of radiation such as the Zero point wave field of the electromagnetic vacuum.
The behavior of EM radiation and its interaction with matter depends on its frequency, and changes qualitatively as the frequency changes. Lower frequencies have longer wavelengths, and higher frequencies have shorter wavelengths, and are associated with photons of higher energy. There is no fundamental limit known to these wavelengths or energies, at either end of the spectrum, although photons with energies near the Planck energy or exceeding it (far too high to have ever been observed) will require new physical theories to describe.
Radio and microwave
When radio waves impinge upon a conductor, they couple to the conductor, travel along it and induce an electric current on the conductor surface by moving the electrons of the conducting material in correlated bunches of charge.
Electromagnetic radiation phenomena with wavelengths ranging from as long as one meter to as short as one millimeter are called microwaves, with frequencies between 300 MHz (0.3 GHz) and 300 GHz.
At radio and microwave frequencies, EMR interacts with matter largely as a bulk collection of charges which are spread out over large numbers of affected atoms. In electrical conductors, such induced bulk movement of charges (electric currents) results in absorption of the EMR, or else separations of charges that cause generation of new EMR (effective reflection of the EMR). An example is absorption or emission of radio waves by antennas, or absorption of microwaves by water or other molecules with an electric dipole moment, as for example inside a microwave oven. These interactions produce either electric currents or heat, or both.
Infrared
Like radio and microwave, infrared (IR) also is reflected by metals (and also most EMR, well into the ultraviolet range). However, unlike lower-frequency radio and microwave radiation, Infrared EMR commonly interacts with dipoles present in single molecules, which change as atoms vibrate at the ends of a single chemical bond. It is consequently absorbed by a wide range of substances, causing them to increase in temperature as the vibrations dissipate as heat. The same process, run in reverse, causes bulk substances to radiate in the infrared spontaneously (see thermal radiation section below).
Infrared radiation is divided into spectral subregions. While different subdivision schemes exist, the spectrum is commonly divided as near-infrared (0.75–1.4 μm), short-wavelength infrared (1.4–3 μm), mid-wavelength infrared (3–8 μm), long-wavelength infrared (8–15 μm) and far infrared (15–1000 μm).
Visible light
Natural sources produce EM radiation across the spectrum. EM radiation with a wavelength between approximately 400 nm and 700 nm is directly detected by the human eye and perceived as visible light. Other wavelengths, especially nearby infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm) are also sometimes referred to as light.
As frequency increases into the visible range, photons have enough energy to change the bond structure of some individual molecules. It is not a coincidence that this happens in the visible range, as the mechanism of vision involves the change in bonding of a single molecule, retinal, which absorbs a single photon. The change in retinal causes a change in the shape of the rhodopsin protein it is contained in, which starts the biochemical process that causes the retina of the human eye to sense the light.
Photosynthesis becomes possible in this range as well, for the same reason. A single molecule of chlorophyll is excited by a single photon. In plant tissues that conduct photosynthesis, carotenoids act to quench electronically excited chlorophyll produced by visible light in a process called non-photochemical quenching, to prevent reactions that would otherwise interfere with photosynthesis at high light levels.
Animals that detect infrared make use of small packets of water that change temperature, in an essentially thermal process that involves many photons.
Infrared, microwaves and radio waves are known to damage molecules and biological tissue only by bulk heating, not excitation from single photons of the radiation.
Visible light is able to affect only a tiny percentage of all molecules. Usually not in a permanent or damaging way, rather the photon excites an electron which then emits another photon when returning to its original position. This is the source of color produced by most dyes. Retinal is an exception. When a photon is absorbed, the retinal permanently changes structure from cis to trans, and requires a protein to convert it back, i.e. reset it to be able to function as a light detector again.
Limited evidence indicates that some reactive oxygen species are created by visible light in skin, and that these may have some role in photoaging, in the same manner as ultraviolet A.
Ultraviolet
As frequency increases into the ultraviolet, photons now carry enough energy (about three electron volts or more) to excite certain doubly bonded molecules into permanent chemical rearrangement. In DNA, this causes lasting damage. DNA is also indirectly damaged by reactive oxygen species produced by ultraviolet A (UVA), which has energy too low to damage DNA directly. This is why ultraviolet at all wavelengths can damage DNA, and is capable of causing cancer, and (for UVB) skin burns (sunburn) that are far worse than would be produced by simple heating (temperature increase) effects.
At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart enough energy to electrons to cause them to be liberated from the atom, in a process called photoionisation. The energy required for this is always larger than about 10 electron volt (eV) corresponding with wavelengths smaller than 124 nm (some sources suggest a more realistic cutoff of 33 eV, which is the energy required to ionize water). This high end of the ultraviolet spectrum with energies in the approximate ionization range, is sometimes called "extreme UV." Ionizing UV is strongly filtered by the Earth's atmosphere.
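The quoted energy-wavelength correspondence follows from λ = hc/E (roughly 1240 eV·nm); a short check in Python:

```python
h = 6.62607015e-34        # Planck constant, J*s
c = 299_792_458.0         # speed of light, m/s
eV = 1.602176634e-19      # joules per electron volt

def wavelength_for_photon_energy(energy_ev: float) -> float:
    """Photon wavelength (in nm) corresponding to a given photon energy in eV."""
    return h * c / (energy_ev * eV) * 1e9

print(f"{wavelength_for_photon_energy(10.0):.0f} nm")   # ~124 nm, matching the text
print(f"{wavelength_for_photon_energy(33.0):.0f} nm")   # ~38 nm for the stricter cutoff
```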
X-rays and gamma rays
Electromagnetic radiation composed of photons that carry minimum-ionization energy, or more, (which includes the entire spectrum with shorter wavelengths), is therefore termed ionizing radiation. (Many other kinds of ionizing radiation are made of non-EM particles). Electromagnetic-type ionizing radiation extends from the extreme ultraviolet to all higher frequencies and shorter wavelengths, which means that all X-rays and gamma rays qualify. These are capable of the most severe types of molecular damage, which can happen in biology to any type of biomolecule, including mutation and cancer, and often at great depths below the skin, since the higher end of the X-ray spectrum, and all of the gamma ray spectrum, penetrate matter.
Atmosphere and magnetosphere
Most UV and X-rays are blocked by absorption first from molecular nitrogen, and then (for wavelengths in the upper UV) from the electronic excitation of dioxygen and finally ozone at the mid-range of UV. Only 30% of the Sun's ultraviolet light reaches the ground, and almost all of this is well transmitted.
Visible light is well transmitted in air, a property known as an atmospheric window, as it is not energetic enough to excite nitrogen, oxygen, or ozone, but too energetic to excite molecular vibrational frequencies of water vapor and CO2.
Absorption bands in the infrared are due to modes of vibrational excitation in water vapor. However, at energies too low to excite water vapor, the atmosphere becomes transparent again, allowing free transmission of most microwave and radio waves.
Finally, at radio wavelengths longer than 10 m or so (about 30 MHz), the air in the lower atmosphere remains transparent to radio, but plasma in certain layers of the ionosphere begins to interact with radio waves (see skywave). This property allows some longer wavelengths (100 m or 3 MHz) to be reflected and results in shortwave radio beyond line-of-sight. However, certain ionospheric effects begin to block incoming radiowaves from space, when their frequency is less than about 10 MHz (wavelength longer than about 30 m).
Thermal and electromagnetic radiation as a form of heat
The basic structure of matter involves charged particles bound together. When electromagnetic radiation impinges on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the context. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may get dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy, or even kinetic energy, in the material. With a few exceptions related to high-energy photons (such as fluorescence, harmonic generation, photochemical reactions, the photovoltaic effect for ionizing radiations at far ultraviolet, X-ray and gamma radiation), absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens for infrared, microwave and radio wave radiation. Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can easily set paper afire.
Ionizing radiation creates high-speed electrons in a material and breaks chemical bonds, but after these electrons collide many times with other atoms eventually most of the energy becomes thermal energy all in a tiny fraction of a second. This process makes ionizing radiation far more dangerous per unit of energy than non-ionizing radiation. This caveat also applies to UV, even though almost all of it is not ionizing, because UV can damage molecules due to electronic excitation, which is far greater per unit energy than heating effects.
Infrared radiation in the spectral distribution of a black body is usually considered a form of heat, since it has an equivalent temperature and is associated with an entropy change per unit of thermal energy. However, "heat" is a technical term in physics and thermodynamics and is often confused with thermal energy. Any type of electromagnetic energy can be transformed into thermal energy in interaction with matter. Thus, any electromagnetic radiation can "heat" (in the sense of increasing the temperature of) a material, when it is absorbed.
The inverse or time-reversed process of absorption is thermal radiation. Much of the thermal energy in matter consists of random motion of charged particles, and this energy can be radiated away from the matter. The resulting radiation may subsequently be absorbed by another piece of matter, with the deposited energy heating the material.
The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a form of thermal energy, having maximum radiation entropy.
Biological effects
Bioelectromagnetics is the study of the interactions and effects of EM radiation on living organisms. The effects of electromagnetic radiation upon living cells, including those in humans, depends upon the radiation's power and frequency. For low-frequency radiation (radio waves to near ultraviolet) the best-understood effects are those due to radiation power alone, acting through heating when radiation is absorbed. For these thermal effects, frequency is important as it affects the intensity of the radiation and penetration into the organism (for example, microwaves penetrate better than infrared). It is widely accepted that low frequency fields that are too weak to cause significant heating could not possibly have any biological effect.
Some research suggests that weaker non-thermal electromagnetic fields (including weak ELF magnetic fields, although the latter does not strictly qualify as EM radiation) and modulated RF and microwave fields can have biological effects, though the significance of this is unclear.
The World Health Organization has classified radio frequency electromagnetic radiation as Group 2B – possibly carcinogenic. This group contains possible carcinogens such as lead, DDT, and styrene.
At higher frequencies (some of visible and beyond), the effects of individual photons begin to become important, as these now have enough energy individually to directly or indirectly damage biological molecules. All UV frequencies have been classed as Group 1 carcinogens by the World Health Organization. Ultraviolet radiation from sun exposure is the primary cause of skin cancer.
Thus, at UV frequencies and higher, electromagnetic radiation does more damage to biological systems than simple heating predicts. This is most obvious in the "far" (or "extreme") ultraviolet. UV, with X-ray and gamma radiation, are referred to as ionizing radiation due to the ability of photons of this radiation to produce ions and free radicals in materials (including living tissue). Since such radiation can severely damage life at energy levels that produce little heating, it is considered far more dangerous (in terms of damage-produced per unit of energy, or power) than the rest of the electromagnetic spectrum.
Use as a weapon
The heat ray is an application of EMR that makes use of microwave frequencies to create an unpleasant heating effect in the upper layer of the skin. A publicly known heat ray weapon called the Active Denial System was developed by the US military as an experimental weapon to deny the enemy access to an area. A death ray is a theoretical weapon that delivers heat ray based on electromagnetic energy at levels that are capable of injuring human tissue. An inventor of a death ray, Harry Grindell Matthews, claimed to have lost sight in his left eye while working on his death ray weapon based on a microwave magnetron from the 1920s (a normal microwave oven creates a tissue damaging cooking effect inside the oven at around 2 kV/m).
Derivation from electromagnetic theory
Electromagnetic waves are predicted by the classical laws of electricity and magnetism, known as Maxwell's equations. There are nontrivial solutions of the homogeneous Maxwell's equations (without charges or currents), describing waves of changing electric and magnetic fields. Beginning with Maxwell's equations in free space:
∇ ⋅ E = 0   (1)
∇ × E = −∂B/∂t   (2)
∇ ⋅ B = 0   (3)
∇ × B = μ₀ε₀ ∂E/∂t   (4)
where
E and B are the electric field (measured in V/m or N/C) and the magnetic field (measured in T or Wb/m²), respectively;
∇ ⋅ and ∇ × yield the divergence and the curl of a vector field;
∂B/∂t and ∂E/∂t are partial derivatives (rate of change in time, with location fixed) of the magnetic and electric field;
μ₀ is the permeability of a vacuum (4π × 10⁻⁷ H/m), and ε₀ is the permittivity of a vacuum (about 8.854 × 10⁻¹² F/m);
Besides the trivial solution E = B = 0,
useful solutions can be derived with the following vector identity, valid for all vectors A in some vector field:
∇ × (∇ × A) = ∇(∇ ⋅ A) − ∇²A
Taking the curl of the second Maxwell equation (2) yields:
∇ × (∇ × E) = −∂(∇ × B)/∂t   (5)
Evaluating the left hand side of (5) with the above identity and simplifying using (1) yields:
∇ × (∇ × E) = ∇(∇ ⋅ E) − ∇²E = −∇²E   (6)
Evaluating the right hand side of (5) by exchanging the sequence of derivatives and inserting the fourth equation (4) yields:
−∂(∇ × B)/∂t = −μ₀ε₀ ∂²E/∂t²   (7)
Combining (6) and (7) again gives a vector-valued differential equation for the electric field, solving the homogeneous Maxwell equations:
∇²E = μ₀ε₀ ∂²E/∂t²
Taking the curl of the fourth Maxwell equation (4) results in a similar differential equation for a magnetic field solving the homogeneous Maxwell equations:
∇²B = μ₀ε₀ ∂²B/∂t²
Both differential equations have the form of the general wave equation for waves propagating with speed c₀, where f is a function of time and location, which gives the amplitude of the wave at some time at a certain location:
∇²f = (1/c₀²) ∂²f/∂t²
This is also written as:
□f = 0
where □ denotes the so-called d'Alembert operator, which in Cartesian coordinates is given as:
□ = ∇² − (1/c₀²) ∂²/∂t² = ∂²/∂x² + ∂²/∂y² + ∂²/∂z² − (1/c₀²) ∂²/∂t²
Comparing the terms for the speed of propagation yields, in the case of the electric and magnetic fields:
c₀ = 1/√(μ₀ε₀)
This is the speed of light in vacuum. Thus Maxwell's equations connect the vacuum permittivity ε₀, the vacuum permeability μ₀, and the speed of light, c₀, via the above equation. This relationship had been discovered by Wilhelm Eduard Weber and Rudolf Kohlrausch prior to the development of Maxwell's electrodynamics, however Maxwell was the first to produce a field theory consistent with waves traveling at the speed of light.
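A quick numerical check of this relation, using the constants quoted earlier:

```python
import math

mu_0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
eps_0 = 8.8541878128e-12    # vacuum permittivity, F/m

c_0 = 1.0 / math.sqrt(mu_0 * eps_0)
print(f"c_0 = {c_0:.6e} m/s")   # ~2.998e8 m/s, the speed of light in vacuum
```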
These are only two equations versus the original four, so more information pertains to these waves hidden within Maxwell's equations. A generic vector wave for the electric field has the form
E = E₀ f(k̂ ⋅ x − c₀t)
Here, E₀ is a constant vector, f is any second differentiable function, k̂ is a unit vector in the direction of propagation, and x is a position vector. f(k̂ ⋅ x − c₀t) is a generic solution to the wave equation. In other words,
∇²f(k̂ ⋅ x − c₀t) = (1/c₀²) ∂²f(k̂ ⋅ x − c₀t)/∂t²
for a generic wave traveling in the k̂ direction.
From the first of Maxwell's equations, we get
∇ ⋅ E = k̂ ⋅ E₀ f′(k̂ ⋅ x − c₀t) = 0
Thus,
E ⋅ k̂ = 0
which implies that the electric field is orthogonal to the direction the wave propagates. The second of Maxwell's equations yields the magnetic field, namely,
∇ × E = −∂B/∂t
Thus,
B = (1/c₀) k̂ × E
The remaining equations will be satisfied by this choice of E and B.
The electric and magnetic field waves in the far-field travel at the speed of light. They have a special restricted orientation and proportional magnitudes, E₀ = c₀B₀, which can be seen immediately from the Poynting vector. The electric field, magnetic field, and direction of wave propagation are all orthogonal, and the wave propagates in the same direction as E × B. Also, E and B far-fields in free space, which as wave solutions depend primarily on these two Maxwell equations, are in-phase with each other. This is guaranteed since the generic wave solution is first order in both space and time, and the curl operator on one side of these equations results in first-order spatial derivatives of the wave solution, while the time-derivative on the other side of the equations, which gives the other field, is first-order in time, resulting in the same phase shift for both fields in each mathematical operation.
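These proportionality and orthogonality relations can be checked numerically for a plane wave; the field amplitude below is an illustrative assumption:

```python
import math

c_0 = 299_792_458.0       # speed of light, m/s
mu_0 = 4 * math.pi * 1e-7 # vacuum permeability, H/m

E_0 = 100.0               # electric field amplitude, V/m (illustrative)
B_0 = E_0 / c_0           # magnetic field amplitude follows from E_0 = c_0 * B_0

# Time-averaged Poynting flux of a plane wave: <S> = E_0 * B_0 / (2 * mu_0)
S_avg = E_0 * B_0 / (2 * mu_0)
print(f"B_0 = {B_0:.3e} T")
print(f"<S> = {S_avg:.2f} W/m^2")   # energy flows along E x B at this rate
```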
From the viewpoint of an electromagnetic wave traveling forward, the electric field might be oscillating up and down, while the magnetic field oscillates right and left. This picture can be rotated with the electric field oscillating right and left and the magnetic field oscillating down and up. This is a different solution that is traveling in the same direction. This arbitrariness in the orientation with respect to propagation direction is known as polarization. On a quantum level, it is described as photon polarization. The direction of the polarization is defined as the direction of the electric field.
More general forms of the second-order wave equations given above are available, allowing for both non-vacuum propagation media and sources. Many competing derivations exist, all with varying levels of approximation and intended applications. One very general example is a form of the electric field equation, which was factorized into a pair of explicitly directional wave equations, and then efficiently reduced into a single uni-directional wave equation by means of a simple slow-evolution approximation.
See also
Antenna measurement
Bioelectromagnetics
Bolometer
CONELRAD
Electromagnetic pulse
Electromagnetic radiation and health
Evanescent wave coupling
Finite-difference time-domain method
Gravitational wave
Helicon
Impedance of free space
Radiation reaction
Health effects of sunlight exposure
Sinusoidal plane-wave solutions of the electromagnetic wave equation
References
Further reading
External links
The Feynman Lectures on Physics Vol. I Ch. 28: Electromagnetic Radiation
Electromagnetic Waves from Maxwell's Equations on Project PHYSNET.
Heinrich Hertz
Radiation
Stoic physics
Stoic physics refers to the natural philosophy of the Stoic philosophers of ancient Greece and Rome which they used to explain the natural processes at work in the universe.
To the Stoics, the cosmos is a single pantheistic god, one which is rational and creative, and which is the basis of everything which exists. Nothing incorporeal exists. The nature of the world is one of unceasing change, driven by the active part or reason (logos) of God which pervades all things. The active substance of the world is characterized as a 'breath', or pneuma, which provides form and motion to matter, and is the origin of the elements, life, and human rationality. The cosmos proceeds from an original state in utmost heat, and, in the cooling and separation that occurs, all things appear which are only different stages in the change of primitive being. Eventually though, the world will be reabsorbed into the primary substance, to be consumed in a general conflagration (ekpyrôsis), out of which a new cycle begins again.
Since the world operates through reason, all things are determined. But the Stoics adopted a compatibilist view which allowed humans freedom and responsibility within the causal network of fate. Humans are part of the logos which permeates the cosmos. The human soul is a physical unity of reason and mind. The good for a human is thus to be fully rational, behaving as Nature does in the natural order.
Central tenets
In pursuing their physics the Stoics wanted to create a picture of the world which would be completely coherent. Stoic physics can be described in terms of (a) monism, (b) materialism, and (c) dynamism.
Monism
Stoicism is a pantheistic philosophy. The cosmos is active, life-giving, rational and creative. It is a single cohesive unit, a self-supporting entity containing within it all that it needs, and all parts depending on mutual exchange with each other. Different parts of this unified structure are able to interact and have an affinity with each other (sympatheia). The Stoics explained everything from natural events to human conduct as manifestations of an all-pervading reason (logos). Thus they identified the universe with God, and the diversity of the world is explained through the transformations and products of God as the rational principle of the cosmos.
Materialism
Philosophers since the time of Plato had asked whether abstract qualities such as justice and wisdom have an independent existence. Plato in his Sophist dialogue (245e–249d) had argued that since qualities such as virtue and vice cannot be 'touched', they must be something very different from ordinary bodies. The Stoics' answer to this dilemma was to assert that all things, including wisdom, justice, etc., are bodies. Plato had defined being as "that which has the power to act or be acted upon," and for the Stoics this meant that all action proceeds by bodily contact; every form of causation is reduced to the efficient cause, which implies the communication of motion from one body to another. Only Body exists. The Stoics did recognise the presence of incorporeal things such as void, place and time, but although real they could not exist and were said to "subsist". Stoicism was thus fully materialistic; the answers to metaphysics are to be sought in physics; particularly the problem of the causes of things for which Plato's theory of forms and Aristotle's "substantial form" had been put forth as solutions.
Dynamism
A dualistic feature of the Stoic system are the two principles, the active and the passive: everything which exists is capable of acting and being acted upon. The active principle is God acting as the rational principle (logos), and which has a higher status than the passive matter (ousia). In their earlier writings the Stoics characterised the rational principle as a creative fire, but later accounts stress the idea of breath, or pneuma, as the active substance. The cosmos is thus filled with an all-pervading pneuma which allows for the cohesion of matter and permits contact between all parts of the cosmos. The pneuma is everywhere coextensive with matter, pervading and permeating it, and, together with it, occupying and filling space.
The Epicureans had placed the form and movement of matter in the chance movements of primitive atoms. In the Stoic system material substance has a continuous structure, held together by tension (tonos) as the essential attribute of body. This tension is a property of the pneuma, and physical bodies are held together by the pneuma which is in a continual state of motion. The various pneuma currents combining give objects their stable, physical properties (hexis). A thing is no longer, as Plato maintained, hot or hard or bright by partaking in abstract heat or hardness or brightness, but by containing within its own substance the material of these pneuma currents in various degrees of tension.
As to the relation between the active and the passive principles there was no clear difference. Although the Stoics talked about the active and passive as two separate types of body, it is likely they saw them as merely two aspects of the single material cosmos. Pneuma, from this perspective, is not a special substance intermingled with passive matter, but rather it could be said that the material world has pneumatic qualities. The diversity of the world is explained through the transformations and products of this eternal principle.
Universe
Like Aristotle, the Stoics conceived of the cosmos as being finite with the Earth at the centre and the moon, sun, planets, and fixed stars surrounding it. Similarly, they rejected the possibility of any void (i.e. vacuum) within the cosmos since that would destroy the coherence of the universe and the sympathy of its parts. However, unlike Aristotle, the Stoics saw the cosmos as an island embedded in an infinite void. The cosmos has its own hexis which holds it together and protects it, and the surrounding void cannot affect it. The cosmos can, however, vary in volume, expanding and contracting through its cycles.
Formation
The pneuma of the Stoics is the primitive substance which existed before the cosmos. It is the everlasting presupposition of particular things; the totality of all existence; out of it the whole of nature proceeds, eventually to be consumed by it. It is the creative force (God) which develops and shapes the universal order (cosmos). God is everything that exists.
In the original state, the pneuma-God and the cosmos are absolutely identical; but even then tension, the essential attribute of matter, is at work. In the primitive pneuma there resides the utmost heat and tension, within which there is a pressure, an expansive and dispersive tendency. Motion backwards and forwards once set up cools the glowing mass of fiery vapour and weakens the tension. Thus follows the first differentiation of primitive substance—the separation of force from matter, the emanation of the world from God. The seminal Logos which, in virtue of its tension, slumbered in pneuma, now proceeds upon its creative task. The cycle of its transformations and successive condensations constitutes the life of the cosmos. The cosmos and all its parts are only different embodiments and stages in the change of primitive being which Heraclitus had called "a progress up and down".
Out of it is separated elemental fire, the fire which we know, which burns and destroys; and this condenses into air; a further step in the downward path produces water and earth from the solidification of air. At every stage the degree of tension is slackened, and the resulting element approaches more and more to "inert" matter. But, just as one element does not wholly transform into another (e.g. only a part of air is transmuted into water or earth), so the pneuma itself does not wholly transform into the elements. From the elements the one substance is transformed into the multitude of individual things in the orderly cosmos, which is itself a living thing or being, and the pneuma pervading it, and conditioning life and growth everywhere, is its soul.
Ekpyrosis
The process of differentiation is not eternal; it continues only until the time of the restoration of all things. For the cosmos will in turn decay, and the tension which has been relaxed will again be tightened. Things will gradually resolve into elements, and the elements into the primary substance, to be consumed in a general conflagration when once more the world will be absorbed in God. This ekpyrôsis is not so much a catastrophic event, but rather the period of the cosmic cycle when the preponderance of the fiery element once again reaches its maximum. All matter is consumed becoming completely fiery and wholly soul-like. God, at this point, can be regarded as completely existing in itself.
In due order a new cycle of the cosmos begins (palingenesis), reproducing the previous world, and so on forever. Therefore, the same events play out again repeated endlessly. Since the cosmos always unfolds according to the best possible reason, any succeeding world is likely to be identical to the previous one. Thus in the same way that the cosmos occupies a finite space in an infinite void, so it can be understood to occupy a finite period in an infinite span of time.
Ekpyrosis itself, however, was not universally accepted among the Stoics. Other prominent Stoics such as Panaetius, Zeno of Tarsus, Boethus of Sidon, and others either rejected ekpyrosis or had differing opinions regarding its degree. A strong acceptance of Aristotle's theories of the universe, combined with a more practical lifestyle practiced by the Roman people, caused the later Stoics to focus their main effort on their own social well-being on earth, not on the cosmos. A prime example is the Stoic-influenced writings of the Roman Emperor Marcus Aurelius (121–180). In his Meditations, he chooses to discuss how one should act and live their life, rather than speculate on cosmological theories.
God
The Stoics attempted to incorporate traditional polytheism into their philosophy. Not only was the primitive substance God, the one supreme being, but divinity could be ascribed to the manifestations—to the heavenly bodies, to the forces of nature, even to deified persons; and thus the world was peopled with divine agencies. Prayer is of apparently little help in a rationally ordered cosmos, and surviving examples of Stoic prayers appear similar to self-meditation rather than appeals for divine intervention.
The Stoics often identified the universe and God with Zeus, as the ruler and upholder, and at the same time the law, of the universe. The Stoic God is not a transcendent omniscient being standing outside nature, but rather it is immanent—the divine element is immersed in nature itself. God orders the world for the good, and every element of the world contains a portion of the divine element that accounts for its behaviour. The reason of things—that which accounts for them—is not some external end to which they are tending; it is something acting within them, "a spirit deeply interfused," germinating and developing from within.
In one sense the Stoics believed that this is the best of all possible worlds. Only God or Nature is good, and Nature is perfectly rational. It is an organic unity and completely ordered. The goodness of Nature manifests in the way it works to arrange things in the most rational way. For the Stoics this is therefore the most reasonable, the most rational, of all possible worlds.
None of the events which occur by Nature are inherently bad; but nor are they intrinsically 'good' even though they have been caused by a good agent. The natural patterning of the world—life, death, sickness, health, etc.—is made up of morally indifferent events which in themselves are neither good nor bad. Such events are not unimportant, but they only have value in as far as they contribute to a life according to Nature. As reasoning creatures, humans have a share in Nature's rationality. The good for a human is to be fully rational, behaving as Nature does to maintain the natural order. This means to know the logic of the good, to understand the rational explanation of the universe, and the nature and possibilities of being human. The only evil for a human is to behave irrationally—to fail to act upon reason—such a person is insane.
Fate
To the Stoics nothing passes unexplained; there is a reason (Logos) for everything in nature. Because of the Stoics' commitment to the unity and cohesion of the cosmos and its all-encompassing reason, they fully embraced determinism. However instead of a single chain of causal events, there is instead a many-dimensional network of events interacting within the framework of fate. Out of this swarm of causes, the course of events is fully realised. Humans appear to have free will because personal actions participate in the determined chain of events independently of external conditions. This "soft-determinism" allows humans to be responsible for their own actions, alleviating the apparent arbitrariness of fate.
Divination
Divination was an essential element of Greek religion, and the Stoics attempted to reconcile it with their own rational doctrine of strict causation. Since the pneuma of the world-soul pervades the whole universe, this allows human souls to be influenced by divine souls. Omens and portents, Chrysippus explained, are the natural symptoms of certain occurrences. There must be countless indications of the course of providence, for the most part unobserved, the meaning of only a few having become known to humanity. To those who argued that divination was superfluous as all events are foreordained, he replied that both divination and our behaviour under the warnings which it affords are included in the chain of causation.
Mixture
To fully characterize the physical world, the Stoics developed a theory of mixing in which they recognised three types of mixture. The first type was a purely mechanical mixture such as mixing barley and wheat grains together: the individual components maintain their own properties, and they can be separated again. The second type was a fusion, whereby a new substance is created leading to the loss of the properties of the individual components, this roughly corresponds to the modern concept of a chemical change. The third type was a commingling, or total blending: there is complete interpenetration of the components down to the infinitesimal, but each component maintains its own properties. In this third type of mixture a new substance is created, but since it still has the qualities of the two original substances, it is possible to extract them again. In the words of Chrysippus: "there is nothing to prevent one drop of wine from mixing with the whole ocean". Ancient critics often regarded this type of mixing as paradoxical since it apparently implied that each constituent substance be the receptacle of each other. However to the Stoics, the pneuma is like a force, a continuous field interpenetrating matter and spreading through all of space.
Tension
Every character and property of a particular thing is determined solely by the tension in it of pneuma, and pneuma, though present in all things, varies indefinitely in quantity and intensity.
In the lowest degree of tension the pneuma dwelling in inorganic bodies holds bodies together (whether animate or inanimate) providing cohesion (hexis). This is the type of pneuma present in stone or metal as a retaining principle.
In the next degree of tension the pneuma provides nature or growth (physis) to living things. This is the highest level in which it is found in plants.
In a higher degree of tension the pneuma produces soul (psyche) to all animals, providing them with sensation and impulse.
In humans can be found the pneuma in its highest form as the rational soul (logike psyche).
A certain warmth, akin to the vital heat of organic being, seems to be found in inorganic nature: vapours from the earth, hot springs, sparks from the flint, were claimed as the last remnant of pneuma not yet utterly slackened and cold. They appealed also to the speed and expansion of gaseous bodies, to whirlwinds and inflated balloons.
Soul
In the rational creatures pneuma is manifested in the highest degree of purity and intensity as an emanation from the world-soul. Humans have souls because the universe has a soul, and human rationality is the same as God's rationality. The pneuma that is soul pervades the entire human body.
The soul is corporeal, else it would have no real existence, would be incapable of extension in three dimensions (i.e. to diffuse all over the body), incapable of holding the body together, herein presenting a sharp contrast to the Epicurean tenet that it is the body which confines and shelters the atoms of soul. This corporeal soul is reason, mind, and ruling principle; in virtue of its divine origin Cleanthes can say to Zeus, "We too are thy offspring," and Seneca can calmly insist that, if man and God are not on perfect equality, the superiority rests rather on our side. What God is for the world, the soul is for humans. The cosmos is a single whole, its variety being referred to varying stages of condensation in pneuma. So, too, the human soul must possess absolute simplicity, its varying functions being conditioned by the degrees of its tension. There are no separate "parts" of the soul, as previous thinkers imagined.
With this psychology is intimately connected the Stoic theory of knowledge. From the unity of soul it follows that all mental processes—sensation, assent, impulse—proceed from reason, the ruling part; the one rational soul alone has sensations, assents to judgments, is impelled towards objects of desire just as much as it thinks or reasons. Not that all these powers at once reach full maturity. The soul at first is empty of content; in the embryo it has not developed beyond the nutritive principle of a plant; at birth the "ruling part" is a blank tablet, although ready prepared to receive writing. The source of knowledge is experience and discursive thought, which manipulates the materials of sense. Our ideas are copied from stored-up sensations.
Just as a relaxation in tension brings about the dissolution of the universe; so in the body, a relaxation of tension, accounts for sleep, decay, and death for the human body. After death the disembodied soul can only maintain its separate existence, even for a limited time, by mounting to that region of the universe which is akin to its nature. It was a moot point whether all souls so survive, as Cleanthes thought, or the souls of the wise and good alone, which was the opinion of Chrysippus; in any case, sooner or later individual souls are merged in the soul of the universe, from which they originated.
Sensation
The Stoics explained perception as a transmission of the perceived quality of an object, by means of the sense organ, into the percipient's mind. The quality transmitted appears as a disturbance or impression upon the corporeal surface of that "thinking thing," the soul. In the example of sight, a conical pencil of rays diverges from the pupil of the eye, so that its base covers the object seen. A presentation is conveyed, by an air-current, from the sense organ, here the eye, to the mind, i.e. the soul's "ruling part." The presentation, besides attesting its own existence, gives further information of its object—such as colour or size. Zeno and Cleanthes compared this presentation to the impression which a seal bears upon wax, while Chrysippus determined it more vaguely as a hidden modification or mode of mind. But the mind is no mere passive recipient of impressions: the mind assents or dissents. The contents of experience are not all true or valid: hallucination is possible; here the Stoics agreed with the Epicureans. It is necessary, therefore, that assent should not be given indiscriminately; we must determine a criterion of truth, a special formal test whereby reason may recognize the merely plausible and hold fast the true.
The earlier Stoics made right reason the standard of truth. Zeno compared sensation to the outstretched hand, flat and open; bending the fingers was assent; the clenched fist was "simple apprehension," the mental grasp of an object; knowledge was the clenched fist tightly held in the other hand. But this criterion was open to the persistent attacks of Epicureans and Academics, who made clear (1) that reason is dependent upon, if not derived from, sense, and (2) that the utterances of reason lack consistency. Chrysippus, therefore, did much to develop Stoic logic, and more clearly defined and safeguarded his predecessors' position.
See also
Notes
a. Some historians prefer to describe Stoic doctrine as "corporealism" rather than "materialism". One objection to the materialism label relates to a narrow 17th/18th-century conception of materialism whereby things must be "explained by the movements and combination of passive matter". Since Stoicism is vitalistic it is "not materialism in the strict sense". A second objection refers to a Stoic distinction between mere bodies (which extend in three dimensions and offer resistance), and material bodies which are "constituted by the presence with one another of both [active and passive] principles, and by the effects of one principle on the other". The active and passive principles are bodies but not material bodies under this definition.
b. The concept of pneuma (as a "vital breath") was prominent in the Hellenistic medical schools. Its precise relationship to the "creative fire" (pyr technikon) of the early Stoics is unclear. Some ancient sources state that pneuma was a combination of elemental fire and air (these two elements being "active"). But in Stoic writings pneuma behaves much like the active principle, and it seems they adopted pneuma as a straight swap for the creative fire.
Citations
References
physics
Ancient Greek physics
Divination
Theories in ancient Greek philosophy
Pantheism
Prandtl number
The Prandtl number (Pr) or Prandtl group is a dimensionless number, named after the German physicist Ludwig Prandtl, defined as the ratio of momentum diffusivity to thermal diffusivity. The Prandtl number is given as
Pr = ν/α = (μ/ρ)/(k/(cp ρ)) = cp μ/k,
where:
ν : momentum diffusivity (kinematic viscosity), ν = μ/ρ, (SI units: m2/s)
α : thermal diffusivity, α = k/(ρ cp), (SI units: m2/s)
μ : dynamic viscosity, (SI units: Pa s = N s/m2)
k : thermal conductivity, (SI units: W/(m·K))
cp : specific heat, (SI units: J/(kg·K))
ρ : density, (SI units: kg/m3).
Note that whereas the Reynolds number and Grashof number are subscripted with a scale variable, the Prandtl number contains no such length scale and is dependent only on the fluid and the fluid state. The Prandtl number is often found in property tables alongside other properties such as viscosity and thermal conductivity.
The mass transfer analog of the Prandtl number is the Schmidt number and the ratio of the Prandtl number and the Schmidt number is the Lewis number.
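As a quick numerical illustration of this defining ratio (not part of the original article; the property values below are rough room-temperature textbook figures and are only indicative), Pr can be computed directly from the dynamic viscosity, specific heat, and thermal conductivity:

def prandtl_number(mu: float, cp: float, k: float) -> float:
    # Pr = cp * mu / k, the ratio of momentum to thermal diffusivity.
    return cp * mu / k

# Rough properties of liquid water near 20 C (indicative values only).
print("Pr(water) ~", round(prandtl_number(mu=1.0e-3, cp=4182.0, k=0.598), 2))   # ~7

# Rough properties of air at room temperature (indicative values only).
print("Pr(air)   ~", round(prandtl_number(mu=1.81e-5, cp=1005.0, k=0.026), 2))  # ~0.7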
Experimental values
Typical values
For most gases over a wide range of temperature and pressure, Pr is approximately constant. Therefore, it can be used to determine the thermal conductivity of gases at high temperatures, where it is difficult to measure experimentally due to the formation of convection currents.
Typical values for Pr are:
0.003 for molten potassium at 975 K
around 0.015 for mercury
0.065 for molten lithium at 975 K
around 0.16–0.7 for mixtures of noble gases or noble gases with hydrogen
0.63 for oxygen
around 0.71 for air and many other gases
1.38 for gaseous ammonia
between 4 and 5 for R-12 refrigerant
around 7.56 for water (At 18 °C)
13.4 and 7.2 for seawater (At 0 °C and 20 °C respectively)
50 for n-butanol
between 100 and 40,000 for engine oil
1000 for glycerol
10,000 for polymer melts
around 1 for Earth's mantle.
Formula for the calculation of the Prandtl number of air and water
For air at a pressure of 1 bar, the Prandtl numbers in the temperature range between −100 °C and +500 °C can be calculated using the formula given below. The temperature must be entered in degrees Celsius. The deviations from literature values are at most 0.1%.
,
where is the temperature in Celsius.
The Prandtl numbers for water (1 bar) can be determined in the temperature range between 0 °C and 90 °C using the formula given below. The temperature must be entered in degrees Celsius. The deviations from literature values are at most 1%.
Physical interpretation
Small values of the Prandtl number, Pr ≪ 1, mean that the thermal diffusivity dominates, whereas with large values, Pr ≫ 1, the momentum diffusivity dominates the behavior.
For example, the listed value for liquid mercury indicates that the heat conduction is more significant compared to convection, so thermal diffusivity is dominant.
However, engine oil, with its high viscosity and low thermal conductivity, has a higher momentum diffusivity compared to its thermal diffusivity.
The Prandtl numbers of gases are about 1, which indicates that both momentum and heat dissipate through the fluid at about the same rate. Heat diffuses very quickly in liquid metals and very slowly in oils relative to momentum. Consequently, the thermal boundary layer is much thicker for liquid metals and much thinner for oils, relative to the velocity boundary layer.
In heat transfer problems, the Prandtl number controls the relative thickness of the momentum and thermal boundary layers. When Pr is small, it means that the heat diffuses quickly compared to the velocity (momentum). This means that for liquid metals the thermal boundary layer is much thicker than the velocity boundary layer.
In laminar boundary layers, the ratio of the thermal to momentum boundary layer thickness over a flat plate is well approximated by
δt/δ ≈ Pr^(−1/3),
where δt is the thermal boundary layer thickness and δ is the momentum boundary layer thickness.
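A short sketch of this estimate (illustrative only; it simply evaluates the laminar flat-plate approximation above for the typical Prandtl numbers listed earlier, outside of which the approximation becomes rough):

def thermal_to_momentum_thickness(pr: float) -> float:
    # Laminar flat-plate estimate: delta_t / delta ~ Pr**(-1/3).
    return pr ** (-1.0 / 3.0)

for fluid, pr in [("mercury", 0.015), ("air", 0.71), ("water", 7.56), ("engine oil", 1000.0)]:
    ratio = thermal_to_momentum_thickness(pr)
    print(f"{fluid:10s}  Pr = {pr:8.3f}   delta_t/delta ~ {ratio:5.2f}")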
For incompressible flow over a flat plate, the two Nusselt number correlations are asymptotically correct:
where Re is the Reynolds number. These two asymptotic solutions can be blended together using the concept of the Norm (mathematics):
See also
Turbulent Prandtl number
Magnetic Prandtl number
References
Further reading
Convection
Dimensionless numbers of fluid mechanics
Dimensionless numbers of thermodynamics
Fluid dynamics
Lorentz oscillator model
The Lorentz oscillator model describes the optical response of bound charges. The model is named after the Dutch physicist Hendrik Antoon Lorentz. It is a classical, phenomenological model for materials with characteristic resonance frequencies (or other characteristic energy scales) for optical absorption, e.g. ionic and molecular vibrations, interband transitions (semiconductors), phonons, and collective excitations.
Derivation of electron motion
The model is derived by modeling an electron orbiting a massive, stationary nucleus as a spring-mass-damper system. The electron is modeled to be connected to the nucleus via a hypothetical spring, and its motion is damped via a hypothetical damper. The damping force ensures that the oscillator's response is finite at its resonance frequency. For a time-harmonic driving force which originates from the electric field, Newton's second law can be applied to the electron to obtain the motion of the electron and expressions for the dipole moment, polarization, susceptibility, and dielectric function.
Equation of motion for the electron oscillator:
m d²x/dt² + (m/τ) dx/dt + k x = −e E(t),
where
x is the displacement of the charge from the rest position,
t is time,
τ is the relaxation time/scattering time,
k is a constant factor characteristic of the spring,
m is the effective mass of the electron,
ω0 = √(k/m) is the resonance frequency of the oscillator,
e is the elementary charge,
E is the electric field.
For time-harmonic fields: E(t) = E e^(−iωt), x(t) = x e^(−iωt).
The stationary solution of this equation of motion is:
x = −(e/m) E / (ω0² − ω² − iω/τ).
The fact that the above solution is complex means there is a time delay (phase shift) between the driving electric field and the response of the electron's motion.
Dipole moment
The displacement, x, induces a dipole moment, p, given by
p = −e x = α(ω) E.
Here α(ω) is the polarizability of a single oscillator, given by
α(ω) = (e²/m) / (ω0² − ω² − iω/τ).
Three distinct scattering regimes can be interpreted corresponding to the dominant denominator term in the dipole moment:
Polarization
The polarization P is the dipole moment per unit volume. For macroscopic material properties, N is the density of charges (electrons) per unit volume. Considering that each electron is acting with the same dipole moment, we have the polarization as below:
P = N p = N α(ω) E.
Electric displacement
The electric displacement D is related to the polarization density P by
D = ε0 E + P.
Dielectric function
The complex dielectric function is given by
ε(ω) = 1 + (N e²)/(ε0 m) · 1/(ω0² − ω² − iγω) = 1 + ωp²/(ω0² − ω² − iγω),
where γ = 1/τ and ωp = √(N e²/(ε0 m)) is the so-called plasma frequency.
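A minimal numerical sketch of this single-oscillator dielectric function (for illustration only; the resonance, plasma, and damping frequencies below are made-up values, and the e^(−iωt) time convention used above is assumed):

import numpy as np

def lorentz_epsilon(omega, omega_p, omega_0, gamma, eps_inf=1.0):
    # eps(omega) = eps_inf + omega_p**2 / (omega_0**2 - omega**2 - i*gamma*omega)
    return eps_inf + omega_p**2 / (omega_0**2 - omega**2 - 1j * gamma * omega)

omega_0, omega_p, gamma = 1.0e15, 5.0e14, 1.0e13     # rad/s, illustrative values
for w in np.linspace(0.5e15, 1.5e15, 5):
    eps = lorentz_epsilon(w, omega_p, omega_0, gamma)
    print(f"omega = {w:.2e} rad/s   eps' = {eps.real:+8.3f}   eps'' = {eps.imag:+8.3f}")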
In practice, the model is commonly modified to account for multiple absorption mechanisms present in a medium. The modified version is given by
ε(ω) = ε∞ + Σj sj/(ω0,j² − ω² − iγj ω),
where
ε∞ is the value of the dielectric function at infinite frequency, which can be used as an adjustable parameter to account for high-frequency absorption mechanisms,
and sj is related to the strength of the jth absorption mechanism.
Separating the real and imaginary components, ε(ω) = ε1(ω) + iε2(ω), with
ε1(ω) = ε∞ + Σj sj (ω0,j² − ω²)/((ω0,j² − ω²)² + γj²ω²),
ε2(ω) = Σj sj γj ω/((ω0,j² − ω²)² + γj²ω²).
Complex conductivity
The complex optical conductivity in general is related to the complex dielectric function by
σ(ω) = −iωε0 (ε(ω) − 1).
Substituting the formula for ε(ω) into the equation above, we obtain (for a single oscillator)
σ(ω) = −iωε0 ωp²/(ω0² − ω² − iγω).
Separating the real and imaginary components, σ(ω) = σ1(ω) + iσ2(ω), with
σ1(ω) = ε0 ωp² γ ω²/((ω0² − ω²)² + γ²ω²),
σ2(ω) = −ε0 ωp² ω (ω0² − ω²)/((ω0² − ω²)² + γ²ω²).
See also
Cauchy equation
Sellmeier equation
Forouhi–Bloomer model
Tauc–Lorentz model
Brendel–Bormann oscillator model
References
Condensed matter physics
Electric and magnetic fields in matter
Optics
Angular frequency
In physics, angular frequency (symbol ω), also called angular speed and angular rate, is a scalar measure of the angle rate (the angle per unit time) or the temporal rate of change of the phase argument of a sinusoidal waveform or sine function (for example, in oscillations and waves).
Angular frequency (or angular speed) is the magnitude of the pseudovector quantity angular velocity.
Angular frequency can be obtained by multiplying rotational frequency, ν (or ordinary frequency, f), by a full turn (2π radians): ω = 2πν = 2πf.
It can also be formulated as ω = dθ/dt, the instantaneous rate of change of the angular displacement, θ, with respect to time, t.
Unit
In SI units, angular frequency is normally presented in the unit radian per second. The unit hertz (Hz) is dimensionally equivalent, but by convention it is only used for frequency f, never for angular frequency ω. This convention is used to help avoid the confusion that arises when dealing with quantities such as frequency and angular quantities because the units of measure (such as cycle or radian) are considered to be one and hence may be omitted when expressing quantities in terms of SI units.
In digital signal processing, the frequency may be normalized by the sampling rate, yielding the normalized frequency.
Examples
Circular motion
In a rotating or orbiting object, there is a relation between the distance from the axis, r, the tangential speed, v, and the angular frequency of the rotation. During one period, T, a body in circular motion travels a distance vT. This distance is also equal to the circumference of the path traced out by the body, 2πr. Setting these two quantities equal, and recalling the link between period and angular frequency, we obtain: ω = v/r. Circular motion on the unit circle is given by
ω = 2π/T = 2πf,
where:
ω is the angular frequency (SI unit: radians per second),
T is the period (SI unit: seconds),
f is the ordinary frequency (SI unit: hertz).
Oscillations of a spring
An object attached to a spring can oscillate. If the spring is assumed to be ideal and massless with no damping, then the motion is simple harmonic with an angular frequency given by
ω = √(k/m),
where
k is the spring constant,
m is the mass of the object.
ω is referred to as the natural angular frequency (sometimes denoted as ω0).
As the object oscillates, its acceleration can be calculated by
a = −ω²x,
where x is displacement from an equilibrium position.
Using standard frequency f, this equation would be
a = −(2πf)²x.
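A small numerical sketch of these relations (purely illustrative; the spring constant, mass, and displacement are arbitrary example values):

import math

k, m = 40.0, 0.10                 # spring constant (N/m) and mass (kg), example values
omega = math.sqrt(k / m)          # natural angular frequency, rad/s
f = omega / (2 * math.pi)         # ordinary frequency, Hz
x = 0.02                          # displacement from equilibrium, m
a = -omega**2 * x                 # acceleration at that displacement, m/s^2

print(f"omega = {omega:.1f} rad/s, f = {f:.2f} Hz, T = {1/f:.3f} s, a = {a:.2f} m/s^2")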
LC circuits
The resonant angular frequency in a series LC circuit equals the square root of the reciprocal of the product of the capacitance (C, with SI unit farad) and the inductance of the circuit (L, with SI unit henry):
ω = 1/√(LC).
Adding series resistance (for example, due to the resistance of the wire in a coil) does not change the resonant frequency of the series LC circuit. For a parallel tuned circuit, the above equation is often a useful approximation, but the resonant frequency does depend on the losses of parallel elements.
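For example (an illustrative calculation with arbitrary component values):

import math

L = 10e-3     # inductance, henry (example value)
C = 100e-9    # capacitance, farad (example value)

omega_0 = 1 / math.sqrt(L * C)    # resonant angular frequency, rad/s
f_0 = omega_0 / (2 * math.pi)     # corresponding ordinary frequency, Hz

print(f"omega_0 = {omega_0:.3e} rad/s, f_0 = {f_0/1e3:.2f} kHz")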
Terminology
Although angular frequency is often loosely referred to as frequency, it differs from frequency by a factor of 2π, which potentially leads to confusion when the distinction is not made clear.
See also
Cycle per second
Radian per second
Degree (angle)
Mean motion
Rotational frequency
Simple harmonic motion
References and notes
Related Reading:
Angle
Kinematic properties
Frequency
Quotients
Conservative vector field
In vector calculus, a conservative vector field is a vector field that is the gradient of some function. A conservative vector field has the property that its line integral is path independent; the choice of path between two points does not change the value of the line integral. Path independence of the line integral is equivalent to the vector field under the line integral being conservative. A conservative vector field is also irrotational; in three dimensions, this means that it has vanishing curl. An irrotational vector field is necessarily conservative provided that the domain is simply connected.
Conservative vector fields appear naturally in mechanics: They are vector fields representing forces of physical systems in which energy is conserved. For a conservative system, the work done in moving along a path in a configuration space depends on only the endpoints of the path, so it is possible to define potential energy that is independent of the actual path taken.
Informal treatment
In a two- and three-dimensional space, there is an ambiguity in taking an integral between two points as there are infinitely many paths between the two points—apart from the straight line formed between the two points, one could choose a curved path of greater length as shown in the figure. Therefore, in general, the value of the integral depends on the path taken. However, in the special case of a conservative vector field, the value of the integral is independent of the path taken, which can be thought of as a large-scale cancellation of all elements that do not have a component along the straight line between the two points. To visualize this, imagine two people climbing a cliff; one decides to scale the cliff by going vertically up it, and the second decides to walk along a winding path that is longer in length than the height of the cliff, but at only a small angle to the horizontal. Although the two hikers have taken different routes to get up to the top of the cliff, at the top, they will have both gained the same amount of gravitational potential energy. This is because a gravitational field is conservative.
Intuitive explanation
M. C. Escher's lithograph print Ascending and Descending illustrates a non-conservative vector field, impossibly made to appear to be the gradient of the varying height above ground (gravitational potential) as one moves along the staircase. The force field experienced by the one moving on the staircase is non-conservative in that one can return to the starting point while ascending more than one descends or vice versa, resulting in nonzero work done by gravity. On a real staircase, the height above the ground is a scalar potential field: one has to go upward exactly as much as one goes downward in order to return to the same place, in which case the work by gravity totals to zero. This suggests path-independence of work done on the staircase; equivalently, the force field experienced is conservative (see the later section: Path independence and conservative vector field). The situation depicted in the print is impossible.
Definition
A vector field v : U → ℝ^n, where U is an open subset of ℝ^n, is said to be conservative if there exists a (continuously differentiable) scalar field φ on U such that
v = ∇φ.
Here, ∇φ denotes the gradient of φ. Since φ is continuously differentiable, v is continuous. When the equation above holds, φ is called a scalar potential for v.
The fundamental theorem of vector calculus states that, under some regularity conditions, any vector field can be expressed as the sum of a conservative vector field and a solenoidal field.
Path independence and conservative vector field
Path independence
A line integral of a vector field v is said to be path-independent if it depends on only the two endpoints of the integration path, regardless of which path between them is chosen:
∫_P1 v · dr = ∫_P2 v · dr
for any pair of integral paths P1 and P2 between a given pair of path endpoints in U.
The path independence is also equivalently expressed as
∮_Pc v · dr = 0
for any piecewise smooth closed path Pc in U where the two endpoints are coincident. The two expressions are equivalent since any closed path Pc can be made from two paths: P1 from an endpoint A to another endpoint B, and P2 from B to A, so
∮_Pc v · dr = ∫_P1 v · dr + ∫_P2 v · dr = ∫_P1 v · dr − ∫_−P2 v · dr = 0,
where −P2 is the reverse of P2, and the last equality holds due to the path independence ∫_P1 v · dr = ∫_−P2 v · dr.
Conservative vector field
A key property of a conservative vector field is that its integral along a path depends on only the endpoints of that path, not the particular route taken. In other words, if it is a conservative vector field, then its line integral is path-independent. Suppose that v = ∇φ for some (continuously differentiable) scalar field φ over U as an open subset of ℝ^n (so v is a conservative vector field that is continuous) and P is a differentiable path (i.e., it can be parameterized by a differentiable function) in U with an initial point A and a terminal point B. Then the gradient theorem (also called the fundamental theorem of calculus for line integrals) states that
∫_P v · dr = φ(B) − φ(A).
This holds as a consequence of the definition of a line integral, the chain rule, and the second fundamental theorem of calculus. ∇φ · dr in the line integral is an exact differential for an orthogonal coordinate system (e.g., Cartesian, cylindrical, or spherical coordinates). Since the gradient theorem is applicable for a differentiable path, the path independence of a conservative vector field over piecewise-differentiable curves is also proved by the proof per differentiable curve component.
So far it has been proven that a conservative vector field is line integral path-independent. Conversely, if a continuous vector field is (line integral) path-independent, then it is a conservative vector field, so the following biconditional statement holds:
The proof of this converse statement is the following.
Suppose v is a continuous vector field whose line integral is path-independent. Then let's make a function φ defined as
φ(x, y) = ∫_[a, (x, y)] v · dr
over an arbitrary path between a chosen starting point a and an arbitrary point (x, y). Since it is path-independent, it depends on only a and (x, y), regardless of which path between these points is chosen.
Let's choose the path shown in the left of the right figure, where a 2-dimensional Cartesian coordinate system is used. The second segment of this path is parallel to the x axis, so there is no change along the y axis. The line integral along this path is
∫_[a, (x, y)] v · dr = ∫_[a, (x1, y)] v · dr + ∫_[(x1, y), (x, y)] v · dr.
By the path independence, its partial derivative with respect to x (for φ to have partial derivatives, v needs to be continuous) is
∂φ/∂x = ∂/∂x ∫_[(x1, y), (x, y)] v · dr,
since the first term on the right-hand side and x are independent of each other. Let's express v as v = v1 i + v2 j, where i and j are unit vectors along the x and y axes respectively; then, since dr = dx i along this segment,
∂φ/∂x = ∂/∂x ∫_x1^x v1(u, y) du = v1(x, y),
where the last equality is from the second fundamental theorem of calculus.
A similar approach for the line integral path shown in the right of the right figure results in ∂φ/∂y = v2(x, y), so
v = v1 i + v2 j = (∂φ/∂x) i + (∂φ/∂y) j = ∇φ
is proved for the 2-dimensional Cartesian coordinate system. This proof method can be straightforwardly expanded to a higher-dimensional orthogonal coordinate system (e.g., a 3-dimensional spherical coordinate system), so the converse statement is proved. Another proof is found here as the converse of the gradient theorem.
Irrotational vector fields
Let n = 3, so that we work in 3-dimensional space, and let v : U → ℝ^3 be a (continuously differentiable) vector field, with U an open subset of ℝ^3. Then v is called irrotational if its curl is zero everywhere in U, i.e., if
∇ × v ≡ 0.
For this reason, such vector fields are sometimes referred to as curl-free vector fields or curl-less vector fields. They are also referred to as longitudinal vector fields.
It is an identity of vector calculus that for any (continuously differentiable up to the 2nd derivative) scalar field φ on U, we have
∇ × (∇φ) ≡ 0.
Therefore, every conservative vector field in U is also an irrotational vector field in U. This result can be easily proved by expressing ∇ × (∇φ) in a Cartesian coordinate system with Schwarz's theorem (also called Clairaut's theorem on equality of mixed partials).
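This identity is easy to check symbolically. The sketch below (illustrative only; the potential is an arbitrary example) computes the curl of a gradient with sympy and finds it to be identically zero:

import sympy as sp

x, y, z = sp.symbols('x y z')
phi = x**2 * sp.sin(y) + z * sp.exp(x * y)          # arbitrary smooth scalar potential

v = [sp.diff(phi, var) for var in (x, y, z)]        # the gradient field v = grad(phi)

curl = [sp.simplify(sp.diff(v[2], y) - sp.diff(v[1], z)),
        sp.simplify(sp.diff(v[0], z) - sp.diff(v[2], x)),
        sp.simplify(sp.diff(v[1], x) - sp.diff(v[0], y))]

print(curl)    # [0, 0, 0]: a gradient (conservative) field is irrotational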
Provided that U is a simply connected open space (roughly speaking, a single-piece open space without a hole within it), the converse of this is also true: every irrotational vector field in a simply connected open space U is a conservative vector field in U.
The above statement is not true in general if U is not simply connected. Let U be ℝ^3 with all points on the z-axis removed (so not a simply connected space), i.e., U = ℝ^3 \ {(0, 0, z)}. Now, define a vector field v on U by
v(x, y, z) = (−y/(x² + y²), x/(x² + y²), 0).
Then v has zero curl everywhere in U (∇ × v ≡ 0 everywhere in U), i.e., v is irrotational. However, the circulation of v around the unit circle in the xy-plane is 2π; in polar coordinates, v = e_φ/r, so the integral over the unit circle is
∮_C v · dr = ∮_C dφ = 2π.
Therefore, v does not have the path-independence property discussed above, so it is not conservative even though ∇ × v ≡ 0, since U, where v is defined, is not a simply connected open space.
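A numerical check of this circulation (illustrative only; it parameterizes the unit circle and accumulates v · dr with a trapezoidal sum):

import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 20001)
x, y = np.cos(theta), np.sin(theta)            # unit circle in the xy-plane
drx, dry = -np.sin(theta), np.cos(theta)       # tangent vector dr/dtheta

r2 = x**2 + y**2
vx, vy = -y / r2, x / r2                       # the field defined above (its z-component is 0)

integrand = vx * drx + vy * dry                # v . dr/dtheta, equal to 1 on the unit circle
circulation = np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(theta))

print(circulation, 2 * np.pi)                  # both ~ 6.2832: the closed-loop integral is not zero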
To restate: in a simply connected open region, an irrotational vector field has the path-independence property (and so is conservative). This can be proved directly by using Stokes' theorem,
∮_P v · dr = ∬_A (∇ × v) · dA = 0,
for any smooth oriented surface A whose boundary is a simple closed path P. So it is concluded that in a simply connected open region, any vector field that has the path-independence property (so it is a conservative vector field) must also be irrotational, and vice versa.
Abstraction
More abstractly, in the presence of a Riemannian metric, vector fields correspond to differential 1-forms. The conservative vector fields correspond to the exact 1-forms, that is, to the forms dφ which are the exterior derivative of a function (scalar field) φ on U. The irrotational vector fields correspond to the closed 1-forms, that is, to the 1-forms ω such that dω = 0. As any exact form is closed, so any conservative vector field is irrotational. Conversely, all closed 1-forms are exact if U is simply connected.
Vorticity
The vorticity ω of a vector field v can be defined by:
ω = ∇ × v.
The vorticity of an irrotational field is zero everywhere. Kelvin's circulation theorem states that a fluid that is irrotational in an inviscid flow will remain irrotational. This result can be derived from the vorticity transport equation, obtained by taking the curl of the Navier–Stokes equations.
For a two-dimensional field, the vorticity acts as a measure of the local rotation of fluid elements. The vorticity does not imply anything about the global behavior of a fluid. It is possible for a fluid that travels in a straight line to have vorticity, and it is possible for a fluid that moves in a circle to be irrotational.
Conservative forces
If the vector field associated to a force is conservative, then the force is said to be a conservative force.
The most prominent examples of conservative forces are gravitational force (associated with a gravitational field) and electric force (associated with an electrostatic field). According to Newton's law of gravitation, a gravitational force F_G acting on a mass m due to a mass M located at a distance r from m obeys the equation
F_G = −(G m M / r²) r̂,
where G is the gravitational constant and r̂ is a unit vector pointing from M toward m. The force of gravity is conservative because F_G = −∇Φ_G, where
Φ_G = −G m M / r
is the gravitational potential energy. In other words, the gravitation field associated with the gravitational force is the negative gradient of the gravitation potential associated with the gravitational potential energy Φ_G. It can be shown that any vector field of the form F = F(r) r̂ is conservative, provided that F(r) is integrable.
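A symbolic check of this statement (illustrative only; it places M at the origin and m at the point (x, y, z)):

import sympy as sp

x, y, z, G, m, M = sp.symbols('x y z G m M', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

Phi = -G * m * M / r                                   # gravitational potential energy
F_grad = [-sp.diff(Phi, var) for var in (x, y, z)]     # F = -grad(Phi)
F_newton = [-G * m * M * var / r**3 for var in (x, y, z)]   # -(G m M / r**2) * rhat, componentwise

print([sp.simplify(a - b) for a, b in zip(F_grad, F_newton)])   # [0, 0, 0]: the two expressions agree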
For conservative forces, path independence can be interpreted to mean that the work done in going from a point A to a point B is independent of the moving path chosen (dependent on only the points A and B), and that the work W done in going around a simple closed loop C is zero:
W = ∮_C F · dr = 0.
The total energy of a particle moving under the influence of conservative forces is conserved, in the sense that a loss of potential energy is converted to the equal quantity of kinetic energy, or vice versa.
See also
Beltrami vector field
Conservative force
Conservative system
Complex lamellar vector field
Helmholtz decomposition
Laplacian vector field
Longitudinal and transverse vector fields
Solenoidal vector field
References
Further reading
Vector calculus
Force
Wave
In physics, mathematics, engineering, and related fields, a wave is a propagating dynamic disturbance (change from equilibrium) of one or more quantities. Periodic waves oscillate repeatedly about an equilibrium (resting) value at some frequency. When the entire waveform moves in one direction, it is said to be a travelling wave; by contrast, a pair of superimposed periodic waves traveling in opposite directions makes a standing wave. In a standing wave, the amplitude of vibration has nulls at some positions where the wave amplitude appears smaller or even zero.
There are two types of waves that are most commonly studied in classical physics: mechanical waves and electromagnetic waves. In a mechanical wave, stress and strain fields oscillate about a mechanical equilibrium. A mechanical wave is a local deformation (strain) in some physical medium that propagates from particle to particle by creating local stresses that cause strain in neighboring particles too. For example, sound waves are variations of the local pressure and particle motion that propagate through the medium. Other examples of mechanical waves are seismic waves, gravity waves, surface waves and string vibrations. In an electromagnetic wave (such as light), coupling between the electric and magnetic fields sustains propagation of waves involving these fields according to Maxwell's equations. Electromagnetic waves can travel through a vacuum and through some dielectric media (at wavelengths where they are considered transparent). Electromagnetic waves, as determined by their frequencies (or wavelengths), have more specific designations including radio waves, infrared radiation, terahertz waves, visible light, ultraviolet radiation, X-rays and gamma rays.
Other types of waves include gravitational waves, which are disturbances in spacetime that propagate according to general relativity; heat diffusion waves; plasma waves that combine mechanical deformations and electromagnetic fields; reaction–diffusion waves, such as in the Belousov–Zhabotinsky reaction; and many more. Mechanical and electromagnetic waves transfer energy, momentum, and information, but they do not transfer particles in the medium. In mathematics and electronics waves are studied as signals. On the other hand, some waves have envelopes which do not move at all such as standing waves (which are fundamental to music) and hydraulic jumps.
A physical wave field is almost always confined to some finite region of space, called its domain. For example, the seismic waves generated by earthquakes are significant only in the interior and surface of the planet, so they can be ignored outside it. However, waves with infinite domain, that extend over the whole space, are commonly studied in mathematics, and are very valuable tools for understanding physical waves in finite domains.
A plane wave is an important mathematical idealization where the disturbance is identical along any (infinite) plane normal to a specific direction of travel. Mathematically, the simplest wave is a sinusoidal plane wave in which at any point the field experiences simple harmonic motion at one frequency. In linear media, complicated waves can generally be decomposed as the sum of many sinusoidal plane waves having different directions of propagation and/or different frequencies. A plane wave is classified as a transverse wave if the field disturbance at each point is described by a vector perpendicular to the direction of propagation (also the direction of energy transfer); or longitudinal wave if those vectors are aligned with the propagation direction. Mechanical waves include both transverse and longitudinal waves; on the other hand electromagnetic plane waves are strictly transverse while sound waves in fluids (such as air) can only be longitudinal. That physical direction of an oscillating field relative to the propagation direction is also referred to as the wave's polarization, which can be an important attribute.
Mathematical description
Single waves
A wave can be described just like a field, namely as a function F(x, t), where x is a position and t is a time.
The value of x is a point of space, specifically in the region where the wave is defined. In mathematical terms, it is usually a vector in the Cartesian three-dimensional space ℝ^3. However, in many cases one can ignore one dimension, and let x be a point of the Cartesian plane ℝ^2. This is the case, for example, when studying vibrations of a drum skin. One may even restrict x to a point of the Cartesian line ℝ — that is, the set of real numbers. This is the case, for example, when studying vibrations in a violin string or recorder. The time t, on the other hand, is always assumed to be a scalar; that is, a real number.
The value of F(x, t) can be any physical quantity of interest assigned to the point x that may vary with time. For example, if F represents the vibrations inside an elastic solid, the value of F(x, t) is usually a vector that gives the current displacement from x of the material particles that would be at the point x in the absence of vibration. For an electromagnetic wave, the value of F can be the electric field vector E, or the magnetic field vector B, or any related quantity, such as the Poynting vector E × B. In fluid dynamics, the value of F(x, t) could be the velocity vector of the fluid at the point x, or any scalar property like pressure, temperature, or density. In a chemical reaction, F(x, t) could be the concentration of some substance in the neighborhood of point x of the reaction medium.
For any dimension d (1, 2, or 3), the wave's domain is then a subset D of ℝ^d, such that the function value F(x, t) is defined for any point x in D. For example, when describing the motion of a drum skin, one can consider D to be a disk (circle) on the plane ℝ^2 with center at the origin (0, 0), and let F(x, t) be the vertical displacement of the skin at the point x of D and at time t.
Superposition
Waves of the same type are often superposed and encountered simultaneously at a given point in space and time. The properties at that point are the sum of the properties of each component wave at that point. In general, the velocities are not the same, so the wave form will change over time and space.
Wave spectrum
Wave families
Sometimes one is interested in a single specific wave. More often, however, one needs to understand a large set of possible waves, like all the ways that a drum skin can vibrate after being struck once with a drum stick, or all the possible radar echoes one could get from an airplane that may be approaching an airport.
In some of those situations, one may describe such a family of waves by a function F(A, B, …; x, t) that depends on certain parameters A, B, …, besides x and t. Then one can obtain different waves — that is, different functions of x and t — by choosing different values for those parameters.
For example, the sound pressure inside a recorder that is playing a "pure" note is typically a standing wave, that can be written as
The parameter A defines the amplitude of the wave (that is, the maximum sound pressure in the bore, which is related to the loudness of the note); c is the speed of sound; L is the length of the bore; and n is a positive integer (1, 2, 3, ...) that specifies the number of nodes in the standing wave. (The position x should be measured from the mouthpiece, and the time t from any moment at which the pressure at the mouthpiece is maximum. The quantity λ is the wavelength of the emitted note, and c/λ is its frequency.) Many general properties of these waves can be inferred from this general equation, without choosing specific values for the parameters.
As another example, it may be that the vibrations of a drum skin after a single strike depend only on the distance r from the center of the skin to the strike point, and on the strength s of the strike. Then the vibration for all possible strikes can be described by a function F(r, s; x, t).
Sometimes the family of waves of interest has infinitely many parameters. For example, one may want to describe what happens to the temperature in a metal bar when it is initially heated at various temperatures at different points along its length, and then allowed to cool by itself in vacuum. In that case, instead of a scalar or vector, the parameter would have to be a function h such that h(x) is the initial temperature at each point x of the bar. Then the temperatures at later times can be expressed by a function F that depends on the function h (that is, a functional operator), so that the temperature at a later time is F(h; x, t).
Differential wave equations
Another way to describe and study a family of waves is to give a mathematical equation that, instead of explicitly giving the value of , only constrains how those values can change with time. Then the family of waves in question consists of all functions that satisfy those constraints — that is, all solutions of the equation.
This approach is extremely important in physics, because the constraints usually are a consequence of the physical processes that cause the wave to evolve. For example, if F(x, t) is the temperature inside a block of some homogeneous and isotropic solid material, its evolution is constrained by the partial differential equation
∂F/∂t (x, t) = α (∂²F/∂x1² + ∂²F/∂x2² + ∂²F/∂x3²)(x, t) + β Q(x, t),
where Q(x, t) is the heat that is being generated per unit of volume and time in the neighborhood of x at time t (for example, by chemical reactions happening there); x1, x2, x3 are the Cartesian coordinates of the point x; ∂F/∂t is the (first) derivative of F with respect to t; and ∂²F/∂xi² is the second derivative of F relative to xi. (The symbol "∂" is meant to signify that, in the derivative with respect to some variable, all other variables must be considered fixed.)
This equation can be derived from the laws of physics that govern the diffusion of heat in solid media. For that reason, it is called the heat equation in mathematics, even though it applies to many other physical quantities besides temperatures.
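A minimal one-dimensional illustration of this kind of evolution (a sketch only, with arbitrary material parameters and a source-free bar; it steps ∂u/∂t = α ∂²u/∂x² forward with an explicit finite-difference scheme):

import numpy as np

alpha = 1.0e-4                    # thermal diffusivity, m^2/s (example value)
L, nx = 1.0, 101                  # bar length (m) and number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha          # time step within the explicit stability limit

x = np.linspace(0.0, L, nx)
u = np.exp(-((x - 0.5) / 0.05)**2)        # initial temperature: a hot spot in the middle

for _ in range(2000):
    # Central-difference Laplacian; the end points are held fixed (not updated).
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print("peak temperature after diffusion:", round(float(u.max()), 4))   # the hot spot has spread and decayed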
For another example, we can describe all possible sounds echoing within a container of gas by a function F(x, t) that gives the pressure at a point x and time t within that container. If the gas was initially at uniform temperature and composition, the evolution of F is constrained by the formula
∂²F/∂t² (x, t) = α (∂²F/∂x1² + ∂²F/∂x2² + ∂²F/∂x3²)(x, t) + β P(x, t).
Here P(x, t) is some extra compression force that is being applied to the gas near x by some external process, such as a loudspeaker or piston right next to x.
This same differential equation describes the behavior of mechanical vibrations and electromagnetic fields in a homogeneous isotropic non-conducting solid. Note that this equation differs from that of heat flow only in that the left-hand side is ∂²F/∂t², the second derivative of F with respect to time, rather than the first derivative ∂F/∂t. Yet this small change makes a huge difference on the set of solutions F. This differential equation is called "the" wave equation in mathematics, even though it describes only one very special kind of waves.
Wave in elastic medium
Consider a traveling transverse wave (which may be a pulse) on a string (the medium). Consider the string to have a single spatial dimension. Consider this wave as traveling
in the x direction in space. For example, let the positive x direction be to the right, and the negative x direction be to the left.
with constant amplitude
with constant velocity v, where v is
independent of wavelength (no dispersion)
independent of amplitude (linear media, not nonlinear).
with constant waveform, or shape
This wave can then be described by the two-dimensional functions
u(x, t) = F(x − vt) (waveform F traveling to the right)
u(x, t) = G(x + vt) (waveform G traveling to the left)
or, more generally, by d'Alembert's formula:
u(x, t) = F(x − vt) + G(x + vt),
representing two component waveforms F and G traveling through the medium in opposite directions. A generalized representation of this wave can be obtained as the partial differential equation
(1/v²) ∂²u/∂t² = ∂²u/∂x².
General solutions are based upon Duhamel's principle.
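A brief numerical illustration of d'Alembert's form (a sketch, not from the original text; the pulse shapes and the speed are arbitrary choices): any profile F(x − vt) keeps its shape while translating to the right at speed v, while G(x + vt) translates to the left.

import numpy as np

v = 2.0                                        # wave speed (example value)
F = lambda s: np.exp(-s**2)                    # right-moving pulse profile
G = lambda s: 0.5 * np.exp(-(s - 5.0)**2)      # left-moving pulse profile

x = np.linspace(-10.0, 10.0, 2001)
for t in (0.0, 1.0, 2.0):
    right, left = F(x - v * t), G(x + v * t)   # u(x, t) = right + left
    print(f"t = {t}: right-moving peak at x = {x[np.argmax(right)]:+.2f}, "
          f"left-moving peak at x = {x[np.argmax(left)]:+.2f}")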
Wave forms
The form or shape of F in d'Alembert's formula involves the argument x − vt. Constant values of this argument correspond to constant values of F, and these constant values occur if x increases at the same rate that vt increases. That is, the wave shaped like the function F will move in the positive x-direction at velocity v (and G will propagate at the same speed in the negative x-direction).
In the case of a periodic function F with period λ, that is, F(x + λ − vt) = F(x − vt), the periodicity of F in space means that a snapshot of the wave at a given time t finds the wave varying periodically in space with period λ (the wavelength of the wave). In a similar fashion, this periodicity of F implies a periodicity in time as well: F(x − v(t + T)) = F(x − vt) provided vT = λ, so an observation of the wave at a fixed location x finds the wave undulating periodically in time with period T = λ/v.
Amplitude and modulation
The amplitude of a wave may be constant (in which case the wave is a c.w. or continuous wave), or may be modulated so as to vary with time and/or position. The outline of the variation in amplitude is called the envelope of the wave. Mathematically, the modulated wave can be written in the form:
where is the amplitude envelope of the wave, is the wavenumber and is the phase. If the group velocity (see below) is wavelength-independent, this equation can be simplified as:
showing that the envelope moves with the group velocity and retains its shape. Otherwise, in cases where the group velocity varies with wavelength, the pulse shape changes in a manner often described using an envelope equation.
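As a hedged sketch of the statement above, with A the amplitude envelope, k the wavenumber, ω the angular frequency and φ the phase as defined in the text, the modulated wave and its simplified form when the group velocity v_g is wavelength-independent can be written as:

    u(x,t) = A(x,t)\,\sin\bigl(kx - \omega t + \phi\bigr),
    \qquad
    A(x,t) \approx A\bigl(x - v_{g}t,\,0\bigr)\ \text{(wavelength-independent group velocity)} .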
Phase velocity and group velocity
There are two velocities that are associated with waves, the phase velocity and the group velocity.
Phase velocity is the rate at which the phase of the wave propagates in space: any given phase of the wave (for example, the crest) will appear to travel at the phase velocity. The phase velocity is given in terms of the wavelength (lambda) and period as
Group velocity is a property of waves that have a defined envelope; it measures how the overall shape of the waves' amplitudes (the modulation or envelope of the wave) propagates through space.
Special waves
Sine waves
Plane waves
A plane wave is a kind of wave whose value varies only in one spatial direction. That is, its value is constant on a plane that is perpendicular to that direction. Plane waves can be specified by a vector of unit length indicating the direction that the wave varies in, and a wave profile describing how the wave varies as a function of the displacement along that direction and time. Since the wave profile only depends on the position in the combination , any displacement in directions perpendicular to cannot affect the value of the field.
Plane waves are often used to model electromagnetic waves far from a source. For electromagnetic plane waves, the electric and magnetic fields themselves are transverse to the direction of propagation, and also perpendicular to each other.
Standing waves
A standing wave, also known as a stationary wave, is a wave whose envelope remains in a constant position. This phenomenon arises as a result of interference between two waves traveling in opposite directions.
The sum of two counter-propagating waves (of equal amplitude and frequency) creates a standing wave. Standing waves commonly arise when a boundary blocks further propagation of the wave, thus causing wave reflection, and therefore introducing a counter-propagating wave. For example, when a violin string is displaced, transverse waves propagate out to where the string is held in place at the bridge and the nut, where the waves are reflected back. At the bridge and nut, the two opposed waves are in antiphase and cancel each other, producing a node. Halfway between two nodes there is an antinode, where the two counter-propagating waves enhance each other maximally. There is no net propagation of energy over time.
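A worked example of the superposition described above (a reconstruction, using a right- and a left-travelling sine wave of equal amplitude A, wavenumber k and angular frequency ω):

    A\sin(kx - \omega t) + A\sin(kx + \omega t) = 2A\,\sin(kx)\,\cos(\omega t),

so the spatial factor sin(kx) fixes the nodes (kx = nπ) and the antinodes halfway between them, while the pattern as a whole merely oscillates in time without travelling.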
Solitary waves
A soliton or solitary wave is a self-reinforcing wave packet that maintains its shape while it propagates at a constant velocity. Solitons are caused by a cancellation of nonlinear and dispersive effects in the medium. (Dispersive effects are a property of certain systems where the speed of a wave depends on its frequency.) Solitons are the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems.
Physical properties
Propagation
Wave propagation is any of the ways in which waves travel. With respect to the direction of the oscillation relative to the propagation direction, we can distinguish between longitudinal waves and transverse waves.
Electromagnetic waves propagate in vacuum as well as in material media. Propagation of other wave types such as sound may occur only in a transmission medium.
Reflection of plane waves in a half-space
The propagation and reflection of plane waves, e.g. pressure waves (P waves) or shear waves (SH or SV waves), are phenomena that were first characterized within the field of classical seismology, and are now considered fundamental concepts in modern seismic tomography. The analytical solution to this problem exists and is well known. The frequency-domain solution can be obtained by first finding the Helmholtz decomposition of the displacement field, which is then substituted into the wave equation. From here, the plane wave eigenmodes can be calculated.
SV wave propagation
The analytical solution for an SV wave in a half-space indicates that, apart from special cases, the plane SV wave reflects back into the domain as both P and SV waves. The angle of the reflected SV wave is identical to that of the incident wave, while the angle of the reflected P wave is greater than that of the SV wave. For the same wave frequency, the SV wavelength is smaller than the P wavelength.
P wave propagation
Similar to the SV wave, an incident P wave, in general, reflects as both P and SV waves. There are some special cases where the behavior is different.
Wave velocity
Wave velocity is a general concept covering the various kinds of velocities associated with a wave, including its phase velocity and the speed at which its energy (and information) propagates. The phase velocity is given as:
where:
vp is the phase velocity (with SI unit m/s),
ω is the angular frequency (with SI unit rad/s),
k is the wavenumber (with SI unit rad/m).
The phase speed gives the speed at which a point of constant phase of the wave travels for a given frequency. The angular frequency ω cannot be chosen independently from the wavenumber k; the two are related through the dispersion relation:
In the special case , with c a constant, the waves are called non-dispersive, since all frequencies travel at the same phase speed c. For instance, electromagnetic waves in vacuum are non-dispersive. For other forms of the dispersion relation, we have dispersive waves. The dispersion relation depends on the medium through which the waves propagate and on the type of waves (for instance electromagnetic, sound or water waves).
The speed at which a resultant wave packet from a narrow range of frequencies will travel is called the group velocity and is determined from the gradient of the dispersion relation:
In almost all cases, a wave is mainly a movement of energy through a medium. Most often, the group velocity is the velocity at which the energy moves through this medium.
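The distinction between phase and group velocity can be checked numerically. The Python sketch below assumes the deep-water gravity-wave dispersion relation ω(k) = √(gk) (an illustrative choice, not taken from the text) and estimates v_p = ω/k and v_g = dω/dk by a finite difference; for this relation the group velocity comes out at half the phase velocity.

    import numpy as np

    g = 9.81                      # gravitational acceleration, m/s^2 (assumed)

    def omega(k):
        """Assumed deep-water dispersion relation omega(k) = sqrt(g*k)."""
        return np.sqrt(g * k)

    k0 = 2 * np.pi / 100.0        # wavenumber of an assumed 100 m wavelength
    v_phase = omega(k0) / k0      # phase velocity = omega / k
    dk = 1e-8
    v_group = (omega(k0 + dk) - omega(k0 - dk)) / (2 * dk)   # group velocity = d(omega)/dk

    print(f"phase velocity ≈ {v_phase:.2f} m/s")
    print(f"group velocity ≈ {v_group:.2f} m/s (about half the phase velocity)")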
Waves exhibit common behaviors under a number of standard situations, for example:
Transmission and media
Waves normally move in a straight line (that is, rectilinearly) through a transmission medium. Such media can be classified into one or more of the following categories:
A bounded medium if it is finite in extent, otherwise an unbounded medium
A linear medium if the amplitudes of different waves at any particular point in the medium can be added
A uniform medium or homogeneous medium if its physical properties are unchanged at different locations in space
An anisotropic medium if one or more of its physical properties differ in one or more directions
An isotropic medium if its physical properties are the same in all directions
Absorption
Waves are usually defined in media which allow most or all of a wave's energy to propagate without loss. However materials may be characterized as "lossy" if they remove energy from a wave, usually converting it into heat. This is termed "absorption." A material which absorbs a wave's energy, either in transmission or reflection, is characterized by a refractive index which is complex. The amount of absorption will generally depend on the frequency (wavelength) of the wave, which, for instance, explains why objects may appear colored.
Reflection
When a wave strikes a reflective surface, it changes direction, such that the angle made by the incident wave and line normal to the surface equals the angle made by the reflected wave and the same normal line.
Refraction
Refraction is the phenomenon of a wave changing its speed. Mathematically, this means that the size of the phase velocity changes. Typically, refraction occurs when a wave passes from one medium into another. The amount by which a wave is refracted by a material is given by the refractive index of the material. The directions of incidence and refraction are related to the refractive indices of the two materials by Snell's law.
Diffraction
A wave exhibits diffraction when it encounters an obstacle that bends the wave or when it spreads after emerging from an opening. Diffraction effects are more pronounced when the size of the obstacle or opening is comparable to the wavelength of the wave.
Interference
When waves in a linear medium (the usual case) cross each other in a region of space, they do not actually interact with each other, but continue on as if the other one were not present. However at any point in that region the field quantities describing those waves add according to the superposition principle. If the waves are of the same frequency in a fixed phase relationship, then there will generally be positions at which the two waves are in phase and their amplitudes add, and other positions where they are out of phase and their amplitudes (partially or fully) cancel. This is called an interference pattern.
Polarization
The phenomenon of polarization arises when wave motion can occur simultaneously in two orthogonal directions. Transverse waves can be polarized, for instance. When polarization is used as a descriptor without qualification, it usually refers to the special, simple case of linear polarization. A transverse wave is linearly polarized if it oscillates in only one direction or plane. In the case of linear polarization, it is often useful to add the relative orientation of that plane, perpendicular to the direction of travel, in which the oscillation occurs, such as "horizontal" for instance, if the plane of polarization is parallel to the ground. Electromagnetic waves propagating in free space, for instance, are transverse; they can be polarized by the use of a polarizing filter.
Longitudinal waves, such as sound waves, do not exhibit polarization. For these waves there is only one direction of oscillation, that is, along the direction of travel.
Dispersion
A wave undergoes dispersion when either the phase velocity or the group velocity depends on the wave frequency.
Dispersion is most easily seen by letting white light pass through a prism, the result of which is to produce the spectrum of colors of the rainbow. Isaac Newton performed experiments with light and prisms, presenting his findings in the Opticks (1704) that white light consists of several colors and that these colors cannot be decomposed any further.
Doppler effect
The Doppler effect or Doppler shift is the change in frequency of a wave in relation to an observer who is moving relative to the wave source. It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842.
Mechanical waves
A mechanical wave is an oscillation of matter, and therefore transfers energy through a medium. While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves.
Waves on strings
The transverse vibration of a string is a function of tension and inertia, and is constrained by the length of the string as the ends are fixed. This constraint limits the steady state modes that are possible, and thereby the frequencies.
The speed of a transverse wave traveling along a vibrating string (v) is directly proportional to the square root of the tension of the string (T) over the linear mass density (μ):
where the linear density μ is the mass per unit length of the string.
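A minimal numerical illustration of this relation, with assumed values for the tension and linear density (none are given in the text):

    import math

    T = 60.0       # string tension in newtons (assumed)
    mu = 0.0012    # linear mass density in kg/m (assumed)
    v = math.sqrt(T / mu)   # transverse wave speed v = sqrt(T / mu)
    print(f"wave speed on the string ≈ {v:.0f} m/s")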
Acoustic waves
Acoustic or sound waves are compression waves which travel as body waves at the speed given by:
or the square root of the adiabatic bulk modulus divided by the ambient density of the medium (see speed of sound).
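Likewise, with assumed round values for the adiabatic bulk modulus and density of air at room conditions, the stated expression gives roughly the familiar speed of sound:

    import math

    K = 1.42e5   # adiabatic bulk modulus of air in Pa (assumed)
    rho = 1.2    # ambient air density in kg/m^3 (assumed)
    c = math.sqrt(K / rho)   # speed of sound c = sqrt(K / rho)
    print(f"speed of sound ≈ {c:.0f} m/s")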
Water waves
Ripples on the surface of a pond are actually a combination of transverse and longitudinal waves; therefore, the points on the surface follow orbital paths.
Sound – a mechanical wave that propagates through gases, liquids, solids and plasmas;
Inertial waves, which occur in rotating fluids and are restored by the Coriolis effect;
Ocean surface waves, which are perturbations that propagate through water
Body waves
Body waves travel through the interior of the medium along paths controlled by the material properties in terms of density and modulus (stiffness). The density and modulus, in turn, vary according to temperature, composition, and material phase. This effect resembles the refraction of light waves. Two types of particle motion result in two types of body waves: Primary and Secondary waves.
Seismic waves
Seismic waves are waves of energy that travel through the Earth's layers, and are a result of earthquakes, volcanic eruptions, magma movement, large landslides and large man-made explosions that give out low-frequency acoustic energy. They include body waves—the primary (P waves) and secondary waves (S waves)—and surface waves, such as Rayleigh waves, Love waves, and Stoneley waves.
Shock waves
A shock wave is a type of propagating disturbance. When a wave moves faster than the local speed of sound in a fluid, it is a shock wave. Like an ordinary wave, a shock wave carries energy and can propagate through a medium; however, it is characterized by an abrupt, nearly discontinuous change in pressure, temperature and density of the medium.
Shear waves
Shear waves are body waves due to shear rigidity and inertia. They can only be transmitted through solids and to a lesser extent through liquids with a sufficiently high viscosity.
Other
Waves of traffic, that is, propagation of different densities of motor vehicles, and so forth, which can be modeled as kinematic waves
Metachronal wave refers to the appearance of a traveling wave produced by coordinated sequential actions.
Electromagnetic waves
An electromagnetic wave consists of two waves that are oscillations of the electric and magnetic fields. An electromagnetic wave travels in a direction that is at right angles to the oscillation direction of both fields. In the 19th century, James Clerk Maxwell showed that, in vacuum, the electric and magnetic fields both satisfy the wave equation with a speed equal to the speed of light. From this emerged the idea that light is an electromagnetic wave. The unification of light and electromagnetic waves was experimentally confirmed by Hertz at the end of the 1880s. Electromagnetic waves can have different frequencies (and thus wavelengths), and are classified accordingly in wavebands, such as radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The range of frequencies in each of these bands is continuous, and the limits of each band are mostly arbitrary, with the exception of visible light, which must be visible to the normal human eye.
Quantum mechanical waves
Schrödinger equation
The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions which can be used to describe the probability density of a particle.
Dirac equation
The Dirac equation is a relativistic wave equation detailing electromagnetic interactions. Dirac waves accounted for the fine details of the hydrogen spectrum in a completely rigorous way. The wave equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved, which was later experimentally confirmed. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-1/2 particles.
de Broglie waves
Louis de Broglie postulated that all particles with momentum have a wavelength
where h is the Planck constant, and p is the magnitude of the momentum of the particle. This hypothesis was at the basis of quantum mechanics. Nowadays, this wavelength is called the de Broglie wavelength. For example, the electrons in a CRT display have a de Broglie wavelength of about 10⁻¹³ m.
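A short sketch of the wavelength formula λ = h/p for a non-relativistic electron; the speed used here is an assumed illustrative value, not one quoted in the text.

    h = 6.626e-34        # Planck constant, J*s
    m_e = 9.109e-31      # electron mass, kg
    v = 1.0e7            # assumed electron speed, m/s (non-relativistic)
    p = m_e * v          # momentum magnitude
    lam = h / p          # de Broglie wavelength
    print(f"de Broglie wavelength ≈ {lam:.2e} m")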
A wave representing such a particle traveling in the k-direction is expressed by the wave function as follows:
where the wavelength is determined by the wave vector k as:
and the momentum by:
However, a wave like this with definite wavelength is not localized in space, and so cannot represent a particle localized in space. To localize a particle, de Broglie proposed a superposition of different wavelengths ranging around a central value in a wave packet, a waveform often used in quantum mechanics to describe the wave function of a particle. In a wave packet, the wavelength of the particle is not precise, and the local wavelength deviates on either side of the main wavelength value.
In representing the wave function of a localized particle, the wave packet is often taken to have a Gaussian shape and is called a Gaussian wave packet. Gaussian wave packets also are used to analyze water waves.
For example, a Gaussian wavefunction ψ might take the form:
at some initial time t = 0, where the central wavelength is related to the central wave vector k0 as λ0 = 2π / k0. It is well known from the theory of Fourier analysis, or from the Heisenberg uncertainty principle (in the case of quantum mechanics) that a narrow range of wavelengths is necessary to produce a localized wave packet, and the more localized the envelope, the larger the spread in required wavelengths. The Fourier transform of a Gaussian is itself a Gaussian. Given the Gaussian:
the Fourier transform is:
The Gaussian in space therefore is made up of waves:
that is, a number of waves of wavelengths λ such that kλ = 2π.
The parameter σ decides the spatial spread of the Gaussian along the x-axis, while the Fourier transform shows a spread in wave vector k determined by 1/σ. That is, the smaller the extent in space, the larger the extent in k, and hence in λ = 2π/k.
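The explicit formulas are omitted above; one common convention, consistent with the description (a Gaussian envelope of width σ modulating a carrier of wave vector k₀, whose Fourier transform is again a Gaussian of width 1/σ), reads:

    \psi(x,0) \propto e^{-x^{2}/(2\sigma^{2})}\, e^{\,i k_{0} x},
    \qquad
    \tilde{\psi}(k) \propto e^{-\sigma^{2}(k - k_{0})^{2}/2},

so the product of the spatial spread and the wave-vector spread is of order one, as required by the uncertainty relation mentioned above.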
Gravity waves
Gravity waves are waves generated in a fluid medium or at the interface between two media when the force of gravity or buoyancy works to restore equilibrium. Surface waves on water are the most familiar example.
Gravitational waves
Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity; unlike gravity waves in fluids, they propagate through space itself.
The first observation of gravitational waves was announced on 11 February 2016.
See also
Index of wave articles
Waves in general
Parameters
Waveforms
Electromagnetic waves
In fluids
Airy wave theory, in fluid dynamics
Capillary wave, in fluid dynamics
Cnoidal wave, in fluid dynamics
Edge wave, a surface gravity wave fixed by refraction against a rigid boundary
Faraday wave, a type of wave in liquids
Gravity wave, in fluid dynamics
Internal wave, a wave within a fluid medium
Shock wave, in aerodynamics
Sound wave, a wave of sound through a medium such as air or water
Tidal wave, a scientifically incorrect name for a tsunami
Tollmien–Schlichting wave, in fluid dynamics
Wind wave
In quantum mechanics
In relativity
Other specific types of waves
Alfvén wave, in plasma physics
Atmospheric wave, a periodic disturbance in the fields of atmospheric variables
Fir wave, a forest configuration
Lamb waves, in solid materials
Rayleigh wave, surface acoustic waves that travel on solids
Spin wave, in magnetism
Spin density wave, in solid materials
Trojan wave packet, in particle science
Waves in plasmas, in plasma physics
Related topics
Absorption (electromagnetic radiation)
Antenna (radio)
Beat (acoustics)
Branched flow
Cymatics
Diffraction
Dispersion (water waves)
Doppler effect
Envelope detector
Fourier transform for computing periodicity in evenly spaced data
Group velocity
Harmonic
Huygens–Fresnel principle
Index of wave articles
Inertial wave
Least-squares spectral analysis for computing periodicity in unevenly spaced data
List of waves named after people
Phase velocity
Photon
Polarization (physics)
Propagation constant
Radio propagation
Ray (optics)
Reaction–diffusion system
Reflection (physics)
Refraction
Resonance
Ripple tank
Rogue wave
Scattering
Shallow water equations
Shive wave machine
Sound
Standing wave
Transmission medium
Velocity factor
Wave equation
Wave power
Wave turbulence
Wind wave
Wind wave formation
References
Sources
Crawford jr., Frank S. (1968). Waves (Berkeley Physics Course, Vol. 3), McGraw-Hill, Free online version
External links
The Feynman Lectures on Physics: Waves
Linear and nonlinear waves
Science Aid: Wave properties – Concise guide aimed at teens
"AT&T Archives: Similiarities of Wave Behavior" demonstrated by J.N. Shive of Bell Labs (video on YouTube)
Dissipation
In thermodynamics, dissipation is the result of an irreversible process that affects a thermodynamic system. In a dissipative process, energy (internal, bulk flow kinetic, or system potential) transforms from an initial form to a final form, where the capacity of the final form to do thermodynamic work is less than that of the initial form. For example, transfer of energy as heat is dissipative because it is a transfer of energy other than by thermodynamic work or by transfer of matter, and spreads previously concentrated energy. Following the second law of thermodynamics, in conduction and radiation from one body to another, the entropy varies with temperature (reduces the capacity of the combination of the two bodies to do work), but never decreases in an isolated system.
In mechanical engineering, dissipation is the irreversible conversion of mechanical energy into thermal energy with an associated increase in entropy.
Processes with defined local temperature produce entropy at a certain rate. The entropy production rate times local temperature gives the dissipated power. Important examples of irreversible processes are: heat flow through a thermal resistance, fluid flow through a flow resistance, diffusion (mixing), chemical reactions, and electric current flow through an electrical resistance (Joule heating).
Definition
Dissipative thermodynamic processes are essentially irreversible because they produce entropy. Planck regarded friction as the prime example of an irreversible thermodynamic process. In a process in which the temperature is locally continuously defined, the local density of rate of entropy production times local temperature gives the local density of dissipated power.
A particular occurrence of a dissipative process cannot be described by a single individual Hamiltonian formalism. A dissipative process requires a collection of admissible individual Hamiltonian descriptions, and exactly which one describes the actual occurrence of the process of interest is unknown. This includes friction and hammering, and all similar forces that result in decoherence of energy, that is, conversion of a coherent or directed energy flow into an undirected or more isotropic distribution of energy.
Energy
"The conversion of mechanical energy into heat is called energy dissipation." – François Roddier The term is also applied to the loss of energy due to generation of unwanted heat in electric and electronic circuits.
Computational physics
In computational physics, numerical dissipation (also known as "Numerical diffusion") refers to certain side-effects that may occur as a result of a numerical solution to a differential equation. When the pure advection equation, which is free of dissipation, is solved by a numerical approximation method, the energy of the initial wave may be reduced in a way analogous to a diffusional process. Such a method is said to contain 'dissipation'. In some cases, "artificial dissipation" is intentionally added to improve the numerical stability characteristics of the solution.
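A small Python sketch of the effect described above: the first-order upwind scheme applied to the pure advection equation u_t + a u_x = 0 damps an initially sharp pulse even though the exact equation conserves it. All grid and parameter choices here are illustrative assumptions.

    import numpy as np

    a, L, N = 1.0, 1.0, 200          # advection speed, domain length, grid points (assumed)
    dx = L / N
    dt = 0.5 * dx / a                # Courant number 0.5
    x = np.arange(N) * dx
    u = np.exp(-((x - 0.25) / 0.05) ** 2)      # initial Gaussian pulse
    e_start = np.sum(u ** 2) * dx

    for _ in range(400):             # first-order upwind update on a periodic domain
        u = u - a * dt / dx * (u - np.roll(u, 1))

    e_end = np.sum(u ** 2) * dx
    print(f"integral of u^2 before: {e_start:.4f}, after: {e_end:.4f}")
    # The decrease is purely numerical dissipation; the exact solution only translates the pulse.

Refining the grid or switching to a higher-order scheme reduces this artificial damping, which is the trade-off the paragraph above alludes to.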
Mathematics
A formal, mathematical definition of dissipation, as commonly used in the mathematical study of measure-preserving dynamical systems, is given in the article wandering set.
Examples
In hydraulic engineering
Dissipation is the process of converting mechanical energy of downward-flowing water into thermal and acoustical energy. Various devices are designed in stream beds to reduce the kinetic energy of flowing waters to reduce their erosive potential on banks and river bottoms. Very often, these devices look like small waterfalls or cascades, where water flows vertically or over riprap to lose some of its kinetic energy.
Irreversible processes
Important examples of irreversible processes are:
Heat flow through a thermal resistance
Fluid flow through a flow resistance
Diffusion (mixing)
Chemical reactions
Electrical current flow through an electrical resistance (Joule heating).
Waves or oscillations
Waves or oscillations lose energy over time, typically from friction or turbulence. In many cases, the "lost" energy raises the temperature of the system. For example, a wave that loses amplitude is said to dissipate. The precise nature of the effects depends on the nature of the wave: an atmospheric wave, for instance, may dissipate close to the surface due to friction with the land mass, and at higher levels due to radiative cooling.
History
The concept of dissipation was introduced in the field of thermodynamics by William Thomson (Lord Kelvin) in 1852. Lord Kelvin deduced that a subset of the above-mentioned irreversible dissipative processes will occur unless a process is governed by a "perfect thermodynamic engine". The processes that Lord Kelvin identified were friction, diffusion, conduction of heat and the absorption of light.
See also
Entropy production
General equation of heat transfer
Flood control
Principle of maximum entropy
Two-dimensional gas
References
Coriolis force
In physics, the Coriolis force is an inertial (or fictitious) force that acts on objects in motion within a frame of reference that rotates with respect to an inertial frame. In a reference frame with clockwise rotation, the force acts to the left of the motion of the object. In one with anticlockwise (or counterclockwise) rotation, the force acts to the right. Deflection of an object due to the Coriolis force is called the Coriolis effect. Though recognized previously by others, the mathematical expression for the Coriolis force appeared in an 1835 paper by French scientist Gaspard-Gustave de Coriolis, in connection with the theory of water wheels. Early in the 20th century, the term Coriolis force began to be used in connection with meteorology.
Newton's laws of motion describe the motion of an object in an inertial (non-accelerating) frame of reference. When Newton's laws are transformed to a rotating frame of reference, the Coriolis and centrifugal accelerations appear. When applied to objects with masses, the respective forces are proportional to their masses. The magnitude of the Coriolis force is proportional to the rotation rate, and the magnitude of the centrifugal force is proportional to the square of the rotation rate. The Coriolis force acts in a direction perpendicular to two quantities: the angular velocity of the rotating frame relative to the inertial frame and the velocity of the body relative to the rotating frame, and its magnitude is proportional to the object's speed in the rotating frame (more precisely, to the component of its velocity that is perpendicular to the axis of rotation). The centrifugal force acts outwards in the radial direction and is proportional to the distance of the body from the axis of the rotating frame. These additional forces are termed inertial forces, fictitious forces, or pseudo forces. By introducing these fictitious forces to a rotating frame of reference, Newton's laws of motion can be applied to the rotating system as though it were an inertial system; these forces are correction factors that are not required in a non-rotating system.
In popular (non-technical) usage of the term "Coriolis effect", the rotating reference frame implied is almost always the Earth. Because the Earth spins, Earth-bound observers need to account for the Coriolis force to correctly analyze the motion of objects. The Earth completes one rotation for each sidereal day, so for motions of everyday objects the Coriolis force is imperceptible; its effects become noticeable only for motions occurring over large distances and long periods of time, such as large-scale movement of air in the atmosphere or water in the ocean, or where high precision is important, such as artillery or missile trajectories. Such motions are constrained by the surface of the Earth, so only the horizontal component of the Coriolis force is generally important. This force causes moving objects on the surface of the Earth to be deflected to the right (with respect to the direction of travel) in the Northern Hemisphere and to the left in the Southern Hemisphere. The horizontal deflection effect is greater near the poles, since the effective rotation rate about a local vertical axis is largest there, and decreases to zero at the equator. Rather than flowing directly from areas of high pressure to low pressure, as they would in a non-rotating system, winds and currents tend to flow to the right of this direction north of the equator ("clockwise") and to the left of this direction south of it ("anticlockwise"). This effect is responsible for the rotation and thus formation of cyclones .
History
Italian scientist Giovanni Battista Riccioli and his assistant Francesco Maria Grimaldi described the effect in connection with artillery in the 1651 Almagestum Novum, writing that rotation of the Earth should cause a cannonball fired to the north to deflect to the east. In 1674, Claude François Milliet Dechales described in his Cursus seu Mundus Mathematicus how the rotation of the Earth should cause a deflection in the trajectories of both falling bodies and projectiles aimed toward one of the planet's poles. Riccioli, Grimaldi, and Dechales all described the effect as part of an argument against the heliocentric system of Copernicus. In other words, they argued that the Earth's rotation should create the effect, and so failure to detect the effect was evidence for an immobile Earth. The Coriolis acceleration equation was derived by Euler in 1749, and the effect was described in the tidal equations of Pierre-Simon Laplace in 1778.
Gaspard-Gustave Coriolis published a paper in 1835 on the energy yield of machines with rotating parts, such as waterwheels. That paper considered the supplementary forces that are detected in a rotating frame of reference. Coriolis divided these supplementary forces into two categories. The second category contained a force that arises from the cross product of the angular velocity of a coordinate system and the projection of a particle's velocity into a plane perpendicular to the system's axis of rotation. Coriolis referred to this force as the "compound centrifugal force" due to its analogies with the centrifugal force already considered in category one. The effect was known in the early 20th century as the "acceleration of Coriolis", and by 1920 as "Coriolis force".
In 1856, William Ferrel proposed the existence of a circulation cell in the mid-latitudes with air being deflected by the Coriolis force to create the prevailing westerly winds.
The understanding of the kinematics of how exactly the rotation of the Earth affects airflow was partial at first. Late in the 19th century, the full extent of the large scale interaction of pressure-gradient force and deflecting force that in the end causes air masses to move along isobars was understood.
Formula
In Newtonian mechanics, the equation of motion for an object in an inertial reference frame is:
where is the vector sum of the physical forces acting on the object, is the mass of the object, and is the acceleration of the object relative to the inertial reference frame.
Transforming this equation to a reference frame rotating about a fixed axis through the origin with angular velocity having variable rotation rate, the equation takes the form:
where the prime (') variables denote coordinates of the rotating reference frame (not a derivative) and:
is the vector sum of the physical forces acting on the object
is the angular velocity, of the rotating reference frame relative to the inertial frame
is the position vector of the object relative to the rotating reference frame
is the velocity of the object relative to the rotating reference frame
is the acceleration of the object relative to the rotating reference frame
The fictitious forces as they are perceived in the rotating frame act as additional forces that contribute to the apparent acceleration just like the real external forces. The fictitious force terms of the equation are, reading from left to right:
Euler force,
Coriolis force,
centrifugal force,
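The equation referred to in this section is omitted above; a standard form consistent with the symbol definitions given (a reconstruction, not a quotation from the source), with the Euler, Coriolis and centrifugal terms appearing in the order listed, is:

    m\,\mathbf{a}' = \mathbf{F}
      \;-\; m\,\frac{\mathrm{d}\boldsymbol{\Omega}}{\mathrm{d}t}\times\mathbf{r}'            % Euler force
      \;-\; 2m\,\boldsymbol{\Omega}\times\mathbf{v}'                                          % Coriolis force
      \;-\; m\,\boldsymbol{\Omega}\times\left(\boldsymbol{\Omega}\times\mathbf{r}'\right)     % centrifugal force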
As seen in these formulas the Euler and centrifugal forces depend on the position vector of the object, while the Coriolis force depends on the object's velocity as measured in the rotating reference frame. As expected, for a non-rotating inertial frame of reference the Coriolis force and all other fictitious forces disappear.
Direction of Coriolis force for simple cases
As the Coriolis force is proportional to a cross product of two vectors, it is perpendicular to both vectors, in this case the object's velocity and the frame's rotation vector. It therefore follows that:
if the velocity is parallel to the rotation axis, the Coriolis force is zero. For example, on Earth, this situation occurs for a body at the equator moving north or south relative to the Earth's surface. (At any latitude other than the equator, however, the north–south motion would have a component perpendicular to the rotation axis and a force specified by the inward or outward cases mentioned below).
if the velocity is straight inward to the axis, the Coriolis force is in the direction of local rotation. For example, on Earth, this situation occurs for a body at the equator falling downward, as in the Dechales illustration above, where the falling ball travels further to the east than does the tower. Note also that heading north in the northern hemisphere would have a velocity component toward the rotation axis, resulting in a Coriolis force to the east (more pronounced the further north one is).
if the velocity is straight outward from the axis, the Coriolis force is against the direction of local rotation. In the tower example, a ball launched upward would move toward the west.
if the velocity is in the direction of rotation, the Coriolis force is outward from the axis. For example, on Earth, this situation occurs for a body at the equator moving east relative to Earth's surface. It would move upward as seen by an observer on the surface. This effect (see Eötvös effect below) was discussed by Galileo Galilei in 1632 and by Riccioli in 1651.
if the velocity is against the direction of rotation, the Coriolis force is inward to the axis. For example, on Earth, this situation occurs for a body at the equator moving west, which would deflect downward as seen by an observer.
Intuitive explanation
For an intuitive explanation of the origin of the Coriolis force, consider an object, constrained to follow the Earth's surface and moving northward in the Northern Hemisphere. Viewed from outer space, the object does not appear to go due north, but has an eastward motion (it rotates around toward the right along with the surface of the Earth). The further north it travels, the smaller the "radius of its parallel (latitude)" (the minimum distance from the surface point to the axis of rotation, which is in a plane orthogonal to the axis), and so the slower the eastward motion of its surface. As the object moves north, to higher latitudes, it has a tendency to maintain the eastward speed it started with (rather than slowing down to match the reduced eastward speed of local objects on the Earth's surface), so it veers east (i.e. to the right of its initial motion).
Though not obvious from this example, which considers northward motion, the horizontal deflection occurs equally for objects moving eastward or westward (or in any other direction). However, the theory that the effect determines the rotation of draining water in a household bathtub, sink or toilet has been repeatedly disproven by modern-day scientists; the force is negligibly small compared to the many other influences on the rotation.
Length scales and the Rossby number
The time, space, and velocity scales are important in determining the importance of the Coriolis force. Whether rotation is important in a system can be determined by its Rossby number, which is the ratio of the velocity, U, of a system to the product of the Coriolis parameter, , and the length scale, L, of the motion:
Hence, it is the ratio of inertial to Coriolis forces; a small Rossby number indicates a system is strongly affected by Coriolis forces, and a large Rossby number indicates a system in which inertial forces dominate. For example, in tornadoes, the Rossby number is large, so in them the Coriolis force is negligible, and balance is between pressure and centrifugal forces. In low-pressure systems the Rossby number is low, as the centrifugal force is negligible; there, the balance is between Coriolis and pressure forces. In oceanic systems the Rossby number is often around 1, with all three forces comparable.
An atmospheric system moving at U = occupying a spatial distance of L = , has a Rossby number of approximately 0.1.
A baseball pitcher may throw the ball at U = for a distance of L = . The Rossby number in this case would be 32,000 (at latitude 31°47'46.382").
Baseball players don't care about which hemisphere they're playing in. However, an unguided missile obeys exactly the same physics as a baseball, but can travel far enough and be in the air long enough to experience the effect of Coriolis force. Long-range shells in the Northern Hemisphere landed close to, but to the right of, where they were aimed until this was noted. (Those fired in the Southern Hemisphere landed to the left.) In fact, it was this effect that first drew the attention of Coriolis himself.
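A brief sketch of the Rossby number Ro = U/(fL), with Coriolis parameter f = 2Ω sin(latitude). The speeds, length scales and latitude below are assumed illustrative values chosen to land near the figures quoted above; they are not taken from the text.

    import numpy as np

    OMEGA = 7.292e-5                      # Earth's rotation rate, rad/s

    def rossby(U, L, lat_deg):
        """Rossby number U / (f L) with Coriolis parameter f = 2*Omega*sin(lat)."""
        f = 2 * OMEGA * np.sin(np.radians(lat_deg))
        return U / (f * L)

    print(f"weather system:   Ro ≈ {rossby(10.0, 1000e3, 45.0):.2f}")   # roughly 0.1
    print(f"pitched baseball: Ro ≈ {rossby(45.0, 18.4, 31.8):,.0f}")    # tens of thousands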
Simple cases
Tossed ball on a rotating carousel
The figure illustrates a ball tossed from 12:00 o'clock toward the center of a counter-clockwise rotating carousel. On the left, the ball is seen by a stationary observer above the carousel, and the ball travels in a straight line to the center, while the ball-thrower rotates counter-clockwise with the carousel. On the right, the ball is seen by an observer rotating with the carousel, so the ball-thrower appears to stay at 12:00 o'clock. The figure shows how the trajectory of the ball as seen by the rotating observer can be constructed.
On the left, two arrows locate the ball relative to the ball-thrower. One of these arrows is from the thrower to the center of the carousel (providing the ball-thrower's line of sight), and the other points from the center of the carousel to the ball. (This arrow gets shorter as the ball approaches the center.) A shifted version of the two arrows is shown dotted.
On the right is shown this same dotted pair of arrows, but now the pair are rigidly rotated so the arrow corresponding to the line of sight of the ball-thrower toward the center of the carousel is aligned with 12:00 o'clock. The other arrow of the pair locates the ball relative to the center of the carousel, providing the position of the ball as seen by the rotating observer. By following this procedure for several positions, the trajectory in the rotating frame of reference is established as shown by the curved path in the right-hand panel.
The ball travels in the air, and there is no net force upon it. To the stationary observer, the ball follows a straight-line path, so there is no problem squaring this trajectory with zero net force. However, the rotating observer sees a curved path. Kinematics insists that a force (pushing to the right of the instantaneous direction of travel for a counter-clockwise rotation) must be present to cause this curvature, so the rotating observer is forced to invoke a combination of centrifugal and Coriolis forces to provide the net force required to cause the curved trajectory.
Bounced ball
The figure describes a more complex situation where the tossed ball on a turntable bounces off the edge of the carousel and then returns to the tosser, who catches the ball. The effect of Coriolis force on its trajectory is shown again as seen by two observers: an observer (referred to as the "camera") that rotates with the carousel, and an inertial observer. The figure shows a bird's-eye view based upon the same ball speed on forward and return paths. Within each circle, plotted dots show the same time points. In the left panel, from the camera's viewpoint at the center of rotation, the tosser (smiley face) and the rail both are at fixed locations, and the ball makes a very considerable arc on its travel toward the rail, and takes a more direct route on the way back. From the ball tosser's viewpoint, the ball seems to return more quickly than it went (because the tosser is rotating toward the ball on the return flight).
On the carousel, instead of tossing the ball straight at a rail to bounce back, the tosser must throw the ball toward the right of the target and the ball then seems to the camera to bear continuously to the left of its direction of travel to hit the rail (left because the carousel is turning clockwise). The ball appears to bear to the left from direction of travel on both inward and return trajectories. The curved path demands this observer to recognize a leftward net force on the ball. (This force is "fictitious" because it disappears for a stationary observer, as is discussed shortly.) For some angles of launch, a path has portions where the trajectory is approximately radial, and Coriolis force is primarily responsible for the apparent deflection of the ball (centrifugal force is radial from the center of rotation, and causes little deflection on these segments). When a path curves away from radial, however, centrifugal force contributes significantly to deflection.
The ball's path through the air is straight when viewed by observers standing on the ground (right panel). In the right panel (stationary observer), the ball tosser (smiley face) is at 12 o'clock and the rail the ball bounces from is at position 1. From the inertial viewer's standpoint, positions 1, 2, and 3 are occupied in sequence. At position 2, the ball strikes the rail, and at position 3, the ball returns to the tosser. Straight-line paths are followed because the ball is in free flight, so this observer requires that no net force is applied.
Applied to the Earth
The acceleration affecting the motion of air "sliding" over the Earth's surface is the horizontal component of the Coriolis term
This component is orthogonal to the velocity over the Earth's surface and is given by the expression
where
is the spin rate of the Earth
is the latitude, positive in the northern hemisphere and negative in the southern hemisphere
In the northern hemisphere, where the latitude is positive, this acceleration, as viewed from above, is to the right of the direction of motion. Conversely, it is to the left in the southern hemisphere.
Rotating sphere
Consider a location with latitude φ on a sphere that is rotating around the north–south axis. A local coordinate system is set up with the x axis horizontally due east, the y axis horizontally due north and the z axis vertically upwards. The rotation vector, velocity of movement and Coriolis acceleration expressed in this local coordinate system (listing components in the order east (e), north (n) and upward (u)) are:
When considering atmospheric or oceanic dynamics, the vertical velocity is small, and the vertical component of the Coriolis acceleration is small compared with the acceleration due to gravity (g, approximately near Earth's surface). For such cases, only the horizontal (east and north) components matter. The restriction of the above to the horizontal plane is (setting vu = 0):
where is called the Coriolis parameter.
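Written out explicitly (a reconstruction consistent with the east/north/up component convention described above, with φ the latitude and Ω the rotation rate; the document's own formulas are omitted):

    \boldsymbol{\Omega} = \Omega\,(0,\ \cos\varphi,\ \sin\varphi),\qquad
    \mathbf{v} = (v_e,\ v_n,\ v_u),
    \mathbf{a}_C = -2\,\boldsymbol{\Omega}\times\mathbf{v}
      = \bigl(\,2\Omega\,(v_n\sin\varphi - v_u\cos\varphi),\; -2\Omega\,v_e\sin\varphi,\; 2\Omega\,v_e\cos\varphi\,\bigr),
    \qquad
    \text{horizontal part } (v_u = 0):\quad a_e = f\,v_n,\quad a_n = -f\,v_e,\quad f = 2\Omega\sin\varphi .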
By setting vn = 0, it can be seen immediately that (for positive φ and ω) a movement due east results in an acceleration due south; similarly, setting ve = 0, it is seen that a movement due north results in an acceleration due east. In general, observed horizontally, looking along the direction of the movement causing the acceleration, the acceleration always is turned 90° to the right (for positive φ) and of the same size regardless of the horizontal orientation.
In the case of equatorial motion, setting φ = 0° yields:
Ω in this case is parallel to the north axis.
Accordingly, an eastward motion (that is, in the same direction as the rotation of the sphere) provides an upward acceleration known as the Eötvös effect, and an upward motion produces an acceleration due west.
Meteorology and oceanography
Perhaps the most important impact of the Coriolis effect is in the large-scale dynamics of the oceans and the atmosphere. In meteorology and oceanography, it is convenient to postulate a rotating frame of reference wherein the Earth is stationary. In accommodation of that provisional postulation, the centrifugal and Coriolis forces are introduced. Their relative importance is determined by the applicable Rossby numbers. Tornadoes have high Rossby numbers, so, while tornado-associated centrifugal forces are quite substantial, Coriolis forces associated with tornadoes are for practical purposes negligible.
Because surface ocean currents are driven by the movement of wind over the water's surface, the Coriolis force also affects the movement of ocean currents and cyclones. Many of the ocean's largest currents circulate around warm, high-pressure areas called gyres. Though the circulation is not as significant as that in the air, the deflection caused by the Coriolis effect is what creates the spiralling pattern in these gyres. The spiralling wind pattern helps a hurricane form. The stronger the force from the Coriolis effect, the faster the wind spins and picks up additional energy, increasing the strength of the hurricane.
Air within high-pressure systems rotates in a direction such that the Coriolis force is directed radially inwards, and nearly balanced by the outwardly radial pressure gradient. As a result, air travels clockwise around high pressure in the Northern Hemisphere and anticlockwise in the Southern Hemisphere. Air around low-pressure rotates in the opposite direction, so that the Coriolis force is directed radially outward and nearly balances an inwardly radial pressure gradient.
Flow around a low-pressure area
If a low-pressure area forms in the atmosphere, air tends to flow in towards it, but is deflected perpendicular to its velocity by the Coriolis force. A system of equilibrium can then establish itself creating circular movement, or a cyclonic flow. Because the Rossby number is low, the force balance is largely between the pressure-gradient force acting towards the low-pressure area and the Coriolis force acting away from the center of the low pressure.
Instead of flowing down the gradient, large scale motions in the atmosphere and ocean tend to occur perpendicular to the pressure gradient. This is known as geostrophic flow. On a non-rotating planet, fluid would flow along the straightest possible line, quickly eliminating pressure gradients. The geostrophic balance is thus very different from the case of "inertial motions" (see below), which explains why mid-latitude cyclones are larger by an order of magnitude than inertial circle flow would be.
This pattern of deflection, and the direction of movement, is called Buys-Ballot's law. In the atmosphere, the pattern of flow is called a cyclone. In the Northern Hemisphere the direction of movement around a low-pressure area is anticlockwise. In the Southern Hemisphere, the direction of movement is clockwise because the rotational dynamics is a mirror image there. At high altitudes, outward-spreading air rotates in the opposite direction. Cyclones rarely form along the equator due to the weak Coriolis effect present in this region.
Inertial circles
An air or water mass moving with speed subject only to the Coriolis force travels in a circular trajectory called an inertial circle. Since the force is directed at right angles to the motion of the particle, it moves with a constant speed around a circle whose radius is given by:
where is the Coriolis parameter , introduced above (where is the latitude). The time taken for the mass to complete a full circle is therefore . The Coriolis parameter typically has a mid-latitude value of about 10⁻⁴ s⁻¹; hence for a typical atmospheric speed of , the radius is with a period of about 17 hours. For an ocean current with a typical speed of , the radius of an inertial circle is . These inertial circles are clockwise in the northern hemisphere (where trajectories are bent to the right) and anticlockwise in the southern hemisphere.
If the rotating system is a parabolic turntable, then is constant and the trajectories are exact circles. On a rotating planet, varies with latitude and the paths of particles do not form exact circles. Since the parameter varies as the sine of the latitude, the radius of the oscillations associated with a given speed are smallest at the poles (latitude of ±90°), and increase toward the equator.
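A short numerical check of the inertial-circle radius R = U/f and period 2π/f. The latitude and the two speeds below are assumed illustrative values (a typical wind speed and a typical ocean-current speed), not numbers taken from the text.

    import numpy as np

    OMEGA = 7.292e-5                          # Earth's rotation rate, rad/s
    f = 2 * OMEGA * np.sin(np.radians(45.0))  # Coriolis parameter at an assumed latitude of 45°

    for U in (10.0, 0.1):                     # assumed wind speed and ocean-current speed, m/s
        R = U / f                             # inertial-circle radius
        T = 2 * np.pi / f                     # time to complete one full circle
        print(f"U = {U:4} m/s: radius ≈ {R/1e3:6.1f} km, period ≈ {T/3600:.1f} h")

The 10 m/s case reproduces the period of about 17 hours mentioned above.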
Other terrestrial effects
The Coriolis effect strongly affects the large-scale oceanic and atmospheric circulation, leading to the formation of robust features like jet streams and western boundary currents. Such features are in geostrophic balance, meaning that the Coriolis and pressure gradient forces balance each other. Coriolis acceleration is also responsible for the propagation of many types of waves in the ocean and atmosphere, including Rossby waves and Kelvin waves. It is also instrumental in the so-called Ekman dynamics in the ocean, and in the establishment of the large-scale ocean flow pattern called the Sverdrup balance.
Eötvös effect
The practical impact of the "Coriolis effect" is mostly caused by the horizontal acceleration component produced by horizontal motion.
There are other components of the Coriolis effect. Westward-traveling objects are deflected downwards, while eastward-traveling objects are deflected upwards. This is known as the Eötvös effect. This aspect of the Coriolis effect is greatest near the equator. The force produced by the Eötvös effect is similar to the horizontal component, but the much larger vertical forces due to gravity and pressure suggest that it is unimportant in the hydrostatic equilibrium. However, in the atmosphere, winds are associated with small deviations of pressure from the hydrostatic equilibrium. In the tropical atmosphere, the order of magnitude of the pressure deviations is so small that the contribution of the Eötvös effect to the pressure deviations is considerable.
In addition, objects traveling upwards (i.e. out) or downwards (i.e. in) are deflected to the west or east respectively. This effect is also the greatest near the equator. Since vertical movement is usually of limited extent and duration, the size of the effect is smaller and requires precise instruments to detect. For example, idealized numerical modeling studies suggest that this effect can directly affect tropical large-scale wind field by roughly 10% given long-duration (2 weeks or more) heating or cooling in the atmosphere. Moreover, in the case of large changes of momentum, such as a spacecraft being launched into orbit, the effect becomes significant. The fastest and most fuel-efficient path to orbit is a launch from the equator that curves to a directly eastward heading.
Intuitive example
Imagine a train that travels along a frictionless railway line around the equator. Assume that, when in motion, it moves at the necessary speed to complete a trip around the world in one day (465 m/s). The Coriolis effect can be considered in three cases: when the train travels west, when it is at rest, and when it travels east. In each case, the Coriolis effect can be calculated from the rotating frame of reference on Earth first, and then checked against a fixed inertial frame. The image below illustrates the three cases as viewed by an observer at rest in a (near) inertial frame from a fixed point above the North Pole along the Earth's axis of rotation; the train is denoted by a few red pixels, fixed at the left side in the leftmost picture, moving in the others.
The train travels toward the west: In that case, it moves against the direction of rotation. Therefore, on the Earth's rotating frame the Coriolis term is pointed inwards towards the axis of rotation (down). This additional downward force should cause the train to be heavier while moving in that direction. If one looks at this train from the fixed non-rotating frame on top of the center of the Earth, at that speed it remains stationary as the Earth spins beneath it. Hence, the only forces acting on it are gravity and the reaction from the track. This force is greater (by 0.34%) than the force that the passengers and the train experience when at rest (rotating along with Earth). This difference is what the Coriolis effect accounts for in the rotating frame of reference.
The train comes to a stop: From the point of view of the Earth's rotating frame, the velocity of the train is zero, thus the Coriolis force is also zero and the train and its passengers recover their usual weight. From the fixed inertial frame of reference above Earth, the train now rotates along with the rest of the Earth. 0.34% of the force of gravity provides the centripetal force needed to achieve the circular motion in that frame of reference. The remaining force, as measured by a scale, makes the train and passengers "lighter" than in the previous case.
The train travels east. In this case, because it moves in the direction of Earth's rotation, the Coriolis term is directed outward from the axis of rotation (up). This upward force makes the train seem lighter still than when at rest. From the fixed inertial frame of reference above Earth, the train traveling east now rotates at twice the rate it did when at rest, so the amount of centripetal force needed to cause that circular path increases, leaving less force from gravity to act on the track. This is what the Coriolis term accounts for in the previous paragraph. As a final check, one can imagine a frame of reference rotating along with the train. Such a frame would be rotating at twice the angular velocity of Earth's rotating frame. The resulting centrifugal force component for that imaginary frame would be greater. Since the train and its passengers are at rest, that would be the only component in that frame, explaining again why the train and the passengers are lighter than in the previous two cases.
This also explains why high-speed projectiles that travel west are deflected down, and those that travel east are deflected up. This vertical component of the Coriolis effect is called the Eötvös effect.
The above example can be used to explain why the Eötvös effect starts diminishing when an object is traveling westward as its tangential speed increases above Earth's rotation (465 m/s). If the westward train in the above example increases speed, part of the force of gravity that pushes against the track accounts for the centripetal force needed to keep it in circular motion on the inertial frame. Once the train doubles its westward speed at that centripetal force becomes equal to the force the train experiences when it stops. From the inertial frame, in both cases it rotates at the same speed but in the opposite directions. Thus, the force is the same cancelling completely the Eötvös effect. Any object that moves westward at a speed above experiences an upward force instead. In the figure, the Eötvös effect is illustrated for a object on the train at different speeds. The parabolic shape is because the centripetal force is proportional to the square of the tangential speed. On the inertial frame, the bottom of the parabola is centered at the origin. The offset is because this argument uses the Earth's rotating frame of reference. The graph shows that the Eötvös effect is not symmetrical, and that the resulting downward force experienced by an object that travels west at high velocity is less than the resulting upward force when it travels east at the same speed.
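The train example can be checked from the inertial frame with a few lines of Python. The 465 m/s figure and the qualitative conclusions come from the text; the equatorial radius and surface gravity are assumed round values, and the function returns the vertical acceleration a scale on the track would register per unit mass.

    OMEGA = 7.292e-5      # Earth's rotation rate, rad/s
    R = 6.378e6           # equatorial radius, m (assumed)
    g = 9.81              # surface gravitational acceleration, m/s^2 (assumed)

    def apparent_weight_per_kg(u_east):
        """Inertial-frame view at the equator: part of gravity supplies the centripetal force."""
        v_total = OMEGA * R + u_east          # total eastward speed in the inertial frame
        return g - v_total ** 2 / R           # what remains to press the train onto the track

    for u in (-930.0, -465.0, 0.0, 465.0):    # westward speeds are negative
        print(f"u = {u:+7.1f} m/s  ->  {apparent_weight_per_kg(u):.4f} N/kg")

The westward 465 m/s case recovers essentially the full g, the eastward case comes out lightest, and doubling the westward speed reproduces the at-rest value, matching the description above.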
Draining in bathtubs and toilets
Contrary to popular misconception, bathtubs, toilets, and other water receptacles do not drain in opposite directions in the Northern and Southern Hemispheres. This is because the magnitude of the Coriolis force is negligible at this scale. Forces determined by the initial conditions of the water (e.g. the geometry of the drain, the geometry of the receptacle, preexisting momentum of the water, etc.) are likely to be orders of magnitude greater than the Coriolis force and hence will determine the direction of water rotation, if any. For example, identical toilets flushed in both hemispheres drain in the same direction, and this direction is determined mostly by the shape of the toilet bowl.
Under real-world conditions, the Coriolis force does not influence the direction of water flow perceptibly. Only if the water is so still that the effective rotation rate of the Earth is faster than that of the water relative to its container, and if externally applied torques (such as might be caused by flow over an uneven bottom surface) are small enough, can the Coriolis effect determine the direction of the vortex. Without such careful preparation, the Coriolis effect will be much smaller than various other influences on drain direction, such as any residual rotation of the water and the geometry of the container.
Laboratory testing of draining water under atypical conditions
In 1962, Ascher Shapiro performed an experiment at MIT to test the Coriolis force on a large basin of water, with a small wooden cross above the plug hole to display the direction of rotation, covering it and waiting for at least 24 hours for the water to settle. Under these precise laboratory conditions, he demonstrated the effect and consistent counterclockwise rotation. The experiment required extreme precision, since the acceleration due to the Coriolis effect is only a tiny fraction of that of gravity. The vortex was measured by a cross made of two slivers of wood pinned above the draining hole. The basin took 20 minutes to drain, and the cross started turning only after about 15 minutes; by the end it was turning at a rate of one rotation every 3 to 4 seconds.
He reported that,
Lloyd Trefethen reported clockwise rotation in the Southern Hemisphere at the University of Sydney in five tests with settling times of 18 h or more.
Ballistic trajectories
The Coriolis force is important in external ballistics for calculating the trajectories of very long-range artillery shells. The most famous historical example was the Paris gun, used by the Germans during World War I to bombard Paris from a range of about 120 km. The Coriolis force minutely changes the trajectory of a bullet, affecting accuracy at extremely long distances. It is adjusted for by accurate long-distance shooters, such as snipers. At the latitude of Sacramento, California, a northward shot would be deflected to the right. There is also a vertical component, explained in the Eötvös effect section above, which causes westward shots to hit low, and eastward shots to hit high.
The effects of the Coriolis force on ballistic trajectories should not be confused with the curvature of the paths of missiles, satellites, and similar objects when the paths are plotted on two-dimensional (flat) maps, such as the Mercator projection. The projections of the three-dimensional curved surface of the Earth to a two-dimensional surface (the map) necessarily results in distorted features. The apparent curvature of the path is a consequence of the sphericity of the Earth and would occur even in a non-rotating frame.
The Coriolis force on a moving projectile depends on velocity components in all three directions, latitude, and azimuth. The directions are typically downrange (the direction that the gun is initially pointing), vertical, and cross-range.
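One way to write these acceleration components explicitly is to evaluate $\vec a_C = -2\,\vec\Omega \times \vec v$ in a right-handed (down-range, vertical, cross-range) frame; under that assumption, and with the symbols defined in the list that follows, a sketch of the result is

$$
\begin{aligned}
A_X &= -2\Omega\left(V_Y \cos L \sin Az + V_Z \sin L\right),\\
A_Y &= 2\Omega\left(V_X \cos L \sin Az + V_Z \cos L \cos Az\right),\\
A_Z &= 2\Omega\left(V_X \sin L - V_Y \cos L \cos Az\right),
\end{aligned}
$$

With these signs, a shot fired due north ($Az = 0$) in the Northern Hemisphere is deflected to the right ($A_Z > 0$), and an eastward shot ($Az = 90°$) receives an upward acceleration, matching the Eötvös effect discussed earlier.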
where
$A_X$, down-range acceleration.
$A_Y$, vertical acceleration with positive indicating acceleration upward.
$A_Z$, cross-range acceleration with positive indicating acceleration to the right.
$V_X$, down-range velocity.
$V_Y$, vertical velocity with positive indicating upward.
$V_Z$, cross-range velocity with positive indicating velocity to the right.
$\Omega$ = 0.00007292 rad/sec, angular velocity of the Earth (based on a sidereal day).
$L$, latitude with positive indicating Northern Hemisphere.
$Az$, azimuth measured clockwise from due North.
Visualization of the Coriolis effect
To demonstrate the Coriolis effect, a parabolic turntable can be used.
On a flat turntable, the inertia of a co-rotating object forces it off the edge. However, if the turntable surface has the correct paraboloid (parabolic bowl) shape (see the figure) and rotates at the corresponding rate, the force components shown in the figure make the component of gravity tangential to the bowl surface exactly equal to the centripetal force necessary to keep the object rotating at its velocity and radius of curvature (assuming no friction). (See banked turn.) This carefully contoured surface allows the Coriolis force to be displayed in isolation.
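The required shape can be stated explicitly (a standard result, with $h(r)$ the surface height at radius $r$, $\omega$ the turntable's angular velocity, and $g$ the gravitational acceleration): the tangential component of gravity balances the needed centripetal acceleration $\omega^2 r$ when the local slope satisfies $g\,dh/dr = \omega^2 r$, so

$$h(r) = \frac{\omega^{2} r^{2}}{2g}.$$

This is the paraboloid referred to above; the dish and the rotation rate must be matched so that this relation holds.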
Discs cut from cylinders of dry ice can be used as pucks, moving around almost frictionlessly over the surface of the parabolic turntable, allowing effects of Coriolis on dynamic phenomena to show themselves. To get a view of the motions as seen from the reference frame rotating with the turntable, a video camera is attached to the turntable so as to co-rotate with the turntable, with results as shown in the figure. In the left panel of the figure, which is the viewpoint of a stationary observer, the gravitational force in the inertial frame pulling the object toward the center (bottom ) of the dish is proportional to the distance of the object from the center. A centripetal force of this form causes the elliptical motion. In the right panel, which shows the viewpoint of the rotating frame, the inward gravitational force in the rotating frame (the same force as in the inertial frame) is balanced by the outward centrifugal force (present only in the rotating frame). With these two forces balanced, in the rotating frame the only unbalanced force is Coriolis (also present only in the rotating frame), and the motion is an inertial circle. Analysis and observation of circular motion in the rotating frame is a simplification compared with analysis and observation of elliptical motion in the inertial frame.
Because this reference frame rotates several times a minute rather than only once a day like the Earth, the Coriolis acceleration produced is many times larger and so easier to observe on small time and spatial scales than is the Coriolis acceleration caused by the rotation of the Earth.
In a manner of speaking, the Earth is analogous to such a turntable. The rotation has caused the planet to settle on a spheroid shape, such that the normal force, the gravitational force and the centrifugal force exactly balance each other on a "horizontal" surface. (See equatorial bulge.)
The Coriolis effect caused by the rotation of the Earth can be seen indirectly through the motion of a Foucault pendulum.
Coriolis effects in other areas
Coriolis flow meter
A practical application of the Coriolis effect is the mass flow meter, an instrument that measures the mass flow rate and density of a fluid flowing through a tube. The operating principle involves inducing a vibration of the tube through which the fluid passes. The vibration, though not completely circular, provides the rotating reference frame that gives rise to the Coriolis effect. While specific methods vary according to the design of the flow meter, sensors monitor and analyze changes in frequency, phase shift, and amplitude of the vibrating flow tubes. The changes observed represent the mass flow rate and density of the fluid.
Molecular physics
In polyatomic molecules, the motion of the molecule can be described by a rigid body rotation and internal vibration of atoms about their equilibrium position. As a result of the vibrations of the atoms, the atoms are in motion relative to the rotating coordinate system of the molecule. Coriolis effects are therefore present, and make the atoms move in a direction perpendicular to the original oscillations. This leads to a mixing in molecular spectra between the rotational and vibrational levels, from which Coriolis coupling constants can be determined.
Gyroscopic precession
When an external torque is applied to a spinning gyroscope along an axis that is at right angles to the spin axis, the rim velocity that is associated with the spin becomes radially directed in relation to the external torque axis. This causes a torque-induced force to act on the rim in such a way as to tilt the gyroscope at right angles to the direction that the external torque would have tilted it. This tendency has the effect of keeping spinning bodies in their rotational frame.
Insect flight
Flies (Diptera) and some moths (Lepidoptera) exploit the Coriolis effect in flight with specialized appendages and organs that relay information about the angular velocity of their bodies. Coriolis forces resulting from linear motion of these appendages are detected within the rotating frame of reference of the insects' bodies. In the case of flies, their specialized appendages are dumbbell shaped organs located just behind their wings called "halteres".
The fly's halteres oscillate in a plane at the same beat frequency as the main wings so that any body rotation results in lateral deviation of the halteres from their plane of motion.
In moths, their antennae are known to be responsible for the sensing of Coriolis forces in the similar manner as with the halteres in flies. In both flies and moths, a collection of mechanosensors at the base of the appendage are sensitive to deviations at the beat frequency, correlating to rotation in the pitch and roll planes, and at twice the beat frequency, correlating to rotation in the yaw plane.
Lagrangian point stability
In astronomy, Lagrangian points are five positions in the orbital plane of two large orbiting bodies where a small object affected only by gravity can maintain a stable position relative to the two large bodies. The first three Lagrangian points (L1, L2, L3) lie along the line connecting the two large bodies, while the last two points (L4 and L5) each form an equilateral triangle with the two large bodies. The L4 and L5 points, although they correspond to maxima of the effective potential in the coordinate frame that rotates with the two large bodies, are stable due to the Coriolis effect. The stability can result in orbits around just L4 or L5, known as tadpole orbits, where trojans can be found. It can also result in orbits that encircle L3, L4, and L5, known as horseshoe orbits.
See also
Analytical mechanics
Applied mechanics
Classical mechanics
Dynamics (mechanics)
Earth's rotation
Equatorial Rossby wave
Frenet–Serret formulas
Gyroscope
Kinetics (physics)
Mechanics of planar particle motion
Reactive centrifugal force
Secondary flow
Statics
Uniform circular motion
Whirlpool
Physics and meteorology
Riccioli, G. B., 1651: Almagestum Novum, Bologna, pp. 425–427 (Original book [in Latin], scanned images of complete pages.)
Coriolis, G. G., 1832: "Mémoire sur le principe des forces vives dans les mouvements relatifs des machines." Journal de l'école Polytechnique, Vol 13, pp. 268–302. (Original article [in French], PDF file, 1.6 MB, scanned images of complete pages.)
Coriolis, G. G., 1835: "Mémoire sur les équations du mouvement relatif des systèmes de corps." Journal de l'école Polytechnique, Vol 15, pp. 142–154 (Original article [in French] PDF file, 400 KB, scanned images of complete pages.)
Gill, A. E. Atmosphere-Ocean dynamics, Academic Press, 1982.
Durran, D. R., 1993: Is the Coriolis force really responsible for the inertial oscillation?, Bull. Amer. Meteor. Soc., 74, pp. 2179–2184; Corrigenda. Bulletin of the American Meteorological Society, 75, p. 261
Durran, D. R., and S. K. Domonkos, 1996: An apparatus for demonstrating the inertial oscillation, Bulletin of the American Meteorological Society, 77, pp. 557–559.
Marion, Jerry B. 1970, Classical Dynamics of Particles and Systems, Academic Press.
Persson, A., 1998 How do we Understand the Coriolis Force? Bulletin of the American Meteorological Society 79, pp. 1373–1385.
Symon, Keith. 1971, Mechanics, Addison–Wesley
Akira Kageyama & Mamoru Hyodo: Eulerian derivation of the Coriolis force
James F. Price: A Coriolis tutorial Woods Hole Oceanographic Institute (2003)
Historical
Grattan-Guinness, I., Ed., 1994: Companion Encyclopedia of the History and Philosophy of the Mathematical Sciences. Vols. I and II. Routledge, 1840 pp.
Grattan-Guinness, I., 1997: The Fontana History of the Mathematical Sciences. Fontana, 817 pp.
Khrgian, A., 1970: Meteorology: A Historical Survey. Vol. 1. Keter Press, 387 pp.
Kuhn, T. S., 1977: Energy conservation as an example of simultaneous discovery. The Essential Tension, Selected Studies in Scientific Tradition and Change, University of Chicago Press, 66–104.
Kutzbach, G., 1979: The Thermal Theory of Cyclones. A History of Meteorological Thought in the Nineteenth Century. Amer. Meteor. Soc., 254 pp.
References
External links
The definition of the Coriolis effect from the Glossary of Meteorology
The Coriolis Effect — a conflict between common sense and mathematics PDF-file. 20 pages. A general discussion by Anders Persson of various aspects of the Coriolis effect, including Foucault's Pendulum and Taylor columns.
The coriolis effect in meteorology PDF-file. 5 pages. A detailed explanation by Mats Rosengren of how the gravitational force and the rotation of the Earth affect the atmospheric motion over the Earth surface. 2 figures
10 Coriolis Effect Videos and Games – from the About.com Weather Page
Coriolis Force – from ScienceWorld
Coriolis Effect and Drains An article from the NEWTON web site hosted by the Argonne National Laboratory.
Catalog of Coriolis videos
Coriolis Effect: A graphical animation, a visual Earth animation with precise explanation
An introduction to fluid dynamics SPINLab Educational Film explains the Coriolis effect with the aid of lab experiments
Do bathtubs drain counterclockwise in the Northern Hemisphere? by Cecil Adams.
Bad Coriolis. An article uncovering misinformation about the Coriolis effect. By Alistair B. Fraser, emeritus professor of meteorology at Pennsylvania State University
The Coriolis Effect: A (Fairly) Simple Explanation, an explanation for the layperson
Observe an animation of the Coriolis effect over Earth's surface
Animation clip showing scenes as viewed from both an inertial frame and a rotating frame of reference, visualizing the Coriolis and centrifugal forces.
Vincent Mallette The Coriolis Force @ INWIT
NASA notes
Interactive Coriolis Fountain lets you control rotation speed, droplet speed and frame of reference to explore the Coriolis effect.
Rotating Co-ordinating Systems, transformation from inertial systems
Classical mechanics
Force
Atmospheric dynamics
Physical phenomena
Fictitious forces
Rotation
Energy-based model
An energy-based model (EBM) (also called Canonical Ensemble Learning (CEL) or Learning via Canonical Ensemble (LCE)) is an application of the canonical ensemble formulation of statistical physics to problems of learning from data. The approach appears prominently in generative models (GMs).
EBMs provide a unified framework for many probabilistic and non-probabilistic approaches to such learning, particularly for training graphical and other structured models.
An EBM learns the characteristics of a target dataset and generates a similar but larger dataset. EBMs detect the latent variables of a dataset and generate new datasets with a similar distribution.
Energy-based generative neural networks are a class of generative models, which aim to learn explicit probability distributions of data in the form of energy-based models whose energy functions are parameterized by modern deep neural networks.
Boltzmann machines are a special form of energy-based models with a specific parametrization of the energy.
Description
For a given input $x$, the model describes an energy $E_\theta(x)$ such that the Boltzmann distribution $P_\theta(x) = \frac{\exp(-\beta E_\theta(x))}{Z(\theta)}$ is a probability (density) and typically $\beta = 1$.
Since the normalization constant $Z(\theta) := \int_x \exp(-\beta E_\theta(x))\,dx$, also known as the partition function, depends on all the Boltzmann factors of all possible inputs $x$, it cannot be easily computed or reliably estimated during training simply using standard maximum likelihood estimation.
However, for maximizing the likelihood during training, the gradient of the log likelihood of a single training example $x$ is given by, using the chain rule,
$$\partial_\theta \log\left(P_\theta(x)\right) = \mathbb{E}_{x' \sim P_\theta}\left[\partial_\theta E_\theta(x')\right] - \partial_\theta E_\theta(x).$$
The expectation in the above formula for the gradient can be approximately estimated by drawing samples $x'$ from the distribution $P_\theta$ using Markov chain Monte Carlo (MCMC).
Early energy-based models like the 2003 Boltzmann machine by Hinton estimated this expectation using a block Gibbs sampler. Newer approaches make use of more efficient Stochastic Gradient Langevin Dynamics (LD), drawing samples using
$$x'_{k+1} = x'_k - \frac{\alpha}{2}\,\frac{\partial E_\theta(x'_k)}{\partial x'_k} + \epsilon_k,$$
where $\epsilon_k \sim \mathcal{N}(0, \alpha)$. A replay buffer of past values $x'_k$ is used with LD to initialize the optimization module.
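A minimal sketch of the Langevin sampling step is given below (the energy function, step size, and iteration count here are hypothetical placeholders, not the settings of any particular published model):

```python
import numpy as np

def langevin_sample(grad_energy, x0, n_steps=100, alpha=0.01, seed=0):
    """Approximately sample from p(x) ∝ exp(-E(x)) with unadjusted Langevin dynamics.

    grad_energy(x): returns dE/dx at x (hypothetical user-supplied callable).
    x0: starting point, e.g. random noise or an entry from a replay buffer.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = np.sqrt(alpha) * rng.standard_normal(x.shape)
        x = x - 0.5 * alpha * grad_energy(x) + noise
    return x

# Toy example: E(x) = 0.5 * ||x||^2 (a standard Gaussian), so grad E(x) = x.
print(langevin_sample(lambda x: x, x0=np.zeros(2)))
```

In practice, the gradient of the energy with respect to the input is obtained by automatic differentiation of the neural network, and the starting point is drawn either from random noise or from the replay buffer mentioned above.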
The parameters of the neural network are, therefore, trained in a generative manner by MCMC-based maximum likelihood estimation, using the gradient formula above.
The learning process follows an "analysis by synthesis" scheme, where within each learning iteration, the algorithm samples the synthesized examples from the current model by a gradient-based MCMC method, e.g., Langevin dynamics or Hybrid Monte Carlo, and then updates the model parameters based on the difference between the training examples and the synthesized ones (see the gradient of the log likelihood above).
This process can be interpreted as an alternating mode seeking and mode shifting process, and also has an adversarial interpretation.
In the end, the model learns a function that associates low energies to correct values, and higher energies to incorrect values.
After training, given a converged energy model $E_\theta$, the Metropolis–Hastings algorithm can be used to draw new samples.
The acceptance probability is given by:
$$P_{\text{acc}}(x_i \to x^{*}) = \min\left(1, \frac{P_\theta(x^{*})}{P_\theta(x_i)}\right).$$
History
The term "energy-based models" was first coined in a 2003 JMLR paper where the authors defined a generalisation of independent components analysis to the overcomplete setting using EBMs.
Other early work on EBMs proposed models that represented energy as a composition of latent and observable variables.
Characteristics
EBMs demonstrate useful properties:
Simplicity and stability–The EBM is the only object that needs to be designed and trained. Separate networks need not be trained to ensure balance.
Adaptive computation time–An EBM can generate sharp, diverse samples or (more quickly) coarse, less diverse samples. Given infinite time, this procedure produces true samples.
Flexibility–In Variational Autoencoders (VAE) and flow-based models, the generator learns a map from a continuous space to a (possibly) discontinuous space containing different data modes. EBMs can learn to assign low energies to disjoint regions (multiple modes).
Adaptive generation–EBM generators are implicitly defined by the probability distribution, and automatically adapt as the distribution changes (without training), allowing EBMs to address domains where generator training is impractical, as well as minimizing mode collapse and avoiding spurious modes from out-of-distribution samples.
Compositionality–Individual models are unnormalized probability distributions, allowing models to be combined through product of experts or other hierarchical techniques.
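As a generic illustration of the compositionality property (not tied to any specific published model): two independently trained energy functions $E_1$ and $E_2$ can be combined by simply adding them, since

$$p(x) \propto e^{-E_1(x)}\,e^{-E_2(x)} = e^{-\left(E_1(x) + E_2(x)\right)},$$

so the product of experts is again an energy-based model, with energy $E_1 + E_2$ and a new (still unknown) normalization constant.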
Experimental results
On image datasets such as CIFAR-10 and ImageNet 32x32, an EBM model generated high-quality images relatively quickly. It supported combining features learned from one type of image for generating other types of images. It was able to generalize using out-of-distribution datasets, outperforming flow-based and autoregressive models. EBMs were relatively resistant to adversarial perturbations, behaving better than models explicitly trained against them with adversarial training for classification.
Applications
Target applications include natural language processing, robotics and computer vision.
The first energy-based generative neural network is the generative ConvNet proposed in 2016 for image patterns, where the neural network is a convolutional neural network. The model has been generalized to various domains to learn distributions of videos and 3D voxels. They have been made more effective in their variants. They have proven useful for data generation (e.g., image synthesis, video synthesis, 3D shape synthesis, etc.), data recovery (e.g., recovering videos with missing pixels or image frames, 3D super-resolution, etc.), and data reconstruction (e.g., image reconstruction and linear interpolation).
Alternatives
EBMs compete with techniques such as variational autoencoders (VAEs), generative adversarial networks (GANs) or normalizing flows.
Extensions
Joint energy-based models
Joint energy-based models (JEM), proposed in 2020 by Grathwohl et al., allow any classifier with softmax output to be interpreted as an energy-based model. The key observation is that such a classifier is trained to predict the conditional probability
$$p_\theta(y \mid x) = \frac{e^{\vec{f}_\theta(x)[y]}}{\sum_{y'} e^{\vec{f}_\theta(x)[y']}},$$
where $\vec{f}_\theta(x)[y]$ is the y-th index of the logits $\vec{f}_\theta(x)$ corresponding to class y.
Without any change to the logits it was proposed to reinterpret the logits to describe a joint probability density:
$$p_\theta(x, y) = \frac{e^{\vec{f}_\theta(x)[y]}}{Z(\theta)},$$
with unknown partition function $Z(\theta)$ and energy $E_\theta(x, y) = -\vec{f}_\theta(x)[y]$.
By marginalization, we obtain the unnormalized density
$$p_\theta(x) = \sum_y p_\theta(x, y) = \frac{\sum_y e^{\vec{f}_\theta(x)[y]}}{Z(\theta)},$$
therefore,
$$E_\theta(x) = -\log \sum_y e^{\vec{f}_\theta(x)[y]},$$
so that any classifier can be used to define an energy function $E_\theta(x)$.
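A minimal sketch of this reinterpretation (the logits below are made-up numbers standing in for a trained classifier's output; the point is only the $-\log\sum_y \exp(\cdot)$ step):

```python
import numpy as np

def jem_energy(logits):
    """Energy of an input implied by a softmax classifier: E(x) = -log sum_y exp(f(x)[y])."""
    m = np.max(logits)                            # subtract the max for numerical stability
    return -(m + np.log(np.sum(np.exp(logits - m))))

# Hypothetical logits for a 3-class classifier evaluated at some input x.
print(jem_energy(np.array([2.0, -1.0, 0.5])))     # lower energy ~ higher unnormalized density
```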
See also
Empirical likelihood
Posterior predictive distribution
Contrastive learning
Literature
Implicit Generation and Generalization in Energy-Based Models Yilun Du, Igor Mordatch https://arxiv.org/abs/1903.08689
Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One, Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky https://arxiv.org/abs/1912.03263
References
External links
Statistical models
Machine learning
Statistical mechanics
Hamiltonian mechanics
Planck relation
The Planck relation (referred to as Planck's energy–frequency relation, the Planck–Einstein relation, Planck equation, and Planck formula, though the latter might also refer to Planck's law) is a fundamental equation in quantum mechanics which states that the energy $E$ of a photon, known as photon energy, is proportional to its frequency $\nu$:
$$E = h\nu.$$
The constant of proportionality, $h$, is known as the Planck constant. Several equivalent forms of the relation exist, including in terms of angular frequency $\omega$:
$$E = \hbar\omega,$$
where $\hbar = h/2\pi$. Written using the symbol $f$ for frequency, the relation is
$$E = hf.$$
The relation accounts for the quantized nature of light and plays a key role in understanding phenomena such as the photoelectric effect and black-body radiation (where the related Planck postulate can be used to derive Planck's law).
Spectral forms
Light can be characterized using several spectral quantities, such as frequency $\nu$, wavelength $\lambda$, wavenumber $\tilde{\nu}$, and their angular equivalents (angular frequency $\omega$, angular wavelength, and angular wavenumber $k$). These quantities are related through
$$\nu = \frac{c}{\lambda} = c\tilde{\nu}, \qquad \omega = 2\pi\nu, \qquad k = \frac{2\pi}{\lambda} = \frac{\omega}{c},$$
so the Planck relation can take the following "standard" forms:
$$E = h\nu = \frac{hc}{\lambda} = hc\tilde{\nu},$$
as well as the following "angular" forms:
$$E = \hbar\omega = \hbar c k.$$
The standard forms make use of the Planck constant $h$. The angular forms make use of the reduced Planck constant $\hbar$. Here $c$ is the speed of light.
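As a quick numerical illustration of the standard form $E = hc/\lambda$ (the 532 nm wavelength is just an arbitrary example, typical of a green laser pointer):

```python
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt

wavelength = 532e-9     # 532 nm (green light), m
energy = h * c / wavelength
print(energy, "J  =", energy / eV, "eV")   # about 3.7e-19 J, i.e. roughly 2.3 eV
```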
de Broglie relation
The de Broglie relation, also known as de Broglie's momentum–wavelength relation, generalizes the Planck relation to matter waves. Louis de Broglie argued that if particles had a wave nature, the relation would also apply to them, and postulated that particles would have a wavelength equal to $\lambda = h/p$. Combining de Broglie's postulate with the Planck–Einstein relation leads to
$$p = h\tilde{\nu}$$
or
$$p = \hbar k.$$
The de Broglie relation is also often encountered in vector form
$$\mathbf{p} = \hbar\mathbf{k},$$
where $\mathbf{p}$ is the momentum vector, and $\mathbf{k}$ is the angular wave vector.
Bohr's frequency condition
Bohr's frequency condition states that the frequency $\nu$ of a photon absorbed or emitted during an electronic transition is related to the energy difference $\Delta E$ between the two energy levels involved in the transition:
$$\Delta E = h\nu.$$
This is a direct consequence of the Planck–Einstein relation.
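As a worked example (standard hydrogen-atom numbers, not taken from the text above): for the transition from $n = 2$ to $n = 1$ in hydrogen, $\Delta E = 13.6\ \text{eV}\times\left(1 - \tfrac{1}{4}\right) \approx 10.2\ \text{eV}$, so the emitted photon has frequency

$$\nu = \frac{\Delta E}{h} \approx \frac{10.2 \times 1.602\times10^{-19}\ \text{J}}{6.626\times10^{-34}\ \text{J·s}} \approx 2.47\times10^{15}\ \text{Hz},$$

corresponding to the ultraviolet Lyman-alpha line near 122 nm.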
See also
Compton wavelength
References
Cited bibliography
Cohen-Tannoudji, C., Diu, B., Laloë, F. (1973/1977). Quantum Mechanics, translated from the French by S.R. Hemley, N. Ostrowsky, D. Ostrowsky, second edition, volume 1, Wiley, New York.
French, A.P., Taylor, E.F. (1978). An Introduction to Quantum Physics, Van Nostrand Reinhold, London.
Griffiths, D.J. (1995). Introduction to Quantum Mechanics, Prentice Hall, Upper Saddle River, NJ.
Landé, A. (1951). Quantum Mechanics, Sir Isaac Pitman & Sons, London.
Landsberg, P.T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford, UK.
Messiah, A. (1958/1961). Quantum Mechanics, volume 1, translated from the French by G.M. Temmer, North-Holland, Amsterdam.
Schwinger, J. (2001). Quantum Mechanics: Symbolism of Atomic Measurements, edited by B.-G. Englert, Springer, Berlin.
van der Waerden, B.L. (1967). Sources of Quantum Mechanics, edited with a historical introduction by B.L. van der Waerden, North-Holland Publishing, Amsterdam.
Weinberg, S. (1995). The Quantum Theory of Fields, volume 1, Foundations, Cambridge University Press, Cambridge, UK.
Weinberg, S. (2013). Lectures on Quantum Mechanics, Cambridge University Press, Cambridge, UK.
Foundational quantum physics
Max Planck
Old quantum theory
Black hole information paradox
The black hole information paradox is a paradox that appears when the predictions of quantum mechanics and general relativity are combined. The theory of general relativity predicts the existence of black holes that are regions of spacetime from which nothing—not even light—can escape. In the 1970s, Stephen Hawking applied the semiclassical approach of quantum field theory in curved spacetime to such systems and found that an isolated black hole would emit a form of radiation (now called Hawking radiation in his honor). He also argued that the detailed form of the radiation would be independent of the initial state of the black hole, and depend only on its mass, electric charge and angular momentum.
The information paradox appears when one considers a process in which a black hole is formed through a physical process and then evaporates away entirely through Hawking radiation. Hawking's calculation suggests that the final state of radiation would retain information only about the total mass, electric charge and angular momentum of the initial state. Since many different states can have the same mass, charge and angular momentum, this suggests that many initial physical states could evolve into the same final state. Therefore, information about the details of the initial state would be permanently lost; however, this violates a core precept of both classical and quantum physics: that, in principle, the state of a system at one point in time should determine its state at any other time. Specifically, in quantum mechanics the state of the system is encoded by its wave function. The evolution of the wave function is determined by a unitary operator, and unitarity implies that the wave function at any instant of time can be used to determine the wave function either in the past or the future. In 1993, Don Page argued that if a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. This is called the Page curve.
It is now generally believed that information is preserved in black-hole evaporation. For many researchers, deriving the Page curve is synonymous with solving the black hole information puzzle. But views differ as to precisely how Hawking's original semiclassical calculation should be corrected. In recent years, several extensions of the original paradox have been explored. Taken together, these puzzles about black hole evaporation have implications for how gravity and quantum mechanics must be combined. The information paradox remains an active field of research in quantum gravity.
Relevant principles
In quantum mechanics, the evolution of the state is governed by the Schrödinger equation. The Schrödinger equation obeys two principles that are relevant to the paradox—quantum determinism, which means that given a present wave function, its future changes are uniquely determined by the evolution operator, and reversibility, which refers to the fact that the evolution operator has an inverse, meaning that the past wave functions are similarly unique. The combination of the two means that information must always be preserved. In this context "information" means all the details of the state, and the statement that information must be preserved means that details corresponding to an earlier time can always be reconstructed at a later time.
Mathematically, the Schrödinger equation implies that the wavefunction at a time t1 can be related to the wavefunction at a time t2 by means of a unitary operator.
Since the unitary operator is bijective, the wavefunction at t2 can be obtained from the wavefunction at t1 and vice versa.
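In symbols, with $U(t_2, t_1)$ denoting the evolution operator (a standard statement of the property just described),

$$|\psi(t_2)\rangle = U(t_2, t_1)\,|\psi(t_1)\rangle, \qquad U^\dagger U = \mathbb{1},$$

so that $|\psi(t_1)\rangle = U^\dagger(t_2, t_1)\,|\psi(t_2)\rangle$: the earlier state is recoverable from the later one.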
The reversibility of time evolution described above applies only at the microscopic level, since the wavefunction provides a complete description of the state. It should not be conflated with thermodynamic irreversibility. A process may appear irreversible if one keeps track only of the system's coarse-grained features and not of its microscopic details, as is usually done in thermodynamics. But at the microscopic level, the principles of quantum mechanics imply that every process is completely reversible.
Starting in the mid-1970s, Stephen Hawking and Jacob Bekenstein put forward theoretical arguments that suggested that black-hole evaporation loses information, and is therefore inconsistent with unitarity. Crucially, these arguments were meant to apply at the microscopic level and suggested that black-hole evaporation is not only thermodynamically but microscopically irreversible. This contradicts the principle of unitarity described above and leads to the information paradox. Since the paradox suggested that quantum mechanics would be violated by black-hole formation and evaporation, Hawking framed the paradox in terms of the "breakdown of predictability in gravitational collapse".
The arguments for microscopic irreversibility were backed by Hawking's calculation of the spectrum of radiation that isolated black holes emit. This calculation utilized the framework of general relativity and quantum field theory. The calculation of Hawking radiation is performed at the black hole horizon and does not account for the backreaction of spacetime geometry; for a large enough black hole the curvature at the horizon is small and therefore both these theories should be valid. Hawking relied on the no-hair theorem to arrive at the conclusion that radiation emitted by black holes would depend only on a few macroscopic parameters, such as the black hole's mass, charge, and spin, but not on the details of the initial state that led to the formation of the black hole. In addition, the argument for information loss relied on the causal structure of the black hole spacetime, which suggests that information in the interior should not affect any observation in the exterior, including observations performed on the radiation the black hole emits. If so, the region of spacetime outside the black hole would lose information about the state of the interior after black-hole evaporation, leading to the loss of information.
Today, some physicists believe that the holographic principle (specifically the AdS/CFT duality) demonstrates that Hawking's conclusion was incorrect, and that information is in fact preserved. Moreover, recent analyses indicate that in semiclassical gravity the information loss paradox cannot be formulated in a self-consistent manner due to the impossibility of simultaneously realizing all of the necessary assumptions required for its formulation.
Black hole evaporation
Hawking radiation
In 1973–1975, Stephen Hawking showed that black holes should slowly radiate away energy, and he later argued that this leads to a contradiction with unitarity. Hawking used the classical no-hair theorem to argue that the form of this radiation—called Hawking radiation—would be completely independent of the initial state of the star or matter that collapsed to form the black hole. He argued that the process of radiation would continue until the black hole had evaporated completely. At the end of this process, all the initial energy in the black hole would have been transferred to the radiation. But, according to Hawking's argument, the radiation would retain no information about the initial state and therefore information about the initial state would be lost.
More specifically, Hawking argued that the pattern of radiation emitted from the black hole would be random, with a probability distribution controlled only by the black hole's initial temperature, charge, and angular momentum, not by the initial state of the collapse. The state produced by such a probabilistic process is called a mixed state in quantum mechanics. Therefore, Hawking argued that if the star or material that collapsed to form the black hole started in a specific pure quantum state, the process of evaporation would transform the pure state into a mixed state. This is inconsistent with the unitarity of quantum-mechanical evolution discussed above.
The loss of information can be quantified in terms of the change in the fine-grained von Neumann entropy of the state. A pure state is assigned a von Neumann entropy of 0, whereas a mixed state has a finite entropy. The unitary evolution of a state according to Schrödinger's equation preserves the entropy. Therefore Hawking's argument suggests that the process of black-hole evaporation cannot be described within the framework of unitary evolution. Although this paradox is often phrased in terms of quantum mechanics, the evolution from a pure state to a mixed state is also inconsistent with Liouville's theorem in classical physics (see e.g.).
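For reference, the fine-grained von Neumann entropy of a state with density matrix $\rho$ is

$$S(\rho) = -\operatorname{Tr}\left(\rho \ln \rho\right),$$

which vanishes for a pure state $\rho = |\psi\rangle\langle\psi|$, is positive for a mixed state, and is unchanged under unitary evolution $\rho \to U \rho\, U^\dagger$; this is why a pure-to-mixed transition cannot be unitary.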
In equations, Hawking showed that if one denotes the creation and annihilation operators at a frequency $\omega$ for a quantum field propagating in the black-hole background by $a^\dagger_\omega$ and $a_\omega$ then the expectation value of the product of these operators in the state formed by the collapse of a black hole would satisfy
$$\langle a^\dagger_\omega a_\omega \rangle = \frac{1}{e^{\hbar\omega/(k_B T)} - 1},$$
where $k_B$ is the Boltzmann constant and $T$ is the temperature of the black hole. (See, for example, section 2.2 of.) This formula has two important aspects. The first is that the form of the radiation depends only on a single parameter, temperature, even though the initial state of the black hole cannot be characterized by one parameter. Second, the formula implies that the black hole radiates mass at a rate given by
$$\frac{dM}{dt} = -\frac{C}{M^2},$$
where $C$ is a constant related to fundamental constants, including the Stefan–Boltzmann constant and certain properties of the black hole spacetime called its greybody factors.
The temperature of the black hole is in turn dependent on its mass, charge, and angular momentum. For a Schwarzschild black hole the temperature is given by
$$T = \frac{\hbar c^3}{8\pi G M k_B}.$$
This means that if the black hole starts out with an initial mass $M_0$, it evaporates completely in a time proportional to $M_0^3$.
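This follows by integrating the mass-loss rate quoted above (a short derivation sketch using the same constant $C$):

$$\frac{dM}{dt} = -\frac{C}{M^2} \;\Longrightarrow\; \int_{M_0}^{0} M^2\, dM = -\,C\, t_{\text{evap}} \;\Longrightarrow\; t_{\text{evap}} = \frac{M_0^3}{3C}.$$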
The important aspect of these formulas is that they suggest that the final gas of radiation formed through this process depends only on the black hole's temperature and is independent of other details of the initial state. This leads to the following paradox. Consider two distinct initial states that collapse to form a Schwarzschild black hole of the same mass. Even though the states were distinct at first, since the mass (and hence the temperature) of the black holes is the same, they will emit the same Hawking radiation. Once they evaporate completely, in both cases, one will be left with a featureless gas of radiation. This gas cannot be used to distinguish between the two initial states, and therefore information has been lost.
Page curve
During the same time period in the 1970s, Don Page was a doctoral student of Stephen Hawking. He objected to Hawking's reasoning leading to the paradox above, initially on the basis of violation of CPT symmetry. In 1993, Page focused on the combined system of a black hole with its Hawking radiation as one entangled system, a bipartite system, evolving over the lifetime of the black hole evaporation. Lacking the ability to make a full quantum analysis, he nonetheless made a powerful observation: If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy or entanglement entropy of the Hawking radiation initially increases from zero and then must decrease back to zero when the black hole to which the radiation is entangled has totally evaporated. This is known as the Page curve; and the time corresponding to the maximum or turnover point of the curve, which occurs at about half the black-hole lifetime, is called the Page time. In short, if black hole evaporation is unitary, then the radiation entanglement entropy follows the Page curve. After the Page time, correlations appear and the radiation becomes increasingly information rich.
Recent progress in deriving the Page curve for unitary black hole evaporation is a significant step towards finding both a resolution to the information paradox and a more general understanding of unitarity in quantum gravity. Many researchers consider deriving the Page curve as synonymous with solving the black hole information paradox.
Popular culture
The information paradox has received coverage in the popular media and has been described in popular-science books. Some of this coverage resulted from a widely publicized bet made in 1997 between John Preskill on the one hand with Hawking and Kip Thorne on the other that information was not lost in black holes. The scientific debate on the paradox was described in Leonard Susskind's 2008 book The Black Hole War. (The book carefully notes that the 'war' was purely a scientific one, and that, at a personal level, the participants remained friends.) Susskind writes that Hawking was eventually persuaded that black-hole evaporation was unitary by the holographic principle, which was first proposed by 't Hooft, further developed by Susskind, and later given a precise string theory interpretation by the AdS/CFT correspondence. In 2004, Hawking also conceded the 1997 bet, paying Preskill with a baseball encyclopedia "from which information can be retrieved at will". Thorne refused to concede.
Solutions
Since the 1997 proposal of the AdS/CFT correspondence, the predominant belief among physicists is that information is indeed preserved in black hole evaporation. There are broadly two main streams of thought about how this happens. Within what might broadly be termed the "string theory community", the dominant idea is that Hawking radiation is not precisely thermal but receives quantum correlations that encode information about the black hole's interior. This viewpoint has been the subject of extensive recent research and received further support in 2019 when researchers amended the computation of the entropy of the Hawking radiation in certain models and showed that the radiation is in fact dual to the black hole interior at late times. Hawking himself was influenced by this view and in 2004 published a paper that assumed the AdS/CFT correspondence and argued that quantum perturbations of the event horizon could allow information to escape from a black hole, which would resolve the information paradox. In this perspective, it is the event horizon of the black hole that is important and not the black-hole singularity. The GISR (Gravity Induced Spontaneous Radiation) mechanism of references can be considered an implementation of this idea but with the quantum perturbations of the event horizon replaced by the microscopic states of the black hole.
On the other hand, within what might broadly be termed the "loop quantum gravity community", the dominant belief is that to resolve the information paradox, it is important to understand how the black-hole singularity is resolved. These scenarios are broadly called remnant scenarios since information does not emerge gradually but remains in the black-hole interior only to emerge at the end of black-hole evaporation.
Researchers also study other possibilities, including a modification of the laws of quantum mechanics to allow for non-unitary time evolution.
Some of these solutions are described at greater length below.
GISR mechanism resolution to the paradox
This resolution takes GISR as the underlying mechanism for Hawking radiation, considering the latter only as a resultant effect. The physics ingredients of GISR are reflected in the following explicitly Hermitian Hamiltonian
The first term of the Hamiltonian is a diagonal matrix representing the microscopic state of black holes no heavier than the initial one. The second term describes vacuum fluctuations of particles around the black hole and is represented by many harmonic oscillators. The third term couples the vacuum fluctuation modes to the black hole, such that for each mode whose energy matches the difference between two states of the black hole, the latter transitions with an amplitude proportional to the similarity factor of their microscopic wave functions. Transitions from higher-energy states to lower-energy states, and vice versa, are equally permitted at the Hamiltonian level. This coupling mimics the photon-atom coupling in the Jaynes–Cummings model of atomic physics, replacing the photon's vector potential with the binding energy of particles to be radiated in the black hole case, and the dipole moment of initial-to-final state transitions in atoms with the similarity factor of the initial and final states' wave functions in black holes. Despite its ad hoc nature, this coupling introduces no new interactions beyond gravity, and it is deemed necessary irrespective of the future development of quantum gravitational theories.
From the Hamiltonian of GISR and the standard Schrödinger equation controlling the evolution of the wave function of the system,
the index here labels the set of radiated particles and their total energy. In the case of short-time evolution or single-quantum emission, the Wigner–Weisskopf approximation allows one to show that the power spectrum of GISR is exactly of thermal type and the corresponding temperature equals that of Hawking radiation. However, in the case of long-time evolution or continuous quantum emission, the process is off-equilibrium and is characterised by an initial-state-dependent black hole mass or temperature versus time curve. Observers far away can retrieve the information stored in the initial black hole from this mass or temperature versus time curve.
The Hamiltonian and wave function description of GISR allows one to calculate the entanglement entropy between the black hole and its Hawking particles explicitly.
Since the Hamiltonian of GISR is explicitly Hermitian, the resulting Page curve is naturally expected, except for some late-time Rabi-type oscillations. These oscillations arise from the equal probability of emission and absorption transitions as the black hole approaches the vanishing stage. The most important lesson from this calculation is that the intermediate state of an evaporating black hole cannot be considered a semiclassical object with a time-dependent mass. Instead, it must be viewed as a superposition of many different mass ratio combinations of the black hole and Hawking particles. The same references describe a Schrödinger-cat-type thought experiment to illustrate this fact, in which an initial black hole is bound together with a group of living cats and each Hawking particle kills one member of the group. In the quantum description, because the exact timing and number of particles radiated by a black hole cannot be determined definitively, the intermediate state of the evaporating black hole must be considered a superposition of many cat groups, each with a different ratio of dead members. The biggest flaw in the argument for the information loss paradox is ignoring this superposition.
Small-corrections resolution to the paradox
This idea suggests that Hawking's computation fails to keep track of small corrections that are eventually sufficient to preserve information about the initial state. This can be thought of as analogous to what happens during the mundane process of "burning": the radiation produced appears to be thermal, but its fine-grained features encode the precise details of the object that was burnt. This idea is consistent with reversibility, as required by quantum mechanics. It is the dominant idea in what might broadly be termed the string-theory approach to quantum gravity.
More precisely, this line of resolution suggests that Hawking's computation is corrected so that the two-point correlator computed by Hawking and described above becomes
$$\langle a^\dagger_\omega a_\omega \rangle = \frac{1}{e^{\hbar\omega/(k_B T)} - 1} + \epsilon_\omega,$$
with a small correction factor $\epsilon_\omega$,
and higher-point correlators are similarly corrected
The equations above utilize a concise notation and the correction factors may depend on the temperature, the frequencies of the operators that enter the correlation function and other details of the black hole.
Maldacena initially explored such corrections in a simple version of the paradox. They were then analyzed by Papadodimas and Raju, who showed that corrections to low-point correlators (such as $\epsilon_\omega$ above) that were exponentially suppressed in the black-hole entropy were sufficient to preserve unitarity, and significant corrections were required only for very high-point correlators. The mechanism that allowed the right small corrections to form was initially postulated in terms of a loss of exact locality in quantum gravity so that the black-hole interior and the radiation were described by the same degrees of freedom. Recent developments suggest that such a mechanism can be realized precisely within semiclassical gravity and allows information to escape. See § Recent developments.
Fuzzball resolution to the paradox
Some researchers, most notably Samir Mathur, have argued that the small corrections required to preserve information cannot be obtained while preserving the semiclassical form of the black-hole interior and instead require a modification of the black-hole geometry to a fuzzball.
The defining characteristic of the fuzzball is that it has structure at the horizon scale. This should be contrasted with the conventional picture of the black-hole interior as a largely featureless region of space. For a large enough black hole, tidal effects are very small at the black-hole horizon and remain small in the interior until one approaches the black-hole singularity. Therefore, in the conventional picture, an observer who crosses the horizon may not even realize they have done so until they start approaching the singularity. In contrast, the fuzzball proposal suggests that the black hole horizon is not empty. Consequently, it is also not information-free, since the details of the structure at the surface of the horizon preserve information about the black hole's initial state. This structure also affects the outgoing Hawking radiation and thereby allows information to escape from the fuzzball.
The fuzzball proposal is supported by the existence of a large number of gravitational solutions called microstate geometries.
The firewall proposal can be thought of as a variant of the fuzzball proposal that posits that the black-hole interior is replaced by a firewall rather than a fuzzball. Operationally, the difference between the fuzzball and the firewall proposals has to do with whether an observer crossing the horizon of the black hole encounters high-energy matter, suggested by the firewall proposal, or merely low-energy structure, suggested by the fuzzball proposal. The firewall proposal also originated with an exploration of Mathur's argument that small corrections are insufficient to resolve the information paradox.
The fuzzball and firewall proposals have been questioned for lacking an appropriate mechanism that can generate structure at the horizon scale.
Strong-quantum-effects resolution to the paradox
In the final stages of black-hole evaporation, quantum effects become important and cannot be ignored. The precise understanding of this phase of black-hole evaporation requires a complete theory of quantum gravity. Within what might be termed the loop-quantum-gravity approach to black holes, it is believed that understanding this phase of evaporation is crucial to resolving the information paradox.
This perspective holds that Hawking's computation is reliable until the final stages of black-hole evaporation, when information suddenly escapes. Another possibility along the same lines is that black-hole evaporation simply stops when the black hole becomes Planck-sized. Such scenarios are called "remnant scenarios".
An appealing aspect of this perspective is that a significant deviation from classical and semiclassical gravity is needed only in the regime in which the effects of quantum gravity are expected to dominate. On the other hand, this idea implies that just before the sudden escape of information, a very small black hole must be able to store an arbitrary amount of information and have a very large number of internal states. Therefore, researchers who follow this idea must take care to avoid the common criticism of remnant-type scenarios, which is that they might violate the Bekenstein bound and lead to a violation of effective field theory due to the production of remnants as virtual particles in ordinary scattering events.
Soft-hair resolution to the paradox
In 2016, Hawking, Perry and Strominger noted that black holes must contain "soft hair". Particles that have no rest mass, like photons and gravitons, can exist with arbitrarily low-energy and are called soft particles. The soft-hair resolution posits that information about the initial state is stored in such soft particles. The existence of such soft hair is a peculiarity of four-dimensional asymptotically flat space and therefore this resolution to the paradox does not carry over to black holes in Anti-de Sitter space or black holes in other dimensions.
Information is irretrievably lost
A minority view in the theoretical physics community is that information is genuinely lost when black holes form and evaporate. This conclusion follows if one assumes that the predictions of semiclassical gravity and the causal structure of the black-hole spacetime are exact.
But this conclusion leads to the loss of unitarity. Banks, Susskind and Peskin argue that, in some cases, loss of unitarity also implies violation of energy–momentum conservation or locality, but this argument may possibly be evaded in systems with a large number of degrees of freedom. According to Roger Penrose, loss of unitarity in quantum systems is not a problem: quantum measurements are by themselves already non-unitary. Penrose claims that quantum systems will in fact no longer evolve unitarily as soon as gravitation comes into play, precisely as in black holes. The Conformal Cyclic Cosmology Penrose advocates critically depends on the condition that information is in fact lost in black holes. This new cosmological model might be tested experimentally by detailed analysis of the cosmic microwave background radiation (CMB): if true, the CMB should exhibit circular patterns with slightly lower or slightly higher temperatures. In November 2010, Penrose and V. G. Gurzadyan announced they had found evidence of such circular patterns in data from the Wilkinson Microwave Anisotropy Probe (WMAP), corroborated by data from the BOOMERanG experiment. The significance of these findings was debated.
Along similar lines, Modak, Ortíz, Peña, and Sudarsky have argued that the paradox can be dissolved by invoking foundational issues of quantum theory often called the measurement problem of quantum mechanics. This work built on an earlier proposal by Okon and Sudarsky on the benefits of objective collapse theory in a much broader context. The original motivation of these studies was Penrose's long-standing proposal wherein collapse of the wave-function is said to be inevitable in the presence of black holes (and even under the influence of gravitational field). Experimental verification of collapse theories is an ongoing effort.
Other proposed resolutions
Some other resolutions to the paradox have also been explored. These are listed briefly below.
Information is stored in a large remnant: This idea suggests that Hawking radiation stops before the black hole reaches the Planck size. Since the black hole never evaporates, information about its initial state can remain inside the black hole and the paradox disappears. But there is no accepted mechanism that would allow Hawking radiation to stop while the black hole remains macroscopic.
Information is stored in a baby universe that separates from our own universe: Some models of gravity, such as the Einstein–Cartan theory of gravity, which extends general relativity to matter with intrinsic angular momentum (spin), predict the formation of such baby universes. No violation of known general principles of physics is needed. There are no physical constraints on the number of the universes, even though only one remains observable. The Einstein–Cartan theory is difficult to test because its predictions are significantly different from general-relativistic ones only at extremely high densities.
Information is encoded in the correlations between future and past: The final-state proposal suggests that boundary conditions must be imposed at the black-hole singularity, which, from a causal perspective, is to the future of all events in the black-hole interior. This helps reconcile black-hole evaporation with unitarity but contradicts the intuitive idea of causality and locality of time-evolution.
Quantum-channel theory: In 2014, Chris Adami argued that analysis using quantum channel theory causes any apparent paradox to disappear; Adami rejects black hole complementarity, arguing instead that no space-like surface contains duplicated quantum information.
Recent developments
Significant progress was made in 2019, when, starting with work by Penington and Almheiri, Engelhardt, Marolf and Maxfield, researchers were able to compute the von Neumann entropy of the radiation black holes emit in specific models of quantum gravity. These calculations showed that, in these models, the entropy of this radiation first rises and then falls back to zero. As explained above, one way to frame the information paradox is that Hawking's calculation appears to show that the von Neumann entropy of Hawking radiation increases throughout the lifetime of the black hole. But if the black hole formed from a pure state with zero entropy, unitarity implies that the entropy of the Hawking radiation must decrease back to zero once the black hole evaporates completely, i.e., the Page curve. Therefore, the results above provide a resolution to the information paradox, at least in the specific models of gravity considered in these models.
These calculations compute the entropy by first analytically continuing the spacetime to a Euclidean spacetime and then using the replica trick. The path integral that computes the entropy receives contributions from novel Euclidean configurations called "replica wormholes". (These wormholes exist in a Wick rotated spacetime and should not be conflated with wormholes in the original spacetime.) The inclusion of these wormhole geometries in the computation prevents the entropy from increasing indefinitely.
These calculations also imply that for sufficiently old black holes, one can perform operations on the Hawking radiation that affect the black hole interior. This result has implications for the related firewall paradox, and provides evidence for the physical picture suggested by the ER=EPR proposal, black hole complementarity, and the Papadodimas–Raju proposal.
It has been noted that the models used to perform the Page curve computations above have consistently involved theories where the graviton has mass, unlike the real world, where the graviton is massless. These models have also involved a "nongravitational bath", which can be thought of as an artificial interface where gravity ceases to act. It has also been argued that a key technique used in the Page-curve computations, the "island proposal", is inconsistent in standard theories of gravity with a Gauss law. This would suggest that the Page curve computations are inapplicable to realistic black holes and work only in special toy models of gravity. The validity of these criticisms remains under investigation; there is no consensus in the research community.
In 2020, Laddha, Prabhu, Raju, and Shrivastava argued that, as a result of the effects of quantum gravity, information should always be available outside the black hole. This would imply that the von Neumann entropy of the region outside the black hole always remains zero, as opposed to the proposal above, where the von Neumann entropy first rises and then falls. Extending this, Raju argued that Hawking's error was to assume that the region outside the black hole would have no information about its interior.
Hawking formalized this assumption in terms of a "principle of ignorance". The principle of ignorance is correct in classical gravity, when quantum-mechanical effects are neglected, by virtue of the no-hair theorem. It is also correct when only quantum-mechanical effects are considered and gravitational effects are neglected. But Raju argued that when both quantum mechanical and gravitational effects are accounted for, the principle of ignorance should be replaced by a "principle of holography of information" that would imply just the opposite: all the information about the interior can be regained from the exterior through suitably precise measurements.
The two recent resolutions of the information paradox described above—via replica wormholes and the holography of information—share the feature that observables in the black-hole interior also describe observables far from the black hole. This implies a loss of exact locality in quantum gravity. Although this loss of locality is very small, it persists over large distance scales. This feature has been challenged by some researchers.
See also
AdS/CFT correspondence
Beyond black holes
Black hole complementarity
Cosmic censorship hypothesis
Firewall (physics)
Fuzzball (string theory)
Holographic principle
List of paradoxes
Maxwell's demon
No-hair theorem
No-hiding theorem
Thorne–Hawking–Preskill bet
References
External links
Black Hole Information Loss Problem, a USENET physics FAQ page
. Discusses methods of attack on the problem, and their apparent shortcomings.
Report on Hawking's 2004 theory in Nature.
Stephen Hawking's purported solution to the black hole unitarity paradox.
Hawking and unitarity: a July 2005 discussion of the information loss paradox and Stephen Hawking's role in it
The Hawking Paradox - BBC Horizon documentary (2005)
A Black Hole Mystery Wrapped in a Firewall Paradox
Black holes
Physical paradoxes
Relativistic paradoxes
Primitive equations
The primitive equations are a set of nonlinear partial differential equations that are used to approximate global atmospheric flow; they appear in most atmospheric models. They consist of three main sets of balance equations:
A continuity equation: Representing the conservation of mass.
Conservation of momentum: Consisting of a form of the Navier–Stokes equations that describe hydrodynamical flow on the surface of a sphere under the assumptions that vertical motion is much smaller than horizontal motion (hydrostasis) and that the fluid layer depth is small compared to the radius of the sphere.
A thermal energy equation: Relating the overall temperature of the system to heat sources and sinks
The primitive equations may be linearized to yield Laplace's tidal equations, an eigenvalue problem from which the analytical solution to the latitudinal structure of the flow may be determined.
In general, nearly all forms of the primitive equations relate the five variables u, v, ω, T, W, and their evolution over space and time.
The equations were first written down by Vilhelm Bjerknes.
Definitions
u is the zonal velocity (velocity in the east–west direction tangent to the sphere)
v is the meridional velocity (velocity in the north–south direction tangent to the sphere)
ω is the vertical velocity in isobaric coordinates
T is the temperature
Φ is the geopotential
f is the term corresponding to the Coriolis force, and is equal to 2Ω sin(φ), where Ω is the angular rotation rate of the Earth (π/12 radians per sidereal hour), and φ is the latitude
R is the gas constant
p is the pressure
ρ is the density
cp is the specific heat on a constant pressure surface
J is the heat flow per unit time per unit mass
W is the precipitable water
π is the Exner function
θ is the potential temperature
η is the absolute vorticity
Forces that cause atmospheric motion
Forces that cause atmospheric motion include the pressure gradient force, gravity, and viscous friction. Together, they determine the accelerations that drive the atmosphere's motion.
The pressure gradient force causes an acceleration forcing air from regions of high pressure to regions of low pressure. Mathematically, this acceleration per unit mass can be written as −(1/ρ) ∇p.
The gravitational force accelerates objects at approximately 9.8 m/s² directly towards the center of the Earth.
The force due to viscous friction can be approximated as:
Using Newton's second law, these forces (referenced in the equations above as the accelerations due to these forces) may be summed to produce an equation of motion that describes this system. This equation can be written in the form:
Therefore, to complete the system of equations and obtain 6 equations and 6 variables:
where n is the number density in mol, and T:=RT is the temperature equivalent value in Joule/mol.
Forms of the primitive equations
The precise form of the primitive equations depends on the vertical coordinate system chosen, such as pressure coordinates, log pressure coordinates, or sigma coordinates. Furthermore, the velocity, temperature, and geopotential variables may be decomposed into mean and perturbation components using Reynolds decomposition.
Pressure coordinate in vertical, Cartesian tangential plane
In this form pressure is selected as the vertical coordinate and the horizontal coordinates are written for the Cartesian tangential plane (i.e. a plane tangent to some point on the surface of the Earth). This form does not take the curvature of the Earth into account, but is useful for visualizing some of the physical processes involved in formulating the equations due to its relative simplicity.
Note that the capital D time derivatives are material derivatives. Five equations in five unknowns comprise the system.
the inviscid (frictionless) momentum equations:
the hydrostatic equation, a special case of the vertical momentum equation in which vertical acceleration is considered negligible:
the continuity equation, connecting horizontal divergence/convergence to vertical motion under the hydrostatic approximation:
and the thermodynamic energy equation, a consequence of the first law of thermodynamics
When a statement of the conservation of water vapor substance is included, these six equations form the basis for any numerical weather prediction scheme.
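For concreteness, a commonly quoted textbook form of this hydrostatic, pressure-coordinate system is sketched below using the symbols defined earlier; this is a standard rendering, not necessarily the exact form of the equations omitted above.

    Du/Dt − f v = −∂Φ/∂x
    Dv/Dt + f u = −∂Φ/∂y
    ∂Φ/∂p = −R T / p
    ∂u/∂x + ∂v/∂y + ∂ω/∂p = 0
    DT/Dt − (R T / (cp p)) ω = J / cp

Here D/Dt = ∂/∂t + u ∂/∂x + v ∂/∂y + ω ∂/∂p is the material derivative following the flow.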
Primitive equations using sigma coordinate system, polar stereographic projection
According to the National Weather Service Handbook No. 1 – Facsimile Products, the primitive equations can be simplified into the following equations:
Zonal wind:
Meridional wind:
Temperature:
The first term is equal to the change in temperature due to incoming solar radiation and outgoing longwave radiation, which changes with time throughout the day. The second, third, and fourth terms are due to advection. Additionally, the variable T with subscript is the change in temperature on that plane. Each T is actually different and related to its respective plane. This is divided by the distance between grid points to get the change in temperature with the change in distance. When multiplied by the wind velocity on that plane, the units kelvins per meter and meters per second give kelvins per second. The sum of all the changes in temperature due to motions in the x, y, and z directions gives the total change in temperature with time.
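As a rough illustration of the advection terms described above, the Python sketch below evaluates −u·∂T/∂x at a single grid point with a centered finite difference; the grid spacing, wind, and temperature values are hypothetical and chosen only to show the units working out to kelvins per second.

    import numpy as np

    dx = 100_000.0                        # grid spacing in meters (assumed)
    u = 10.0                              # zonal wind in m/s (assumed)
    T = np.array([280.0, 281.0, 282.5])   # temperature at three grid points, K (assumed)

    # Centered finite difference for dT/dx at the middle point (K per meter)
    dTdx = (T[2] - T[0]) / (2 * dx)

    # Advection contribution to dT/dt at the middle point (K per second)
    dTdt_advection = -u * dTdx
    print(dTdt_advection)                 # about -1.25e-4 K/s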
Precipitable water:
This equation and notation work in much the same way as the temperature equation. This equation describes the motion of water from one place to another at a point without taking into account water that changes form. Inside a given system, the total change in water with time is zero. However, concentrations are allowed to move with the wind.
Pressure thickness:
These simplifications make it much easier to understand what is happening in the model. Things like the temperature (potential temperature), precipitable water, and to an extent the pressure thickness simply move from one spot on the grid to another with the wind. The wind is forecast slightly differently. It uses geopotential, specific heat, the Exner function π, and change in sigma coordinate.
Solution to the linearized primitive equations
The analytic solution to the linearized primitive equations involves a sinusoidal oscillation in time and longitude, modulated by coefficients related to height and latitude.
where s and σ are the zonal wavenumber and angular frequency, respectively. The solution represents atmospheric waves and tides.
When the coefficients are separated into their height and latitude components, the height dependence takes the form of propagating or evanescent waves (depending on conditions), while the latitude dependence is given by the Hough functions.
This analytic solution is only possible when the primitive equations are linearized and simplified. Unfortunately many of these simplifications (i.e. no dissipation, isothermal atmosphere) do not correspond to conditions in the actual atmosphere. As a result, a numerical solution which takes these factors into account is often calculated using general circulation models and climate models.
See also
Barometric formula
Climate model
Euler equations
Fluid dynamics
General circulation model
Numerical weather prediction
References
Beniston, Martin. From Turbulence to Climate: Numerical Investigations of the Atmosphere with a Hierarchy of Models. Berlin: Springer, 1998.
Firth, Robert. Mesoscale and Microscale Meteorological Model Grid Construction and Accuracy. LSMSA, 2006.
Thompson, Philip. Numerical Weather Analysis and Prediction. New York: The Macmillan Company, 1961.
Pielke, Roger A. Mesoscale Meteorological Modeling. Orlando: Academic Press, Inc., 1984.
U.S. Department of Commerce, National Oceanic and Atmospheric Administration, National Weather Service. National Weather Service Handbook No. 1 – Facsimile Products. Washington, DC: Department of Commerce, 1979.
External links
National Weather Service – NCSU
Collaborative Research and Training Site, Review of the Primitive Equations.
Partial differential equations
Equations of fluid dynamics
Numerical climate and weather models
Atmospheric dynamics
Accretion disk
An accretion disk is a structure (often a circumstellar disk) formed by diffuse material in orbital motion around a massive central body. The central body is most frequently a star. Friction, uneven irradiance, magnetohydrodynamic effects, and other forces induce instabilities causing orbiting material in the disk to spiral inward toward the central body. Gravitational and frictional forces compress and raise the temperature of the material, causing the emission of electromagnetic radiation. The frequency range of that radiation depends on the central object's mass. Accretion disks of young stars and protostars radiate in the infrared; those around neutron stars and black holes in the X-ray part of the spectrum. The study of oscillation modes in accretion disks is referred to as diskoseismology.
Manifestations
Accretion disks are a ubiquitous phenomenon in astrophysics; active galactic nuclei, protoplanetary disks, and gamma ray bursts all involve accretion disks. These disks very often give rise to astrophysical jets coming from the vicinity of the central object. Jets are an efficient way for the star-disk system to shed angular momentum without losing too much mass.
The most prominent accretion disks are those of active galactic nuclei and of quasars, which are thought to be massive black holes at the center of galaxies. As matter enters the accretion disc, it follows a trajectory called a tendex line, which describes an inward spiral. This is because particles rub and bounce against each other in a turbulent flow, causing frictional heating which radiates energy away, reducing the particles' angular momentum, allowing the particle to drift inward, driving the inward spiral. The loss of angular momentum manifests as a reduction in velocity; at a slower velocity, the particle must adopt a lower orbit. As the particle falls to this lower orbit, a portion of its gravitational potential energy is converted to increased velocity and the particle gains speed. Thus, the particle has lost energy even though it is now travelling faster than before; however, it has lost angular momentum. As a particle orbits closer and closer, its velocity increases; as velocity increases frictional heating increases as more and more of the particle's potential energy (relative to the black hole) is radiated away; the accretion disk of a black hole is hot enough to emit X-rays just outside the event horizon. The large luminosity of quasars is believed to be a result of gas being accreted by supermassive black holes. Elliptical accretion disks formed at tidal disruption of stars can be typical in galactic nuclei and quasars. The accretion process can convert about 10 percent to over 40 percent of the mass of an object into energy as compared to around 0.7 percent for nuclear fusion processes. In close binary systems the more massive primary component evolves faster and has already become a white dwarf, a neutron star, or a black hole, when the less massive companion reaches the giant state and exceeds its Roche lobe. A gas flow then develops from the companion star to the primary. Angular momentum conservation prevents a straight flow from one star to the other and an accretion disk forms instead.
Accretion disks surrounding T Tauri stars or Herbig stars are called protoplanetary disks because they are thought to be the progenitors of planetary systems. The accreted gas in this case comes from the molecular cloud out of which the star has formed rather than a companion star.
Accretion disk physics
In the 1940s, models were first derived from basic physical principles. In order to agree with observations, those models had to invoke a yet unknown mechanism for angular momentum redistribution. If matter is to fall inward it must lose not only gravitational energy but also lose angular momentum. Since the total angular momentum of the disk is conserved, the angular momentum loss of the mass falling into the center has to be compensated by an angular momentum gain of the mass far from the center. In other words, angular momentum should be transported outward for matter to accrete. According to the Rayleigh stability criterion,
∂(R²Ω)/∂R > 0,
where Ω represents the angular velocity of a fluid element and R its distance to the rotation center, an accretion disk is expected to be a laminar flow. This prevents the existence of a hydrodynamic mechanism for angular momentum transport.
On one hand, it was clear that viscous stresses would eventually cause the matter toward the center to heat up and radiate away some of its gravitational energy. On the other hand, viscosity itself was not enough to explain the transport of angular momentum to the exterior parts of the disk. Turbulence-enhanced viscosity was the mechanism thought to be responsible for such angular-momentum redistribution, although the origin of the turbulence itself was not well understood. The conventional -model (discussed below) introduces an adjustable parameter describing the effective increase of viscosity due to turbulent eddies within the disk. In 1991, with the rediscovery of the magnetorotational instability (MRI), S. A. Balbus, and J. F. Hawley established that a weakly magnetized disk accreting around a heavy, compact central object would be highly unstable, providing a direct mechanism for angular-momentum redistribution.
α-Disk model
Shakura and Sunyaev (1973) proposed turbulence in the gas as the source of an increased viscosity. Assuming subsonic turbulence and the disk height as an upper limit for the size of the eddies, the disk viscosity can be estimated as ν = α cs H, where cs is the sound speed, H is the scale height of the disk, and α is a free parameter between zero (no accretion) and approximately one. In a turbulent medium ν ≈ v_turb l_turb, where v_turb is the velocity of turbulent cells relative to the mean gas motion, and l_turb is the size of the largest turbulent cells, which is estimated as l_turb ≲ H = cs/Ω and v_turb ≲ cs, where Ω = (G M)^(1/2) r^(−3/2) is the Keplerian orbital angular velocity and r is the radial distance from the central object of mass M. By using the equation of hydrostatic equilibrium, combined with conservation of angular momentum and assuming that the disk is thin, the equations of disk structure may be solved in terms of the α parameter. Many of the observables depend only weakly on α, so this theory is predictive even though it has a free parameter.
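To give a feel for the magnitudes involved, the Python sketch below evaluates ν = α cs H and the corresponding viscous inflow timescale r²/ν for one set of assumed fiducial values (solar-mass central object, r ≈ 1 AU, cs ≈ 1 km/s, α = 0.01); the numbers are illustrative only and not taken from the text.

    import numpy as np

    # Assumed fiducial values (illustrative only)
    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    M = 2.0e30             # central mass, kg (~1 solar mass)
    r = 1.5e11             # radius, m (~1 AU)
    c_s = 1.0e3            # sound speed, m/s
    alpha = 0.01           # dimensionless viscosity parameter

    Omega = np.sqrt(G * M / r**3)   # Keplerian angular velocity, 1/s
    H = c_s / Omega                 # disk scale height, m
    nu = alpha * c_s * H            # turbulent viscosity, m^2/s
    t_visc = r**2 / nu              # viscous (inflow) timescale, s
    print(H, nu, t_visc / 3.15e7)   # timescale printed in years (~1e4 yr here)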
Using Kramers' opacity law it is found that
where Tc and ρ are the mid-plane temperature and density respectively, Ṁ16 is the accretion rate in units of 10^16 g s^−1, m1 is the mass of the central accreting object in units of a solar mass, R10 is the radius of a point in the disk in units of 10^10 cm, and f = [1 − (R⋆/R)^(1/2)]^(1/4), where R⋆ is the radius where angular momentum stops being transported inward.
The Shakura–Sunyaev α-disk model is both thermally and viscously unstable. An alternative model, known as the β-disk, which is stable in both senses, assumes that the viscosity is proportional to the gas pressure. In the standard Shakura–Sunyaev model, viscosity is assumed to be proportional to the total pressure p_tot = p_rad + p_gas, since
ν = α cs H = α cs² / Ω = α p_tot / (ρ Ω).
The Shakura–Sunyaev model assumes that the disk is in local thermal equilibrium, and can radiate its heat efficiently. In this case, the disk radiates away the viscous heat, cools, and becomes geometrically thin. However, this assumption may break down. In the radiatively inefficient case, the disk may "puff up" into a torus or some other three-dimensional solution like an Advection Dominated Accretion Flow (ADAF). The ADAF solutions usually require that the accretion rate is smaller than a few percent of the Eddington limit. Another extreme is the case of Saturn's rings, where the disk is so gas-poor that its angular momentum transport is dominated by solid body collisions and disk-moon gravitational interactions. The model is in agreement with recent astrophysical measurements using gravitational lensing.
Magnetorotational instability
Balbus and Hawley (1991) proposed a mechanism which involves magnetic fields to generate the angular momentum transport. A simple system displaying this mechanism is a gas disk in the presence of a weak axial magnetic field. Two radially neighboring fluid elements will behave as two mass points connected by a massless spring, the spring tension playing the role of the magnetic tension. In a Keplerian disk the inner fluid element would be orbiting more rapidly than the outer, causing the spring to stretch. The inner fluid element is then forced by the spring to slow down, reduce correspondingly its angular momentum causing it to move to a lower orbit. The outer fluid element being pulled forward will speed up, increasing its angular momentum and move to a larger radius orbit. The spring tension will increase as the two fluid elements move further apart and the process runs away.
It can be shown that in the presence of such a spring-like tension the Rayleigh stability criterion is replaced by
dΩ²/d(ln R) > 0.
Most astrophysical disks do not meet this criterion and are therefore prone to this magnetorotational instability. The magnetic fields present in astrophysical objects (required for the instability to occur) are believed to be generated via dynamo action.
Magnetic fields and jets
Accretion disks are usually assumed to be threaded by the external magnetic fields present in the interstellar medium. These fields are typically weak (about few micro-Gauss), but they can get anchored to the matter in the disk, because of its high electrical conductivity, and carried inward toward the central star. This process can concentrate the magnetic flux around the centre of the disk giving rise to very strong magnetic fields. Formation of powerful astrophysical jets along the rotation axis of accretion disks requires a large scale poloidal magnetic field in the inner regions of the disk.
Such magnetic fields may be advected inward from the interstellar medium or generated by a magnetic dynamo within the disk. Magnetic fields strengths at least of order 100 Gauss seem necessary for the magneto-centrifugal mechanism to launch powerful jets. There are problems, however, in carrying external magnetic flux inward toward the central star of the disk. High electric conductivity dictates that the magnetic field is frozen into the matter which is being accreted onto the central object with a slow velocity. However, the plasma is not a perfect electric conductor, so there is always some degree of dissipation. The magnetic field diffuses away faster than the rate at which it is being carried inward by accretion of matter. A simple solution is assuming a viscosity much larger than the magnetic diffusivity in the disk. However, numerical simulations and theoretical models show that the viscosity and magnetic diffusivity have almost the same order of magnitude in magneto-rotationally turbulent disks. Some other factors may possibly affect the advection/diffusion rate: reduced turbulent magnetic diffusion on the surface layers; reduction of the Shakura–Sunyaev viscosity by magnetic fields; and the generation of large scale fields by small scale MHD turbulence –a large scale dynamo. In fact, a combination of different mechanisms might be responsible for efficiently carrying the external field inward toward the central parts of the disk where the jet is launched. Magnetic buoyancy, turbulent pumping and turbulent diamagnetism exemplify such physical phenomena invoked to explain such efficient concentration of external fields.
Analytic models of sub-Eddington accretion disks (thin disks, ADAFs)
When the accretion rate is sub-Eddington and the opacity very high, the standard thin accretion disk is formed. It is geometrically thin in the vertical direction (has a disk-like shape), and is made of a relatively cold gas, with a negligible radiation pressure. The gas goes down on very tight spirals, resembling almost circular, almost free (Keplerian) orbits. Thin disks are relatively luminous and they have thermal electromagnetic spectra, i.e. not much different from that of a sum of black bodies. Radiative cooling is very efficient in thin disks. The classic 1974 work by Shakura and Sunyaev on thin accretion disks is one of the most often quoted papers in modern astrophysics. Thin disks were independently worked out by Lynden-Bell, Pringle, and Rees. Pringle contributed in the past thirty years many key results to accretion disk theory, and wrote the classic 1981 review that for many years was the main source of information about accretion disks, and is still very useful today.
A fully general relativistic treatment, as needed for the inner part of the disk when the central object is a black hole, has been provided by Page and Thorne, and used for producing simulated optical images by Luminet and Marck. Although such a system is intrinsically symmetric, its image is not, because the relativistic rotation speed needed for centrifugal equilibrium in the very strong gravitational field near the black hole produces a strong Doppler redshift on the receding side (taken here to be on the right), whereas there is a strong blueshift on the approaching side. Due to light bending, the disk appears distorted but is nowhere hidden by the black hole.
When the accretion rate is sub-Eddington and the opacity very low, an ADAF (advection dominated accretion flow) is formed. This type of accretion disk was predicted in 1977 by Ichimaru. Although Ichimaru's paper was largely ignored, some elements of the ADAF model were present in the influential 1982 ion-tori paper by Rees, Phinney, Begelman, and Blandford.
ADAFs started to be intensely studied by many authors only after their rediscovery in the early 1990s by Popham and Narayan in numerical models of accretion disk boundary layers.
Self-similar solutions for advection-dominated accretion were found by Narayan and Yi, and independently by Abramowicz, Chen, Kato, Lasota (who coined the name ADAF), and Regev.
Most important contributions to astrophysical applications of ADAFs have been made by Narayan and his collaborators. ADAFs are cooled by advection (heat captured in matter) rather than by radiation. They are very radiatively inefficient, geometrically extended, similar in shape to a sphere (or a "corona") rather than a disk, and very hot (close to the virial temperature). Because of their low efficiency, ADAFs are much less luminous than the Shakura–Sunyaev thin disks. ADAFs emit a power-law, non-thermal radiation, often with a strong Compton component.
Analytic models of super-Eddington accretion disks (slim disks, Polish doughnuts)
The theory of highly super-Eddington black hole accretion, Ṁ ≫ Ṁ_Edd, was developed in the 1980s by Abramowicz, Jaroszynski, Paczyński, Sikora, and others in terms of "Polish doughnuts" (the name was coined by Rees). Polish doughnuts are low viscosity, optically thick, radiation pressure supported accretion disks cooled by advection. They are radiatively very inefficient. Polish doughnuts resemble in shape a fat torus (a doughnut) with two narrow funnels along the rotation axis. The funnels collimate the radiation into beams with highly super-Eddington luminosities.
Slim disks (name coined by Kolakowska) have only moderately super-Eddington accretion rates, Ṁ ≳ Ṁ_Edd, rather disk-like shapes, and almost thermal spectra. They are cooled by advection, and are radiatively ineffective. They were introduced by Abramowicz, Lasota, Czerny, and Szuszkiewicz in 1988.
Excretion disk
The opposite of an accretion disk is an excretion disk where instead of material accreting from a disk on to a central object, material is excreted from the center outward onto the disk. Excretion disks are formed when stars merge.
See also
Accretion
Astrophysical jet
Blandford–Znajek process
Circumstellar disc
Circumplanetary disk
Dynamo theory
Exoasteroid
Gravitational singularity
Quasi-star
Reverberation mapping
Ring system
Solar nebula
Spin-flip
Notes
References
External links
Professor John F. Hawley homepage, University of Virginia (archived 2015)
The Dynamical Structure of Nonradiative Black Hole Accretion Flows, John F. Hawley and Steven A. Balbus, 2002 March 19, The Astrophysical Journal, 573:738-748, 2002 July 10
Accretion discs, Scholarpedia
-
Black holes
Space plasmas
Concepts in astronomy
Unsolved problems in physics
Vortices
Articles containing video clips
Circumstellar disks
Hemodynamics
Hemodynamics or haemodynamics are the dynamics of blood flow. The circulatory system is controlled by homeostatic mechanisms of autoregulation, just as hydraulic circuits are controlled by control systems. The hemodynamic response continuously monitors and adjusts to conditions in the body and its environment. Hemodynamics explains the physical laws that govern the flow of blood in the blood vessels.
Blood flow ensures the transportation of nutrients, hormones, metabolic waste products, oxygen, and carbon dioxide throughout the body to maintain cell-level metabolism, the regulation of the pH, osmotic pressure and temperature of the whole body, and the protection from microbial and mechanical harm.
Blood is a non-Newtonian fluid, and is most efficiently studied using rheology rather than hydrodynamics. Because blood vessels are not rigid tubes, classic hydrodynamics and fluid mechanics based on the use of classical viscometers are not capable of explaining haemodynamics.
The study of the blood flow is called hemodynamics, and the study of the properties of the blood flow is called hemorheology.
Blood
Blood is a complex liquid. Blood is composed of plasma and formed elements. The plasma contains 91.5% water, 7% proteins and 1.5% other solutes. The formed elements are platelets, white blood cells, and red blood cells. The presence of these formed elements and their interaction with plasma molecules are the main reasons why blood differs so much from ideal Newtonian fluids.
Viscosity of plasma
Normal blood plasma behaves like a Newtonian fluid at physiological rates of shear. A typical value for the viscosity of normal human plasma at 37 °C is 1.4 mN·s/m². The viscosity of normal plasma varies with temperature in the same way as does that of its solvent, water; a 3 °C change in temperature in the physiological range (36.5 °C to 39.5 °C) reduces plasma viscosity by about 10%.
Osmotic pressure of plasma
The osmotic pressure of a solution is determined by the number of particles present and by the temperature. For example, a 1 molar solution of a substance contains about 6.022 × 10²³ molecules per liter of that substance, and at 0 °C it has an osmotic pressure of about 2.27 MPa (22.4 atm). The osmotic pressure of the plasma affects the mechanics of the circulation in several ways. An alteration of the osmotic pressure difference across the membrane of a blood cell causes a shift of water and a change of cell volume. The changes in shape and flexibility affect the mechanical properties of whole blood. A change in plasma osmotic pressure alters the hematocrit, that is, the volume concentration of red cells in the whole blood, by redistributing water between the intravascular and extravascular spaces. This in turn affects the mechanics of the whole blood.
Red blood cells
The red blood cell is highly flexible and biconcave in shape. Its membrane has a Young's modulus in the region of 10⁶ Pa. Deformation in red blood cells is induced by shear stress. When a suspension is sheared, the red blood cells deform and spin because of the velocity gradient, with the rate of deformation and spin depending on the shear rate and the concentration.
This can influence the mechanics of the circulation and may complicate the measurement of blood viscosity. In the steady-state flow of a viscous fluid past a rigid spherical body immersed in the fluid, where inertia is negligible, the downward gravitational force on the particle is balanced by the viscous drag force. From this force balance the speed of fall can be shown to be given by Stokes' law:
Us = (2/9) (ρp − ρf) g a² / μ
where a is the particle radius, ρp and ρf are respectively the particle and fluid density, μ is the fluid viscosity, and g is the gravitational acceleration. From the above equation we can see that the sedimentation velocity of the particle depends on the square of the radius. If the particle is released from rest in the fluid, its sedimentation velocity Us increases until it attains the steady value called the terminal velocity (U), as shown above.
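As a small numerical illustration of Stokes' law above, the Python sketch below computes the settling velocity of a sphere with roughly red-cell-like size and density in plasma; all input values are assumed for illustration and are not measurements.

    # Stokes settling velocity for a small sphere in plasma (assumed values)
    a = 4.0e-6          # effective particle radius, m
    rho_p = 1100.0      # particle density, kg/m^3
    rho_f = 1025.0      # plasma density, kg/m^3
    mu = 1.4e-3         # plasma viscosity, Pa*s
    g = 9.8             # gravitational acceleration, m/s^2

    U_s = 2.0 / 9.0 * (rho_p - rho_f) * g * a**2 / mu
    print(U_s * 3.6e6, "mm per hour")   # convert m/s to mm/h; about 6.7 mm/h here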
Hemodilution
Hemodilution is the dilution of the concentration of red blood cells and plasma constituents by partially substituting the blood with colloids or crystalloids. It is a strategy to avoid exposure of patients to the potential hazards of homologous blood transfusions.
Hemodilution can be normovolemic, which implies the dilution of normal blood constituents by the use of expanders. During acute normovolemic hemodilution (ANH), blood subsequently lost during surgery contains proportionally fewer red blood cells per milliliter, thus minimizing intraoperative loss of the whole blood. Therefore, blood lost by the patient during surgery is not actually lost by the patient, for this volume is purified and redirected into the patient.
On the other hand, hypervolemic hemodilution (HVH) uses acute preoperative volume expansion without any blood removal. In choosing a fluid, however, it must be assured that when mixed, the remaining blood behaves in the microcirculation as in the original blood fluid, retaining all its properties of viscosity.
In presenting what volume of ANH should be applied, one study suggests a mathematical model of ANH which calculates the maximum possible RCM savings using ANH, given the patient's weight, Hi, and Hm.
To maintain normovolemia, the withdrawal of autologous blood must be simultaneously replaced by a suitable hemodilute. Ideally, this is achieved by isovolemic exchange transfusion of a plasma substitute with a colloid osmotic pressure (OP). A colloid is a fluid containing particles that are large enough to exert an oncotic pressure across the micro-vascular membrane.
When debating the use of colloid or crystalloid, it is imperative to think about all the components of the starling equation:
To identify the minimum safe hematocrit desirable for a given patient the following equation is useful:
where EBV is the estimated blood volume; 70 mL/kg was used in this model and Hi (initial hematocrit) is the patient's initial hematocrit.
From the equation above it is clear that the volume of blood removed during the ANH to the Hm is the same as the BLs.
How much blood is to be removed is usually based on the weight, not the volume. The number of units that need to be removed to hemodilute to the maximum safe hematocrit (ANH) can be found by
This is based on the assumption that each unit removed by hemodilution has a volume of 450 mL (the actual volume of a unit will vary somewhat since completion of collection is dependent on weight and not volume).
The model assumes that the hemodilute value is equal to the Hm prior to surgery, therefore, the re-transfusion of blood obtained by hemodilution must begin when SBL begins.
The RCM available for retransfusion after ANH (RCMm) can be calculated from the patient's Hi and the final hematocrit after hemodilution(Hm)
The maximum SBL that is possible when ANH is used without falling below Hm(BLH) is found by assuming that all the blood removed during ANH is returned to the patient at a rate sufficient to maintain the hematocrit at the minimum safe level
If ANH is used as long as SBL does not exceed BLH there will not be any need for blood transfusion. We can conclude from the foregoing that H should therefore not exceed s.
The difference between the BLH and the BLs therefore is the incremental surgical blood loss (BLi) possible when using ANH.
When expressed in terms of the RCM
Where RCMi is the red cell mass that would have to be administered using homologous blood to maintain the Hm if ANH is not used and blood loss equals BLH.
The model used assumes ANH used for a 70 kg patient with an estimated blood volume of 70 ml/kg (4900 ml). A range of Hi and Hm was evaluated to understand conditions where hemodilution is necessary to benefit the patient.
Result
The results of the model calculations are presented in a table given in the appendix for a range of Hi from 0.30 to 0.50, with ANH performed to minimum hematocrits from 0.30 to 0.15. Given an Hi of 0.40, if the Hm is assumed to be 0.25, then from the equation above the RCM count is still high and ANH is not necessary if BLs does not exceed 2303 ml, since the hematocrit will not fall below Hm, although five units of blood must be removed during hemodilution. Under these conditions, to achieve the maximum benefit from the technique if ANH is used, no homologous blood will be required to maintain the Hm if blood loss does not exceed 2940 ml. In such a case, ANH can save a maximum of 1.1 packed red blood cell unit equivalents, and homologous blood transfusion is necessary to maintain Hm, even if ANH is used.
This model can be used to identify when ANH may be used for a given patient and the degree of ANH necessary to maximize that benefit.
For example, if Hi is 0.30 or less it is not possible to save a red cell mass equivalent to two units of homologous PRBC even if the patient is hemodiluted to an Hm of 0.15. That is because, from the RCM equation given above, the patient's RCM falls short of the required amount.
If Hi is 0.40 one must remove at least 7.5 units of blood during ANH, resulting in an Hm of 0.20 to save two units equivalence. Clearly, the greater the Hi and the greater the number of units removed during hemodilution, the more effective ANH is for preventing homologous blood transfusion. The model here is designed to allow doctors to determine where ANH may be beneficial for a patient based on their knowledge of the Hi, the potential for SBL, and an estimate of the Hm. Though the model used a 70 kg patient, the result can be applied to any patient. To apply these result to any body weight, any of the values BLs, BLH and ANHH or PRBC given in the table need to be multiplied by the factor we will call T
Basically, the model considered above is designed to predict the maximum RCM that ANH can save.
In summary, the efficacy of ANH has been described mathematically by means of measurements of surgical blood loss and blood volume flow measurement. This form of analysis permits accurate estimation of the potential efficiency of the techniques and shows the application of measurement in the medical field.
Blood flow
Cardiac output
The heart is the driver of the circulatory system, pumping blood through rhythmic contraction and relaxation. The rate of blood flow out of the heart (often expressed in L/min) is known as the cardiac output (CO).
Blood being pumped out of the heart first enters the aorta, the largest artery of the body. It then proceeds to divide into smaller and smaller arteries, then into arterioles, and eventually capillaries, where oxygen transfer occurs. The capillaries connect to venules, and the blood then travels back through the network of veins to the venae cavae into the right heart. The micro-circulation — the arterioles, capillaries, and venules —constitutes most of the area of the vascular system and is the site of the transfer of O2, glucose, and enzyme substrates into the cells. The venous system returns the de-oxygenated blood to the right heart where it is pumped into the lungs to become oxygenated and CO2 and other gaseous wastes exchanged and expelled during breathing. Blood then returns to the left side of the heart where it begins the process again.
In a normal circulatory system, the volume of blood returning to the heart each minute is approximately equal to the volume that is pumped out each minute (the cardiac output). Because of this, the velocity of blood flow across each level of the circulatory system is primarily determined by the total cross-sectional area of that level.
Cardiac output is determined by two methods. One is to use the Fick principle: CO = VO2 / (CaO2 − CvO2), where VO2 is the oxygen consumption and CaO2 − CvO2 is the arteriovenous oxygen content difference.
The other is the thermodilution method, which senses the temperature change of a liquid injected at the proximal port of a Swan–Ganz catheter as it passes the distal port.
Cardiac output is mathematically expressed by the following equation:
CO = SV × HR
where
CO = cardiac output (L/min)
SV = stroke volume (mL)
HR = heart rate (bpm)
The normal human cardiac output is 5–6 L/min at rest. Not all blood that enters the left ventricle exits the heart. What is left at the end of diastole (EDV) minus the stroke volume makes up the end-systolic volume (ESV).
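As a small worked example of the relations just given, the Python sketch below computes cardiac output and end-systolic volume from assumed, typical resting values (these specific numbers are illustrative, not from the text).

    # Cardiac output and end-systolic volume from assumed resting values
    SV_ml = 70.0      # stroke volume, mL
    HR_bpm = 75.0     # heart rate, beats per minute
    EDV_ml = 120.0    # end-diastolic volume, mL

    CO_l_per_min = SV_ml * HR_bpm / 1000.0   # cardiac output, L/min
    ESV_ml = EDV_ml - SV_ml                  # end-systolic volume, mL
    print(CO_l_per_min, ESV_ml)              # 5.25 L/min and 50 mL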
Anatomical features
The circulatory systems of species subjected to orthostatic blood pressure (such as arboreal snakes) have evolved physiological and morphological features to overcome the circulatory disturbance. For instance, in arboreal snakes the heart is closer to the head, in comparison with aquatic snakes. This facilitates blood perfusion to the brain.
Turbulence
Blood flow is also affected by the smoothness of the vessels, resulting in either turbulent (chaotic) or laminar (smooth) flow. Smoothness is reduced by the buildup of fatty deposits on the arterial walls.
The Reynolds number (denoted NR or Re) is a relationship that helps determine the behavior of a fluid in a tube, in this case blood in the vessel.
The equation for this dimensionless relationship is written as NR = ρvL/μ, where:
ρ: density of the blood
v: mean velocity of the blood
L: characteristic dimension of the vessel, in this case diameter
μ: viscosity of blood
The Reynolds number is directly proportional to the mean velocity as well as the diameter of the tube. A Reynolds number of less than 2300 indicates laminar fluid flow, which is characterized by constant flow motion, whereas a value of over 4000 indicates turbulent flow. Due to their smaller radius and lower velocity compared to other vessels, the Reynolds number in the capillaries is very low, resulting in laminar instead of turbulent flow.
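For illustration, the Python sketch below evaluates the Reynolds number for two vessels using representative, assumed values of blood density, viscosity, velocity, and diameter (not data from this article).

    # Reynolds number in two vessels, using representative (assumed) values
    def reynolds(rho, v, L, mu):
        return rho * v * L / mu

    rho_blood = 1060.0   # kg/m^3
    mu_blood = 3.5e-3    # Pa*s, apparent whole-blood viscosity (assumed)

    Re_aorta = reynolds(rho_blood, 0.4, 0.025, mu_blood)        # v = 0.4 m/s, D = 2.5 cm
    Re_capillary = reynolds(rho_blood, 0.001, 8.0e-6, mu_blood) # v = 1 mm/s, D = 8 um
    print(Re_aorta, Re_capillary)   # roughly 3.0e3 versus 2.4e-3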
Velocity
Blood flow velocity is often expressed in cm/s. This value is inversely related to the total cross-sectional area of the blood vessel and also differs per cross-section, because in normal conditions the blood flow has laminar characteristics. For this reason, the blood flow velocity is the fastest in the middle of the vessel and slowest at the vessel wall. In most cases, the mean velocity is used. There are many ways to measure blood flow velocity, like video capillary microscopy with frame-to-frame analysis, or laser Doppler anemometry.
Blood velocities in arteries are higher during systole than during diastole. One parameter to quantify this difference is the pulsatility index (PI), which is equal to the difference between the peak systolic velocity and the minimum diastolic velocity divided by the mean velocity during the cardiac cycle. This value decreases with distance from the heart.
Blood vessels
Vascular resistance
Resistance is also related to vessel radius, vessel length, and blood viscosity.
In a first approach, blood is treated as a simple fluid, as indicated by the Hagen–Poiseuille equation: ∆P = 8μlQ / (πr⁴), where:
∆P: pressure drop/gradient
μ: viscosity
l: length of tube. In the case of vessels with infinitely long lengths, l is replaced with diameter of the vessel.
Q: flow rate of the blood in the vessel
r: radius of the vessel
In a second approach, more realistic of the vascular resistance and coming from experimental observations on blood flows, according to Thurston, there is a plasma release-cell layering at the walls surrounding a plugged flow. It is a fluid layer in which at a distance δ, viscosity η is a function of δ written as η(δ), and these surrounding layers do not meet at the vessel centre in real blood flow. Instead, there is the plugged flow which is hyperviscous because holding high concentration of RBCs. Thurston assembled this layer to the flow resistance to describe blood flow by means of a viscosity η(δ) and thickness δ from the wall layer.
The blood resistance law appears as R adapted to blood flow profile :
where
R = resistance to blood flow
c = constant coefficient of flow
L = length of the vessel
η(δ) = viscosity of blood in the wall plasma release-cell layering
r = radius of the blood vessel
δ = distance in the plasma release-cell layer
Blood resistance varies depending on blood viscosity and its plugged flow (or sheath flow since they are complementary across the vessel section) size as well, and on the size of the vessels.
Assuming steady, laminar flow in the vessel, the blood vessels' behavior is similar to that of a pipe. For instance, if p1 and p2 are the pressures at the ends of the tube, the pressure drop/gradient is Δp = p1 − p2.
The larger arteries, including all large enough to see without magnification, are conduits with low vascular resistance (assuming no advanced atherosclerotic changes) with high flow rates that generate only small drops in pressure. The smaller arteries and arterioles have higher resistance, and confer the main blood pressure drop across major arteries to capillaries in the circulatory system.
In the arterioles blood pressure is lower than in the major arteries. This is due to bifurcations, which cause a drop in pressure. The more bifurcations, the higher the total cross-sectional area, therefore the pressure across the surface drops. This is why the arterioles have the highest pressure drop. The pressure drop of the arterioles is the product of flow rate and resistance: ∆P = Q × resistance. The high resistance observed in the arterioles, which factors largely into the ∆P, is a result of a smaller radius of about 30 μm. The smaller the radius of a tube, the larger the resistance to fluid flow.
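To illustrate how strongly the Poiseuille resistance scales with radius, the Python sketch below compares an artery-sized and an arteriole-sized tube using assumed dimensions and viscosity; the numbers are purely illustrative.

    import math

    # Poiseuille resistance R = 8*mu*l / (pi*r^4), assumed illustrative values
    def poiseuille_resistance(mu, length, radius):
        return 8.0 * mu * length / (math.pi * radius**4)

    mu = 3.5e-3                                              # Pa*s
    R_artery = poiseuille_resistance(mu, 0.10, 2.0e-3)       # 10 cm long, 2 mm radius
    R_arteriole = poiseuille_resistance(mu, 0.001, 15.0e-6)  # 1 mm long, 15 um radius
    print(R_arteriole / R_artery)    # the arteriole is millions of times more resistive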
Immediately following the arterioles are the capillaries. Following the logic observed in the arterioles, we expect the blood pressure to be lower in the capillaries compared to the arterioles. Since pressure is a function of force per unit area (P = F/A), the larger the surface area, the lower the pressure when an external force acts on it. Though the radii of the individual capillaries are very small, the capillary network has the largest total surface area (485 mm²) in the human vascular network. The larger the total cross-sectional area, the lower the mean velocity as well as the pressure.
Substances called vasoconstrictors can reduce the size of blood vessels, thereby increasing blood pressure. Vasodilators (such as nitroglycerin) increase the size of blood vessels, thereby decreasing arterial pressure.
If the blood viscosity increases (gets thicker), the result is an increase in arterial pressure. Certain medical conditions can change the viscosity of the blood. For instance, anemia (low red blood cell concentration) reduces viscosity, whereas increased red blood cell concentration increases viscosity. It had been thought that aspirin and related "blood thinner" drugs decreased the viscosity of blood, but instead studies found that they act by reducing the tendency of the blood to clot.
To determine the systemic vascular resistance (SVR), the general formula for resistance, resistance = pressure difference / flow, is used.
This translates for SVR into:
SVR = (MAP − CVP) / CO
Where
SVR = systemic vascular resistance (mmHg/L/min)
MAP = mean arterial pressure (mmHg)
CVP = central venous pressure (mmHg)
CO = cardiac output (L/min)
This gives SVR in Wood units (mmHg·min/L); the answer is multiplied by 80 to convert it to dyn·s·cm−5.
Normal systemic vascular resistance is between 900 and 1440 dyn·s·cm−5.
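A brief numeric example of the SVR calculation, using assumed values for MAP, CVP, and cardiac output (illustrative only):

    # Systemic vascular resistance from assumed example values
    MAP = 93.0    # mean arterial pressure, mmHg
    CVP = 4.0     # central venous pressure, mmHg
    CO = 5.0      # cardiac output, L/min

    svr_wood = (MAP - CVP) / CO    # Wood units (mmHg*min/L)
    svr_dynes = svr_wood * 80.0    # dyn*s*cm^-5
    print(svr_wood, svr_dynes)     # 17.8 Wood units, about 1424 dyn*s*cm^-5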
Wall tension
Regardless of site, blood pressure is related to the wall tension of the vessel according to the Young–Laplace equation (assuming that the thickness of the vessel wall is very small as compared to the diameter of the lumen):
σθ = P r / t
where
P is the blood pressure
t is the wall thickness
r is the inside radius of the cylinder.
σθ is the cylinder stress or "hoop stress".
For the thin-walled assumption to be valid the vessel must have a wall thickness of no more than about one-tenth (often cited as one twentieth) of its radius.
The cylinder stress, in turn, is the average force exerted circumferentially (perpendicular both to the axis and to the radius of the object) in the cylinder wall, and can be described as:
σθ = F / (t l)
where:
F is the force exerted circumferentially on an area of the cylinder wall that has the following two lengths as sides:
t is the radial thickness of the cylinder
l is the axial length of the cylinder
Stress
When force is applied to a material it starts to deform or move. As the force needed to deform a material (e.g. to make a fluid flow) increases with the size of the surface of the material, the magnitude of this force F is proportional to the area A of the portion of the surface. Therefore, the quantity (F/A), that is the force per unit area, is called the stress:
τ = F / A.
The shear stress at the wall that is associated with blood flow through an artery depends on the artery size and geometry and can range between 0.5 and 4 Pa.
Under normal conditions, to avoid atherogenesis, thrombosis, smooth muscle proliferation and endothelial apoptosis, shear stress maintains its magnitude and direction within an acceptable range. In some cases, such as those occurring due to blood hammer, shear stress reaches larger values, and its direction may also be changed by reverse flow, depending on the hemodynamic conditions. Such situations can lead to atherosclerosis.
Capacitance
Veins are described as the "capacitance vessels" of the body because over 70% of the blood volume resides in the venous system. Veins are more compliant than arteries and expand to accommodate changing volume.
Blood pressure
The blood pressure in the circulation is principally due to the pumping action of the heart. The pumping action of the heart generates pulsatile blood flow, which is conducted into the arteries, across the micro-circulation and eventually, back via the venous system to the heart. During each heartbeat, systemic arterial blood pressure varies between a maximum (systolic) and a minimum (diastolic) pressure. In physiology, these are often simplified into one value, the mean arterial pressure (MAP), which is calculated as follows (a small numeric example is given after the definitions below):
MAP ≈ DP + ⅓(PP)
where:
MAP = Mean Arterial Pressure
DP = Diastolic blood pressure
PP = Pulse pressure which is systolic pressure minus diastolic pressure.
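The Python sketch below applies the MAP approximation to an assumed reading of 120/80 mmHg (an illustrative value, not patient data):

    # Mean arterial pressure from an assumed reading of 120/80 mmHg
    systolic = 120.0
    diastolic = 80.0
    pulse_pressure = systolic - diastolic
    MAP = diastolic + pulse_pressure / 3.0
    print(MAP)    # about 93.3 mmHg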
Differences in mean blood pressure are responsible for blood flow from one location to another in the circulation. The rate of mean blood flow depends on both blood pressure and the resistance to flow presented by the blood vessels. Mean blood pressure decreases as the circulating blood moves away from the heart through arteries and capillaries due to viscous losses of energy. Mean blood pressure drops over the whole circulation, although most of the fall occurs along the small arteries and arterioles. Gravity affects blood pressure via hydrostatic forces (e.g., during standing), and valves in veins, breathing, and pumping from contraction of skeletal muscles also influence blood pressure in veins.
The relationship between pressure, flow, and resistance is expressed in the following equation:
flow = pressure difference / resistance
When applied to the circulatory system, we get:
CO = (MAP − RAP) / SVR
where
CO = cardiac output (in L/min)
MAP = mean arterial pressure (in mmHg), the average pressure of blood as it leaves the heart
RAP = right atrial pressure (in mmHg), the average pressure of blood as it returns to the heart
SVR = systemic vascular resistance (in mmHg * min/L)
A simplified form of this equation assumes right atrial pressure is approximately 0:
CO ≈ MAP / SVR
The ideal blood pressure in the brachial artery, where standard blood pressure cuffs measure pressure, is <120/80 mmHg. Other major arteries have similar levels of blood pressure recordings indicating very low disparities among major arteries. In the innominate artery, the average reading is 110/70 mmHg, the right subclavian artery averages 120/80 and the abdominal aorta is 110/70 mmHg. The relatively uniform pressure in the arteries indicate that these blood vessels act as a pressure reservoir for fluids that are transported within them.
Pressure drops gradually as blood flows from the major arteries, through the arterioles, the capillaries until blood is pushed up back into the heart via the venules, the veins through the vena cava with the help of the muscles. At any given pressure drop, the flow rate is determined by the resistance to the blood flow. In the arteries, with the absence of diseases, there is very little or no resistance to blood. The vessel diameter is the most principal determinant to control resistance. Compared to other smaller vessels in the body, the artery has a much bigger diameter (4 mm), therefore the resistance is low.
The arm–leg (blood pressure) gradient is the difference between the blood pressure measured in the arms and that measured in the legs. It is normally less than 10 mm Hg, but may be increased in e.g. coarctation of the aorta.
Clinical significance
Pressure monitoring
Hemodynamic monitoring is the observation of hemodynamic parameters over time, such as blood pressure and heart rate. Blood pressure can be monitored either invasively through an inserted blood pressure transducer assembly (providing continuous monitoring), or noninvasively by repeatedly measuring the blood pressure with an inflatable blood pressure cuff.
Hypertension is diagnosed by the presence of arterial blood pressures of 140/90 or greater for two clinical visits.
Pulmonary Artery Wedge Pressure can show if there is congestive heart failure, mitral and aortic valve disorders, hypervolemia, shunts, or cardiac tamponade.
Remote, indirect monitoring of blood flow by laser Doppler
Noninvasive hemodynamic monitoring of eye fundus vessels can be performed by laser Doppler holography with near-infrared light. The eye offers a unique opportunity for the non-invasive exploration of cardiovascular diseases. Laser Doppler imaging by digital holography can measure blood flow in the retina and choroid, whose Doppler responses exhibit a pulse-shaped profile with time. This technique enables non-invasive functional microangiography by high-contrast measurement of Doppler responses from endoluminal blood flow profiles in vessels in the posterior segment of the eye. Differences in blood pressure drive the flow of blood throughout the circulation. The rate of mean blood flow depends on both blood pressure and the hemodynamic resistance to flow presented by the blood vessels.
Glossary
ANH – Acute normovolemic hemodilution
ANHu – Number of units during ANH
BLH – Maximum blood loss possible when ANH is used before homologous blood transfusion is needed
BLI – Incremental blood loss possible with ANH (BLH – BLs)
BLs – Maximum blood loss without ANH before homologous blood transfusion is required
EBV – Estimated blood volume (70 mL/kg)
Hct – Haematocrit, always expressed here as a fraction
Hi – Initial haematocrit
Hm – Minimum safe haematocrit
PRBC – Packed red blood cell equivalent saved by ANH
RCM – Red cell mass
RCMH – Cell mass available for transfusion after ANH
RCMI – Red cell mass saved by ANH
SBL – Surgical blood loss
Etymology and pronunciation
The word hemodynamics uses combining forms of hemo- (which comes from the ancient Greek haima, meaning blood) and dynamics, thus "the dynamics of blood". The vowel of the hemo- syllable is variously written according to the ae/e variation.
See also
Blood hammer
Blood pressure
Cardiac output
Cardiovascular System Dynamics Society
Electrical cardiometry
Esophogeal doppler
Hemodynamics of the aorta
Impedance cardiography
Photoplethysmogram
Laser Doppler imaging
Windkessel effect
Functional near-infrared spectroscopy
Notes and references
Bibliography
Berne RM, Levy MN. Cardiovascular physiology. 7th Ed Mosby 1997
Rowell LB. Human Cardiovascular Control. Oxford University press 1993
Braunwald E (Editor). Heart Disease: A Textbook of Cardiovascular Medicine. 5th Ed. W.B.Saunders 1997
Siderman S, Beyar R, Kleber AG. Cardiac Electrophysiology, Circulation and Transport. Kluwer Academic Publishers 1991
American Heart Association
Otto CM, Stoddard M, Waggoner A, Zoghbi WA. Recommendations for Quantification of Doppler Echocardiography: A Report from the Doppler Quantification Task Force of the Nomenclature and Standards Committee of the American Society of Echocardiography. J Am Soc Echocardiogr 2002;15:167-184
Peterson LH, The Dynamics of Pulsatile Blood Flow, Circ. Res. 1954;2;127-139
Hemodynamic Monitoring, Bigatello LM, George E., Minerva Anestesiol, 2002 Apr;68(4):219-25
Claude Franceschi L'investigation vasculaire par ultrasonographie Doppler Masson 1979 ISBN Nr 2-225-63679-6
Claude Franceschi; Paolo Zamboni Principles of Venous Hemodynamics Nova Science Publishers 2009-01 ISBN Nr 1606924850/9781606924853
Claude Franceschi Venous Insufficiency of the pelvis and lower extremities-Hemodynamic Rationale
WR Milnor: Hemodynamics, Williams & Wilkins, 1982
B Bo Sramek: Systemic Hemodynamics and Hemodynamic Management, 4th Edition, ESBN 1-59196-046-0
External links
Learn hemodynamics
Fluid mechanics
Computational fluid dynamics
Cardiovascular physiology
Exercise physiology
Blood
Mathematics in medicine
Fluid dynamics
Helmholtz free energy
In thermodynamics, the Helmholtz free energy (or Helmholtz energy) is a thermodynamic potential that measures the useful work obtainable from a closed thermodynamic system at a constant temperature (isothermal). The change in the Helmholtz energy during a process is equal to the maximum amount of work that the system can perform in a thermodynamic process in which temperature is held constant. At constant temperature, the Helmholtz free energy is minimized at equilibrium.
In contrast, the Gibbs free energy or free enthalpy is most commonly used as a measure of thermodynamic potential (especially in chemistry) when it is convenient for applications that occur at constant pressure. For example, in explosives research Helmholtz free energy is often used, since explosive reactions by their nature induce pressure changes. It is also frequently used to define fundamental equations of state of pure substances.
The concept of free energy was developed by Hermann von Helmholtz, a German physicist, and first presented in 1882 in a lecture called "On the thermodynamics of chemical processes". From the German word Arbeit (work), the International Union of Pure and Applied Chemistry (IUPAC) recommends the symbol A and the name Helmholtz energy. In physics, the symbol F is also used in reference to free energy or Helmholtz function.
Definition
The Helmholtz free energy is defined as

F ≡ U − TS,
where
F is the Helmholtz free energy (sometimes also called A, particularly in the field of chemistry) (SI: joules, CGS: ergs),
U is the internal energy of the system (SI: joules, CGS: ergs),
T is the absolute temperature (kelvins) of the surroundings, modelled as a heat bath,
S is the entropy of the system (SI: joules per kelvin, CGS: ergs per kelvin).
The Helmholtz energy is the Legendre transformation of the internal energy U, in which temperature replaces entropy as the independent variable.
Formal development
The first law of thermodynamics in a closed system provides

dU = δQ + δW,

where U is the internal energy, δQ is the energy added as heat, and δW is the work done on the system. The second law of thermodynamics for a reversible process yields δQ = T dS. In case of a reversible change, the work done can be expressed as δW = −P dV (ignoring electrical and other non-PV work) and so:

dU = T dS − P dV
Applying the product rule for differentiation to d(TS) = T dS + S dT, it follows

dU = d(TS) − S dT − P dV

and

d(U − TS) = −S dT − P dV.

The definition of F = U − TS allows us to rewrite this as

dF = −S dT − P dV
Because F is a thermodynamic function of state, this relation is also valid for a process (without electrical work or composition change) that is not reversible.
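As a concrete, hedged illustration of these relations (an addition, not part of the original article), the sketch below uses the standard closed-form Helmholtz energy of an ideal monatomic gas and checks numerically that −∂F/∂V reproduces the ideal-gas pressure and that F + TS recovers the internal energy. The choice of helium and the particular values of N, T and V are assumptions made only for the example.

```python
import numpy as np

# Ideal monatomic gas Helmholtz energy (standard closed form, used here as an
# assumed input): F = -N kT [ ln( V / (N lambda^3) ) + 1 ], with thermal
# de Broglie wavelength lambda = h / sqrt(2 pi m kT).
kB, h_planck = 1.380649e-23, 6.62607e-34
m = 6.6464731e-27     # kg, helium-4 atom (assumed example gas)
N = 1e22              # number of atoms (assumed)

def F(T, V):
    lam = h_planck / np.sqrt(2.0 * np.pi * m * kB * T)
    return -N * kB * T * (np.log(V / (N * lam**3)) + 1.0)

T, V = 300.0, 1e-3    # K, m^3 (assumed state point)
dV, dT = 1e-9, 1e-3

P = -(F(T, V + dV) - F(T, V - dV)) / (2.0 * dV)   # P = -(dF/dV)_T
S = -(F(T + dT, V) - F(T - dT, V)) / (2.0 * dT)   # S = -(dF/dT)_V

print(f"P from -dF/dV : {P:.2f} Pa")
print(f"N kT / V      : {N * kB * T / V:.2f} Pa   (ideal-gas law)")
print(f"F + T S       : {F(T, V) + T * S:.4e} J   (equals U = 1.5 N kT = {1.5 * N * kB * T:.4e} J)")
```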
Minimum free energy and maximum work principles
The laws of thermodynamics are only directly applicable to systems in thermal equilibrium. If we wish to describe phenomena like chemical reactions, then the best we can do is to consider suitably chosen initial and final states in which the system is in (metastable) thermal equilibrium. If the system is kept at fixed volume and is in contact with a heat bath at some constant temperature, then we can reason as follows.
Since the thermodynamical variables of the system are well defined in the initial state and the final state, the internal energy increase ΔU, the entropy increase ΔS, and the total amount of work that can be extracted, performed by the system, W, are well defined quantities. Conservation of energy implies

ΔU_bath + ΔU + W = 0.

The volume of the system is kept constant. This means that the volume of the heat bath does not change either, and we can conclude that the heat bath does not perform any work. This implies that the amount of heat that flows into the heat bath is given by

Q_bath = ΔU_bath = −(ΔU + W).

The heat bath remains in thermal equilibrium at temperature T no matter what the system does. Therefore, the entropy change of the heat bath is

ΔS_bath = Q_bath / T = −(ΔU + W) / T.

The total entropy change is thus given by

ΔS_total = ΔS_bath + ΔS = −(ΔU − TΔS + W) / T.

Since the system is in thermal equilibrium with the heat bath in the initial and the final states, T is also the temperature of the system in these states. The fact that the system's temperature does not change allows us to express the numerator as the free energy change of the system:

ΔS_total = −(ΔF + W) / T.

Since the total change in entropy must always be larger or equal to zero, we obtain the inequality

W ≤ −ΔF.

We see that the total amount of work that can be extracted in an isothermal process is limited by the free-energy decrease, and that increasing the free energy in a reversible process requires work to be done on the system. If no work is extracted from the system, then

ΔF ≤ 0,
and thus for a system kept at constant temperature and volume and not capable of performing electrical or other non-PV work, the total free energy during a spontaneous change can only decrease.
This result seems to contradict the equation dF = −S dT − P dV, as keeping T and V constant seems to imply dF = 0, and hence F = constant. In reality there is no contradiction: In a simple one-component system, to which the validity of the equation dF = −S dT − P dV is restricted, no process can occur at constant T and V, since there is a unique P(T, V) relation, and thus T, V, and P are all fixed. To allow for spontaneous processes at constant T and V, one needs to enlarge the thermodynamical state space of the system. In case of a chemical reaction, one must allow for changes in the numbers Nj of particles of each type j. The differential of the free energy then generalizes to

dF = −S dT − P dV + Σj μj dNj,

where the Nj are the numbers of particles of type j and the μj are the corresponding chemical potentials. This equation is then again valid for both reversible and non-reversible changes. In case of a spontaneous change at constant T and V, the last term will thus be negative.
In case there are other external parameters, the above relation further generalizes to

dF = −S dT − Σi Xi dxi + Σj μj dNj.

Here the xi are the external variables, and the Xi the corresponding generalized forces.
Relation to the canonical partition function
A system kept at constant volume, temperature, and particle number is described by the canonical ensemble. The probability of finding the system in some energy eigenstate r (a microstate of the system) is given by

Pr = exp(−β Er) / Z,   with β ≡ 1/(kB T),

where

Er is the energy of accessible state r, and

Z = Σr exp(−β Er).
Z is called the partition function of the system. The fact that the system does not have a unique energy means that the various thermodynamical quantities must be defined as expectation values. In the thermodynamical limit of infinite system size, the relative fluctuations in these averages will go to zero.
The average internal energy of the system is the expectation value of the energy and can be expressed in terms of Z as follows:

U = ⟨E⟩ = Σr Pr Er = −∂(ln Z)/∂β.
If the system is in state r, then the generalized force corresponding to an external variable x is given by

Xr = −∂Er/∂x.

The thermal average of this can be written as

X = Σr Pr Xr = (1/β) ∂(ln Z)/∂x.
Suppose that the system has one external variable x. Then changing the system's temperature parameter by dβ and the external variable by dx will lead to a change in ln Z:

d(ln Z) = (∂ln Z/∂β) dβ + (∂ln Z/∂x) dx = −U dβ + βX dx.

If we write U dβ as

U dβ = d(βU) − β dU,

we get

d(ln Z) = −d(βU) + β dU + βX dx.

This means that the change in the internal energy is given by

dU = (1/β) d(ln Z + βU) − X dx.

In the thermodynamic limit, the fundamental thermodynamic relation should hold:

dU = T dS − X dx.

This then implies that the entropy of the system is given by

S = kB (ln Z + βU) + c,

where c is some constant. The value of c can be determined by considering the limit T → 0. In this limit the entropy becomes S = kB ln Ω0, where Ω0 is the ground-state degeneracy. The partition function in this limit is Ω0 exp(−β U0), where U0 is the ground-state energy. Thus, we see that c = 0 and that

F = U − TS = −kB T ln Z.
Relating free energy to other variables
Combining the definition of Helmholtz free energy

F = U − TS

along with the fundamental thermodynamic relation

dU = T dS − P dV + μ dN,

one can find expressions for entropy, pressure and chemical potential:

S = −(∂F/∂T)V,N,   P = −(∂F/∂V)T,N,   μ = (∂F/∂N)T,V.

These three equations, along with the free energy in terms of the partition function,

F = −kB T ln Z,
allow an efficient way of calculating thermodynamic variables of interest given the partition function and are often used in density of state calculations. One can also do Legendre transformations for different systems. For example, for a system with a magnetic field or potential, it is true that the analogous conjugate quantities are obtained as derivatives of F with respect to the field or potential (for instance, the magnetic moment m = −(∂F/∂B)T,N).
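A minimal numerical sketch of these relations (an addition, not from the original article): for an assumed two-level system with energies 0 and ε, it builds Z, takes F = −kB T ln Z, and verifies that the entropy obtained from S = −∂F/∂T satisfies F = U − TS. Units with kB = 1 are assumed.

```python
import numpy as np

eps = 1.0   # assumed level spacing; energies are 0 and eps, with k_B = 1

def partition_function(T):
    return 1.0 + np.exp(-eps / T)

def helmholtz(T):
    # F = -k_B T ln Z
    return -T * np.log(partition_function(T))

def internal_energy(T):
    # U = <E> = eps * exp(-eps/T) / Z
    return eps * np.exp(-eps / T) / partition_function(T)

T, dT = 0.7, 1e-5
F = helmholtz(T)
U = internal_energy(T)
S = -(helmholtz(T + dT) - helmholtz(T - dT)) / (2.0 * dT)   # S = -dF/dT

print(f"F       = {F:.6f}")
print(f"U - T S = {U - T * S:.6f}   (should equal F)")
```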
Bogoliubov inequality
Computing the free energy is an intractable problem for all but the simplest models in statistical physics. A powerful approximation method is mean-field theory, which is a variational method based on the Bogoliubov inequality. This inequality can be formulated as follows.
Suppose we replace the real Hamiltonian H of the model by a trial Hamiltonian H̃, which has different interactions and may depend on extra parameters that are not present in the original model. If we choose this trial Hamiltonian such that

⟨H⟩ = ⟨H̃⟩,

where both averages are taken with respect to the canonical distribution defined by the trial Hamiltonian H̃, then the Bogoliubov inequality states

F ≤ F̃,

where F is the free energy of the original Hamiltonian, and F̃ is the free energy of the trial Hamiltonian. We will prove this below.
By including a large number of parameters in the trial Hamiltonian and minimizing the free energy, we can expect to get a close approximation to the exact free energy.
The Bogoliubov inequality is often applied in the following way. If we write the Hamiltonian as

H = H_0 + ΔH,

where H_0 is some exactly solvable Hamiltonian, then we can apply the above inequality by defining

H̃ = H_0 + ⟨ΔH⟩_0.

Here we have defined ⟨X⟩_0 to be the average of X over the canonical ensemble defined by H_0. Since H̃ defined this way differs from H_0 by a constant, we have in general

⟨X⟩_0 = ⟨X⟩_H̃,

where ⟨X⟩_H̃ is still the average over H̃, as specified above. Therefore,

⟨H⟩_H̃ = ⟨H_0 + ΔH⟩_0 = ⟨H̃⟩_H̃,

and thus the inequality

F ≤ F̃

holds. The free energy F̃ is the free energy of the model defined by H_0 plus ⟨ΔH⟩_0. This means that

F̃ = F_0 + ⟨ΔH⟩_0,

and thus

F ≤ F_0 + ⟨H − H_0⟩_0.
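The sketch below (not part of the original article) checks the final inequality numerically for an assumed two-spin Ising pair H = −J s1 s2 − h (s1 + s2), taking a non-interacting trial Hamiltonian H0 = −h0 (s1 + s2) and scanning the variational parameter h0. The coupling, field, and temperature values (with kB = 1) are assumptions chosen only for the example.

```python
import itertools
import math

J, h, T = 1.0, 0.3, 1.0    # assumed coupling, field, and temperature (k_B = 1)
beta = 1.0 / T
states = list(itertools.product([-1, 1], repeat=2))

def H(s):
    return -J * s[0] * s[1] - h * (s[0] + s[1])

def H0(s, h0):
    return -h0 * (s[0] + s[1])

def free_energy(energy):
    # F = -k_B T ln Z for the given energy function
    Z = sum(math.exp(-beta * energy(s)) for s in states)
    return -T * math.log(Z)

def trial_average(f, h0):
    # Canonical average taken with respect to the trial Hamiltonian H0.
    weights = [math.exp(-beta * H0(s, h0)) for s in states]
    Z0 = sum(weights)
    return sum(w * f(s) for w, s in zip(weights, states)) / Z0

F_exact = free_energy(H)
for h0 in (0.0, 0.5, 1.0, 1.5):
    F0 = free_energy(lambda s: H0(s, h0))
    bound = F0 + trial_average(lambda s: H(s) - H0(s, h0), h0)
    print(f"h0 = {h0:.1f}   F_exact = {F_exact:.4f}   F0 + <H-H0>_0 = {bound:.4f}   "
          f"bound holds: {F_exact <= bound + 1e-12}")
```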
Proof of the Bogoliubov inequality
For a classical model we can prove the Bogoliubov inequality as follows. We denote the canonical probability distributions for the Hamiltonian and the trial Hamiltonian by and , respectively. From Gibbs' inequality we know that:
holds. To see this, consider the difference between the left hand side and the right hand side. We can write this as:
Since
it follows that:
where in the last step we have used that both probability distributions are normalized to 1.
We can write the inequality as:
where the averages are taken with respect to . If we now substitute in here the expressions for the probability distributions:
and
we get:
Since the averages of and are, by assumption, identical we have:
Here we have used that the partition functions are constants with respect to taking averages and that the free energy is proportional to minus the logarithm of the partition function.
We can easily generalize this proof to the case of quantum mechanical models. We denote the eigenstates of by . We denote the diagonal components of the density matrices for the canonical distributions for and in this basis as:
and
where the are the eigenvalues of
We assume again that the averages of H and in the canonical ensemble defined by are the same:
where
The inequality
still holds as both the and the sum to 1. On the l.h.s. we can replace:
On the right-hand side we can use the inequality
where we have introduced the notation
for the expectation value of the operator Y in the state r. See here for a proof. Taking the logarithm of this inequality gives:
This allows us to write:
The fact that the averages of H and are the same then leads to the same conclusion as in the classical case:
Generalized Helmholtz energy
In the more general case, the mechanical term must be replaced by the product of volume, stress, and an infinitesimal strain:

dF = −S dT + V Σij σij dεij + Σi μi dNi,

where σij is the stress tensor, and εij is the strain tensor. In the case of linear elastic materials that obey Hooke's law, the stress is related to the strain by

σij = Cijkl εkl,

where we are now using Einstein notation for the tensors, in which repeated indices in a product are summed. We may integrate the expression for dF (at constant temperature and particle numbers) to obtain the Helmholtz energy:

F = F0 + (1/2) V Cijkl εij εkl.
Application to fundamental equations of state
The Helmholtz free energy function for a pure substance (together with its partial derivatives) can be used to determine all other thermodynamic properties for the substance. See, for example, the equations of state for water, as given by the IAPWS in their IAPWS-95 release.
Application to training auto-encoders
Hinton and Zemel "derive an objective function for training auto-encoder based on the minimum description length (MDL) principle". "The description length of an input vector using a particular code is the sum of the code cost and reconstruction cost. They define this to be the energy of the code. Given an input vector, they define the energy of a code to be the sum of the code cost and the reconstruction cost." The true expected combined cost is
"which has exactly the form of Helmholtz free energy".
See also
Gibbs free energy and thermodynamic free energy for thermodynamics history overview and discussion of free energy
Grand potential
Enthalpy
Statistical mechanics, which details the Helmholtz energy from the point of view of thermal and statistical physics.
Bennett acceptance ratio for an efficient way to calculate free energy differences and comparison with other methods.
References
Further reading
Atkins' Physical Chemistry, 7th edition, by Peter Atkins and Julio de Paula, Oxford University Press
HyperPhysics: Helmholtz and Gibbs Free Energies
Physical quantities
Hermann von Helmholtz
State functions
Thermodynamic free energy
Stochastic thermodynamics
Overview
When a microscopic machine (e.g. a MEMS device) performs useful work, it generates heat and entropy as a byproduct of the process; however, it is also predicted that this machine will operate in "reverse" or "backwards" over appreciably short periods. That is, heat energy from the surroundings will be converted into useful work. For larger engines, this would be described as a violation of the second law of thermodynamics, as entropy is consumed rather than generated. Loschmidt's paradox states that in a time-reversible system, for every trajectory there exists a time-reversed anti-trajectory. As the entropy production of a trajectory and its time-reversed anti-trajectory are of identical magnitude but opposite sign, then, so the argument goes, one cannot prove that entropy production is positive.
For a long time, exact results in thermodynamics were only possible in linear systems capable of reaching equilibrium, leaving other questions like the Loschmidt paradox unsolved. During the last few decades fresh approaches have revealed general laws applicable to non-equilibrium system which are described by nonlinear equations, pushing the range of exact thermodynamic statements beyond the realm of traditional linear solutions. These exact results are particularly relevant for small systems where appreciable (typically non-Gaussian) fluctuations occur. Thanks to stochastic thermodynamics it is now possible to accurately predict distribution functions of thermodynamic quantities relating to exchanged heat, applied work or entropy production for these systems.
Fluctuation theorem
The mathematical resolution to Loschmidt's paradox is called the (steady state) fluctuation theorem (FT), which is a generalisation of the second law of thermodynamics. The FT shows that as a system gets larger or the trajectory duration becomes longer, entropy-consuming trajectories become more unlikely, and the expected second law behaviour is recovered.
The FT was first put forward by Evans, Cohen and Morriss in 1993, and much of the work done in developing and extending the theorem was accomplished by theoreticians and mathematicians interested in nonequilibrium statistical mechanics.
The first observation and experimental proof of Evans' fluctuation theorem (FT) was performed in 2002 by Wang, Sevick, Mittag, Searles and Evans, using a colloidal particle dragged through water by an optical trap.
Jarzynski equality
Seifert writes:
Jarzynski proved a remarkable relation which allows to express the free energy difference between two equilibrium systems by a nonlinear average over the work required to drive the system in a non-equilibrium process from one state to the other. By comparing probability distributions for the work spent in the original process with the time-reversed one, Crooks found a “refinement” of the Jarzynski relation (JR), now called the Crooks fluctuation theorem. Both this relation and another refinement of the JR, the Hummer-Szabo relation, became particularly useful for determining free energy differences and landscapes of biomolecules. These relations are the most prominent ones within a class of exact results (some of which found even earlier and then rediscovered) valid for non-equilibrium systems driven by time-dependent forces. A close analogy to the JR, which relates different equilibrium states, is the Hatano-Sasa relation that applies to transitions between two different non-equilibrium steady states.
This is shown to be a special case of a more general relation.
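As a hedged numerical illustration of the Jarzynski relation mentioned above (this example is an addition, with all parameter values assumed), the sketch below simulates an overdamped Brownian particle in a harmonic trap whose centre is dragged at constant speed. Because translating the trap leaves its free energy unchanged, ΔF = 0, so the exponential work average should converge to 1 even though the mean work is positive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensionless parameters: k_B T = 1, friction gamma = 1.
kT, gamma, k = 1.0, 1.0, 5.0      # thermal energy, friction, trap stiffness
v, t_total, dt = 1.0, 1.0, 1e-3   # drag speed, protocol duration, time step
n_traj = 5000

steps = int(t_total / dt)
x = rng.normal(0.0, np.sqrt(kT / k), size=n_traj)  # equilibrium start, trap at 0
work = np.zeros(n_traj)

lam = 0.0
for _ in range(steps):
    # Work increment: dW = (dH/dlambda) dlambda = -k (x - lam) * v * dt
    work += -k * (x - lam) * v * dt
    lam += v * dt
    # Overdamped Langevin step (Euler-Maruyama), diffusion D = kT / gamma.
    noise = rng.normal(0.0, np.sqrt(2.0 * kT / gamma * dt), size=n_traj)
    x += -(k / gamma) * (x - lam) * dt + noise

jarzynski_avg = np.mean(np.exp(-work / kT))
print(f"<W>           = {work.mean():.3f}  (mean dissipated work, > 0)")
print(f"<exp(-W/kT)>  = {jarzynski_avg:.3f}  (should be close to exp(-dF/kT) = 1)")
print(f"-kT ln<...>   = {-kT * np.log(jarzynski_avg):.3f}  (estimate of dF = 0)")
```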
Stochastic energetics
History
Seifert writes:
Classical thermodynamics, at its heart, deals with general laws governing the transformations of a system, in particular, those involving the exchange of heat, work and matter with an environment. As a central result, total entropy production is identified that in any such process can never decrease, leading, inter alia, to fundamental limits on the efficiency of heat engines and refrigerators.
The thermodynamic characterisation of systems in equilibrium got its microscopic justification from equilibrium statistical mechanics which states that for a system in contact with a heat bath the probability to find it in any specific microstate is given by the Boltzmann factor. For small deviations from equilibrium, linear response theory allows to express transport properties caused by small external fields through equilibrium correlation functions. On a more phenomenological level, linear irreversible thermodynamics provides a relation between such transport coefficients and entropy production in terms of forces and fluxes. Beyond this linear response regime, for a long time, no universal exact results were available.
During the last 20 years fresh approaches have revealed general laws applicable to non-equilibrium system thus pushing the range of validity of exact thermodynamic statements beyond the realm of linear response deep into the genuine non-equilibrium region. These exact results, which become particularly relevant for small systems with appreciable (typically non-Gaussian) fluctuations, generically refer to distribution functions of thermodynamic quantities like exchanged heat, applied work or entropy production.
Stochastic thermodynamics combines the stochastic energetics introduced by Sekimoto with the idea that entropy can consistently be assigned to a single fluctuating trajectory.
Open research
Quantum stochastic thermodynamics
Stochastic thermodynamics can be applied to driven (i.e. open) quantum systems whenever the effects of quantum coherence can be ignored. The dynamics of an open quantum system is then equivalent to a classical stochastic one. However, this is sometimes at the cost of requiring unrealistic measurements at the beginning and end of a process.
Understanding non-equilibrium quantum thermodynamics more broadly is an important and active area of research. The efficiency of some computing and information theory tasks can be greatly enhanced when using quantum correlated states; quantum correlations can be used not only as a valuable resource in quantum computation, but also in the realm of quantum thermodynamics. New types of quantum devices in non-equilibrium states function very differently to their classical counterparts. For example, it has been theoretically shown that non-equilibrium quantum ratchet systems function far more efficiently than predicted by classical thermodynamics. It has also been shown that quantum coherence can be used to enhance the efficiency of systems beyond the classical Carnot limit. This is because it could be possible to extract work, in the form of photons, from a single heat bath. Quantum coherence can be used in effect to play the role of Maxwell's demon, though the broader information-theory-based interpretation of the second law of thermodynamics is not violated.
Quantum versions of stochastic thermodynamics have been studied for some time and the past few years have seen a surge of interest in this topic. Quantum mechanics involves profound issues around the interpretation of reality (e.g. the Copenhagen interpretation, many-worlds, and de Broglie-Bohm theory are all competing interpretations that try to explain the unintuitive results of quantum theory). It is hoped that by trying to specify the quantum-mechanical definition of work, dealing with open quantum systems, analyzing exactly solvable models, or proposing and performing experiments to test non-equilibrium predictions, important insights into the interpretation of quantum mechanics and the true nature of reality will be gained.
Applications of non-equilibrium work relations, like the Jarzynski equality, have recently been proposed for the purposes of detecting quantum entanglement and of improving optimization problems (minimizing or maximizing a function of several variables, called the cost function) via quantum annealing.
Active baths
Until recently thermodynamics has only considered systems coupled to a thermal bath and, therefore, satisfying Boltzmann statistics. However, some systems do not satisfy these conditions and are far from equilibrium such as living matter, for which fluctuations are expected to be non-Gaussian.
Active particle systems are able to take energy from their environment and drive themselves far from equilibrium. An important example of active matter is constituted by objects capable of self-propulsion. Thanks to this property, they feature a series of novel behaviours that are not attainable by matter at thermal equilibrium, including, for example, swarming and the emergence of other collective properties. A passive particle is considered in an active bath when it is in an environment where a wealth of active particles are present. These particles will exert nonthermal forces on the passive object so that it will experience non-thermal fluctuations and will behave very differently from a passive Brownian particle in a thermal bath. The presence of an active bath can significantly influence the microscopic thermodynamics of a particle. Experiments have suggested that the Jarzynski equality does not hold in some cases due to the presence of non-Boltzmann statistics in active baths. This observation points towards a new direction in the study of non-equilibrium statistical physics and stochastic thermodynamics, where the environment itself is also far from equilibrium.
Active baths are a question of particular importance in biochemistry. For example, biomolecules within cells are coupled with an active bath due to the presence of molecular motors within the cytoplasm, which leads to striking and largely not yet understood phenomena such as the emergence of anomalous diffusion (Barkai et al., 2012). Also, protein folding might be facilitated by the presence of active fluctuations (Harder et al., 2014b) and active matter dynamics could play a central role in several biological functions (Mallory et al., 2015; Shin et al., 2015; Suzuki et al., 2015). It is an open question to what degree stochastic thermodynamics can be applied to systems coupled to active baths.
References
Notes
Citations
Academic references
Press
Statistical mechanics
Thermodynamics
Non-equilibrium thermodynamics
Branches of thermodynamics
Stochastic models
Inertia coupling
In aeronautics, inertia coupling, also referred to as inertial coupling and inertial roll coupling, is a potentially catastrophic phenomenon of high-speed flight in a long, thin aircraft, in which an intentional rotation of the aircraft about one axis prevents the aircraft's design from inhibiting other unintended rotations. The problem became apparent in the 1950s, when the first supersonic jet fighter aircraft and research aircraft were developed with narrow wingspans, and caused the loss of aircraft and pilots before the design features to counter it (e.g. a big enough fin) were understood.
The term "inertia/inertial coupling" has been criticized as misleading, because the phenomenon is not solely an instability of inertial movement, like the Janibekov effect. Instead, the phenomenon arises because aerodynamic forces react too slowly to track an aircraft's orientation. At low speeds and thick air, aerodynamic forces match aircraft translational velocity to orientation, avoiding the dangerous dynamical regime. But at high speeds or thin air, the wing and empennage may not generate sufficient forces and moments to stabilize the aircraft.
Description
Inertia coupling tends to occur in aircraft with a long, slender, high-density fuselage. A simple, yet accurate mental model describing the aircraft's mass distribution is a rhombus of point masses: one large mass fore and aft, and a small one on each wing. The inertia tensor that this distribution generates has large pitch and yaw components and a small roll component, with the yaw component slightly larger than the pitch.
Euler's equations govern the rotation of an aircraft. When ωr, the angular rate of roll, is controlled by the aircraft, then the other rotations must satisfy

Ip dωp/dt = (Iy − Ir) ωy ωr + τp
Iy dωy/dt = (Ir − Ip) ωr ωp + τy,

where the subscripts y, p, and r indicate yaw, pitch, and roll; I is the moment of inertia about an axis; and τ is the external torque from aerodynamic forces about that axis. When aerodynamic forces are absent, this 2-variable system is the equation of a simple harmonic oscillator with frequency ωr √((Iy − Ir)(Ip − Ir)/(Iy Ip)): a rolling Space Shuttle will naturally undergo small oscillations in pitch and yaw.
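A minimal sketch (an addition, with the inertia values and roll rate assumed purely for illustration) that integrates the torque-free form of the equations above at a fixed roll rate and compares the resulting pitch-rate oscillation frequency with the expression quoted above.

```python
import numpy as np

# Assumed illustrative inertias (roll axis much smaller than pitch/yaw) and roll rate.
I_r, I_p, I_y = 1.0, 10.0, 11.0   # roll, pitch, yaw moments of inertia
omega_r = 2.0                      # rad/s, roll rate held fixed by the ailerons

def derivs(state):
    w_p, w_y = state
    dw_p = (I_y - I_r) * w_y * omega_r / I_p   # torque-free: no aerodynamic terms
    dw_y = (I_r - I_p) * omega_r * w_p / I_y
    return np.array([dw_p, dw_y])

# Classical 4th-order Runge-Kutta integration of the pitch/yaw rates.
state = np.array([0.01, 0.0])      # small initial pitch-rate disturbance
dt, t_end = 1e-3, 100.0
pitch_rate = []
for _ in range(int(t_end / dt)):
    k1 = derivs(state)
    k2 = derivs(state + 0.5 * dt * k1)
    k3 = derivs(state + 0.5 * dt * k2)
    k4 = derivs(state + dt * k3)
    state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    pitch_rate.append(state[0])

pitch_rate = np.array(pitch_rate)
crossings = np.sum(np.abs(np.diff(np.sign(pitch_rate))) > 0)
f_measured = crossings / (2.0 * t_end)
f_predicted = omega_r * np.sqrt((I_y - I_r) * (I_p - I_r) / (I_y * I_p)) / (2.0 * np.pi)
print(f"measured  oscillation frequency: {f_measured:.3f} Hz")
print(f"predicted oscillation frequency: {f_predicted:.3f} Hz")
```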
Conversely, when the craft does not roll at all, the only terms on the right-hand side are the aerodynamic torques, which are (at small angles) proportional to the craft's angular orientation to the freestream air. That is, there are natural stiffness constants kp and ky such that an unrolling aircraft experiences restoring torques τp ≈ −kp·(pitch angle to the freestream) and τy ≈ −ky·(yaw angle to the freestream).
In the full case of a rolling aircraft, the connection between orientation and angular velocity is not entirely straightforward, because the aircraft is a rotating reference frame. The roll inherently exchanges yaw for pitch and vice versa. Assuming nonzero roll, time can always be rescaled so that the roll rate equals one. The full equations of the body are then those of two damped, coupled harmonic oscillators; but if the aerodynamic stiffness about either axis is too small, the effective damping is eliminated and the system is unstable.
In dimensional terms (that is, unscaled time), instability requires the roll rate to become comparable to the natural pitch and yaw frequencies set by the aerodynamic stiffnesses; since the roll moment of inertia is small, at least one of the dimensionless inertia couplings is of order one. In thick air, the aerodynamic stiffnesses are too large to matter. But in thin air and at supersonic speeds, they decrease, and may become comparable to the roll rate during a rapid roll.
Techniques to prevent inertial roll coupling include increased directional stability and reduced roll rate. Alternatively, the unstable aircraft dynamics may be mitigated: the unstable modes require time to grow, and a sufficiently short-duration roll at limited angle of attack may allow recovery to a controlled state post-roll.
Early history
In 1948, William Phillips described inertial roll coupling in the context of missiles in an NACA report. However, his predictions appeared primarily theoretical in the case of planes. The violent motions he predicted were first seen in the X-series research aircraft and Century-series fighter aircraft in the early 1950s. Before this time, aircraft tended to have greater width than length, and their mass was generally distributed closer to the center of mass. This was especially true for propeller aircraft, but equally true for early jet fighters as well. The effect became obvious only when aircraft began to sacrifice aerodynamic surface area to reduce drag, and use longer fineness ratios to reduce supersonic drag. Such aircraft were generally much more fuselage-heavy, allowing gyroscopic effects to overwhelm the small control surfaces.
The roll coupling study of the X-3 Stiletto, first flown in 1952, was extremely short but produced valuable data. Abrupt aileron rolls were conducted at Mach 0.92 and 1.05 and produced "disturbing" motions and excessive accelerations and loads.
In 1953, inertial roll coupling nearly killed Chuck Yeager in the X-1A.
Inertial roll coupling was one of three distinct coupling modes that followed one another as the rocket-powered Bell X-2 hit Mach 3.2 during a flight on 27 September 1956, killing pilot Captain Mel Apt. Although simulators had predicted that Apt's maneuvers would produce an uncontrollable flight regime, at the time most pilots did not believe that the simulators accurately modeled the plane's flight characteristics.
The first two production aircraft to experience inertial roll coupling were the F-100 Super Sabre and F-102 Delta Dagger (both first flown in 1953). The F-100 was modified with a larger vertical tail to increase its directional stability. The F-102 was modified to increase wing and tail areas and was fitted with an augmented control system. To enable pilot control during dynamic motion maneuvers the tail area of the F-102A was increased 40%.
In the case of the F-101 Voodoo (first flown in 1954), a stability augmentation system was retrofitted to the A models to help combat this problem.
The Douglas Skyray was not able to incorporate any design changes to control inertial roll coupling and instead had restricted maneuver limits at which coupling effects did not cause problems.
The Lockheed F-104 Starfighter (first flown in 1956) had its stabilator (horizontal tail surface) mounted atop its vertical fin to reduce inertia coupling.
See also
Upset Prevention and Recovery Training
References
Aircraft aerodynamics
Aviation risks
Chuck Yeager
Nernst–Planck equation
The Nernst–Planck equation is a conservation of mass equation used to describe the motion of a charged chemical species in a fluid medium. It extends Fick's law of diffusion for the case where the diffusing particles are also moved with respect to the fluid by electrostatic forces. It is named after Walther Nernst and Max Planck.
Equation
The Nernst–Planck equation is a continuity equation for the time-dependent concentration c of a chemical species:

∂c/∂t + ∇·J = 0,

where J is the flux. It is assumed that the total flux is composed of three elements: diffusion, advection, and electromigration. This implies that the concentration is affected by an ionic concentration gradient ∇c, flow velocity v, and an electric field E:

J = −D ∇c + c v + (D z e / (kB T)) c E,

where D is the diffusivity of the chemical species, z is the valence of ionic species, e is the elementary charge, kB is the Boltzmann constant, and T is the absolute temperature. The electric field may be further decomposed as:

E = −∇φ − ∂A/∂t,

where φ is the electric potential and A is the magnetic vector potential. Therefore, the Nernst–Planck equation is given by:

∂c/∂t = ∇·[ D ∇c − c v + (D z e / (kB T)) c (∇φ + ∂A/∂t) ]
Simplifications
Assuming that the concentration is at equilibrium and the flow velocity is zero, meaning that only the ion species moves, the Nernst–Planck equation takes the form:

∂c/∂t = ∇·[ D ∇c + (D z e / (kB T)) c (∇φ + ∂A/∂t) ]

Rather than a general electric field, if we assume that only the electrostatic component is significant, the equation is further simplified by removing the time derivative of the magnetic vector potential:

∂c/∂t = ∇·[ D ∇c + (D z e / (kB T)) c ∇φ ]

Finally, in units of mol/(m2·s) and the gas constant R, one obtains the more familiar form:

J = −D [ ∇c + (z F / (R T)) c ∇φ ],

where F is the Faraday constant equal to NA e ≈ 96485 C·mol−1; the product of the Avogadro constant and the elementary charge.
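As an illustrative sketch (not part of the original article), the code below takes an assumed one-dimensional potential profile, builds the corresponding Boltzmann equilibrium concentration, and verifies that the Nernst–Planck flux without advection is essentially zero, as expected at equilibrium. The ion parameters and potential shape are assumptions chosen only for the example.

```python
import numpy as np

# Assumed example: a monovalent cation (z = +1) at 298 K in a prescribed
# one-dimensional potential profile phi(x); SI units throughout.
R, T, F_const = 8.314, 298.0, 96485.0     # gas constant, temperature, Faraday constant
z, D = 1, 1e-9                            # valence, diffusivity (m^2/s)
x = np.linspace(0.0, 1e-6, 2001)          # 1 micrometre domain
phi = 0.025 * np.sin(2 * np.pi * x / x[-1])   # assumed potential profile (volts)

# Boltzmann (equilibrium) concentration profile for this potential.
c0 = 1.0                                  # mol/m^3 reference concentration
c = c0 * np.exp(-z * F_const * phi / (R * T))

# Nernst-Planck flux with no advection: J = -D [ dc/dx + (zF/RT) c dphi/dx ]
dcdx = np.gradient(c, x)
dphidx = np.gradient(phi, x)
J = -D * (dcdx + (z * F_const / (R * T)) * c * dphidx)

print(f"max |J|              : {np.max(np.abs(J)):.3e} mol m^-2 s^-1 (vanishes at equilibrium)")
print(f"diffusive flux scale : {np.max(np.abs(D * dcdx)):.3e} mol m^-2 s^-1")
```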
Applications
The Nernst–Planck equation is applied in describing the ion-exchange kinetics in soils. It has also been applied to membrane electrochemistry.
See also
Goldman–Hodgkin–Katz equation
Bioelectrochemistry
References
Walther Nernst
Diffusion
Physical chemistry
Electrochemical equations
Statistical mechanics
Max Planck
Transport phenomena
Electrochemistry
Laws of thermodynamics
The laws of thermodynamics are a set of scientific laws which define a group of physical quantities, such as temperature, energy, and entropy, that characterize thermodynamic systems in thermodynamic equilibrium. The laws also use various parameters for thermodynamic processes, such as thermodynamic work and heat, and establish relationships between them. They state empirical facts that form a basis of precluding the possibility of certain phenomena, such as perpetual motion. In addition to their use in thermodynamics, they are important fundamental laws of physics in general and are applicable in other natural sciences.
Traditionally, thermodynamics has recognized three fundamental laws, simply named by an ordinal identification, the first law, the second law, and the third law. A more fundamental statement was later labelled as the zeroth law after the first three laws had been established.
The zeroth law of thermodynamics defines thermal equilibrium and forms a basis for the definition of temperature: If two systems are each in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.
The first law of thermodynamics states that, when energy passes into or out of a system (as work, heat, or matter), the system's internal energy changes in accordance with the law of conservation of energy.
The second law of thermodynamics states that in a natural thermodynamic process, the sum of the entropies of the interacting thermodynamic systems never decreases. A common corollary of the statement is that heat does not spontaneously pass from a colder body to a warmer body.
The third law of thermodynamics states that a system's entropy approaches a constant value as the temperature approaches absolute zero. With the exception of non-crystalline solids (glasses), the entropy of a system at absolute zero is typically close to zero.
The first and second laws prohibit two kinds of perpetual motion machines, respectively: the perpetual motion machine of the first kind which produces work with no energy input, and the perpetual motion machine of the second kind which spontaneously converts thermal energy into mechanical work.
History
The history of thermodynamics is fundamentally interwoven with the history of physics and the history of chemistry, and ultimately dates back to theories of heat in antiquity. The laws of thermodynamics are the result of progress made in this field over the nineteenth and early twentieth centuries. The first established thermodynamic principle, which eventually became the second law of thermodynamics, was formulated by Sadi Carnot in 1824 in his book Reflections on the Motive Power of Fire. By 1860, as formalized in the works of scientists such as Rudolf Clausius and William Thomson, what are now known as the first and second laws were established. Later, Nernst's theorem (or Nernst's postulate), which is now known as the third law, was formulated by Walther Nernst over the period 1906–1912. While the numbering of the laws is universal today, various textbooks throughout the 20th century have numbered the laws differently. In some fields, the second law was considered to deal with the efficiency of heat engines only, whereas what was called the third law dealt with entropy increases. Gradually, this resolved itself and a zeroth law was later added to allow for a self-consistent definition of temperature. Additional laws have been suggested, but have not achieved the generality of the four accepted laws, and are generally not discussed in standard textbooks.
Zeroth law
The zeroth law of thermodynamics provides for the foundation of temperature as an empirical parameter in thermodynamic systems and establishes the transitive relation between the temperatures of multiple bodies in thermal equilibrium. The law may be stated in the following form:

If two systems are both in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.
Though this version of the law is one of the most commonly stated versions, it is only one of a diversity of statements that are labeled as "the zeroth law". Some statements go further, so as to supply the important physical fact that temperature is one-dimensional and that one can conceptually arrange bodies in a real number sequence from colder to hotter.
These concepts of temperature and of thermal equilibrium are fundamental to thermodynamics and were clearly stated in the nineteenth century. The name 'zeroth law' was invented by Ralph H. Fowler in the 1930s, long after the first, second, and third laws were widely recognized. The law allows the definition of temperature in a non-circular way without reference to entropy, its conjugate variable. Such a temperature definition is said to be 'empirical'.
First law
The first law of thermodynamics is a version of the law of conservation of energy, adapted for thermodynamic processes. In general, the conservation law states that the total energy of an isolated system is constant; energy can be transformed from one form to another, but can be neither created nor destroyed.
For processes that include the transfer of matter, a further statement is needed.
The First Law encompasses several principles:
Conservation of energy, which says that energy can be neither created nor destroyed, but can only change form. A particular consequence of this is that the total energy of an isolated system does not change.
The concept of internal energy and its relationship to temperature. If a system has a definite temperature, then its total energy has three distinguishable components, termed kinetic energy (energy due to the motion of the system as a whole), potential energy (energy resulting from an externally imposed force field), and internal energy. The establishment of the concept of internal energy distinguishes the first law of thermodynamics from the more general law of conservation of energy.
Work is a process of transferring energy to or from a system in ways that can be described by macroscopic mechanical forces acting between the system and its surroundings. The work done by the system can come from its overall kinetic energy, from its overall potential energy, or from its internal energy. For example, when a machine (not a part of the system) lifts a system upwards, some energy is transferred from the machine to the system. The system's energy increases as work is done on the system and in this particular case, the energy increase of the system is manifested as an increase in the system's gravitational potential energy. Work added to the system increases the potential energy of the system.
When matter is transferred into a system, the internal energy and potential energy associated with it are transferred into the new combined system, contributing an amount u ΔM, where u denotes the internal energy per unit mass of the transferred matter, as measured while in the surroundings, and ΔM denotes the amount of transferred mass.

The flow of heat is a form of energy transfer. Heat transfer is the natural process of moving energy to or from a system, other than by work or the transfer of matter. In a diathermal system, the internal energy can only be changed by the transfer of energy as heat:

ΔU = Q.
Combining these principles leads to one traditional statement of the first law of thermodynamics: it is not possible to construct a machine which will perpetually output work without an equal amount of energy input to that machine. Or more briefly, a perpetual motion machine of the first kind is impossible.
Second law
The second law of thermodynamics indicates the irreversibility of natural processes, and in many cases, the tendency of natural processes to lead towards spatial homogeneity of matter and energy, especially of temperature. It can be formulated in a variety of interesting and important ways. One of the simplest is the Clausius statement, that heat does not spontaneously pass from a colder to a hotter body.
It implies the existence of a quantity called the entropy of a thermodynamic system. In terms of this quantity it implies that, when two initially isolated systems are allowed to interact and eventually reach mutual thermodynamic equilibrium, the sum of the entropies of the initially isolated systems is less than or equal to the total entropy of the final combination.

The second law is applicable to a wide variety of processes, both reversible and irreversible. According to the second law, in a reversible heat transfer, an element of heat transferred, δQ, is the product of the temperature T, both of the system and of the sources or destination of the heat, with the increment of the system's conjugate variable, its entropy S:

δQ = T dS.
While reversible processes are a useful and convenient theoretical limiting case, all natural processes are irreversible. A prime example of this irreversibility is the transfer of heat by conduction or radiation. It was known long before the discovery of the notion of entropy that when two bodies, initially of different temperatures, come into direct thermal connection, then heat immediately and spontaneously flows from the hotter body to the colder one.
Entropy may also be viewed as a physical measure concerning the microscopic details of the motion and configuration of a system, when only the macroscopic states are known. Such details are often referred to as disorder on a microscopic or molecular scale, and less often as dispersal of energy. For two given macroscopically specified states of a system, there is a mathematically defined quantity called the 'difference of information entropy between them'. This defines how much additional microscopic physical information is needed to specify one of the macroscopically specified states, given the macroscopic specification of the other – often a conveniently chosen reference state which may be presupposed to exist rather than explicitly stated. A final condition of a natural process always contains microscopically specifiable effects which are not fully and exactly predictable from the macroscopic specification of the initial condition of the process. This is why entropy increases in natural processes – the increase tells how much extra microscopic information is needed to distinguish the initial macroscopically specified state from the final macroscopically specified state. Equivalently, in a thermodynamic process, energy spreads.
Third law
The third law of thermodynamics can be stated as:

A system's entropy approaches a constant value as its temperature approaches absolute zero.
At absolute zero temperature, the system is in the state with the minimum thermal energy, the ground state. The constant value (not necessarily zero) of entropy at this point is called the residual entropy of the system. With the exception of non-crystalline solids (e.g. glass) the residual entropy of a system is typically close to zero. However, it reaches zero only when the system has a unique ground state (i.e., the state with the minimum thermal energy has only one configuration, or microstate). Microstates are used here to describe the probability of a system being in a specific state, as each microstate is assumed to have the same probability of occurring, so macroscopic states with fewer microstates are less probable. In general, entropy is related to the number of possible microstates according to the Boltzmann principle

S = kB ln Ω,

where S is the entropy of the system, kB is the Boltzmann constant, and Ω the number of microstates. At absolute zero there is only 1 microstate possible (Ω = 1 as all the atoms are identical for a pure substance, and as a result all orders are identical as there is only one combination) and S = kB ln 1 = 0.
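A small numerical illustration of the Boltzmann principle (an addition, using an assumed toy system of N two-state spins): a macrostate with a single microstate has zero entropy, while a mixed macrostate does not.

```python
import math

k_B = 1.380649e-23  # J/K

def boltzmann_entropy(omega):
    # S = k_B ln(Omega)
    return k_B * math.log(omega)

N = 100  # assumed toy system: N distinguishable two-state spins
for n_up in (0, 50):
    omega = math.comb(N, n_up)   # number of microstates in this macrostate
    print(f"n_up = {n_up:3d}   Omega = {omega:.3e}   S = {boltzmann_entropy(omega):.3e} J/K")
```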
Onsager relations
The Onsager reciprocal relations have been considered the fourth law of thermodynamics. They describe the relation between thermodynamic flows and forces in non-equilibrium thermodynamics, under the assumption that thermodynamic variables can be defined locally in a condition of local equilibrium. These relations are derived from statistical mechanics under the principle of microscopic reversibility (in the absence of external magnetic fields). Given a set of extensive parameters Xi (energy, mass, entropy, number of particles and so on) and thermodynamic forces Fi (related to their related intrinsic parameters, such as temperature and pressure), the Onsager theorem states that

Lik = Lki,

where i and k index every parameter and its related force, and the

Ji = dXi/dt = Σk Lik Fk

are called the thermodynamic flows.
See also
Chemical thermodynamics
Enthalpy
Entropy production
Ginsberg's theorem (Parody of the laws of thermodynamics)
H-theorem
Statistical mechanics
Table of thermodynamic equations
References
Further reading
Atkins, Peter (2007). Four Laws That Drive the Universe. OUP Oxford.
Goldstein, Martin & Inge F. (1993). The Refrigerator and the Universe. Harvard Univ. Press.
Guggenheim, E.A. (1985). Thermodynamics. An Advanced Treatment for Chemists and Physicists, seventh edition.
Adkins, C. J., (1968) Equilibrium Thermodynamics. McGraw-Hill
External links
Scientific laws
Kugelblitz (astrophysics)
A kugelblitz is a theoretical astrophysical object predicted by general relativity. It is a concentration of heat, light or radiation so intense that its energy forms an event horizon and becomes self-trapped. In other words, if enough radiation is aimed into a region of space, the concentration of energy can warp spacetime so much that it creates a black hole. This would be a black hole whose original mass–energy was in the form of radiant energy rather than matter; however, there is currently no uniformly accepted method of distinguishing black holes by origin.
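As a hedged back-of-the-envelope sketch (not part of the original article), the code below uses the Schwarzschild relation r_s = 2GM/c² with M = E/c² to estimate how much radiant energy would have to be confined within a given radius for its own gravity to form an event horizon; the 1-metre radius is an assumption chosen only for illustration.

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def kugelblitz_energy(radius_m):
    # Energy whose Schwarzschild radius equals radius_m: r_s = 2 G (E / c^2) / c^2
    return radius_m * c**4 / (2.0 * G)

r = 1.0  # assumed example: an event horizon one metre in radius
E = kugelblitz_energy(r)
print(f"energy to confine within {r} m : {E:.2e} J")
print(f"equivalent mass (E / c^2)      : {E / c**2:.2e} kg")
```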
John Archibald Wheeler's 1955 Physical Review paper entitled "Geons" refers to the kugelblitz phenomenon and explores the idea of creating such particles (or toy models of particles) from spacetime curvature.
A study published in Physical Review Letters in 2024 argues that the formation of a kugelblitz is impossible due to dissipative quantum effects like vacuum polarization, which prevent sufficient energy buildup to create an event horizon. The study concludes that such a phenomenon cannot occur in any realistic scenario within our universe.
The kugelblitz phenomenon has been considered a possible basis for interstellar engines (drives) for future black hole starships.
In fiction
A kugelblitz is a major plot point in the third season of the American superhero television series The Umbrella Academy.
A kugelblitz is the home of a major faction in Frederik Pohl's "Gateway" novels.
See also
Bekenstein bound
Micro black hole
References
Black holes
General relativity
Light
Geodynamics
Geodynamics is a subfield of geophysics dealing with dynamics of the Earth. It applies physics, chemistry and mathematics to the understanding of how mantle convection leads to plate tectonics and geologic phenomena such as seafloor spreading, mountain building, volcanoes, earthquakes, and faulting. It also attempts to probe the internal activity by measuring magnetic fields, gravity, and seismic waves, as well as the mineralogy of rocks and their isotopic composition. Methods of geodynamics are also applied to exploration of other planets.
Overview
Geodynamics is generally concerned with processes that move materials throughout the Earth. In the Earth's interior, movement happens when rocks melt or deform and flow in response to a stress field. This deformation may be brittle, elastic, or plastic, depending on the magnitude of the stress and the material's physical properties, especially the stress relaxation time scale. Rocks are structurally and compositionally heterogeneous and are subjected to variable stresses, so it is common to see different types of deformation in close spatial and temporal proximity. When working with geological timescales and lengths, it is convenient to use the continuous medium approximation and equilibrium stress fields to consider the average response to average stress.
Experts in geodynamics commonly use data from geodetic GPS, InSAR, and seismology, along with numerical models, to study the evolution of the Earth's lithosphere, mantle and core.
Work performed by geodynamicists may include:
Modeling brittle and ductile deformation of geologic materials
Predicting patterns of continental accretion and breakup of continents and supercontinents
Observing surface deformation and relaxation due to ice sheets and post-glacial rebound, and making related conjectures about the viscosity of the mantle
Finding and understanding the driving mechanisms behind plate tectonics.
Deformation of rocks
Rocks and other geological materials experience strain according to three distinct modes, elastic, plastic, and brittle depending on the properties of the material and the magnitude of the stress field. Stress is defined as the average force per unit area exerted on each part of the rock. Pressure is the part of stress that changes the volume of a solid; shear stress changes the shape. If there is no shear, the fluid is in hydrostatic equilibrium. Since, over long periods, rocks readily deform under pressure, the Earth is in hydrostatic equilibrium to a good approximation. The pressure on rock depends only on the weight of the rock above, and this depends on gravity and the density of the rock. In a body like the Moon, the density is almost constant, so a pressure profile is readily calculated. In the Earth, the compression of rocks with depth is significant, and an equation of state is needed to calculate changes in density of rock even when it is of uniform composition.
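As a hedged illustration of the constant-density case mentioned above (an addition, with roughly lunar values assumed for the density and radius), the sketch below integrates hydrostatic equilibrium analytically for a uniform self-gravitating sphere and tabulates the resulting pressure profile.

```python
import numpy as np

G = 6.674e-11    # m^3 kg^-1 s^-2
rho = 3340.0     # kg/m^3, assumed constant density (roughly lunar)
R = 1.737e6      # m, assumed radius (roughly lunar)

# For constant density, g(r) = (4/3) pi G rho r; integrating the hydrostatic
# relation dP/dr = -rho g(r) inward from the surface (where P = 0) gives
# P(r) = (2/3) pi G rho^2 (R^2 - r^2).
r = np.linspace(0.0, R, 5)
P = (2.0 / 3.0) * np.pi * G * rho**2 * (R**2 - r**2)

for ri, Pi in zip(r, P):
    print(f"r = {ri / 1e3:7.0f} km   P = {Pi / 1e9:5.2f} GPa")
```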
Elastic
Elastic deformation is always reversible, which means that if the stress field associated with elastic deformation is removed, the material will return to its previous state. Materials only behave elastically when the relative arrangement along the axis being considered of material components (e.g. atoms or crystals) remains unchanged. This means that the magnitude of the stress cannot exceed the yield strength of a material, and the time scale of the stress cannot approach the relaxation time of the material. If stress exceeds the yield strength of a material, bonds begin to break (and reform), which can lead to ductile or brittle deformation.
Ductile
Ductile or plastic deformation happens when the temperature of a system is high enough so that a significant fraction of the material microstates (figure 1) are unbound, which means that a large fraction of the chemical bonds are in the process of being broken and reformed. During ductile deformation, this process of atomic rearrangement redistributes stress and strain towards equilibrium faster than they can accumulate. Examples include bending of the lithosphere under volcanic islands or sedimentary basins, and bending at oceanic trenches. Ductile deformation happens when transport processes such as diffusion and advection that rely on chemical bonds to be broken and reformed redistribute strain about as fast as it accumulates.
Brittle
When strain localizes faster than these relaxation processes can redistribute it, brittle deformation occurs. The mechanism for brittle deformation involves a positive feedback between the accumulation or propagation of defects especially those produced by strain in areas of high strain, and the localization of strain along these dislocations and fractures. In other words, any fracture, however small, tends to focus strain at its leading edge, which causes the fracture to extend.
In general, the mode of deformation is controlled not only by the amount of stress, but also by the distribution of strain and strain associated features. Whichever mode of deformation ultimately occurs is the result of a competition between processes that tend to localize strain, such as fracture propagation, and relaxational processes, such as annealing, that tend to delocalize strain.
Deformation structures
Structural geologists study the results of deformation, using observations of rock, especially the mode and geometry of deformation to reconstruct the stress field that affected the rock over time. Structural geology is an important complement to geodynamics because it provides the most direct source of data about the movements of the Earth. Different modes of deformation result in distinct geological structures, e.g. brittle fracture in rocks or ductile folding.
Thermodynamics
The physical characteristics of rocks that control the rate and mode of strain, such as yield strength or viscosity, depend on the thermodynamic state of the rock and composition. The most important thermodynamic variables in this case are temperature and pressure. Both of these increase with depth, so to a first approximation the mode of deformation can be understood in terms of depth. Within the upper lithosphere, brittle deformation is common because under low pressure rocks have relatively low brittle strength, while at the same time low temperature reduces the likelihood of ductile flow. After the brittle-ductile transition zone, ductile deformation becomes dominant. Elastic deformation happens when the time scale of stress is shorter than the relaxation time for the material. Seismic waves are a common example of this type of deformation. At temperatures high enough to melt rocks, the ductile shear strength approaches zero, which is why shear mode elastic deformation (S-Waves) will not propagate through melts.
Forces
The main motive force behind stress in the Earth is provided by thermal energy from radioisotope decay, friction, and residual heat. Cooling at the surface and heat production within the Earth create a metastable thermal gradient from the hot core to the relatively cool lithosphere. This thermal energy is converted into mechanical energy by thermal expansion. Deeper and hotter rocks often have higher thermal expansion and lower density relative to overlying rocks. Conversely, rock that is cooled at the surface can become less buoyant than the rock below it. Eventually this can lead to a Rayleigh-Taylor instability (Figure 2), or interpenetration of rock on different sides of the buoyancy contrast.
Negative thermal buoyancy of the oceanic plates is the primary cause of subduction and plate tectonics, while positive thermal buoyancy may lead to mantle plumes, which could explain intraplate volcanism. The relative importance of heat production vs. heat loss for buoyant convection throughout the whole Earth remains uncertain and understanding the details of buoyant convection is a key focus of geodynamics.
Methods
Geodynamics is a broad field which combines observations from many different types of geological study into a broad picture of the dynamics of Earth. Close to the surface of the Earth, data includes field observations, geodesy, radiometric dating, petrology, mineralogy, drilling boreholes and remote sensing techniques. However, beyond a few kilometers depth, most of these kinds of observations become impractical. Geologists studying the geodynamics of the mantle and core must rely entirely on remote sensing, especially seismology, and experimentally recreating the conditions found in the Earth in high-pressure, high-temperature experiments (see also the Adams–Williamson equation).
Numerical modeling
Because of the complexity of geological systems, computer modeling is used to test theoretical predictions about geodynamics using data from these sources.
There are two main ways of geodynamic numerical modeling.
Modelling to reproduce a specific observation: This approach aims to answer what causes a specific state of a particular system.
Modelling to produce basic fluid dynamics: This approach aims to answer how a specific system works in general.
Basic fluid dynamics modelling can further be subdivided into instantaneous studies, which aim to reproduce the instantaneous flow in a system due to a given buoyancy distribution, and time-dependent studies, which either aim to reproduce a possible evolution of a given initial condition over time or a statistical (quasi) steady-state of a given system.
See also
Cytherodynamics
References
Bibliography
External links
Geological Survey of Canada - Geodynamics Program
Geodynamics Homepage - JPL/NASA
NASA Planetary geodynamics
Los Alamos National Laboratory–Geodynamics & National Security
Computational Infrastructure for Geodynamics
Geophysics
Geodesy
Plate tectonics
Earnshaw's theorem
Earnshaw's theorem states that a collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the charges. This was first proven by British mathematician Samuel Earnshaw in 1842.
It is usually cited in reference to magnetic fields, but was first applied to electrostatic fields.
Earnshaw's theorem applies to classical inverse-square law forces (electric and gravitational) and also to the magnetic forces of permanent magnets, if the magnets are hard (the magnets do not vary in strength with external fields). Earnshaw's theorem forbids magnetic levitation in many common situations.
If the materials are not hard, Braunbeck's extension shows that materials with relative magnetic permeability greater than one (paramagnetism) are further destabilising, but materials with a permeability less than one (diamagnetic materials) permit stable configurations.
Explanation
Informally, the case of a point charge in an arbitrary static electric field is a simple consequence of Gauss's law. For a particle to be in a stable equilibrium, small perturbations ("pushes") on the particle in any direction should not break the equilibrium; the particle should "fall back" to its previous position. This means that the force field lines around the particle's equilibrium position should all point inward, toward that position. If all of the surrounding field lines point toward the equilibrium point, then the divergence of the field at that point must be negative (i.e. that point acts as a sink). However, Gauss's law says that the divergence of any possible electric force field is zero in free space. In mathematical notation, an electrical force F = −∇U deriving from a potential U will always be divergenceless (satisfy Laplace's equation):

∇·F = −∇²U = 0.
Therefore, there are no local minima or maxima of the field potential in free space, only saddle points. A stable equilibrium of the particle cannot exist and there must be an instability in some direction. This argument may not be sufficient if all the second derivatives of U are null.
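The saddle-point character can be checked numerically. The sketch below (an addition, with an assumed configuration of four equal positive point charges at the corners of a square) evaluates the potential energy of a positive test charge at the centre, estimates the Hessian by finite differences, and shows that its trace vanishes while its eigenvalues have mixed signs, so the point is a saddle rather than a stable minimum.

```python
import numpy as np

# Four equal positive charges at the corners of a square in the z = 0 plane
# (assumed configuration); units chosen so that k q q_test = 1.
charges = np.array([[1.0, 1.0, 0.0], [1.0, -1.0, 0.0],
                    [-1.0, 1.0, 0.0], [-1.0, -1.0, 0.0]])

def potential_energy(p):
    # Electrostatic potential energy of a positive test charge at point p.
    return sum(1.0 / np.linalg.norm(p - q) for q in charges)

def hessian(p, h=1e-3):
    # Second derivatives of the potential energy by central finite differences.
    H = np.zeros((3, 3))
    I = np.eye(3)
    for i in range(3):
        for j in range(3):
            H[i, j] = (potential_energy(p + h * I[i] + h * I[j])
                       - potential_energy(p + h * I[i] - h * I[j])
                       - potential_energy(p - h * I[i] + h * I[j])
                       + potential_energy(p - h * I[i] - h * I[j])) / (4.0 * h * h)
    return H

H = hessian(np.zeros(3))
eigvals = np.linalg.eigvalsh(H)
print("Hessian eigenvalues :", np.round(eigvals, 4))
print("trace (Laplacian)   :", round(np.trace(H), 5), "-> saddle point, not a minimum")
```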
To be completely rigorous, strictly speaking, the existence of a stable point does not require that all neighbouring force vectors point exactly toward the stable point; the force vectors could spiral in toward the stable point, for example. One method for dealing with this invokes the fact that, in addition to the divergence, the curl of any electric field in free space is also zero (in the absence of any magnetic currents).
It is also possible to prove this theorem directly from the force/energy equations for static magnetic dipoles (below). Intuitively, though, it is plausible that if the theorem holds for a single point charge then it would also hold for two opposite point charges connected together. In particular, it would hold in the limit where the distance between the charges is decreased to zero while maintaining the dipole moment – that is, it would hold for an electric dipole. But if the theorem holds for an electric dipole, then it will also hold for a magnetic dipole, since the (static) force/energy equations take the same form for both electric and magnetic dipoles.
As a practical consequence, this theorem also states that there is no possible static configuration of ferromagnets that can stably levitate an object against gravity, even when the magnetic forces are stronger than the gravitational forces.
Earnshaw's theorem has even been proven for the general case of extended bodies, and this is so even if they are flexible and conducting, provided they are not diamagnetic, as diamagnetism constitutes a (small) repulsive force, but no attraction.
There are, however, several exceptions to the rule's assumptions, which allow magnetic levitation.
Loopholes
Earnshaw's theorem has no exceptions for non-moving permanent ferromagnets. However, Earnshaw's theorem does not necessarily apply to moving ferromagnets, certain electromagnetic systems, pseudo-levitation and diamagnetic materials. These can thus seem to be exceptions, though in fact they exploit the constraints of the theorem.
Spin-stabilized magnetic levitation: Spinning ferromagnets (such as the Levitron) can, while spinning, magnetically levitate using only permanent ferromagnets, because the spinning adds stabilizing gyroscopic forces to the system. (The spinning ferromagnet is not a "non-moving ferromagnet".)
Switching the polarity of an electromagnet or system of electromagnets can levitate a system by continuous expenditure of energy. Maglev trains are one application.
Pseudo-levitation constrains the movement of the magnets usually using some form of a tether or wall. This works because the theorem shows only that there is some direction in which there will be an instability. Limiting movement in that direction allows levitation with fewer than the full 3 dimensions available for movement (note that the theorem is proven for 3 dimensions, not 1D or 2D).
Diamagnetic materials are excepted because they exhibit only repulsion against the magnetic field, whereas the theorem requires materials that have both repulsion and attraction. An example of this is the famous levitating frog (see Diamagnetism).
Earnshaw's theorem applies in an inertial reference frame. But it is sometimes more natural to work in a rotating reference frame that contains a fictitious centrifugal force that violates the assumptions of Earnshaw's theorem. Points that are stationary in a rotating reference frame (but moving in an inertial frame) can be absolutely stable or absolutely unstable. For example, in the restricted three-body problem, the effective potential from the fictitious centrifugal force allows the Lagrange points L4 and L5 to lie at local maxima of the effective potential field even if there is only negligible mass at those locations. (Even though these Lagrange points lie at local maxima of the potential field rather than local minima, they are still absolutely stable in a certain parameter regime due to the fictitious velocity-dependent Coriolis force, which is not captured by the scalar potential field.)
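As a rough numerical illustration of the rotating-frame loophole (not from the original article; the mass ratio and probe radius are assumed example values), the sketch below evaluates the effective potential of the circular restricted three-body problem, in units where the gravitational constant, the total mass and the separation of the primaries are all 1, and confirms that the Lagrange point L4 sits at a local maximum of that potential.

```python
import numpy as np

mu = 0.01  # secondary-to-total mass ratio (illustrative value)

def effective_potential(x, y):
    # Gravitational plus centrifugal potential in the co-rotating frame;
    # the primary sits at (-mu, 0) and the secondary at (1 - mu, 0).
    r1 = np.hypot(x + mu, y)
    r2 = np.hypot(x - 1 + mu, y)
    return -(1 - mu) / r1 - mu / r2 - 0.5 * (x**2 + y**2)

# Analytic position of L4: equidistant (distance 1) from both primaries.
x4, y4 = 0.5 - mu, np.sqrt(3) / 2
u_center = effective_potential(x4, y4)

# Probe a small ring around L4; every probed value should be lower than at L4.
angles = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)
u_ring = [effective_potential(x4 + 1e-4 * np.cos(a), y4 + 1e-4 * np.sin(a))
          for a in angles]
print(u_center > max(u_ring))  # expected output: True
```

Stability at L4 then comes from the velocity-dependent Coriolis force, which lies outside the static-potential argument used by Earnshaw's theorem.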
Effect on physics
For quite some time, Earnshaw's theorem posed a startling question of why matter is stable and holds together, since much evidence was found that matter was held together electromagnetically despite the proven instability of static charge configurations. Since Earnshaw's theorem only applies to stationary charges, there were attempts to explain stability of atoms using planetary models, such as Nagaoka's Saturnian model (1904) and Rutherford's planetary model (1911), where the point electrons are circling a positive point charge in the center. Yet, the stability of such planetary models was immediately questioned: electrons have nonzero acceleration when moving along a circle, and hence they would radiate the energy via a non-stationary electromagnetic field. Bohr's model of 1913 formally prohibited this radiation without giving an explanation for its absence.
On the other hand, Earnshaw's theorem only applies to point charges, but not to distributed charges. This led J. J. Thomson in 1904 to his plum pudding model, where the negative point charges (electrons, or "plums") are embedded into a distributed positive charge "pudding", where they could be either stationary or moving along circles; this is a configuration of non-point positive charges (and also of non-stationary negative charges) that is not covered by Earnshaw's theorem. Eventually this led the way to Schrödinger's model of 1926, where the existence of non-radiative states in which the electron is not a point but rather a distributed charge density resolves the above conundrum at a fundamental level: not only was there no contradiction to Earnshaw's theorem, but the resulting charge density and the current density are also stationary, and so is the corresponding electromagnetic field, which no longer radiates energy to infinity. This gave a quantum mechanical explanation of the stability of the atom.
At a more practical level, it can be said that the Pauli exclusion principle and the existence of discrete electron orbitals are responsible for making bulk matter rigid.
Proofs for magnetic dipoles
Introduction
While a more general proof may be possible, three specific cases are considered here. The first case is a magnetic dipole of constant magnitude that has a fixed orientation. The second and third cases are magnetic dipoles where the orientation changes to remain aligned either parallel or antiparallel to the field lines of the external magnetic field. In paramagnetic and diamagnetic materials the dipoles are aligned parallel and antiparallel to the field lines, respectively.
Background
The proofs considered here are based on the following principles.
The energy U of a magnetic dipole with a magnetic dipole moment M in an external magnetic field B is given by U = −M · B = −(MxBx + MyBy + MzBz).
The dipole will only be stably levitated at points where the energy has a minimum. The energy can only have a minimum at points where the Laplacian of the energy is greater than zero. That is, where ∇²U = ∂²U/∂x² + ∂²U/∂y² + ∂²U/∂z² > 0.
Finally, because both the divergence and the curl of a magnetic field are zero (in the absence of current or a changing electric field), the Laplacians of the individual components of a magnetic field are zero. That is, ∇²Bx = ∇²By = ∇²Bz = 0.
This is proven at the very end of this article as it is central to understanding the overall proof.
Summary of proofs
For a magnetic dipole of fixed orientation (and constant magnitude) the energy will be given by U = −M · B = −(MxBx + MyBy + MzBz),
where Mx, My and Mz are constant. In this case the Laplacian of the energy is always zero, ∇²U = 0,
so the dipole can have neither an energy minimum nor an energy maximum. That is, there is no point in free space where the dipole is either stable in all directions or unstable in all directions.
Magnetic dipoles aligned parallel or antiparallel to an external field with the magnitude of the dipole proportional to the external field will correspond to paramagnetic and diamagnetic materials respectively. In these cases the energy will be given by U = −k |B|² = −k (Bx² + By² + Bz²),
where k is a constant greater than zero for paramagnetic materials and less than zero for diamagnetic materials.
In this case, it will be shown that ∇²|B|² ≥ 0,
which, combined with the constant k, shows that paramagnetic materials can have energy maxima but not energy minima and diamagnetic materials can have energy minima but not energy maxima. That is, paramagnetic materials can be unstable in all directions but not stable in all directions and diamagnetic materials can be stable in all directions but not unstable in all directions. Of course, both materials can have saddle points.
Finally, the magnetic dipole of a ferromagnetic material (a permanent magnet) that is aligned parallel or antiparallel to a magnetic field will be given by M = k B/|B|,
so the energy will be given by U = −M · B = −k |B| = −k (Bx² + By² + Bz²)^(1/2),
but this is just the square root of the energy for the paramagnetic and diamagnetic case discussed above and, since the square root function is monotonically increasing, any minimum or maximum in the paramagnetic and diamagnetic case will be a minimum or maximum here as well. There are, however, no known configurations of permanent magnets that stably levitate, so there may be other reasons not discussed here why it is not possible to maintain permanent magnets in orientations antiparallel to magnetic fields (at least not without rotation—see spin-stabilized magnetic levitation).
Detailed proofs
Earnshaw's theorem was originally formulated for electrostatics (point charges) to show that there is no stable configuration of a collection of point charges. The proofs presented here for individual dipoles should be generalizable to collections of magnetic dipoles because they are formulated in terms of energy, which is additive. A rigorous treatment of this topic is, however, currently beyond the scope of this article.
Fixed-orientation magnetic dipole
It will be proven that at all points in free space, ∇²U = 0.
The energy U of the magnetic dipole M in the external magnetic field B is given by U = −M · B = −(MxBx + MyBy + MzBz).
The Laplacian will be ∇²U = −∇²(MxBx + MyBy + MzBz).
Expanding and rearranging the terms (and noting that the dipole M is constant) we have ∇²U = −(Mx ∇²Bx + My ∇²By + Mz ∇²Bz),
but the Laplacians of the individual components of a magnetic field are zero in free space (not counting electromagnetic radiation), so ∇²U = 0,
which completes the proof.
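The algebra above can be checked mechanically for a concrete field. The sketch below (not part of the original article; it assumes sympy and uses the field of a point magnetic dipole as an example of a divergence-free, curl-free field away from its source) confirms that the energy U = −M · B of a fixed-orientation dipole has zero Laplacian.

```python
import sympy as sp

x, y, z = sp.symbols("x y z", real=True)
m0x, m0y, m0z = sp.symbols("m0x m0y m0z", real=True)  # source dipole moment (constant)
Mx, My, Mz = sp.symbols("Mx My Mz", real=True)        # test dipole moment (constant)

# Field of a point dipole at the origin, written as B = -grad(phi) with
# phi = (m0 · r)/r^3; this B has zero divergence and zero curl for r != 0.
r = sp.sqrt(x**2 + y**2 + z**2)
phi = (m0x * x + m0y * y + m0z * z) / r**3
B = [-sp.diff(phi, v) for v in (x, y, z)]

U = -(Mx * B[0] + My * B[1] + Mz * B[2])              # U = -M · B
laplacian_U = sum(sp.diff(U, v, 2) for v in (x, y, z))
print(sp.simplify(laplacian_U))  # expected output: 0
```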
Magnetic dipole aligned with external field lines
The case of a paramagnetic or diamagnetic dipole is considered first. The energy is given by U = −k |B|² = −k (Bx² + By² + Bz²).
Expanding and rearranging terms, ∇²|B|² = 2(|∇Bx|² + |∇By|² + |∇Bz|²) + 2(Bx ∇²Bx + By ∇²By + Bz ∇²Bz),
but since the Laplacian of each individual component of the magnetic field is zero, ∇²|B|² = 2(|∇Bx|² + |∇By|² + |∇Bz|²),
and since the square of a magnitude is always positive, ∇²|B|² ≥ 0.
As discussed above, this means that the Laplacian of the energy of a paramagnetic material can never be positive (no stable levitation) and the Laplacian of the energy of a diamagnetic material can never be negative (no instability in all directions).
Further, because the energy for a dipole of fixed magnitude aligned with the external field will be the square root of the energy above, the same analysis applies.
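A similar spot-check (again not from the original article, using the same kind of assumed point-dipole example field as above) shows that the Laplacian of |B|² is positive at a sample point away from the source, in line with the conclusion that diamagnetic materials can have energy minima while paramagnetic materials cannot.

```python
import sympy as sp

x, y, z = sp.symbols("x y z", real=True)
r = sp.sqrt(x**2 + y**2 + z**2)
phi = z / r**3                        # potential of a unit dipole pointing along +z
B = [-sp.diff(phi, v) for v in (x, y, z)]

B_squared = sum(component**2 for component in B)
laplacian_B_squared = sum(sp.diff(B_squared, v, 2) for v in (x, y, z))

# Evaluate at an arbitrary point away from the origin; the result should be positive.
sample_point = {x: sp.Rational(1, 2), y: sp.Rational(1, 3), z: sp.Rational(1, 4)}
print(laplacian_B_squared.subs(sample_point).evalf())  # expected: a positive number
```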
Laplacian of individual components of a magnetic field
It is proven here that the Laplacian of each individual component of a magnetic field is zero. This shows the need to invoke the properties of magnetic fields that the divergence of a magnetic field is always zero and the curl of a magnetic field is zero in free space. (That is, in the absence of current or a changing electric field.) See Maxwell's equations for a more detailed discussion of these properties of magnetic fields.
Consider the Laplacian of the x component of the magnetic field, ∇²Bx = ∂²Bx/∂x² + ∂²Bx/∂y² + ∂²Bx/∂z².
Because the curl of B is zero, ∂By/∂x = ∂Bx/∂y
and ∂Bz/∂x = ∂Bx/∂z,
so we have ∇²Bx = ∂²Bx/∂x² + ∂/∂y(∂By/∂x) + ∂/∂z(∂Bz/∂x).
But since Bx is continuous, the order of differentiation doesn't matter, giving ∇²Bx = ∂/∂x(∂Bx/∂x + ∂By/∂y + ∂Bz/∂z).
The divergence of B is zero, ∂Bx/∂x + ∂By/∂y + ∂Bz/∂z = 0,
so ∇²Bx = 0.
The Laplacian of the y component of the magnetic field By and the Laplacian of the z component of the magnetic field Bz can be calculated analogously. Alternatively, one can use the identity ∇²B = ∇(∇ · B) − ∇ × (∇ × B),
where both terms in the parentheses vanish.
See also
Electrostatic levitation
Magnetic levitation
References
External links
"Levitation Possible", a discussion of Earnshaw's theorem and its consequences for levitation, along with several ways to levitate with electromagnetic fields
Electrostatics
Eponymous theorems of physics
Levitation
No-go theorems
Electron | The electron (, or in nuclear reactions) is a subatomic particle with a negative one elementary electric charge. Electrons belong to the first generation of the lepton particle family, and are generally thought to be elementary particles because they have no known components or substructure. The electron's mass is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, . Being fermions, no two electrons can occupy the same quantum state, per the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: They can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy.
Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry, and thermal conductivity; they also participate in gravitational, electromagnetic, and weak interactions. Since an electron has charge, it has a surrounding electric field; if that electron is moving relative to an observer, the observer will observe it to generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated.
Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications, such as tribology or frictional charging, electrolysis, electrochemistry, battery technologies, electronics, welding, cathode-ray tubes, photoelectricity, photovoltaic solar panels, electron microscopes, radiation therapy, lasers, gaseous ionization detectors, and particle accelerators.
Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb force interaction between the positive protons within atomic nuclei and the negative electrons without allows the composition of the two known as atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding.
In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms. Irish physicist George Johnstone Stoney named this charge "electron" in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897 during the cathode-ray tube experiment.
Electrons participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance, when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron, except that it carries electrical charge of the opposite sign. When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons.
History
Discovery of effect of electric force
The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity. In his 1600 treatise , the English scientist William Gilbert coined the Neo-Latin term , to refer to those substances with property similar to that of amber which attract small objects after being rubbed. Both electric and electricity are derived from the Latin (also the root of the alloy of the same name), which came from the Greek word for amber, .
Discovery of two kinds of charges
In the early 1700s, French chemist Charles François du Fay found that if a charged gold leaf is repelled by glass rubbed with silk, then the same charged gold leaf is attracted by amber rubbed with wool. From this and other results of similar types of experiments, du Fay concluded that electricity consists of two electrical fluids, vitreous fluid from glass rubbed with silk and resinous fluid from amber rubbed with wool. These two fluids can neutralize each other when combined. American scientist Ebenezer Kinnersley later also independently reached the same conclusion. A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess (+) or deficit (−). He gave them the modern charge nomenclature of positive and negative respectively. Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit.
Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges. Beginning in 1846, German physicist Wilhelm Eduard Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis. However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity".
Stoney initially coined the term electrolion in 1881. Ten years later, he switched to electron to describe these elementary charges, writing in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron". A 1906 proposal to change to electrion failed because Hendrik Lorentz preferred to keep electron. The word electron is a combination of the words electric and ion. The suffix -on which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron.
Discovery of free electrons outside matter
While studying electrical conductivity in rarefied gases in 1859, the German physicist Julius Plücker observed the radiation emitted from the cathode caused phosphorescent light to appear on the tube wall near the cathode; and the region of the phosphorescent light could be moved by application of a magnetic field. In 1869, Plücker's student Johann Wilhelm Hittorf found that a solid body placed in between the cathode and the phosphorescence would cast a shadow upon the phosphorescent region of the tube. Hittorf inferred that there are straight rays emitted from the cathode and that the phosphorescence was caused by the rays striking the tube walls. Furthermore, he also discovered that these rays are deflected by magnets just like lines of current.
In 1876, the German physicist Eugen Goldstein showed that the rays were emitted perpendicular to the cathode surface, which distinguished between the rays that were emitted from the cathode and the incandescent light. Goldstein dubbed the rays cathode rays. Decades of experimental and theoretical research involving cathode rays were important in J. J. Thomson's eventual discovery of electrons. Goldstein also experimented with double cathodes and hypothesized that one ray may repulse another, although he didn't believe that any particles might be involved.
During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode-ray tube to have a high vacuum inside. He then showed in 1874 that the cathode rays can turn a small paddle wheel when placed in their path. Therefore, he concluded that the rays carried momentum. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged. In 1879, he proposed that these properties could be explained by regarding cathode rays as composed of negatively charged gaseous molecules in a fourth state of matter in which the mean free path of the particles is so long that collisions may be ignored.
In 1883, the then little-known German physicist Heinrich Hertz tried to prove that cathode rays are electrically neutral and obtained what he interpreted as a convincing absence of deflection in an electrostatic, as opposed to a magnetic, field. However, as J. J. Thomson explained in 1897, Hertz had placed the deflecting electrodes in a highly conductive region of the tube, resulting in a strong screening effect close to their surface.
The German-born British physicist Arthur Schuster expanded upon Crookes's experiments by placing metal plates parallel to the cathode rays and applying an electric potential between the plates. The field deflected the rays toward the positively charged plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given electric and magnetic field, in 1890 Schuster was able to estimate the charge-to-mass ratio of the ray components. However, this produced a value that was more than a thousand times greater than what was expected, so little credence was given to his calculations at the time. This is because it was assumed that the charge carriers were much heavier hydrogen or nitrogen atoms. Schuster's estimates would subsequently turn out to be largely correct.
In 1892 Hendrik Lorentz suggested that the mass of these particles (electrons) could be a consequence of their electric charge.
While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter. In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays. This evidence strengthened the view that electrons existed as components of atoms.
In 1897, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson, performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier. By 1899 he showed that their charge-to-mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal. Thomson measured m/e for cathode ray "corpuscles", and made good estimates of the charge e, leading to a value for the mass m that was about 1400 times smaller than that of the least massive ion known: hydrogen. In the same year Emil Wiechert and Walter Kaufmann also calculated the e/m ratio but did not take the step of interpreting their results as showing a new particle, while J. J. Thomson would subsequently in 1899 give estimates for the electron charge and mass as well: e ~ and m ~
The name "electron" was adopted for these particles by the scientific community, mainly due to the advocation by G. F. FitzGerald, J. Larmor, and H. A. Lorentz. The term was originally coined by George Johnstone Stoney in 1891 as a tentative name for the basic unit of electrical charge (which had then yet to be discovered).
The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team, using clouds of charged water droplets generated by electrolysis, and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913. However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time.
Around the beginning of the twentieth century, it was found that under certain conditions a fast-moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber so he could photograph the tracks of charged particles, such as fast-moving electrons.
Atomic theory
By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons. In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with their energies determined by the angular momentum of the electron's orbit about the nucleus. The electrons could move between those states, or orbits, by the emission or absorption of photons of specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom. However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms.
Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them. Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics. In 1919, the American chemist Irving Langmuir elaborated on Lewis's static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness". In turn, he divided the shells into a number of cells each of which contained one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table, which were known to largely repeat themselves according to the periodic law.
In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle. The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment. This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting.
Quantum mechanics
In his 1924 dissertation (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light. That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment. The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits thereby creating interference patterns. In 1927, George Paget Thomson and Alexander Reid discovered the interference effect was produced when a beam of electrons was passed through thin celluloid foils and later metal films, and by American physicists Clinton Davisson and Lester Germer by the reflection of electrons from a crystal of nickel. Alexander Reid, who was Thomson's graduate student, performed the first experiments but he died soon after in a motorcycle accident and is rarely mentioned.
De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated. Rather than yielding a solution that determined the location of an electron over time, this wave equation also could be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum. Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen.
In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron – the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field. In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron. This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatrons and using electron as a generic term to describe both the positively and negatively charged variants.
In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference was later called anomalous magnetic dipole moment of the electron. This difference was later explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s.
Particle accelerators
With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles. The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light.
With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968. This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron. The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.
Confinement of individual electrons
Individual electrons can now be easily confined in ultra-small CMOS transistors operated at cryogenic temperature over a range of −269 °C (4 K) to about −258 °C (15 K). The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single particle formalism, by replacing its mass with the effective mass tensor.
Characteristics
Classification
In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first-generation of fundamental particles. The second and third generation contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions because they all have half-odd integer spin; the electron has spin .
Fundamental properties
The invariant mass of an electron is approximately , or . Due to mass–energy equivalence, this corresponds to a rest energy of . The ratio between the mass of a proton and that of an electron is about 1836. Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe.
Electrons have an electric charge of coulombs, which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign. The electron is commonly symbolized by , and the positron is symbolized by .
The electron has an intrinsic angular momentum or spin of . This property is usually stated by referring to the electron as a spin-1/2 particle. For such particles the spin magnitude is , while the result of the measurement of a projection of the spin on any axis can only be ±. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis. It is approximately equal to one Bohr magneton, a physical constant. The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.
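For reference, the Bohr magneton mentioned above can be computed directly from fundamental constants as μB = eħ/2me. A minimal sketch, assuming Python with scipy installed:

```python
from scipy.constants import e, hbar, m_e, physical_constants

mu_B = e * hbar / (2 * m_e)                    # Bohr magneton from its defining formula
print(mu_B)                                    # ~9.274e-24 J/T
print(physical_constants["Bohr magneton"][0])  # CODATA value, for comparison
```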
The electron has no known substructure. Nevertheless, in condensed matter physics, spin–charge separation can occur in some materials. In such cases, electrons 'split' into three independent particles, the spinon, the orbiton and the holon (or chargon). The electron can always be theoretically considered as a bound state of the three, with the spinon carrying the spin of the electron, the orbiton carrying the orbital degree of freedom and the chargon carrying the charge, but in certain conditions they can behave as independent quasiparticles.
The issue of the radius of the electron is a challenging problem of modern theoretical physics. The admission of the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity. Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10−22 meters.
The upper bound of the electron radius of 10−18 meters can be derived using the uncertainty relation in energy. There is also a physical constant called the "classical electron radius", with the much larger value of , greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.
There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation. The experimental lower bound for the electron's mean lifetime is years, at a 90% confidence level.
Quantum properties
As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment.
The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density.
Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, , where the variables r1 and r2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead.
In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit.
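The antisymmetry requirement can be illustrated with a toy model (not part of the original article): two electrons occupying the two lowest particle-in-a-box orbitals, an assumed simplified system, combined into an antisymmetric spatial wave function that changes sign under exchange and vanishes when both coordinates coincide.

```python
import numpy as np

L = 1.0  # box length, arbitrary units

def orbital(n, x):
    # Normalized particle-in-a-box eigenfunction.
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def psi(x1, x2):
    # Antisymmetric combination of orbitals 1 and 2 (a 2x2 Slater determinant).
    return (orbital(1, x1) * orbital(2, x2)
            - orbital(2, x1) * orbital(1, x2)) / np.sqrt(2.0)

x1, x2 = 0.21, 0.67
print(np.isclose(psi(x1, x2), -psi(x2, x1)))  # True: sign flips under exchange
print(np.isclose(psi(0.4, 0.4), 0.0))         # True: zero amplitude at coincidence
```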
Virtual particles
In a simplified picture, which often tends to give the wrong idea but may serve to illustrate some aspects, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate each other shortly thereafter. The combination of the energy variation needed to create these particles and the time during which they exist falls under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, . Thus, for a virtual electron, Δt is at most .
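The time scale quoted above follows from the uncertainty relation. A back-of-the-envelope sketch (not from the original article; taking ΔE to be one electron rest energy is an assumption made only to reproduce the order of magnitude), assuming scipy is available:

```python
from scipy.constants import hbar, m_e, c

delta_E = m_e * c**2       # energy "borrowed" to create a virtual electron, in joules
delta_t = hbar / delta_E   # corresponding maximum lifetime from ΔE·Δt ~ ħ
print(delta_t)             # ~1.3e-21 s
```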
While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium having a dielectric permittivity more than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron. This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator. Virtual particles cause a comparable shielding effect for the mass of the electron.
The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment). The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.
The apparent paradox in classical physics of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons can heuristically be thought of as causing the electron to shift about in a jittery fashion (known as zitterbewegung), which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron. In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines. The Compton Wavelength shows that near elementary particles such as the electron, the uncertainty of the energy allows for the creation of virtual particles near the electron. This wavelength explains the "static" of virtual particles around elementary particles at a close distance.
Interaction
An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in nonrelativistic approximation is determined by Coulomb's inverse square law. When an electron is in motion, it generates a magnetic field. The Ampère–Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor. The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic).
When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation. The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac Force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself.
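As a worked example of the gyroradius (not part of the original article; the speed and field strength are assumed illustrative values), the non-relativistic radius r = mev/(eB) can be evaluated with scipy's physical constants:

```python
from scipy.constants import m_e, e, c

v = 0.01 * c                   # electron speed: 1% of the speed of light (example value)
B = 1e-3                       # magnetic flux density: 1 millitesla (example value)
gyroradius = m_e * v / (e * B)
print(gyroradius)              # ~1.7e-2 m, i.e. a few centimetres
```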
Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force. Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The deceleration of the electron results in the emission of Bremsstrahlung radiation.
An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift. The maximum magnitude of this wavelength shift is h/mec, which is known as the Compton wavelength. For an electron, it has a value of . When the wavelength of the light is long (for instance, the wavelength of the visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering.
The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α = e²/4πε0ħc, which is approximately equal to 1/137.
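Both quantities from the two preceding paragraphs, the Compton wavelength h/mec and the fine-structure constant α = e²/4πε0ħc, are easy to evaluate numerically. A minimal sketch assuming scipy is available:

```python
from math import pi
from scipy.constants import h, hbar, m_e, c, e, epsilon_0

compton_wavelength = h / (m_e * c)
alpha = e**2 / (4 * pi * epsilon_0 * hbar * c)

print(compton_wavelength)    # ~2.43e-12 m
print(alpha, 1 / alpha)      # ~7.3e-3, i.e. roughly 1/137
```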
When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV. On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.
In the theory of electroweak interaction, the left-handed component of electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a exchange, and this is responsible for neutrino–electron elastic scattering.
Atoms and molecules
An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus's electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital. Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exist around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number.
Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential. Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect. To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.
The orbital angular momentum of electrons is quantized. Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital, called paired electrons, cancel each other out.
The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics. The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules. Within a molecule, electrons move under the influence of several nuclei, and occupy molecular orbitals; much as they can occupy atomic orbitals in isolated atoms. A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distribution of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei.
Conductivity
If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect.
Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass. When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations.
At a given temperature, each material has an electrical conductivity that determines the value of electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation. On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called Fermi gas) through the material much like free electrons.
Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed. This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material.
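The contrast between the slow drift of individual electrons and the fast propagation of the signal can be made concrete with a rough worked example (not from the original article; the current, wire size, and copper-like carrier density of about 8.5 × 10^28 electrons per cubic metre are assumed values):

```python
from scipy.constants import e

current = 10.0       # amperes (example value)
area = 1e-6          # wire cross-section: 1 mm^2
n = 8.5e28           # assumed free-electron density of copper, per cubic metre

drift_velocity = current / (n * e * area)   # v = I / (n e A)
print(drift_velocity)                       # ~7e-4 m/s, i.e. of order a millimetre per second
```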
Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law, which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electric current.
When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, pairs of electrons called Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance. (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.) However, the mechanism by which higher temperature superconductors operate remains uncertain.
Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, orbitons and holons. The first carries the spin and magnetic moment, the second carries the orbital location, and the third carries the electrical charge.
Motion and energy
According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in vacuum, c. However, when relativistic electrons—that is, electrons moving at a speed close to c—are injected into a dielectric medium such as water, where the local speed of light is significantly less than c, the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation.
The effects of special relativity are based on a quantity known as the Lorentz factor, defined as where v is the speed of the particle. The kinetic energy Ke of an electron moving with velocity v is:
where me is the mass of electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV.
Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p where h is the Planck constant and p is the momentum. For the 51 GeV electron above, the wavelength is about , small enough to explore structures well below the size of an atomic nucleus.
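The figures in the two preceding paragraphs can be reproduced directly (a sketch, not part of the original article; it assumes scipy and uses the ultra-relativistic approximation p ≈ E/c for the 51 GeV electron):

```python
from scipy.constants import h, c, m_e, e

energy = 51e9 * e                  # 51 GeV expressed in joules
lorentz_factor = energy / (m_e * c**2)
de_broglie = h * c / energy        # λ = h/p with p ≈ E/c

print(lorentz_factor)              # ~1e5
print(de_broglie)                  # ~2.4e-17 m, far below the size of an atomic nucleus
```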
Formation
The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe. For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron–electron pairs annihilated each other and emitted energetic photons:
γ + γ ↔ e+ + e−
An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe.
For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron–positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe. The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes. Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process,
n → p + e− + ν̄e
For about the next –, the excess electrons remained too energetic to bind with atomic nuclei. What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.
Roughly one million years after the big bang, the first generation of stars began to form. Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus. An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60.
At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole. According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants.
When a pair of virtual particles (such as an electron and positron) is created in the vicinity of the event horizon, random spatial positioning might result in one of them appearing on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space. In exchange, the other member of the pair is given negative energy, which results in a net loss of mass–energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.
Cosmic rays are particles traveling through space with high energies. Energy events as high as have been recorded. When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions. More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion.
π− → μ− + ν̄μ
A muon, in turn, can decay to form an electron or positron.
μ− → e− + ν̄e + νμ
Observation
Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy through bremsstrahlung. An electron gas can undergo plasma oscillations, which are waves caused by synchronized variations in electron density; these produce energy emissions that can be detected with radio telescopes.
The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct dark lines appear in the spectrum of transmitted radiation in places where the corresponding frequency is absorbed by the atom's electrons. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. Spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.
In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge. The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months. The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.
The first video images of an electron's energy distribution were captured by a team at Lund University in Sweden, February 2008. The scientists used extremely short flashes of light, called attosecond pulses, which allowed an electron's motion to be observed for the first time.
The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material.
Plasma applications
Particle beams
Electron beams are used in welding. They allow energy densities up to across a narrow focus diameter of and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.
Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer. This technique is limited by high costs, slow performance, the need to operate the beam in a vacuum, and the tendency of electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits.
Electron beam processing is used to irradiate materials in order to change their physical properties or to sterilize medical and food products. Under intense irradiation, electron beams can fluidise or quasi-melt glasses without a significant rise in temperature: intensive electron irradiation lowers the viscosity by many orders of magnitude and reduces its activation energy stepwise.
Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays.
Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam—a process known as the Sokolov–Ternov effect. Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. Once the particles have been accelerated to the required energies, electron and positron beams are collided; particle detectors observe the resulting energy emissions, which are studied in particle physics.
Imaging
Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV. The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°.
The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material. In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm. By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential. The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms. This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain.
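As a hedged sketch of the figure quoted above, the following Python snippet evaluates the relativistic de Broglie wavelength of an electron accelerated through a potential V, using pc = sqrt(E_k² + 2 E_k m c²); the rest energy and hc values are standard constants, and 100 kV reproduces roughly 0.0037 nm.

```python
import math

# Minimal sketch: relativistic de Broglie wavelength of an electron
# accelerated through a potential V, using pc = sqrt(E_k^2 + 2 E_k m c^2).
h_c = 1239.841_98          # h*c in eV·nm
m_c2 = 510_998.95          # electron rest energy in eV

def electron_wavelength_nm(volts):
    E_k = volts            # kinetic energy in eV for a charge of 1 e
    pc = math.sqrt(E_k**2 + 2.0 * E_k * m_c2)   # momentum times c, in eV
    return h_c / pc        # wavelength in nm

print(f"{electron_wavelength_nm(100_000):.4f} nm")  # ≈ 0.0037 nm at 100 kV
```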
Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors, with a beam of electrons passing through a slice of material and then being projected by lenses onto a photographic slide or a charge-coupled device. Scanning electron microscopes raster a finely focused electron beam across the studied sample, as in a TV set, to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface.
Other applications
In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FELs can emit coherent, high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices are used in manufacturing, communication, and in medical applications, such as soft tissue surgery.
Electrons are important in cathode-ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets. In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse. Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.
Electrical energy

Electrical energy is energy related to forces on electrically charged particles and the movement of those particles (often electrons in wires, but not always). This energy is supplied by the combination of current and electric potential (often referred to as voltage, because electric potential is measured in volts) that is delivered by a circuit (e.g., provided by an electric power utility). Motion (current) is not required: a voltage difference in combination with charged particles, as in static electricity or a charged capacitor, also represents electrical energy, which when released is typically converted to another form of energy (e.g., thermal, motion, sound, light, radio waves).
Electrical energy is usually sold by the kilowatt hour (1 kW·h = 3.6 MJ), which is the product of the power in kilowatts and the running time in hours. Electric utilities measure energy using an electricity meter, which keeps a running total of the electric energy delivered to a customer.
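As a minimal illustration (the power rating and running time below are made-up values), the following Python lines show the power-times-time product a meter accumulates and the kW·h to MJ conversion stated above.

```python
# Minimal sketch: energy delivered as power × time, and the kW·h ↔ MJ conversion.
power_kw = 2.0          # hypothetical appliance drawing 2 kW
hours = 3.5             # hypothetical running time

energy_kwh = power_kw * hours        # what a utility meter accumulates
energy_mj = energy_kwh * 3.6         # 1 kW·h = 3.6 MJ

print(f"{energy_kwh:.1f} kWh = {energy_mj:.1f} MJ")  # 7.0 kWh = 25.2 MJ
```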
Electric heating is an example of converting electrical energy into another form of energy, heat. The simplest and most common type of electric heater uses electrical resistance to convert the energy. There are other ways to use electrical energy. In computers for example, tiny amounts of electrical energy are rapidly moving into, out of, and through millions of transistors, where the energy is both moving (current through a transistor) and non-moving (electric charge on the gate of a transistor which controls the current going through).
Electricity generation
Electricity generation is the process of generating electrical energy from other forms of energy.
The fundamental principle of electricity generation was discovered during the 1820s and early 1830s by the British scientist Michael Faraday. His basic method is still used today: electric current is generated by the movement of a loop of wire, or disc of copper between the poles of a magnet.
For electrical utilities, it is the first step in the delivery of electricity to consumers. The other processes (electricity transmission, distribution, and electrical energy storage and recovery using pumped-storage methods) are normally carried out by the electric power industry.
Electricity is most often generated at a power station by electromechanical generators, primarily driven by heat engines fueled by chemical combustion or nuclear fission but also by other means such as the kinetic energy of flowing water and wind. There are many other technologies that can be and are used to generate electricity such as solar photovoltaics and geothermal power.
Moment of inertia

The moment of inertia, otherwise known as the mass moment of inertia, angular/rotational mass, second moment of mass, or most accurately, rotational inertia, of a rigid body is defined relative to a rotational axis. It is the ratio between the torque applied and the resulting angular acceleration about that axis. It plays the same role in rotational motion as mass does in linear motion. A body's moment of inertia about a particular axis depends both on the mass and its distribution relative to the axis, increasing with both the mass and its distance from the axis.
It is an extensive (additive) property: for a point mass the moment of inertia is simply the mass times the square of the perpendicular distance to the axis of rotation. The moment of inertia of a rigid composite system is the sum of the moments of inertia of its component subsystems (all taken about the same axis). Its simplest definition is the second moment of mass with respect to distance from an axis.
For bodies constrained to rotate in a plane, only their moment of inertia about an axis perpendicular to the plane, a scalar value, matters. For bodies free to rotate in three dimensions, their moments can be described by a symmetric 3-by-3 matrix, with a set of mutually perpendicular principal axes for which this matrix is diagonal and torques around the axes act independently of each other.
In mechanical engineering, simply "inertia" is often used to refer to "inertial mass" or "moment of inertia".
Introduction
When a body is free to rotate around an axis, torque must be applied to change its angular momentum. The amount of torque needed to cause any given angular acceleration (the rate of change in angular velocity) is proportional to the moment of inertia of the body. Moments of inertia may be expressed in units of kilogram metre squared (kg·m2) in SI units and pound-foot-second squared (lbf·ft·s2) in imperial or US units.
The moment of inertia plays the role in rotational kinetics that mass (inertia) plays in linear kinetics—both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation, and will vary depending on the chosen axis. For a point-like mass, the moment of inertia about some axis is given by , where is the distance of the point from the axis, and is the mass. For an extended rigid body, the moment of inertia is just the sum of all the small pieces of mass multiplied by the square of their distances from the axis in rotation. For an extended body of a regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape and total mass of the object.
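A minimal sketch of the point-mass rule just described, with made-up masses and distances: each contribution is the mass times the square of its perpendicular distance from the axis.

```python
# Minimal sketch: moment of inertia of point masses about a chosen axis,
# I = Σ m_i r_i², with r_i the perpendicular distance of each mass to the axis.
# The masses and positions below are made-up illustrative values.
masses = [2.0, 1.0, 0.5]                 # kg
distances = [0.10, 0.25, 0.40]           # m, perpendicular distance from the axis

I = sum(m * r**2 for m, r in zip(masses, distances))
print(f"I = {I:.4f} kg·m²")              # 0.1625 kg·m²
```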
In 1673, Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term moment of inertia ("momentum inertiae" in Latin) was introduced by Leonhard Euler in his book Theoria motus corporum solidorum seu rigidorum in 1765, and it is incorporated into Euler's second law.
The natural frequency of oscillation of a compound pendulum is obtained from the ratio of the torque imposed by gravity on the mass of the pendulum to the resistance to acceleration defined by the moment of inertia. Comparison of this natural frequency to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for moment of inertia of an extended body.
The moment of inertia also appears in momentum, kinetic energy, and in Newton's laws of motion for a rigid body as a physical parameter that combines its shape and mass. There is an interesting difference in the way moment of inertia appears in planar and spatial movement. Planar movement has a single scalar that defines the moment of inertia, while for spatial movement the same calculations yield a 3 × 3 matrix of moments of inertia, called the inertia matrix or inertia tensor.
The moment of inertia of a rotating flywheel is used in a machine to resist variations in applied torque to smooth its rotational output. The moment of inertia of an airplane about its longitudinal, horizontal and vertical axes determine how steering forces on the control surfaces of its wings, elevators and rudder(s) affect the plane's motions in roll, pitch and yaw.
Definition
The moment of inertia is defined as the product of the mass of a section and the square of the distance between the reference axis and the centroid of the section.
The moment of inertia is also defined as the ratio of the net angular momentum L of a system to its angular velocity ω around a principal axis, that is I = L/ω.
If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase. This occurs when spinning figure skaters pull in their outstretched arms or divers curl their bodies into a tuck position during a dive, to spin faster.
If the shape of the body does not change, then its moment of inertia appears in Newton's law of motion as the ratio of an applied torque τ on a body to the angular acceleration α around a principal axis, that is I = τ/α.
For a simple pendulum, this definition yields a formula for the moment of inertia in terms of the mass m of the pendulum and its distance r from the pivot point as I = m r².
Thus, the moment of inertia of the pendulum depends on both the mass of a body and its geometry, or shape, as defined by the distance to the axis of rotation.
This simple formula generalizes to define moment of inertia for an arbitrarily shaped body as the sum of all the elemental point masses, each multiplied by the square of its perpendicular distance to the axis. An arbitrary object's moment of inertia thus depends on the spatial distribution of its mass.
In general, given an object of mass m, an effective radius k can be defined, dependent on a particular axis of rotation, with such a value that its moment of inertia around the axis is I = m k²,
where k is known as the radius of gyration around the axis.
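As a small follow-on sketch (values are illustrative), the radius of gyration can be recovered from a known moment of inertia and total mass by inverting I = m k².

```python
import math

# Minimal sketch: the radius of gyration k packages a body's moment of inertia
# as I = m k², so k = sqrt(I / m). The values below are illustrative.
I = 0.1625       # kg·m², e.g. from the point-mass sum shown earlier
m = 3.5          # kg, total mass

k = math.sqrt(I / m)
print(f"k = {k:.3f} m")   # ≈ 0.215 m
```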
Examples
Simple pendulum
Mathematically, the moment of inertia of a simple pendulum is the ratio of the torque due to gravity about the pivot of a pendulum to its angular acceleration about that pivot point. For a simple pendulum this is found to be the product of the mass m of the particle with the square of its distance r to the pivot, that is I = m r².
This can be shown as follows: The force of gravity on the mass of a simple pendulum generates a torque around the axis perpendicular to the plane of the pendulum movement. Here is the distance vector from the torque axis to the pendulum center of mass, and is the net force on the mass. Associated with this torque is an angular acceleration, , of the string and mass around this axis. Since the mass is constrained to a circle the tangential acceleration of the mass is . Since the torque equation becomes:
where is a unit vector perpendicular to the plane of the pendulum. (The second to last step uses the vector triple product expansion with the perpendicularity of and .) The quantity is the moment of inertia of this single mass around the pivot point.
The quantity also appears in the angular momentum of a simple pendulum, which is calculated from the velocity of the pendulum mass around the pivot, where is the angular velocity of the mass about the pivot point. This angular momentum is given by
using a similar derivation to the previous equation.
Similarly, the kinetic energy of the pendulum mass is defined by the velocity of the pendulum around the pivot to yield
This shows that the quantity is how mass combines with the shape of a body to define rotational inertia. The moment of inertia of an arbitrarily shaped body is the sum of the values for all of the elements of mass in the body.
Compound pendulums
A compound pendulum is a body formed from an assembly of particles of continuous shape that rotates rigidly around a pivot. Its moment of inertia is the sum of the moments of inertia of each of the particles that it is composed of. The natural frequency of a compound pendulum depends on its moment of inertia about the pivot, I_P, through ω_n = √(m·g·r / I_P),
where m is the mass of the object, g is the local acceleration of gravity, and r is the distance from the pivot point to the center of mass of the object. Measuring this frequency of oscillation over small angular displacements provides an effective way of measuring the moment of inertia of a body.
Thus, to determine the moment of inertia of the body, simply suspend it from a convenient pivot point so that it swings freely in a plane perpendicular to the direction of the desired moment of inertia, then measure its natural frequency or period of oscillation to obtain I_P = m·g·r / ω_n² = m·g·r·T² / (4π²),
where T is the period (duration) of oscillation (usually averaged over multiple periods).
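A minimal sketch of this measurement procedure, assuming the pendulum relation I_P = m g r (T/2π)² given above; the mass, pivot distance and period below are made-up example measurements.

```python
import math

# Minimal sketch: inferring a body's moment of inertia about a pivot from the
# measured period T of small oscillations, I_P = m g r (T / 2π)².
m = 1.2        # kg, mass of the body (made-up measurement)
g = 9.81       # m/s², local gravity
r = 0.30       # m, pivot-to-centre-of-mass distance (made-up measurement)
T = 1.25       # s, measured small-angle period (made-up measurement)

I_pivot = m * g * r * (T / (2.0 * math.pi))**2
print(f"I about the pivot ≈ {I_pivot:.4f} kg·m²")
```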
Center of oscillation
A simple pendulum that has the same natural frequency as a compound pendulum defines the length L from the pivot to a point called the center of oscillation of the compound pendulum. This point also corresponds to the center of percussion. The length L is determined from the formula ω_n = √(g/L),
or L = g/ω_n² = I_P/(m·r).
The seconds pendulum, which provides the "tick" and "tock" of a grandfather clock, takes one second to swing from side-to-side. This is a period of two seconds, or a natural frequency of π rad/s (0.5 Hz) for the pendulum. In this case, the distance to the center of oscillation, L, can be computed to be L = g/ω_n² ≈ (9.81 m/s²)/(π rad/s)² ≈ 0.99 m.
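The same arithmetic as a short Python check (g is taken as a typical 9.81 m/s²; as the next paragraph notes, the result shifts with local gravity):

```python
import math

# Minimal sketch: centre-of-oscillation distance for the seconds pendulum,
# L = g / ω_n² with ω_n = 2π / T and T = 2 s.
g = 9.81                      # m/s², a typical local value
T = 2.0                       # s, full period of a seconds pendulum
omega_n = 2.0 * math.pi / T   # = π rad/s

L = g / omega_n**2
print(f"L ≈ {L:.3f} m")       # ≈ 0.994 m
```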
Notice that the distance to the center of oscillation of the seconds pendulum must be adjusted to accommodate different values for the local acceleration of gravity. Kater's pendulum is a compound pendulum that uses this property to measure the local acceleration of gravity, and is called a gravimeter.
Measuring moment of inertia
The moment of inertia of a complex system such as a vehicle or airplane around its vertical axis can be measured by suspending the system from three points to form a trifilar pendulum. A trifilar pendulum is a platform supported by three wires designed to oscillate in torsion around its vertical centroidal axis. The period of oscillation of the trifilar pendulum yields the moment of inertia of the system.
Moment of inertia of area
Moment of inertia of area is also known as the second moment of area.
These calculations are commonly used in civil engineering for the structural design of beams and columns. Cross-sectional areas are calculated for the vertical moment about the x-axis and the horizontal moment about the y-axis.
Height (h) and breadth (b) are the linear measures, except for circles, which are effectively described by the half-breadth, the radius r = b/2.
Sectional area moments about the horizontal centroidal axis are calculated as follows (a short numerical check follows the list):
Square: I = b⁴/12
Rectangular: I_x = b·h³/12 and I_y = h·b³/12
Triangular: I = b·h³/36
Circular: I = π·r⁴/4 = π·b⁴/64
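A short numerical check of these formulas, with a hypothetical 100 mm × 300 mm rectangular section as the example; the function names are ad hoc.

```python
import math

# Minimal sketch of the standard centroidal second moments of area listed above.
# b = breadth, h = height, r = radius; all about the horizontal centroidal axis.
def I_square(b):        return b**4 / 12.0
def I_rectangle(b, h):  return b * h**3 / 12.0          # about the x-axis
def I_triangle(b, h):   return b * h**3 / 36.0
def I_circle(r):        return math.pi * r**4 / 4.0

print(f"{I_rectangle(0.10, 0.30):.3e} m^4")   # hypothetical 100 mm × 300 mm beam section
```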
Motion in a fixed plane
Point mass
The moment of inertia about an axis of a body is calculated by summing m r² for every particle in the body, where r is the perpendicular distance to the specified axis. To see how moment of inertia arises in the study of the movement of an extended body, it is convenient to consider a rigid assembly of point masses. (This equation can be used for axes that are not principal axes provided that it is understood that this does not fully describe the moment of inertia.)
Consider the kinetic energy of an assembly of masses that lie at the distances from the pivot point , which is the nearest point on the axis of rotation. It is the sum of the kinetic energy of the individual masses,
This shows that the moment of inertia of the body is the sum of each of the terms, that is
Thus, moment of inertia is a physical property that combines the mass and distribution of the particles around the rotation axis. Notice that rotation about different axes of the same body yield different moments of inertia.
The moment of inertia of a continuous body rotating about a specified axis is calculated in the same way, except with infinitely many point particles. Thus the limits of summation are removed, and the sum is written as follows:
Another expression replaces the summation with an integral,
Here, the function gives the mass density at each point , is a vector perpendicular to the axis of rotation and extending from a point on the rotation axis to a point in the solid, and the integration is evaluated over the volume of the body . The moment of inertia of a flat surface is similar with the mass density being replaced by its areal mass density with the integral evaluated over its area.
Note on second moment of area: The moment of inertia of a body moving in a plane and the second moment of area of a beam's cross-section are often confused. The moment of inertia of a body with the shape of the cross-section is the second moment of this area about the -axis perpendicular to the cross-section, weighted by its density. This is also called the polar moment of the area, and is the sum of the second moments about the - and -axes. The stresses in a beam are calculated using the second moment of the cross-sectional area around either the -axis or -axis depending on the load.
Examples
The moment of inertia of a compound pendulum constructed from a thin disc mounted at the end of a thin rod that oscillates around a pivot at the other end of the rod, begins with the calculation of the moment of inertia of the thin rod and thin disc about their respective centers of mass.
The moment of inertia of a thin rod with constant cross-section and density and with length L about a perpendicular axis through its center of mass is determined by integration. Align the x-axis with the rod and locate the origin at its center of mass, at the center of the rod; the integration then gives I = m L²/12, where m is the mass of the rod.
The moment of inertia of a thin disc of constant thickness, radius R, and density about an axis through its center and perpendicular to its face (parallel to its axis of rotational symmetry) is determined by integration. Align the z-axis with the axis of the disc and take as the volume element a thin ring of radius r and width dr; the integration then gives I = m R²/2, where m is its mass.
The moment of inertia of the compound pendulum is now obtained by adding the moment of inertia of the rod and the disc around the pivot point as, where is the length of the pendulum. Notice that the parallel axis theorem is used to shift the moment of inertia from the center of mass to the pivot point of the pendulum.
A list of moments of inertia formulas for standard body shapes provides a way to obtain the moment of inertia of a complex body as an assembly of simpler shaped bodies. The parallel axis theorem is used to shift the reference point of the individual bodies to the reference point of the assembly.
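As a hedged sketch of this assembly procedure for the rod-plus-disc pendulum discussed above, assuming the disc's centre sits at the rod's far end (a distance L from the pivot) and applying I_pivot = I_cm + m d² to each part; all dimensions are made up.

```python
# Minimal sketch: moment of inertia of a rod-plus-disc compound pendulum about
# the pivot at the rod's free end, using the parallel axis theorem
# I_pivot = I_cm + m d². The dimensions below are illustrative.
L_rod, m_rod = 0.50, 0.80        # m, kg  (thin rod pivoted at one end)
R_disc, m_disc = 0.10, 1.50      # m, kg  (thin disc assumed centred at the rod's far end)

I_rod = m_rod * L_rod**2 / 12.0 + m_rod * (L_rod / 2.0)**2   # = m L²/3 about the pivot
I_disc = m_disc * R_disc**2 / 2.0 + m_disc * L_rod**2        # disc centre a distance L from the pivot

I_pendulum = I_rod + I_disc
print(f"I about the pivot ≈ {I_pendulum:.4f} kg·m²")
```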
As one more example, consider the moment of inertia of a solid sphere of constant density about an axis through its center of mass. This is determined by summing the moments of inertia of the thin discs that can form the sphere whose centers are along the axis chosen for consideration. If the surface of the sphere is defined by the equation x² + y² + z² = R²,
then the square of the radius of the disc at the cross-section z along the z-axis is r(z)² = R² − z².
Therefore, the moment of inertia of the sphere is the sum of the moments of inertia of the discs along the z-axis, which evaluates to I = (2/5) m R²,
where m is the mass of the sphere.
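A minimal numerical check of this result, summing the thin-disc contributions described above and comparing them with (2/5) m R²; the radius, density and slice count are arbitrary.

```python
import math

# Minimal sketch: build the sphere's moment of inertia about a central axis from
# thin discs, dI = ½ dm · r(z)² with r(z)² = R² − z², and compare to 2/5 · M R².
R, rho, N = 1.0, 1.0, 100_000          # radius (m), density (kg/m³), number of slices
dz = 2.0 * R / N

I_num, M = 0.0, 0.0
for i in range(N):
    z = -R + (i + 0.5) * dz            # midpoint of slice i
    r2 = R**2 - z**2                   # squared disc radius at height z
    dm = rho * math.pi * r2 * dz       # mass of the thin disc
    M += dm
    I_num += 0.5 * dm * r2             # thin disc about its symmetry axis: ½ m r²

print(f"numerical: {I_num:.5f} kg·m²,  closed form (2/5) M R²: {0.4 * M * R**2:.5f} kg·m²")
```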
Rigid body
If a mechanical system is constrained to move parallel to a fixed plane, then the rotation of a body in the system occurs around an axis parallel to this plane. In this case, the moment of inertia of the mass in this system is a scalar known as the polar moment of inertia. The definition of the polar moment of inertia can be obtained by considering momentum, kinetic energy and Newton's laws for the planar movement of a rigid system of particles.
If a system of particles, , are assembled into a rigid body, then the momentum of the system can be written in terms of positions relative to a reference point , and absolute velocities :
where is the angular velocity of the system and is the velocity of .
For planar movement the angular velocity vector is directed along the unit vector which is perpendicular to the plane of movement. Introduce the unit vectors from the reference point to a point , and the unit vector , so
This defines the relative position vector and the velocity vector for the rigid system of the particles moving in a plane.
Note on the cross product: When a body moves parallel to a ground plane, the trajectories of all the points in the body lie in planes parallel to this ground plane. This means that any rotation that the body undergoes must be around an axis perpendicular to this plane. Planar movement is often presented as projected onto this ground plane so that the axis of rotation appears as a point. In this case, the angular velocity and angular acceleration of the body are scalars and the fact that they are vectors along the rotation axis is ignored. This is usually preferred for introductions to the topic. But in the case of moment of inertia, the combination of mass and geometry benefits from the geometric properties of the cross product. For this reason, in this section on planar movement the angular velocity and accelerations of the body are vectors perpendicular to the ground plane, and the cross product operations are the same as used for the study of spatial rigid body movement.
Angular momentum
The angular momentum vector for the planar movement of a rigid system of particles is given by
Use the center of mass as the reference point so
and define the moment of inertia relative to the center of mass as
then the equation for angular momentum simplifies to
The moment of inertia about an axis perpendicular to the movement of the rigid system and through the center of mass is known as the polar moment of inertia. Specifically, it is the second moment of mass with respect to the orthogonal distance from an axis (or pole).
For a given amount of angular momentum, a decrease in the moment of inertia results in an increase in the angular velocity. Figure skaters can change their moment of inertia by pulling in their arms. Thus, the angular momentum attained by a skater with outstretched arms results in a greater angular velocity when the arms are pulled in, because the reduced moment of inertia must be compensated by a higher spin rate. A figure skater is not, however, a rigid body.
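A tiny illustration of this conservation argument, treating the skater as if rigid in each arm position; the two moments of inertia are rough, made-up values.

```python
# Minimal sketch: conservation of angular momentum L = I ω for a spinning skater.
I_arms_out, omega_out = 3.0, 2.0      # kg·m², rad/s with arms outstretched (made-up values)
I_arms_in = 1.2                       # kg·m² with arms pulled in (made-up value)

L = I_arms_out * omega_out            # angular momentum, conserved
omega_in = L / I_arms_in
print(f"spin rate increases from {omega_out} to {omega_in:.1f} rad/s")   # 5.0 rad/s
```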
Kinetic energy
The kinetic energy of a rigid system of particles moving in the plane is given by
Let the reference point be the center of mass of the system so the second term becomes zero, and introduce the moment of inertia so the kinetic energy is given by
The moment of inertia is the polar moment of inertia of the body.
Newton's laws
Newton's laws for a rigid system of particles, , can be written in terms of a resultant force and torque at a reference point , to yield
where denotes the trajectory of each particle.
The kinematics of a rigid body yields the formula for the acceleration of the particle in terms of the position and acceleration of the reference particle as well as the angular velocity vector and angular acceleration vector of the rigid system of particles as,
For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors from the reference point to a point and the unit vectors , so
This yields the resultant torque on the system as
where , and is the unit vector perpendicular to the plane for all of the particles .
Use the center of mass as the reference point and define the moment of inertia relative to the center of mass , then the equation for the resultant torque simplifies to
Motion in space of a rigid body, and the inertia matrix
The scalar moments of inertia appear as elements in a matrix when a system of particles is assembled into a rigid body that moves in three-dimensional space. This inertia matrix appears in the calculation of the angular momentum, kinetic energy and resultant torque of the rigid system of particles.
Let the system of particles, be located at the coordinates with velocities relative to a fixed reference frame. For a (possibly moving) reference point , the relative positions are
and the (absolute) velocities are
where is the angular velocity of the system, and is the velocity of .
Angular momentum
Note that the cross product can be equivalently written as matrix multiplication by combining the first operand and the operator into a skew-symmetric matrix, , constructed from the components of :
The inertia matrix is constructed by considering the angular momentum, with the reference point of the body chosen to be the center of mass :
where the terms containing sum to zero by the definition of center of mass.
Then, the skew-symmetric matrix obtained from the relative position vector , can be used to define,
where defined by
is the symmetric inertia matrix of the rigid system of particles measured relative to the center of mass .
Kinetic energy
The kinetic energy of a rigid system of particles can be formulated in terms of the center of mass and a matrix of mass moments of inertia of the system. Let the system of particles be located at the coordinates with velocities , then the kinetic energy is
where is the position vector of a particle relative to the center of mass.
This equation expands to yield three terms
Since the center of mass is defined by
, the second term in this equation is zero. Introduce the skew-symmetric matrix so the kinetic energy becomes
Thus, the kinetic energy of the rigid system of particles is given by
where is the inertia matrix relative to the center of mass and is the total mass.
Resultant torque
The inertia matrix appears in the application of Newton's second law to a rigid assembly of particles. The resultant torque on this system is,
where is the acceleration of the particle . The kinematics of a rigid body yields the formula for the acceleration of the particle in terms of the position and acceleration of the reference point, as well as the angular velocity vector and angular acceleration vector of the rigid system as,
Use the center of mass as the reference point, and introduce the skew-symmetric matrix to represent the cross product , to obtain
The calculation uses the identity
obtained from the Jacobi identity for the triple cross product as shown in the proof below:
Thus, the resultant torque on the rigid system of particles is given by
where is the inertia matrix relative to the center of mass.
Parallel axis theorem
The inertia matrix of a body depends on the choice of the reference point. There is a useful relationship between the inertia matrix relative to the center of mass and the inertia matrix relative to another point . This relationship is called the parallel axis theorem.
Consider the inertia matrix obtained for a rigid system of particles measured relative to a reference point , given by
Let be the center of mass of the rigid system, then
where is the vector from the center of mass to the reference point . Use this equation to compute the inertia matrix,
Distribute over the cross product to obtain
The first term is the inertia matrix relative to the center of mass. The second and third terms are zero by definition of the center of mass . And the last term is the total mass of the system multiplied by the square of the skew-symmetric matrix constructed from .
The result is the parallel axis theorem,
where is the vector from the center of mass to the reference point .
Note on the minus sign: By using the skew symmetric matrix of position vectors relative to the reference point, the inertia matrix of each particle has the form , which is similar to the that appears in planar movement. However, to make this work out correctly a minus sign is needed. This minus sign can be absorbed into the term , if desired, by using the skew-symmetry property of .
Scalar moment of inertia in a plane
The scalar moment of inertia, , of a body about a specified axis whose direction is specified by the unit vector and passes through the body at a point is as follows:
where is the moment of inertia matrix of the system relative to the reference point , and is the skew symmetric matrix obtained from the vector .
This is derived as follows. Let a rigid assembly of particles, , have coordinates . Choose as a reference point and compute the moment of inertia around a line L defined by the unit vector through the reference point , . The perpendicular vector from this line to the particle is obtained from by removing the component that projects onto .
where is the identity matrix, so as to avoid confusion with the inertia matrix, and is the outer product matrix formed from the unit vector along the line .
To relate this scalar moment of inertia to the inertia matrix of the body, introduce the skew-symmetric matrix such that , then we have the identity
noting that is a unit vector.
The magnitude squared of the perpendicular vector is
The simplification of this equation uses the triple scalar product identity
where the dot and the cross products have been interchanged. Exchanging products, and simplifying by noting that and are orthogonal:
Thus, the moment of inertia around the line through in the direction is obtained from the calculation
where is the moment of inertia matrix of the system relative to the reference point .
This shows that the inertia matrix can be used to calculate the moment of inertia of a body around any specified rotation axis in the body.
Inertia tensor
For the same object, different axes of rotation will have different moments of inertia about those axes. In general, the moments of inertia are not equal unless the object is symmetric about all axes. The moment of inertia tensor is a convenient way to summarize all moments of inertia of an object with one quantity. It may be calculated with respect to any point in space, although for practical purposes the center of mass is most commonly used.
Definition
For a rigid object of point masses , the moment of inertia tensor is given by
Its components are defined as
where
, is equal to 1, 2 or 3 for , , and , respectively,
is the vector to the point mass from the point about which the tensor is calculated and
is the Kronecker delta.
Note that, by the definition, is a symmetric tensor.
The diagonal elements are more succinctly written as
while the off-diagonal elements, also called the products of inertia, are
Here I_xx denotes the moment of inertia around the x-axis when the objects are rotated around the x-axis, I_xy denotes the moment of inertia around the y-axis when the objects are rotated around the x-axis, and so on.
These quantities can be generalized to an object with distributed mass, described by a mass density function, in a similar fashion to the scalar moment of inertia. One then has
where is their outer product, E3 is the 3×3 identity matrix, and V is a region of space completely containing the object.
Alternatively it can also be written in terms of the angular momentum operator :
The inertia tensor can be used in the same way as the inertia matrix to compute the scalar moment of inertia about an arbitrary axis in the direction ,
where the dot product is taken with the corresponding elements in the component tensors. A product of inertia term such as is obtained by the computation
and can be interpreted as the moment of inertia around the -axis when the object rotates around the -axis.
The components of tensors of degree two can be assembled into a matrix. For the inertia tensor this matrix is given by,
It is common in rigid body mechanics to use notation that explicitly identifies the , , and -axes, such as and , for the components of the inertia tensor.
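As a hedged sketch (assuming NumPy is available, with made-up masses and positions), the inertia tensor of a set of point masses can be assembled directly from the definition above, I = Σ mᵢ(|rᵢ|² E₃ − rᵢ rᵢᵀ), and the scalar moment of inertia about any unit axis n read off as nᵀ I n.

```python
import numpy as np   # assumed available

# Minimal sketch: inertia tensor of point masses about the origin,
# I = Σ m_i (|r_i|² E₃ − r_i r_iᵀ), and the scalar moment about a unit axis n.
masses = np.array([1.0, 2.0, 1.5])                       # kg (illustrative)
positions = np.array([[0.1, 0.0, 0.2],
                      [0.0, 0.3, 0.1],
                      [0.2, 0.1, 0.0]])                  # m, relative to the reference point

I = np.zeros((3, 3))
for m, r in zip(masses, positions):
    I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))

n = np.array([0.0, 0.0, 1.0])                            # axis of interest (unit vector)
print(I)
print("I about n:", n @ I @ n)                           # = Σ m (x² + y²) for the z-axis
```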
Alternate inertia convention
There are some CAD and CAE applications such as SolidWorks, Unigraphics NX/Siemens NX and MSC Adams that use an alternate convention for the products of inertia. According to this convention, the minus sign is removed from the product of inertia formulas and instead inserted in the inertia matrix:
Determine inertia convention (Principal axes method)
If one has the inertia data without knowing which inertia convention that has been used, it can be determined if one also has the principal axes. With the principal axes method, one makes inertia matrices from the following two assumptions:
The standard inertia convention has been used .
The alternate inertia convention has been used .
Next, one calculates the eigenvectors for the two matrices. The matrix whose eigenvectors are parallel to the principal axes corresponds to the inertia convention that has been used.
Derivation of the tensor components
The distance of a particle at from the axis of rotation passing through the origin in the direction is , where is unit vector. The moment of inertia on the axis is
Rewrite the equation using matrix transpose:
where E3 is the 3×3 identity matrix.
This leads to a tensor formula for the moment of inertia
For multiple particles, we need only recall that the moment of inertia is additive in order to see that this formula is correct.
Inertia tensor of translation
Let I₀ be the inertia tensor of a body calculated at its center of mass, and d be the displacement vector of the body. The inertia tensor of the translated body with respect to its original center of mass is given by: I = I₀ + m[(d·d) E3 − d ⊗ d],
where m is the body's mass, E3 is the 3 × 3 identity matrix, and ⊗ is the outer product.
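A minimal numerical sketch of this translation rule (assuming NumPy; the centre-of-mass tensor, mass and displacement are illustrative):

```python
import numpy as np   # assumed available

# Minimal sketch of the translated inertia tensor (generalised parallel axis theorem):
# I = I₀ + m [(d·d) E₃ − d dᵀ], with d the displacement away from the centre of mass.
I_cm = np.diag([0.4, 0.5, 0.6])          # kg·m², tensor at the centre of mass (illustrative)
m = 2.0                                  # kg, total mass
d = np.array([0.1, 0.0, 0.3])            # m, displacement vector

I_translated = I_cm + m * (np.dot(d, d) * np.eye(3) - np.outer(d, d))
print(I_translated)
```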
Inertia tensor of rotation
Let R be the matrix that represents a body's rotation. The inertia tensor of the rotated body is given by: I = R I₀ Rᵀ.
Inertia matrix in different reference frames
The use of the inertia matrix in Newton's second law assumes its components are computed relative to axes parallel to the inertial frame and not relative to a body-fixed reference frame. This means that as the body moves the components of the inertia matrix change with time. In contrast, the components of the inertia matrix measured in a body-fixed frame are constant.
Body frame
Let the body frame inertia matrix relative to the center of mass be denoted , and define the orientation of the body frame relative to the inertial frame by the rotation matrix , such that,
where vectors in the body fixed coordinate frame have coordinates in the inertial frame. Then, the inertia matrix of the body measured in the inertial frame is given by
Notice that changes as the body moves, while remains constant.
Principal axes
Measured in the body frame, the inertia matrix is a constant real symmetric matrix. A real symmetric matrix has the eigendecomposition into the product of a rotation matrix Q and a diagonal matrix Λ, given by I_C = Q Λ Qᵀ,
where Λ = diag(I₁, I₂, I₃).
The columns of the rotation matrix Q define the directions of the principal axes of the body, and the constants I₁, I₂, and I₃ are called the principal moments of inertia. This result was first shown by J. J. Sylvester (1852), and is a form of Sylvester's law of inertia. The principal axis with the highest moment of inertia is sometimes called the figure axis or axis of figure.
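As a hedged sketch (assuming NumPy and an illustrative symmetric inertia matrix), the principal moments and axes are exactly the eigenvalues and eigenvectors of the body-frame matrix:

```python
import numpy as np   # assumed available

# Minimal sketch: principal moments and principal axes from the eigendecomposition
# of a symmetric body-frame inertia matrix. The matrix values are illustrative.
I_body = np.array([[ 0.50, -0.05,  0.00],
                   [-0.05,  0.60,  0.02],
                   [ 0.00,  0.02,  0.70]])   # kg·m², symmetric by construction

principal_moments, axes = np.linalg.eigh(I_body)   # eigenvalues ascending, columns = axes
print("principal moments:", principal_moments)
print("principal axes (columns):\n", axes)
```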
A toy top is an example of a rotating rigid body, and the word top is used in the names of types of rigid bodies. When all principal moments of inertia are distinct, the principal axes through center of mass are uniquely specified and the rigid body is called an asymmetric top. If two principal moments are the same, the rigid body is called a symmetric top and there is no unique choice for the two corresponding principal axes. If all three principal moments are the same, the rigid body is called a spherical top (although it need not be spherical) and any axis can be considered a principal axis, meaning that the moment of inertia is the same about any axis.
The principal axes are often aligned with the object's symmetry axes. If a rigid body has an axis of symmetry of order m, meaning it is symmetrical under rotations of 360°/m about the given axis, that axis is a principal axis. When m > 2, the rigid body is a symmetric top. If a rigid body has at least two symmetry axes that are not parallel or perpendicular to each other, it is a spherical top, for example, a cube or any other Platonic solid.
The motion of vehicles is often described in terms of yaw, pitch, and roll which usually correspond approximately to rotations about the three principal axes. If the vehicle has bilateral symmetry then one of the principal axes will correspond exactly to the transverse (pitch) axis.
A practical example of this mathematical phenomenon is the routine automotive task of balancing a tire, which basically means adjusting the distribution of mass of a car wheel such that its principal axis of inertia is aligned with the axle so the wheel does not wobble.
Rotating molecules are also classified as asymmetric, symmetric, or spherical tops, and the structure of their rotational spectra is different for each type.
Ellipsoid
The moment of inertia matrix in body-frame coordinates is a quadratic form that defines a surface in the body called Poinsot's ellipsoid. Let be the inertia matrix relative to the center of mass aligned with the principal axes, then the surface
or
defines an ellipsoid in the body frame. Write this equation in the form,
to see that the semi-principal diameters of this ellipsoid are given by
Let a point on this ellipsoid be defined in terms of its magnitude and direction, , where is a unit vector. Then the relationship presented above, between the inertia matrix and the scalar moment of inertia around an axis in the direction , yields
Thus, the magnitude of a point in the direction on the inertia ellipsoid is
See also
Central moment
List of moments of inertia
Planar lamina
Rotational energy
Moment of inertia factor
References
External links
Angular momentum and rigid-body rotation in two and three dimensions
Lecture notes on rigid-body rotation and moments of inertia
The moment of inertia tensor
An introductory lesson on moment of inertia: keeping a vertical pole not falling down (Java simulation)
Tutorial on finding moments of inertia, with problems and solutions on various basic shapes
Notes on mechanics of manipulation: the angular inertia tensor
Easy to use and Free Moment of Inertia Calculator online
Renormalization group

In theoretical physics, the term renormalization group (RG) refers to a formal apparatus that allows systematic investigation of the changes of a physical system as viewed at different scales. In particle physics, it reflects the changes in the underlying force laws (codified in a quantum field theory) as the energy scale at which physical processes occur varies, energy/momentum and resolution distance scales being effectively conjugate under the uncertainty principle.
A change in scale is called a scale transformation. The renormalization group is intimately related to scale invariance and conformal invariance, symmetries in which a system appears the same at all scales (self-similarity).
As the scale varies, it is as if one is changing the magnifying power of a notional microscope viewing the system. In so-called renormalizable theories, the system at one scale will generally consist of self-similar copies of itself when viewed at a smaller scale, with different parameters describing the components of the system. The components, or fundamental variables, may relate to atoms, elementary particles, atomic spins, etc. The parameters of the theory typically describe the interactions of the components. These may be variable couplings which measure the strength of various forces, or mass parameters themselves. The components themselves may appear to be composed of more of the self-same components as one goes to shorter distances.
For example, in quantum electrodynamics (QED), an electron appears to be composed of electron and positron pairs and photons, as one views it at higher resolution, at very short distances. The electron at such short distances has a slightly different electric charge than does the dressed electron seen at large distances, and this change, or running, in the value of the electric charge is determined by the renormalization group equation.
History
The idea of scale transformations and scale invariance is old in physics: Scaling arguments were commonplace for the Pythagorean school, Euclid, and up to Galileo. They became popular again at the end of the 19th century, perhaps the first example being the idea of enhanced viscosity of Osborne Reynolds, as a way to explain turbulence.
The renormalization group was initially devised in particle physics, but nowadays its applications extend to solid-state physics, fluid mechanics, physical cosmology, and even nanotechnology. An early article by Ernst Stueckelberg and André Petermann in 1953 anticipates the idea in quantum field theory. Stueckelberg and Petermann opened the field conceptually. They noted that renormalization exhibits a group of transformations which transfer quantities from the bare terms to the counter terms. They introduced a function h(e) in quantum electrodynamics (QED), which is now called the beta function (see below).
Beginnings
Murray Gell-Mann and Francis E. Low restricted the idea to scale transformations in QED in 1954, which are the most physically significant, and focused on asymptotic forms of the photon propagator at high energies. They determined the variation of the electromagnetic coupling in QED, by appreciating the simplicity of the scaling structure of that theory. They thus discovered that the coupling parameter g(μ) at the energy scale μ is effectively given by the (one-dimensional translation) group equation g(μ) = G⁻¹((μ/M)^d G(g(M))),
or equivalently, G(g(μ)) = (μ/M)^d G(g(M)), for some function G (unspecified—nowadays called Wegner's scaling function) and a constant d, in terms of the coupling g(M) at a reference scale M.
Gell-Mann and Low realized in these results that the effective scale can be arbitrarily taken as μ, and can vary to define the theory at any other scale:
The gist of the RG is this group property: as the scale μ varies, the theory presents a self-similar replica of itself, and any scale can be accessed similarly from any other scale, by group action, a formal transitive conjugacy of couplings in the mathematical sense (Schröder's equation).
On the basis of this (finite) group equation and its scaling property, Gell-Mann and Low could then focus on infinitesimal transformations, and invented a computational method based on a mathematical flow function ψ(g) of the coupling parameter g, which they introduced. Like the function h(e) of Stueckelberg and Petermann, their function determines the differential change of the coupling g(μ) with respect to a small change in energy scale μ through a differential equation, the renormalization group equation: ∂g/∂(ln μ) = ψ(g) = β(g).
The modern name is also indicated, the beta function, introduced by C. Callan and K. Symanzik in 1970. Since it is a mere function of g, integration in g of a perturbative estimate of it permits specification of the renormalization trajectory of the coupling, that is, its variation with energy, effectively the function G in this perturbative approximation. The renormalization group prediction (cf. Stueckelberg–Petermann and Gell-Mann–Low works) was confirmed 40 years later at the LEP accelerator experiments: the fine structure "constant" of QED was measured to be about 1/127 at energies close to 200 GeV, as opposed to the standard low-energy physics value of 1/137.
Deeper understanding
The renormalization group emerges from the renormalization of the quantum field variables, which normally has to address the problem of infinities in a quantum field theory. This problem of systematically handling the infinities of quantum field theory to obtain finite physical quantities was solved for QED by Richard Feynman, Julian Schwinger and Shin'ichirō Tomonaga, who received the 1965 Nobel prize for these contributions. They effectively devised the theory of mass and charge renormalization, in which the infinity in the momentum scale is cut off by an ultra-large regulator, Λ.
The dependence of physical quantities, such as the electric charge or electron mass, on the scale Λ is hidden, effectively swapped for the longer-distance scales at which the physical quantities are measured, and, as a result, all observable quantities end up being finite instead, even for an infinite Λ. Gell-Mann and Low thus realized in these results that, infinitesimally, while a tiny change in g is provided by the above RG equation given ψ(g), the self-similarity is expressed by the fact that ψ(g) depends explicitly only upon the parameter(s) of the theory, and not upon the scale μ. Consequently, the above renormalization group equation may be solved for (G and thus) g(μ).
A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilation group of conventional renormalizable theories, considers methods where widely different scales of lengths appear simultaneously. It came from condensed matter physics: Leo P. Kadanoff's paper in 1966 proposed the "block-spin" renormalization group. The "blocking idea" is a way to define the components of the theory at large distances as aggregates of components at shorter distances.
This approach covered the conceptual point and was given full computational substance in the extensive important contributions of Kenneth Wilson. The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem, in 1975, as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. He was awarded the Nobel prize for these decisive contributions in 1982.
Reformulation
Meanwhile, the RG in particle physics had been reformulated in more practical terms by Callan and Symanzik in 1970. The above beta function, which describes the "running of the coupling" parameter with scale, was also found to amount to the "canonical trace anomaly", which represents the quantum-mechanical breaking of scale (dilation) symmetry in a field theory. Applications of the RG to particle physics exploded in number in the 1970s with the establishment of the Standard Model.
In 1973, it was discovered that a theory of interacting colored quarks, called quantum chromodynamics, had a negative beta function. This means that an initial high-energy value of the coupling will eventuate a special value of μ at which the coupling blows up (diverges). This special value is the scale of the strong interactions, μ = Λ_QCD, and occurs at about 200 MeV. Conversely, the coupling becomes weak at very high energies (asymptotic freedom), and the quarks become observable as point-like particles, in deep inelastic scattering, as anticipated by Feynman–Bjorken scaling. QCD was thereby established as the quantum field theory controlling the strong interactions of particles.
Momentum space RG also became a highly developed tool in solid state physics, but was hindered by the extensive use of perturbation theory, which prevented the theory from succeeding in strongly correlated systems.
Conformal symmetry
Conformal symmetry is associated with the vanishing of the beta function. This can occur naturally if a coupling constant is attracted, by running, toward a fixed point at which β(g) = 0. In QCD, the fixed point occurs at short distances where g → 0 and is called a (trivial) ultraviolet fixed point. For heavy quarks, such as the top quark, the coupling to the mass-giving Higgs boson runs toward a fixed non-zero (non-trivial) infrared fixed point, first predicted by Pendleton and Ross (1981), and C. T. Hill.
The top quark Yukawa coupling lies slightly below the infrared fixed point of the Standard Model suggesting the possibility of additional new physics, such as sequential heavy Higgs bosons.
In string theory, conformal invariance of the string world-sheet is a fundamental symmetry: β = 0 is a requirement. Here, β is a function of the geometry of the space-time in which the string moves. This determines the space-time dimensionality of the string theory and enforces Einstein's equations of general relativity on the geometry. The RG is of fundamental importance to string theory and theories of grand unification.
It is also the modern key idea underlying critical phenomena in condensed matter physics. Indeed, the RG has become one of the most important tools of modern physics. It is often used in combination with the Monte Carlo method.
Block spin
This section introduces pedagogically a picture of RG which may be easiest to grasp: the block spin RG, devised by Leo P. Kadanoff in 1966.
Consider a 2D solid, a set of atoms arranged in a perfect square array.
Assume that atoms interact among themselves only with their nearest neighbours, and that the system is at a given temperature T. The strength of their interaction is quantified by a certain coupling J. The physics of the system will be described by a certain formula, say the Hamiltonian H(T, J).
Now proceed to divide the solid into blocks of 2×2 squares; we attempt to describe the system in terms of block variables, i.e., variables which describe the average behavior of the block. Further assume that, by some lucky coincidence, the physics of block variables is described by a formula of the same kind, but with different values for T and J: H(T′, J′). (This isn't exactly true, in general, but it is often a good first approximation.)
Perhaps, the initial problem was too hard to solve, since there were too many atoms. Now, in the renormalized problem we have only one fourth of them. But why stop now? Another iteration of the same kind leads to H(T″, J″), and only one sixteenth of the atoms. We are increasing the observation scale with each RG step.
Of course, the best idea is to iterate until there is only one very big block. Since the number of atoms in any real sample of material is very large, this is more or less equivalent to finding the long range behaviour of the RG transformation which took (T, J) → (T′, J′) and (T′, J′) → (T″, J″). Often, when iterated many times, this RG transformation leads to a certain number of fixed points.
To be more concrete, consider a magnetic system (e.g., the Ising model), in which the coupling J denotes the trend of neighbour spins to be aligned. The configuration of the system is the result of the tradeoff between the ordering term and the disordering effect of temperature.
For many models of this kind there are three fixed points:
T = 0 and J → ∞. This means that, at the largest size, temperature becomes unimportant, i.e., the disordering factor vanishes. Thus, at large scales, the system appears to be ordered. We are in a ferromagnetic phase.
T → ∞ and J → 0. Exactly the opposite; here, temperature dominates, and the system is disordered at large scales.
A nontrivial point between them, T = Tc and J = Jc. At this point, changing the scale does not change the physics, because the system is in a fractal state. It corresponds to the Curie phase transition, and is also called a critical point.
So, if we are given a certain material with given values of T and J, all we have to do in order to find out the large-scale behaviour of the system is to iterate the pair (T, J) until we find the corresponding fixed point.
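A minimal sketch of such an iteration uses the exactly solvable decimation RG of the 1D Ising model as a stand-in for the 2D block-spin construction described above (an illustrative substitution, not the procedure of the text).
```python
import math

# Decimation RG of the 1D Ising model: summing out every second spin maps
# the dimensionless coupling K = J/(k_B T) to K' = (1/2) * ln(cosh(2K)).
# Iterating this map exposes the flow toward fixed points.
def rg_step(K):
    return 0.5 * math.log(math.cosh(2.0 * K))

for K0 in (0.3, 1.0, 3.0):      # assumed starting couplings
    K = K0
    traj = [K]
    for _ in range(8):
        K = rg_step(K)
        traj.append(K)
    print(f"K0 = {K0}: " + " -> ".join(f"{k:.3f}" for k in traj))
# Every finite starting coupling flows to the trivial fixed point K = 0
# (disordered phase); only K = infinity is the ordered fixed point, which
# is why the 1D model has no finite-temperature transition.
```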
Elementary theory
In more technical terms, let us assume that we have a theory described by a certain function Z of the state variables {si} and a certain set of coupling constants {Jk}. This function may be a partition function, an action, a Hamiltonian, etc. It must contain the whole description of the physics of the system.
Now we consider a certain blocking transformation of the state variables {si} → {s̃i}; the number of s̃i must be lower than the number of si. Now let us try to rewrite the Z function only in terms of the s̃i. If this is achievable by a certain change in the parameters, {Jk} → {J̃k}, then the theory is said to be renormalizable.
Most fundamental theories of physics such as quantum electrodynamics, quantum chromodynamics and electro-weak interaction, but not gravity, are exactly renormalizable. Also, most theories in condensed matter physics are approximately renormalizable, from superconductivity to fluid turbulence.
The change in the parameters is implemented by a certain beta function: {J̃k} = β({Jk}), which is said to induce a renormalization group flow (or RG flow) on the J-space. The values of J under the flow are called running couplings.
As was stated in the previous section, the most important information in the RG flow are its fixed points. The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. If these fixed points correspond to a free field theory, the theory is said to exhibit quantum triviality, possessing what is called a Landau pole, as in quantum electrodynamics. For a φ⁴ interaction, Michael Aizenman proved that this theory is indeed trivial for space-time dimension D ≥ 5. For D = 4, the triviality has yet to be proven rigorously, but lattice computations have provided strong evidence for this. This fact is important as quantum triviality can be used to bound or even predict parameters such as the Higgs boson mass in asymptotic safety scenarios. Numerous fixed points appear in the study of lattice Higgs theories, but the nature of the quantum field theories associated with these remains an open question.
Since the RG transformations in such systems are lossy (i.e.: the number of variables decreases - see as an example in a different context, Lossy data compression), there need not be an inverse for a given RG transformation. Thus, in such lossy systems, the renormalization group is, in fact, a semigroup, as lossiness implies that there is no unique inverse for each element.
Relevant and irrelevant operators and universality classes
Consider a certain observable of a physical system undergoing an RG transformation. The magnitude of the observable as the length scale of the system goes from small to large determines the importance of the observable(s) for the scaling law:
A relevant observable is needed to describe the macroscopic behaviour of the system; irrelevant observables are not needed. Marginal observables may or may not need to be taken into account. A remarkable broad fact is that most observables are irrelevant, i.e., the macroscopic physics is dominated by only a few observables in most systems.
As an example, in microscopic physics, to describe a system consisting of a mole of carbon-12 atoms we need of the order of 10²³ (the Avogadro number) variables, while to describe it as a macroscopic system (12 grams of carbon-12) we only need a few.
Before Wilson's RG approach, there was an astonishing empirical fact to explain: The coincidence of the critical exponents (i.e., the exponents of the reduced-temperature dependence of several quantities near a second order phase transition) in very disparate phenomena, such as magnetic systems, superfluid transition (Lambda transition), alloy physics, etc. So in general, thermodynamic features of a system near a phase transition depend only on a small number of variables, such as the dimensionality and symmetry, but are insensitive to details of the underlying microscopic properties of the system.
This coincidence of critical exponents for ostensibly quite different physical systems, called universality, is easily explained using the renormalization group, by demonstrating that the differences in phenomena among the individual fine-scale components are determined by irrelevant observables, while the relevant observables are shared in common. Hence many macroscopic phenomena may be grouped into a small set of universality classes, specified by the shared sets of relevant observables.
Momentum space
Renormalization groups, in practice, come in two main "flavors". The Kadanoff picture explained above refers mainly to the so-called real-space RG.
Momentum-space RG on the other hand, has a longer history despite its relative subtlety. It can be used for systems where the degrees of freedom can be cast in terms of the Fourier modes of a given field. The RG transformation proceeds by integrating out a certain set of high-momentum (large-wavenumber) modes. Since large wavenumbers are related to short-length scales, the momentum-space RG results in an essentially analogous coarse-graining effect as with real-space RG.
Momentum-space RG is usually performed on a perturbation expansion. The validity of such an expansion is predicated upon the actual physics of a system being close to that of a free field system. In this case, one may calculate observables by summing the leading terms in the expansion.
This approach has proved successful for many theories, including most of particle physics, but fails for systems whose physics is very far from any free system, i.e., systems with strong correlations.
As an example of the physical meaning of RG in particle physics, consider an overview of charge renormalization in quantum electrodynamics (QED). Suppose we have a point positive charge of a certain true (or bare) magnitude. The electromagnetic field around it has a certain energy, and thus may produce some virtual electron-positron pairs (for example). Although virtual particles annihilate very quickly, during their short lives the electron will be attracted by the charge, and the positron will be repelled. Since this happens uniformly everywhere near the point charge, where its electric field is sufficiently strong, these pairs effectively create a screen around the charge when viewed from far away. The measured strength of the charge will depend on how close our measuring probe can approach the point charge, bypassing more of the screen of virtual particles the closer it gets. Hence a dependence of a certain coupling constant (here, the electric charge) with distance scale.
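A hedged one-loop sketch of this screening effect keeps only a single fermion species and treats the low-energy fine-structure constant and the electron mass as assumed reference inputs.
```python
import math

# With one fermion species the effective QED coupling runs (at one loop) as
#     1/alpha(Q) = 1/alpha(m) - (2/(3*pi)) * ln(Q/m),
# so the measured charge grows as the probe energy Q increases
# (i.e. as the probe gets closer to the bare charge).
alpha_low = 1 / 137.036      # low-energy fine-structure constant (assumed input)
m = 0.000511                 # electron mass in GeV (assumed reference scale)

for Q in (0.001, 1.0, 91.0):         # probe energies in GeV
    inv_alpha = 1 / alpha_low - (2 / (3 * math.pi)) * math.log(Q / m)
    print(f"Q = {Q:7.3f} GeV   1/alpha ~ {inv_alpha:.1f}")
# 1/alpha decreases (alpha increases) with energy: the screen of virtual
# pairs is bypassed at short distance -- the opposite sign of the QCD
# running shown earlier.
```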
Momentum and length scales are related inversely, according to the de Broglie relation: The higher the energy or momentum scale we may reach, the lower the length scale we may probe and resolve. Therefore, the momentum-space RG practitioners sometimes claim to integrate out high momenta or high energy from their theories.
Exact renormalization group equations
An exact renormalization group equation (ERGE) is one that takes irrelevant couplings into account. There are several formulations.
The Wilson ERGE is the simplest conceptually, but is practically impossible to implement. Fourier transform into momentum space after Wick rotating into Euclidean space. Insist upon a hard momentum cutoff, p² ≤ Λ², so that the only degrees of freedom are those with momenta less than Λ. The partition function is
Z = ∫_{p²≤Λ²} Dφ exp(−SΛ[φ]).
For any positive Λ′ less than Λ, define SΛ′ (a functional over field configurations φ whose Fourier transform has momentum support within p² ≤ Λ′²) as
exp(−SΛ′[φ]) = ∫_{Λ′≤p≤Λ} Dφ exp(−SΛ[φ]).
If SΛ depends only on φ and not on derivatives of φ, this may be rewritten as
in which it becomes clear that, since only functions of φ with support between Λ′ and Λ are integrated over, the left hand side may still depend on φ with support outside that range. Obviously,
Z = ∫_{p²≤Λ′²} Dφ exp(−SΛ′[φ]).
In fact, this transformation is transitive. If you compute SΛ′ from SΛ and then compute SΛ″ from SΛ′, this gives you the same Wilsonian action as computing SΛ″ directly from SΛ.
The Polchinski ERGE involves a smooth UV regulator cutoff. Basically, the idea is an improvement over the Wilson ERGE. Instead of a sharp momentum cutoff, it uses a smooth cutoff. Essentially, we suppress contributions from momenta greater than Λ heavily. The smoothness of the cutoff, however, allows us to derive a functional differential equation in the cutoff scale Λ. As in Wilson's approach, we have a different action functional for each cutoff energy scale Λ. Each of these actions is supposed to describe exactly the same model, which means that their partition functionals have to match exactly.
In other words, (for a real scalar field; generalizations to other fields are obvious),
and ZΛ is really independent of Λ! We have used the condensed deWitt notation here. We have also split the bare action SΛ into a quadratic kinetic part and an interacting part Sint Λ. This split most certainly isn't clean. The "interacting" part can very well also contain quadratic kinetic terms. In fact, if there is any wave function renormalization, it most certainly will. This can be somewhat reduced by introducing field rescalings. RΛ is a function of the momentum p and the second term in the exponent is
when expanded.
When p²/Λ² ≪ 1, RΛ(p) is essentially 1. When p²/Λ² ≫ 1, RΛ(p) becomes very large and approaches infinity. RΛ(p) is always greater than or equal to 1 and is smooth. Basically, this leaves the fluctuations with momenta less than the cutoff unaffected but heavily suppresses contributions from fluctuations with momenta greater than the cutoff. This is obviously a huge improvement over Wilson.
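The following toy comparison (the exponential form is an illustrative assumption, not the specific Polchinski regulator) shows the practical point: a smooth suppression factor, unlike a sharp step, has a well-defined derivative with respect to the cutoff, which is what permits a differential flow equation in Λ.
```python
import numpy as np

# Compare a hard momentum cutoff with an assumed smooth suppression factor
# K(p) = exp(-p**2 / Lambda**2).  Only the smooth one can be differentiated
# with respect to Lambda, which is the property the text relies on.
Lambda = 1.0
p = np.linspace(0.0, 3.0, 7)

hard = np.where(p <= Lambda, 1.0, 0.0)            # sharp cutoff: keep / discard
smooth = np.exp(-p**2 / Lambda**2)                # smooth suppression factor
d_smooth_dLambda = 2 * p**2 / Lambda**3 * smooth  # well-defined derivative

for pi, h, s, d in zip(p, hard, smooth, d_smooth_dLambda):
    print(f"p = {pi:.1f}  hard = {h:.0f}  smooth = {s:.3f}  d/dLambda = {d:.3f}")
```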
The condition that
can be satisfied by (but not only by)
Jacques Distler claimed without proof that this ERGE is not correct nonperturbatively.
The effective average action ERGE involves a smooth IR regulator cutoff.
The idea is to take all fluctuations right up to an IR scale k into account. The effective average action will be accurate for fluctuations with momenta larger than k. As the parameter k is lowered, the effective average action approaches the effective action which includes all quantum and classical fluctuations. In contrast, for large k the effective average action is close to the "bare action". So, the effective average action interpolates between the "bare action" and the effective action.
For a real scalar field, one adds an IR cutoff
(1/2) ∫ dᵈp/(2π)ᵈ φ̃*(p) Rk(p) φ̃(p)
to the action S, where Rk is a function of both k and p such that for p ≫ k, Rk(p) is very tiny and approaches 0, and for p ≪ k, Rk(p) is large (of order k²). Rk is both smooth and nonnegative. Its large value for small momenta leads to a suppression of their contribution to the partition function which is effectively the same thing as neglecting large-scale fluctuations.
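One widely used concrete choice with these properties is Litim's optimized regulator, Rk(p) = (k² − p²)θ(k² − p²); the small sketch below only illustrates its qualitative behaviour and is not implied by the text to be the regulator intended here.
```python
def R_k(p, k):
    # Litim's optimized regulator: of order k**2 for p << k,
    # identically zero for p > k, and continuous at p = k.
    return (k**2 - p**2) if p < k else 0.0

k = 1.0
for p in (0.0, 0.5, 0.9, 1.0, 2.0):
    print(f"p = {p:3.1f}   R_k(p) = {R_k(p, k):.2f}")
```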
One can use the condensed deWitt notation
(1/2) φ⋅Rk⋅φ
for this IR regulator.
So,
exp(Wk[J]) = ∫ Dφ exp(−S[φ] − 1/2 φ⋅Rk⋅φ + J⋅φ),
where J is the source field. The Legendre transform of Wk ordinarily gives the effective action. However, the action that we started off with is really S[φ]+1/2 φ⋅Rk⋅φ and so, to get the effective average action, we subtract off 1/2 φ⋅Rk⋅φ. In other words,
φ = δWk[J]/δJ
can be inverted to give Jk[φ] and we define the effective average action Γk as
Γk[φ] = −Wk[Jk[φ]] + Jk[φ]⋅φ − 1/2 φ⋅Rk⋅φ.
Hence,
δΓk[φ]/δφ = Jk[φ] − Rk⋅φ,
thus
∂k Γk[φ] = (1/2) Tr[(δ²Γk/(δφ δφ) + Rk)⁻¹ ∂k Rk]
is the ERGE, which is also known as the Wetterich equation. As shown by Morris, the effective action Γk is in fact simply related to Polchinski's effective action Sint via a Legendre transform relation.
As there are infinitely many choices of Rk, there are also infinitely many different interpolating ERGEs.
Generalization to other fields like spinorial fields is straightforward.
Although the Polchinski ERGE and the effective average action ERGE look similar, they are based upon very different philosophies. In the effective average action ERGE, the bare action is left unchanged (and the UV cutoff scale—if there is one—is also left unchanged) but the IR contributions to the effective action are suppressed whereas in the Polchinski ERGE, the QFT is fixed once and for all but the "bare action" is varied at different energy scales to reproduce the prespecified model. Polchinski's version is certainly much closer to Wilson's idea in spirit. Note that one uses "bare actions" whereas the other uses effective (average) actions.
Renormalization group improvement of the effective potential
The renormalization group can also be used to compute effective potentials at orders higher than 1-loop. This kind of approach is particularly interesting to compute corrections to the Coleman–Weinberg mechanism. To do so, one must write the renormalization group equation in terms of the effective potential. For the case of the model:
In order to determine the effective potential, it is useful to write as
where is a power series in :
Using the above ansatz, it is possible to solve the renormalization group equation perturbatively and find the effective potential up to the desired order. A pedagogical explanation of this technique is given in the reference.
See also
Quantum triviality
Scale invariance
Schröder's equation
Regularization (physics)
Density matrix renormalization group
Functional renormalization group
Critical phenomena
Universality (dynamical systems)
C-theorem
History of quantum field theory
Top quark
Asymptotic safety
Remarks
Citations
References
Historical references
Pedagogical and historical reviews
The most successful variational RG method.
A mathematical introduction and historical overview with a stress on group theory and the application in high-energy physics.
A pedestrian introduction to renormalization and the renormalization group.
A pedestrian introduction to the renormalization group as applied in condensed matter physics.
Books
T. D. Lee; Particle physics and introduction to field theory, Harwood academic publishers, 1981, . Contains a Concise, simple, and trenchant summary of the group structure, in whose discovery he was also involved, as acknowledged in Gell-Mann and Low's paper.
L. Ts. Adzhemyan, N. V. Antonov and A. N. Vasiliev; The Field Theoretic Renormalization Group in Fully Developed Turbulence; Gordon and Breach, 1999. .
Vasil'ev, A. N.; The field theoretic renormalization group in critical behavior theory and stochastic dynamics; Chapman & Hall/CRC, 2004. (Self-contained treatment of renormalization group applications with complete computations);
Zinn-Justin, Jean (2002). Quantum field theory and critical phenomena, Oxford, Clarendon Press (2002), (an exceptionally solid and thorough treatise on both topics);
Zinn-Justin, Jean: Renormalization and renormalization group: From the discovery of UV divergences to the concept of effective field theories, in: de Witt-Morette C., Zuber J.-B. (eds), Proceedings of the NATO ASI on Quantum Field Theory: Perspective and Prospective, June 15–26, 1998, Les Houches, France, Kluwer Academic Publishers, NATO ASI Series C 530, 375-388 (1999) [ISBN ]. Full text available in PostScript.
Kleinert, H. and Schulte-Frohlinde, V.; Critical Properties of φ⁴-Theories, World Scientific (Singapore, 2001); paperback. Full text available in PDF.
Quantum field theory
Statistical mechanics
Scaling symmetries
Mathematical physics
Larmor precession | In physics, Larmor precession (named after Joseph Larmor) is the precession of the magnetic moment of an object about an external magnetic field. The phenomenon is conceptually similar to the precession of a tilted classical gyroscope in an external torque-exerting gravitational field. Objects with a magnetic moment also have angular momentum and effective internal electric current proportional to their angular momentum; these include electrons, protons, other fermions, many atomic and nuclear systems, as well as classical macroscopic systems. The external magnetic field exerts a torque on the magnetic moment,
τ = μ × B = γ J × B,
where τ is the torque, μ is the magnetic dipole moment, J is the angular momentum vector, B is the external magnetic field, × symbolizes the cross product, and γ is the gyromagnetic ratio which gives the proportionality constant between the magnetic moment and the angular momentum.
The angular momentum vector precesses about the external field axis with an angular frequency known as the Larmor frequency,
ω = γB,
where ω is the angular frequency, and B is the magnitude of the applied magnetic field.
γ is the gyromagnetic ratio for a particle of charge q, equal to γ = gq/(2m), where m is the mass of the precessing system, while g is the g-factor of the system. The g-factor is the unit-less proportionality factor relating the system's angular momentum to the intrinsic magnetic moment; in classical physics it is 1 for any rigid object in which the charge and mass density are identically distributed. The Larmor frequency is independent of the angle between J and B.
In nuclear physics the g-factor of a given system includes the effect of the nucleon spins, their orbital angular momenta, and their couplings. Generally, the g-factors are very difficult to calculate for such many-body systems, but they have been measured to high precision for most nuclei. The Larmor frequency is important in NMR spectroscopy. The gyromagnetic ratios, which give the Larmor frequencies at a given magnetic field strength, have been measured and tabulated.
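As a simple numerical illustration, the proton Larmor frequency ν = γB/2π can be evaluated for typical laboratory fields; the field strengths below are chosen for illustration.
```python
# Proton Larmor frequencies nu = (gamma/2*pi) * B for a few magnet strengths.
# gamma_p / (2*pi) ~ 42.577 MHz/T is the standard tabulated proton value;
# the field strengths are illustrative choices.
gamma_over_2pi = 42.577e6      # Hz per tesla, proton

for B in (1.0, 3.0, 9.4):      # tesla
    nu = gamma_over_2pi * B    # precession frequency in Hz
    print(f"B = {B:4.1f} T  ->  Larmor frequency ~ {nu/1e6:.1f} MHz")
```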
Crucially, the Larmor frequency is independent of the polar angle between the applied magnetic field and the magnetic moment direction. This is what makes it a key concept in fields such as nuclear magnetic resonance (NMR) and electron paramagnetic resonance (EPR), since the precession rate does not depend on the spatial orientation of the spins.
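A small numerical sketch (with arbitrary illustrative values of γ and B) integrates dμ/dt = γ μ × B for two different tilt angles and recovers the same precession rate, consistent with this angle independence.
```python
import numpy as np

# Integrate the precession equation d(mu)/dt = gamma * mu x B for two
# initial tilt angles and estimate the azimuthal rotation rate.
gamma, B = 2.0, 1.5            # arbitrary illustrative values
Bvec = np.array([0.0, 0.0, B])
dt, steps = 1e-4, 50_000

for theta0 in (np.pi / 6, np.pi / 2.5):
    mu = np.array([np.sin(theta0), 0.0, np.cos(theta0)])
    phi_prev, turns = 0.0, 0.0
    for _ in range(steps):
        mu = mu + dt * gamma * np.cross(mu, Bvec)   # explicit Euler step
        mu /= np.linalg.norm(mu)                    # keep |mu| fixed
        phi = np.arctan2(mu[1], mu[0])
        dphi = phi - phi_prev                       # unwrap azimuthal angle
        if dphi > np.pi:
            dphi -= 2 * np.pi
        elif dphi < -np.pi:
            dphi += 2 * np.pi
        turns += dphi
        phi_prev = phi
    omega_est = abs(turns) / (steps * dt)
    print(f"tilt = {np.degrees(theta0):5.1f} deg  omega ~ {omega_est:.3f} "
          f"(gamma*B = {gamma * B:.3f})")
# Both tilt angles give essentially the same precession rate gamma*B.
```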
Including Thomas precession
The above equation is the one that is used in most applications. However, a full treatment must include the effects of Thomas precession, yielding the equation (in CGS units, so that E has the same units as B):
where γ is the relativistic Lorentz factor (not to be confused with the gyromagnetic ratio above). Notably, for the electron g is very close to 2, so if one sets g = 2, one arrives at
Bargmann–Michel–Telegdi equation
The spin precession of an electron in an external electromagnetic field is described by the Bargmann–Michel–Telegdi (BMT) equation
where a^τ, e, m, and μ are the polarization four-vector, charge, mass, and magnetic moment, u^τ is the four-velocity of the electron (in a system of units in which c = 1), a^τ u_τ = 0, u^τ u_τ = 1, and F^{τσ} is the electromagnetic field-strength tensor. Using equations of motion,
one can rewrite the first term on the right side of the BMT equation as , where is four-acceleration. This term describes Fermi–Walker transport and leads to Thomas precession. The second term is associated with Larmor precession.
When electromagnetic fields are uniform in space or when gradient forces like ∇(μ⋅B) can be neglected, the particle's translational motion is described by
The BMT equation is then written as
A beam-optical version of the Thomas–BMT equation, derived from the quantum theory of charged-particle beam optics, is applicable in accelerator optics.
Applications
A 1935 paper published by Lev Landau and Evgeny Lifshitz predicted the existence of ferromagnetic resonance of the Larmor precession, which was independently verified in experiments by J. H. E. Griffiths (UK) and E. K. Zavoiskij (USSR) in 1946.
Larmor precession is important in nuclear magnetic resonance, magnetic resonance imaging, electron paramagnetic resonance, muon spin resonance, and neutron spin echo. It is also important for the alignment of cosmic dust grains, which is a cause of the polarization of starlight.
To calculate the spin of a particle in a magnetic field, one must in general also take into account Thomas precession if the particle is moving.
Precession direction
The spin angular momentum of an electron precesses counter-clockwise about the direction of the magnetic field. An electron has a negative charge, so the direction of its magnetic moment is opposite to that of its spin.
See also
LARMOR neutron microscope
Notes
External links
Georgia State University HyperPhysics page on Larmor Frequency
Larmor Frequency Calculator
Atomic physics
Electromagnetism
Nuclear magnetic resonance
Precession
Thermodynamic equations | Thermodynamics is expressed by a mathematical framework of thermodynamic equations which relate various thermodynamic quantities and physical properties measured in a laboratory or production process. Thermodynamics is based on a fundamental set of postulates, that became the laws of thermodynamics.
Introduction
One of the fundamental thermodynamic equations is the description of thermodynamic work in analogy to mechanical work, or weight lifted through an elevation against gravity, as defined in 1824 by French physicist Sadi Carnot. Carnot used the phrase motive power for work. In the footnotes to his famous On the Motive Power of Fire, he states: “We use here the expression motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.” With the inclusion of a unit of time in Carnot's definition, one arrives at the modern definition for power:
During the latter half of the 19th century, physicists such as Rudolf Clausius, Peter Guthrie Tait, and Willard Gibbs worked to develop the concept of a thermodynamic system and the correlative energetic laws which govern its associated processes. The equilibrium state of a thermodynamic system is described by specifying its "state". The state of a thermodynamic system is specified by a number of extensive quantities, the most familiar of which are volume, internal energy, and the amount of each constituent particle (particle numbers). Extensive parameters are properties of the entire system, as contrasted with intensive parameters which can be defined at a single point, such as temperature and pressure. The extensive parameters (except entropy) are generally conserved in some way as long as the system is "insulated" to changes to that parameter from the outside. The truth of this statement for volume is trivial, for particles one might say that the total particle number of each atomic element is conserved. In the case of energy, the statement of the conservation of energy is known as the first law of thermodynamics.
A thermodynamic system is in equilibrium when it is no longer changing in time. This may happen in a very short time, or it may happen with glacial slowness. A thermodynamic system may be composed of many subsystems which may or may not be "insulated" from each other with respect to the various extensive quantities. If we have a thermodynamic system in equilibrium in which we relax some of its constraints, it will move to a new equilibrium state. The thermodynamic parameters may now be thought of as variables and the state may be thought of as a particular point in a space of thermodynamic parameters. The change in the state of the system can be seen as a path in this state space. This change is called a thermodynamic process. Thermodynamic equations are now used to express the relationships between the state parameters at these different equilibrium states.
The concept which governs the path that a thermodynamic system traces in state space as it goes from one equilibrium state to another is that of entropy. The entropy is first viewed as an extensive function of all of the extensive thermodynamic parameters. If we have a thermodynamic system in equilibrium, and we release some of the extensive constraints on the system, there are many equilibrium states that it could move to consistent with the conservation of energy, volume, etc. The second law of thermodynamics specifies that the equilibrium state that it moves to is in fact the one with the greatest entropy. Once we know the entropy as a function of the extensive variables of the system, we will be able to predict the final equilibrium state.
Notation
Some of the most common thermodynamic quantities are:
The conjugate variable pairs are the fundamental state variables used to formulate the thermodynamic functions.
The most important thermodynamic potentials are the following functions:
Thermodynamic systems are typically affected by the following types of system interactions. The types under consideration are used to classify systems as open systems, closed systems, and isolated systems.
Common material properties determined from the thermodynamic functions are the following:
The following constants are constants that occur in many relationships due to the application of a standard system of units.
Laws of thermodynamics
The behavior of a thermodynamic system is summarized in the laws of Thermodynamics, which concisely are:
Zeroth law of thermodynamics
If A, B, C are thermodynamic systems such that A is in thermal equilibrium with B and B is in thermal equilibrium with C, then A is in thermal equilibrium with C.
The zeroth law is of importance in thermometry, because it implies the existence of temperature scales. In practice, C is a thermometer, and the zeroth law says that systems that are in thermodynamic equilibrium with each other have the same temperature. The law was actually the last of the laws to be formulated.
First law of thermodynamics
dU = δQ − δW,
where dU is the infinitesimal increase in internal energy of the system, δQ is the infinitesimal heat flow into the system, and δW is the infinitesimal work done by the system.
The first law is the law of conservation of energy. The symbol δ, instead of the plain d, originated in the work of German mathematician Carl Gottfried Neumann and is used to denote an inexact differential and to indicate that Q and W are path-dependent (i.e., they are not state functions). In some fields such as physical chemistry, positive work is conventionally considered work done on the system rather than by the system, and the law is expressed as dU = δQ + δW.
Second law of thermodynamics
The entropy of an isolated system never decreases: dS ≥ 0 for an isolated system.
A concept related to the second law which is important in thermodynamics is that of reversibility. A process within a given isolated system is said to be reversible if throughout the process the entropy never increases (i.e. the entropy remains unchanged).
Third law of thermodynamics
S → 0 when T → 0
The third law of thermodynamics states that at the absolute zero of temperature, the entropy is zero for a perfect crystalline structure.
Onsager reciprocal relations – sometimes called the Fourth law of thermodynamics
The fourth law of thermodynamics is not yet an agreed upon law (many supposed variations exist); historically, however, the Onsager reciprocal relations have been frequently referred to as the fourth law.
The fundamental equation
The first and second law of thermodynamics are the most fundamental equations of thermodynamics. They may be combined into what is known as fundamental thermodynamic relation which describes all of the changes of thermodynamic state functions of a system of uniform temperature and pressure. As a simple example, consider a system composed of a number k of different types of particles, with volume as its only external variable. The fundamental thermodynamic relation may then be expressed in terms of the internal energy as:
dU = T dS − P dV + Σi μi dNi
Some important aspects of this equation should be noted:
The thermodynamic space has k+2 dimensions
The differential quantities (U, S, V, Ni) are all extensive quantities. The coefficients of the differential quantities are intensive quantities (temperature, pressure, chemical potential). Each pair in the equation is known as a conjugate pair with respect to the internal energy. The intensive variables may be viewed as a generalized "force". An imbalance in the intensive variable will cause a "flow" of the extensive variable in a direction to counter the imbalance.
The equation may be seen as a particular case of the chain rule. In other words:
dU = (∂U/∂S) dS + (∂U/∂V) dV + Σi (∂U/∂Ni) dNi
from which the following identifications can be made:
T = (∂U/∂S) at constant V and Ni, −P = (∂U/∂V) at constant S and Ni, μi = (∂U/∂Ni) at constant S, V and Nj (j ≠ i).
These equations are known as "equations of state" with respect to the internal energy. (Note - the relation between pressure, volume, temperature, and particle number which is commonly called "the equation of state" is just one of many possible equations of state.) If we know all k+2 of the above equations of state, we may reconstitute the fundamental equation and recover all thermodynamic properties of the system.
The fundamental equation can be solved for any other differential and similar expressions can be found. For example, we may solve for dS and find that
dS = (1/T) dU + (P/T) dV − Σi (μi/T) dNi
Thermodynamic potentials
By the principle of minimum energy, the second law can be restated by saying that for a fixed entropy, when the constraints on the system are relaxed, the internal energy assumes a minimum value. This will require that the system be connected to its surroundings, since otherwise the energy would remain constant.
By the principle of minimum energy, there are a number of other state functions which may be defined which have the dimensions of energy and which are minimized according to the second law under certain conditions other than constant entropy. These are called thermodynamic potentials. For each such potential, the relevant fundamental equation results from the same Second-Law principle that gives rise to energy minimization under restricted conditions: that the total entropy of the system and its environment is maximized in equilibrium. The intensive parameters give the derivatives of the environment entropy with respect to the extensive properties of the system.
The four most common thermodynamic potentials are:
the internal energy U, with natural variables (S, V, {Ni});
the Helmholtz free energy F = U − TS, with natural variables (T, V, {Ni});
the enthalpy H = U + PV, with natural variables (S, P, {Ni});
the Gibbs free energy G = U + PV − TS, with natural variables (T, P, {Ni}).
After each potential is shown its "natural variables". These variables are important because if the thermodynamic potential is expressed in terms of its natural variables, then it will contain all of the thermodynamic relationships necessary to derive any other relationship. In other words, it too will be a fundamental equation. For the above four potentials, the fundamental equations are expressed as:
dU = T dS − P dV + Σi μi dNi
dF = −S dT − P dV + Σi μi dNi
dH = T dS + V dP + Σi μi dNi
dG = −S dT + V dP + Σi μi dNi
The thermodynamic square can be used as a tool to recall and derive these potentials.
First order equations
Just as with the internal energy version of the fundamental equation, the chain rule can be used on the above equations to find k+2 equations of state with respect to the particular potential. If Φ is a thermodynamic potential, then the fundamental equation may be expressed as:
dΦ = Σi (∂Φ/∂Xi) dXi
where the Xi are the natural variables of the potential. If γi is conjugate to Xi, then we have the equations of state for that potential, one for each set of conjugate variables.
Only one equation of state will not be sufficient to reconstitute the fundamental equation. All equations of state will be needed to fully characterize the thermodynamic system. Note that what is commonly called "the equation of state" is just the "mechanical" equation of state involving the Helmholtz potential and the volume:
P = −(∂F/∂V) at constant T and Ni.
For an ideal gas, this becomes the familiar PV=NkBT.
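A brief symbolic sketch (using an assumed ideal-gas form for the Helmholtz free energy, with volume-independent terms lumped into a constant) recovers this mechanical equation of state.
```python
import sympy as sp

# Recover P = -(dF/dV)_{T,N} for an ideal gas.  The form of F below is the
# standard ideal-gas expression up to V-independent terms (collected in c),
# an assumption for illustration; such terms drop out of the derivative.
N, kB, T, V, c = sp.symbols('N k_B T V c', positive=True)

F = -N * kB * T * (sp.log(V / (N * c)) + 1)   # assumed Helmholtz free energy
P = -sp.diff(F, V)
print(sp.simplify(P))        # -> N*k_B*T/V, i.e. P V = N k_B T
```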
Euler integrals
Because all of the natural variables of the internal energy U are extensive quantities, it follows from Euler's homogeneous function theorem that
U = TS − PV + Σi μi Ni
Substituting into the expressions for the other main potentials, we have the following expressions for the thermodynamic potentials:
F = −PV + Σi μi Ni
H = TS + Σi μi Ni
G = Σi μi Ni
Note that the Euler integrals are sometimes also referred to as fundamental equations.
Gibbs–Duhem relationship
Differentiating the Euler equation for the internal energy and combining with the fundamental equation for internal energy, it follows that:
0 = S dT − V dP + Σi Ni dμi
which is known as the Gibbs-Duhem relationship. The Gibbs-Duhem is a relationship among the intensive parameters of the system. It follows that for a simple system with r components, there will be r+1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume for example. The law is named after Willard Gibbs and Pierre Duhem.
Second order equations
There are many relationships that follow mathematically from the above basic equations. See Exact differential for a list of mathematical relationships. Many equations are expressed as second derivatives of the thermodynamic potentials (see Bridgman equations).
Maxwell relations
Maxwell relations are equalities involving the second derivatives of thermodynamic potentials with respect to their natural variables. They follow directly from the fact that the order of differentiation does not matter when taking the second derivative. The four most common Maxwell relations are:
(∂T/∂V)S = −(∂P/∂S)V
(∂T/∂P)S = (∂V/∂S)P
(∂S/∂V)T = (∂P/∂T)V
(∂S/∂P)T = −(∂V/∂T)P
The thermodynamic square can be used as a tool to recall and derive these relations.
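A short symbolic check of one of these relations, treating the Helmholtz potential as an arbitrary function F(T, V) so that no particular substance is assumed:
```python
import sympy as sp

# Verify (dS/dV)_T = (dP/dT)_V from the equality of mixed second
# derivatives of an arbitrary Helmholtz potential F(T, V).
T, V = sp.symbols('T V')
F = sp.Function('F')(T, V)

S = -sp.diff(F, T)       # entropy from the Helmholtz potential
P = -sp.diff(F, V)       # pressure from the Helmholtz potential

lhs = sp.diff(S, V)      # (dS/dV)_T
rhs = sp.diff(P, T)      # (dP/dT)_V
print(sp.simplify(lhs - rhs))   # -> 0: the Maxwell relation holds identically
```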
Material properties
Second derivatives of thermodynamic potentials generally describe the response of the system to small changes. The number of second derivatives which are independent of each other is relatively small, which means that most material properties can be described in terms of just a few "standard" properties. For the case of a single component system, there are three properties generally considered "standard" from which all others may be derived:
Compressibility at constant temperature or constant entropy
Specific heat (per-particle) at constant pressure or constant volume
Coefficient of thermal expansion
These properties are seen to be the three possible second derivative of the Gibbs free energy with respect to temperature and pressure.
Thermodynamic property relations
Properties such as pressure, volume, temperature, unit cell volume, bulk modulus and mass are easily measured. Other properties are measured through simple relations, such as density, specific volume, specific weight. Properties such as internal energy, entropy, enthalpy, and heat transfer are not so easily measured or determined through simple relations. Thus, we use more complex relations such as Maxwell relations, the Clapeyron equation, and the Mayer relation.
Maxwell relations in thermodynamics are critical because they provide a means of simply measuring the change in properties of pressure, temperature, and specific volume, to determine a change in entropy. Entropy cannot be measured directly. The change in entropy with respect to pressure at a constant temperature is the same as the negative change in specific volume with respect to temperature at a constant pressure, for a simple compressible system. Maxwell relations in thermodynamics are often used to derive thermodynamic relations.
The Clapeyron equation allows us to use pressure, temperature, and specific volume to determine an enthalpy change that is connected to a phase change. It is significant to any phase change process that happens at a constant pressure and temperature. One of the relations it resolves to is the enthalpy of vaporization at a provided temperature, obtained by measuring the slope of a saturation curve on a pressure vs. temperature graph. It also allows us to determine the specific volume of a saturated vapor and liquid at that provided temperature. In the equation below, L represents the specific latent heat, T represents temperature, and Δv represents the change in specific volume.
dP/dT = L / (T Δv)
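A rough numerical sketch of this use of the Clapeyron equation, with approximate steam-table values near 100 °C taken as illustrative inputs:
```python
# Estimate the latent heat of vaporization of water near 100 C from the
# Clapeyron relation dP/dT = L / (T * dv), rearranged as L = T * dv * dP/dT.
# The saturation pressures and specific volumes are approximate steam-table
# values, used only as illustrative inputs.
T = 373.15                      # K, saturation temperature at ~101.3 kPa
P_hi, P_lo = 105.0e3, 97.8e3    # Pa, approx. saturation pressures at 101 C and 99 C
dP_dT = (P_hi - P_lo) / 2.0     # Pa/K, finite-difference slope of the saturation curve
dv = 1.673 - 0.001              # m^3/kg, vapour minus liquid specific volume

L = T * dv * dP_dT              # specific latent heat, J/kg
print(f"estimated L ~ {L/1e3:.0f} kJ/kg (tabulated value is roughly 2257 kJ/kg)")
```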
The Mayer relation states that the specific heat capacity of a gas at constant volume is slightly less than at constant pressure. This relation was built on the reasoning that energy must be supplied to raise the temperature of the gas and for the gas to do work in a volume changing case. According to this relation, the difference between the specific heat capacities is the same as the universal gas constant. This relation is represented by the difference between Cp and Cv:
Cp – Cv = R
See also
Thermodynamics
Timeline of thermodynamics
Notes
References
Chapters 1 - 10, Part 1: Equilibrium.
(reprinted from Oxford University Press, 1978)
Thermodynamics
Chemical engineering
Robert Hooke | Robert Hooke (; 18 July 1635 – 3 March 1703) was an English polymath who was active as a physicist ("natural philosopher"), astronomer, geologist, meteorologist and architect. He is credited as one of the first scientists to investigate living things at microscopic scale in 1665, using a compound microscope that he designed. Hooke was an impoverished scientific inquirer in young adulthood who went on to become one of the most important scientists of his time. After the Great Fire of London in 1666, Hooke (as a surveyor and architect) attained wealth and esteem by performing more than half of the property line surveys and assisting with the city's rapid reconstruction. Often vilified by writers in the centuries after his death, his reputation was restored at the end of the twentieth century and he has been called "England's Leonardo [da Vinci]".
Hooke was a Fellow of the Royal Society and from 1662, he was its first Curator of Experiments. From 1665 to 1703, he was also Professor of Geometry at Gresham College. Hooke began his scientific career as an assistant to the physical scientist Robert Boyle. Hooke built the vacuum pumps that were used in Boyle's experiments on gas law and also conducted experiments. In 1664, Hooke identified the rotations of Mars and Jupiter. Hooke's 1665 book Micrographia, in which he coined the term cell, encouraged microscopic investigations. Investigating optics, specifically light refraction, Hooke inferred a wave theory of light. His is the first-recorded hypothesis of the cause of the expansion of matter by heat, of air's composition by small particles in constant motion that thus generate its pressure, and of heat as energy.
In physics, Hooke inferred that gravity obeys an inverse square law and arguably was the first to hypothesise such a relation in planetary motion, a principle Isaac Newton furthered and formalised in Newton's law of universal gravitation. Priority over this insight contributed to the rivalry between Hooke and Newton. In geology and palaeontology, Hooke originated the theory of a terraqueous globe, thus disputing the Biblical view of the Earth's age; he also hypothesised the extinction of species, and argued hills and mountains had become elevated by geological processes. By identifying fossils of extinct species, Hooke presaged the theory of biological evolution.
Life and works
Early life
Much of what is known of Hooke's early life comes from an autobiography he commenced in 1696 but never completed; Richard Waller FRS mentions it in his introduction to The Posthumous Works of Robert Hooke, M.D. S.R.S., which was printed in 1705. The work of Waller, along with John Ward's Lives of the Gresham Professors, and John Aubrey's Brief Lives form the major near-contemporaneous biographical accounts of his life.
Hooke was born in 1635 in Freshwater, Isle of Wight, to Cecily Gyles and the Anglican priest John Hooke, who was the curate of All Saints' Church, Freshwater. Robert was the youngest, by seven years, of four siblings (two boys and two girls); he was frail and not expected to live. Although his father gave him some instruction in English, (Latin) Grammar and Divinity, Robert's education was largely neglected. Left to his own devices, he made little mechanical toys; seeing a brass clock dismantled, he built a wooden replica that "would go".
Hooke's father died in October 1648, leaving £40 in his will to Robert (plus another £10 held over from his grandmother). At the age of 13, he took this to London to become an apprentice to the celebrated painter Peter Lely. Hooke also had "some instruction in drawing" from the limner Samuel Cowper but "the smell of the Oil Colours did not agree with his Constitution, increasing his Head-ache to which he was ever too much subject", and he became a pupil at Westminster School, living with its master Richard Busby. Hooke quickly mastered Latin, Greek and Euclid's Elements; he also learnt to play the organ and began his lifelong study of mechanics. He remained an accomplished draughtsman, as he was later to demonstrate in his drawings that illustrate the work of Robert Boyle and Hooke's own Micrographia.
Oxford
In 1653, Hooke secured a place at Christ Church, Oxford, receiving free tuition and accommodation as an organist and a chorister, and a basic income as a servitor, despite the fact he did not officially matriculate until 1658. In 1662, Hooke was awarded a Master of Arts degree.
While a student at Oxford, Hooke was also employed as an assistant to Dr Thomas Willis a physician, chemist and member of the Oxford Philosophical Club. The Philosophical Club had been founded by John Wilkins, Warden of Wadham College, who led this important group of scientists who went on to form the nucleus of the Royal Society. In 1659, Hooke described to the Club some elements of a method of heavier-than-air flight but concluded human muscles were insufficient to the task. Through the Club, Hooke met Seth Ward (the University's Savilian Professor of Astronomy) and developed for Ward a mechanism that improved the regularity of pendulum clocks used for astronomical time-keeping. Hooke characterised his Oxford days as the foundation of his lifelong passion for science. The friends he made there, particularly Christopher Wren, were important to him throughout his career. Willis introduced Hooke to Robert Boyle, who the Club sought to attract to Oxford.
In 1655, Boyle moved to Oxford and Hooke became nominally his assistant but in practice his co-experimenter. Boyle had been working on gas pressures; the possibility a vacuum might exist despite Aristotle's maxim "Nature abhors a vacuum" had just begun to be considered. Hooke developed an air pump for Boyle's experiments rather than use Ralph Greatorex's pump, which Hooke considered as "too gross to perform any great matter". Hooke's engine enabled the development of the eponymous law that was subsequently attributed to Boyle; Hooke had a particularly keen eye and was an adept mathematician, neither of which applied to Boyle. Hooke taught Boyle Euclid's Elements and Descartes's Principles of Philosophy; it also caused them to recognise fire as a chemical reaction and not, as Aristotle taught, a fundamental element of nature.
Royal Society
According to Henry Robinson, Librarian of The Royal Society in 1935:
The Royal Society for the Improvement of Natural Knowledge by Experiment was founded in 1660 and given its Royal Charter in July 1662. On 5 November 1661, Robert Moray proposed the appointment of a curator to furnish the society with experiments, and this was unanimously passed and Hooke was named on Boyle's recommendation. The Society did not have a reliable income to fully fund the post of Curator of Experiments but in 1664, John Cutler settled an annual gratuity of £50 on the Society to found a "" lectureship at Gresham College on the understanding the Society would appoint Hooke to this task. On 27 June 1664, Hooke was confirmed to the office and on 11 January 1665, he was named Curator by Office for life with an annual salary of £80, which consisted of £30 from the Society and Cutler's £50 annuity.
In June 1663, Hooke was elected a Fellow of the Royal Society (FRS). On 20 March 1665, he was also appointed Gresham Professor of Geometry. On 13 September 1667, Hooke became acting Secretary of the Society and on 19 December 1677, he was appointed its Joint Secretary.
Personality, relationships, health and death
Although John Aubrey described Hooke as a person of "great virtue and goodness", much has been written about the unpleasant side of Hooke's personality. According to his first biographer Richard Waller, Hooke was "in person, but despicable", and "melancholy, mistrustful, and jealous". Waller's comments influenced other writers for more than 200 years such that many books and articles, especially biographies of Isaac Newton, portray Hooke as a disgruntled, selfish, anti-social curmudgeon. For example, Arthur Berry said Hooke "claimed credit for most of the scientific discoveries of the time". Sullivan wrote he was "positively unscrupulous" and had an "uneasy apprehensive vanity" in dealings with Newton. Manuel described Hooke as "cantankerous, envious, vengeful". According to More, Hooke had both a "cynical temperament" and a "caustic tongue". Andrade was more sympathetic but still described Hooke as "difficult", "suspicious" and "irritable". In October 1675, the Council of the Royal Society considered a motion to expel Hooke because of an attack he made on Christiaan Huygens over scientific priority in watch design but it did not pass. According to Hooke's biographer Ellen Drake:
The publication of Hooke's diary in 1935 revealed previously unknown details about his social and familial relationships. His biographer Margaret 'Espinasse said: "the picture which is usually painted of Hooke as a recluse is completely false". He interacted with noted artisans such as clock-maker Thomas Tompion and instrument-maker Christopher Cocks (Cox). Hooke often met Christopher Wren, with whom he shared many interests, and had a lasting friendship with John Aubrey. His diaries also make frequent reference to meetings at coffeehouses and taverns, as well as to dinners with Robert Boyle. On many occasions, Hooke took tea with his lab assistant Harry Hunt. Although he largely lived alone (apart from the servants who ran his home), his niece Grace Hooke and his cousin Tom Giles lived with him for some years as children.
Hooke never married. According to his diary, Hooke had a sexual relationship with his niece Grace, after she had turned 16; she had been in his custody since the age of 10. He also had sexual relations with several maids and housekeepers. Hooke's biographer Stephen Inwood considers Grace to have been the love of his life, and he was devastated when she died in 1687. Inwood also mentions: "The age difference between him and Grace was commonplace and would not have upset his contemporaries as it does us". The incestuous relationship would nevertheless have been frowned upon and tried by an ecclesiastical court had it been discovered; it was not, however, a capital felony after 1660.
Since childhood, Hooke suffered from migraine, tinnitus, dizziness and bouts of insomnia; he also had a spinal deformity that was consistent with a diagnosis of Scheuermann's kyphosis, giving him in middle and later years a "thin and crooked body, over-large head and protruding eyes". Approaching these in a scientific spirit, he experimented with self-medication, diligently recording symptoms, substances and effects in his diary. He regularly used sal ammoniac, emetics, laxatives and opiates, which appear to have had an increasing effect on his physical and mental health over time.
Hooke died in London on 3 March 1703, having been blind and bedridden during the last year of his life. A chest containing £8,000 in money and gold was found in his room at Gresham College. His library contained over 3,000 books in Latin, French, Italian and English. Although he had talked of leaving a generous bequest to the Royal Society, which would have given his name to a library, laboratory and lectures, no will was found and the money passed to a cousin named Elizabeth Stephens. Hooke was buried at St Helen's Church, Bishopsgate, in the City of London but the precise location of his grave is unknown.
Science
Hooke's role at the Royal Society was to demonstrate experiments from his own methods or at the suggestion of members. Among his earliest demonstrations were discussions of the nature of air and the implosion of glass bubbles that had been sealed with enclosed hot air. He also demonstrated that a dog could be kept alive with its thorax opened, provided air was pumped in and out of its lungs. He noted the difference between venous and arterial blood, and thus demonstrated that the ("food of life") and [flames] were the same thing. There were also experiments on gravity, the falling of objects, the weighing of bodies, the measurement of barometric pressure at different heights, and the movement of pendulums up to . His biographer described him as England's first meteorologist, in her description of his essay Method for making a history of the weather. (Hooke specifies that a thermometer, a hygrometer, a wind gauge and a record sheet be used for proper weather records.)
Astronomy
In May 1664, using a refracting telescope, Hooke observed the Great Red Spot of Jupiter for two hours as it moved across the planet's face. In March 1665, he published his findings and from them, the Italian astronomer Giovanni Cassini calculated the rotation period of Jupiter to be nine hours and fifty-five minutes.
One of the most-challenging problems Hooke investigated was the measurement of the distance from Earth to a star other than the Sun. Hooke selected the star Gamma Draconis and chose the method of parallax determination. In 1669, after several months of observing, Hooke believed the desired result had been achieved. It is now known his equipment was far too imprecise to obtain an accurate measurement.
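A rough back-of-the-envelope estimate (using an approximate modern distance for Gamma Draconis as an assumed input) shows the scale of the problem.
```python
# Annual parallax in arcseconds is ~ 1 / d, with d in parsecs, for a 1 AU
# baseline.  The distance used for Gamma Draconis is a modern value quoted
# only approximately, as an assumption for illustration.
baseline_au = 1.0              # Earth-Sun distance, in AU
d_parsec = 47.0                # approx. modern distance of Gamma Draconis

parallax_arcsec = baseline_au / d_parsec
print(f"parallax ~ {parallax_arcsec:.3f} arcsec "
      f"(~{parallax_arcsec*1000:.0f} milliarcseconds)")
print("Hooke's micrometer precision was of order 10 arcsec, hundreds of")
print("times too coarse to detect a shift of a few hundredths of an arcsecond.")
```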
Hooke's Micrographia contains illustrations of the Pleiades star cluster and lunar craters. He conducted experiments to investigate the formation of these craters and concluded their existence meant the Moon must have its own gravity, a radical departure from the contemporaneous Aristotelian celestial model. He also was an early observer of the rings of Saturn, and discovered one of the first-observed double-star systems Gamma Arietis in 1664.
To achieve these discoveries, Hooke needed better instruments than those that were available at the time. Accordingly, he invented three new mechanisms: the Hooke joint, a sophisticated universal joint that allowed his instruments to smoothly follow the apparent motion of the observed body; the first clockwork drive to automate the process; and a micrometer screw that allowed him to achieve a precision of ten seconds of arc. Hooke was dissatisfied with refracting telescopes so he built the first practical Gregorian telescope that used a silvered glass mirror.
Mechanics
In 1660, Hooke discovered the law of elasticity that bears his name and describes the linear variation of tension with extension in an elastic spring. Hooke first described this discovery in an anagram "ceiiinosssttuv", whose solution he published in 1678 as Ut tensio, sic vis ("As the extension, so the force"). His work on elasticity culminated in his development of the balance spring or hairspring, which for the first time enabled a portable timepiece (a watch) to keep time with reasonable accuracy. A bitter dispute between Hooke and Christiaan Huygens on the priority of this invention was to continue for centuries after the death of both but a note dated 23 June 1670 in the journals of the Royal Society, describing a demonstration of a balance-controlled watch before the Royal Society, may support Hooke's claim to priority for the idea. Nevertheless, it is Huygens who is credited with building the first watch to use a balance spring.
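A minimal illustration of the law, with an arbitrary assumed spring constant:
```python
# Hooke's law of elasticity, F = -k * x: the restoring force of a spring is
# proportional to its extension.  Stiffness and extensions are illustrative.
k = 250.0                       # spring constant, N/m (assumed)

for x in (0.01, 0.02, 0.05):    # extensions in metres
    F = -k * x                  # restoring force ("as the extension, so the force")
    print(f"x = {x*100:4.1f} cm  ->  F = {F:6.2f} N")
```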
Hooke's announcement of his law of elasticity using an anagram was a method scientists, such as Hooke, Huygens and Galileo, sometimes used to establish priority for a discovery without revealing details. Hooke used mechanical analogues to understand fundamental processes such as the motion of a spherical pendulum and of a ball in a hollow cone, to demonstrate central force due to gravity, and a hanging chain net with point loads to provide the optimum shape for a dome with heavy cross on top.
Despite continuing reports to the contrary, Hooke did not influence Thomas Newcomen's invention of the steam engine; this myth, which originated in an article in the third edition of "Encyclopædia Britannica", has been found to be mistaken.
Gravitation
While many of Hooke's contemporaries, such as Isaac Newton, believed in aether as a medium for transmitting attraction and repulsion between separated celestial bodies, Hooke argued for an attracting principle of gravitation in Micrographia (1665). In a communication to the Royal Society in 1666, he wrote:
Hooke's 1674 Gresham lecture, An Attempt to Prove the Motion of the Earth by Observations (published 1679), said gravitation applies to "all celestial bodies" and restated these three propositions.
Hooke's statements up to 1674 make no mention, however, that an inverse square law applies or might apply to these attractions. His model of gravitation was also not yet universal, though it approached universality more closely than previous hypotheses. Hooke did not provide accompanying evidence or mathematical demonstration; he stated in 1674: "Now what these several degrees [of gravitational attraction] are I have not yet experimentally verified", indicating he did not yet know what law the gravitation might follow; and about his whole proposal, he said: "This I only hint at present ... having my self many other things in hand which I would first , and therefore cannot so well attend it" (i.e. "prosecuting this Inquiry").
In November 1679, Hooke initiated a notable exchange of letters with Newton that was published in 1960. Hooke's ostensible purpose was to tell Newton he (Hooke) had been appointed to manage the Royal Society's correspondence; Hooke therefore wanted to hear from members about their research or their views about the research of others. Hooke asked Newton's opinions about various matters. Among other items, Hooke mentioned "compounding the celestial motions of the planets of a direct motion by the tangent and an attractive motion towards the central body"; his "hypothesis of the or causes of springinesse"; a new hypothesis from Paris about planetary motions, which he described at length; efforts to carry out or improve national surveys; and the difference of latitude between London and Cambridge.
Newton's reply offered "a fancy of my own" about a terrestrial experiment rather than a proposal about celestial motions that might detect the Earth's motion; the experiment would use a body suspended in air and then dropped. Hooke wanted to discern how Newton thought the falling body could experimentally reveal the Earth's motion by its direction of deviation from the vertical but Hooke went on hypothetically to consider how its motion could continue if the solid Earth had not been in the way, on a spiral path to the centre. Hooke disagreed with Newton's idea of the body's continuing motion. A further short correspondence developed; towards the end of it, writing on 6 January 1680 to Newton, Hooke communicated his "supposition ... that the Attraction always is in a duplicate proportion to the Distance from the Center Reciprocall, and Consequently that the Velocity will be in a subduplicate proportion to the Attraction and Consequently as Kepler Supposes Reciprocall to the Distance". (Hooke's inference about the velocity is incorrect.)
In 1686, when the first book of Newton's Principia was presented to the Royal Society, Hooke said he had given Newton the "notion" of "the rule of the decrease of Gravity, being reciprocally as the squares of the distances from the Center". At the same time, according to Edmond Halley's contemporaneous report, Hooke agreed "the Demonstration of the Curves generated thereby" was wholly Newton's.
According to a 2002 assessment of the early history of the inverse square law: "by the late 1660s, the assumption of an 'inverse proportion between gravity and the square of distance' was rather common and had been advanced by a number of different people for different reasons". In the 1660s, Newton had shown for planetary motion under a circular assumption, force in the radial direction had an inverse-square relation with distance from the centre. Newton, who in May 1686 was presented with Hooke's claim to priority on the inverse square law, denied he was to be credited as author of the idea, giving reasons including the citation of prior work by others. Newton also said that, even if he had first heard of the inverse square proportion from Hooke (which Newton said he had not), he would still have some rights to it because of his mathematical developments and demonstrations. These, he said, enabled observations to be relied upon as evidence of its accuracy while according to Newton, Hooke, without mathematical demonstrations and evidence in favour of the supposition, could only guess it was approximately valid "at great distances from the centre".
Newton did accept and acknowledge, in all editions of the Principia, that Hooke and others had separately appreciated the inverse square law in the solar system. Newton acknowledged Wren, Hooke and Halley in this connection in his "Scholium to Proposition 4" in Book 1. In a letter to Halley, Newton also acknowledged his correspondence with Hooke in 1679–1680 had reawakened his dormant interest in astronomical matters but that did not mean, according to Newton, Hooke had told Newton anything new or original. Newton wrote:
Whilst Newton was primarily a pioneer in mathematical analysis and its applications, and in optical experimentation, Hooke was a creative experimenter of great range who left some of his ideas, such as those about gravitation, undeveloped. In 1759, decades after the deaths of both Newton and Hooke, Alexis Clairaut, a mathematical astronomer eminent in his own right in the field of gravitational studies, reviewed Hooke's published work on gravitation. According to Stephen Peter Rigaud, Clairaut wrote: "The example of Hooke and that of Kepler [serves] to show what a distance there is between a truth that is glimpsed and a truth that is demonstrated". I. Bernard Cohen said: "Hooke's claim to the inverse-square law has masked Newton's far more fundamental debt to him, the analysis of curvilinear orbital motion. In asking for too much credit, Hooke effectively denied to himself the credit due him for a seminal idea".
Horology
Hooke made important contributions to the science of timekeeping and was intimately involved in the advances of his time; these included refinement of the pendulum as a better regulator for clocks, increased precision of clock mechanisms and the use of the balance spring to improve the timekeeping of watches.
Galileo had observed the regularity of a pendulum and Huygens first incorporated it in a clock; in 1668, Hooke demonstrated his new device to keep a pendulum swinging regularly in unsteady conditions. His invention of a tooth-cutting machine enabled a substantial improvement in the accuracy and precision of timepieces. Waller reported the invention was, by Hooke's death, in constant use among clock makers.
Hooke announced he had conceived a way to build a marine chronometer to determine longitude, and with the help of Boyle and others, he attempted to patent it. In the process, Hooke demonstrated a pocket-watch of his own devising that was fitted with a coil spring attached to the arbour of the balance. Hooke's refusal to accept an escape clause in the proposed exclusive contract for the use of this idea resulted in its abandonment.
Hooke developed the principle of the balance spring independently of Huygens and at least five years beforehand. Huygens published his own work in Journal de Scavans in February 1675 and built the first functioning watch to use a balance spring.
Microscopy
In 1663 and 1664, Hooke made his microscopic, and some astronomic, observations, which he collated in Micrographia in 1665. His book, which describes observations with microscopes and telescopes, as well as original work in biology, contains the earliest-recorded observation of a microorganism, the microfungus Mucor. Hooke coined the term "cell", suggesting a resemblance between plant structures and honeycomb cells. The hand-crafted, leather-and-gold-tooled microscope he designed and used to make the observations for Micrographia, which Christopher Cock made for him in London, is on display at the National Museum of Health and Medicine in Maryland. Hooke's work developed from that of Henry Power, who published his microscopy work in Experimental Philosophy (1663); in turn, the Dutch scientist Antonie van Leeuwenhoek went on to develop increased magnification and so reveal protozoa, blood cells and spermatozoa.
Micrographia also contains Hooke's, or perhaps Boyle's and Hooke's, ideas on combustion. Hooke's experiments led him to conclude combustion involves a component of air, a statement with which modern scientists would agree but that was not understood widely, if at all, in the seventeenth century. He also concluded respiration and combustion involve a specific and limited component of air. According to Partington, if "Hooke had continued his experiments on combustion, it is probable that he would have discovered oxygen".
Samuel Pepys wrote of the book in his diary on 21 January 1665: "Before I went to bed I sat up till two o’clock in my chamber reading of Mr. Hooke's Observations, the most ingenious book that ever I read in my life".
Palaeontology and geology
One of the observations in Micrographia is of fossil wood, the microscopic structure of which Hooke compared to that of ordinary wood. This led him to conclude that fossilised objects like petrified wood and fossil shells such as ammonites were the remains of living things that had been soaked in mineral-laden petrifying water. He believed that such fossils provided reliable clues about the history of life on Earth and, despite the objections of contemporary naturalists like John Ray, who found the concept of extinction theologically unacceptable, that in some cases they might represent species that had become extinct through some geological disaster. In a series of lectures in 1668, Hooke proposed the then-heretical idea that the Earth's surface had been formed by volcanoes and earthquakes, and that the latter were responsible for shell fossils being found far above sea level.
In 1835, Charles Lyell, the Scottish geologist and associate of Charles Darwin, wrote of Hooke in Principles of Geology: "His treatise ... is the most philosophical production of that age, in regard to the causes of former changes in the organic and inorganic kingdoms of nature".
Memory
Hooke's scientific model of human memory was one of the first of its kind. In a 1682 lecture to the Royal Society, Hooke proposed a mechanical analogue model of human memory that bore little resemblance to the mainly philosophical models of earlier writers. This model addressed the components of encoding, memory capacity, repetition, retrieval, and forgetting – some with surprisingly modern accuracy. According to psychology professor Douglas Hintzman, Hooke's model's most-interesting points are that it allows for attention and other top-down influences on encoding; it uses resonance to implement parallel, cue-dependent retrieval; it explains memory for recency; it offers a single-system account of repetition and priming; and the power law of forgetting can be derived from the model's assumption in a straightforward way.
Other
On 8 July 1680, Hooke observed the nodal patterns associated with the modes of vibration of glass plates. He ran a bow along the edge of a flour-covered glass plate and saw the nodal patterns emerge. In acoustics, in 1681, Hooke showed the Royal Society that musical tones can be generated using spinning brass cogs cut with teeth in particular proportions.
Architecture
Robert Hooke was Surveyor to the City of London and chief assistant to Christopher Wren, in which capacities he helped Wren rebuild London after the Great Fire of 1666. Hooke designed the Monument to the Great Fire of London (1672), Montagu House in Bloomsbury (1674) and Bethlem Royal Hospital (1674), which became known as "Bedlam". Other buildings Hooke designed include the Royal College of Physicians (1679); Aske's Hospital (1679); Ragley Hall, Warwickshire (1680); the Church of St Mary Magdalene at Willen, Buckinghamshire (1680); and Ramsbury Manor, Wiltshire (1681). He worked on many of the London churches that were rebuilt after the fire; Hooke was generally subcontracted by Wren; from 1671 to 1696, Wren's office paid Hooke £2,820 in fees, more than he ever earned from his Royal Society and Cutler Lectureship posts.
Wren and Hooke were both keen astronomers. The Monument to the Great Fire of London was designed to serve a scientific function as a zenith telescope for astronomical observation, though traffic vibration made it unusable for this purpose. The legacy of this can be observed in the construction of the spiral staircase, which has no central column, and in the observation chamber, which remains in place below ground level. He also collaborated with Wren on the design of St Paul's Cathedral; Hooke determined the ideal shape of an arch is an inverted catenary and thence that a circular series of such arches makes an ideal shape for the cathedral's dome.
In the reconstruction after the Great Fire, for which Wren and others also submitted proposals, Hooke proposed redesigning London's streets on a grid pattern with wide boulevards and arteries, a pattern that was later used in Haussmann's renovation of Paris and in many American cities. The King decided that the prospective cost of building and compensation, together with the need to quickly restore trade and population, meant the city would be rebuilt on the original property lines. Hooke was given the task of surveying the ruins to identify foundations, street edges and property boundaries. He was closely involved with the drafting of an Act of Common Council (April 1667), which set out the process by which the original foundations would be formally recognised and certificated. According to Lisa Jardine: "in the four weeks from the 4th of October, [Hooke] helped map the fire-damaged area, began compiling a Land Information System for London, and drew up building regulations for an Act of Parliament to govern the rebuilding". Stephen Inwood said: "the surveyors' reports, which were generally written by Hooke, show an admirable ability to get to the nub of intricate neighbourly squabbles, and to produce a crisp and judicious recommendation from a tangle of claims and counter-claims".
Hooke also had to measure and certify land that would be compulsorily purchased for the planned road widening so compensation could be paid. In 1670, he was appointed Surveyor of the Royal Works. Together with the work of the Scottish cartographer and printer John Ogilby, Hooke's precise and detailed surveys led to the production in 1677 of a large-scale map of London, the first known to be drawn to a specific scale (1:1200).
Likenesses
No authenticated portrait of Robert Hooke exists, a situation that has sometimes been attributed to the heated conflicts between Hooke and Isaac Newton, although Hooke's biographer Allan Chapman rejects as a myth claims Newton or his acolytes deliberately destroyed Hooke's portrait. German antiquarian and scholar Zacharias Conrad von Uffenbach visited the Royal Society in 1710 and his account of his visit mentions him being shown portraits of "Boyle and Hoock", which were said to be good likenesses but, while Boyle's portrait survives, Hooke's has been lost. In Hooke's time, the Royal Society met at Gresham College but within a few months of Hooke's death Newton became the Society's president and plans for a new meeting place were made. When the Royal Society moved to new premises in 1710, Hooke's was the only portrait that went missing and remains so. According to Hooke's diary, he sat for a portrait by renowned artist Mary Beale, so it is possible such a portrait did at some time exist. Conversely, Chapman draws attention to the fact that Waller's extensively illustrated work, Posthumous works of Robert Hooke, published shortly after Hooke's death, has no portrait of him.
Two contemporaneous, written descriptions of Hooke's appearance have survived; his close friend John Aubrey described him in middle age and at the height of his creative powers:
Richard Waller, writing in 1705 in The Posthumous Works of Robert Hooke, described the elderly Hooke:
On 3 July 1939, Time magazine published a portrait, supposedly of Hooke, but when Ashley Montagu traced the source, it was found to lack a verifiable connection to Hooke. Montagu found the two contemporaneous written descriptions of Hooke's appearance agree with one another but that neither matches the portrait in Time.
In 2003, historian Lisa Jardine conjectured that a recently discovered portrait was of Hooke, but this proposal was disproved by William B. Jensen of the University of Cincinnati who identified the subject as the Flemish scholar Jan Baptist van Helmont.
Other possible likenesses of Hooke include:
A seal used by Hooke displays an unusual profile portrait of a man's head, which some have said portrays Hooke.
The engraved frontispiece to the 1728 edition of Chambers' Cyclopedia shows a drawing of a bust of Robert Hooke. The extent to which the drawing is based on a real work of art is unknown.
A memorial window existed at St Helen's Church, Bishopsgate, London, but it was a formulaic rendering rather than an accurate likeness. The window was destroyed in the 1993 Bishopsgate bombing.
In 2003, the amateur painter Rita Greer embarked on a project to memorialise Hooke and produce credible images of him, both painted and drawn, that she believes match Aubrey's and Waller's descriptions of him. Greer's images of Hooke, which are free to use under the Free Art License, have been used for television programmes in the UK and the US, in books, magazines and for public relations.
In 2019, Larry Griffing, an associate professor in Biology at Texas A&M University, proposed that a portrait by Mary Beale of an unknown sitter, referred to as Portrait of a Mathematician, is actually of Hooke, noting the physical features of the sitter in the portrait match Hooke's. The figure points to a drawing of elliptical motion that appears to match an unpublished manuscript created by him. The painting also includes an orrery depicting the same principle. According to Griffing, buildings included in the image are of Lowther Castle, now in Cumbria, and its Church of St Michael. The church was renovated under one of Hooke's architectural commissions, which Beale would have known from her extensive body of work for the Lowther family. According to Griffing, the painting would once have been owned by the Royal Society but was abandoned when Newton, its president, moved the Society's headquarters in 1710. Christopher Whittaker of the School of Education, University of Durham, England, has questioned Griffing's analysis; according to Whittaker, the portrait is more likely to be of Isaac Barrow; in a response to Whittaker, Griffing reaffirmed his deduction.
Commemorations
3514 Hooke, an asteroid (1971 UJ)
A crater on the Moon and another on Mars are named in his honour.
The Hooke Medal is an annual award by the British Society for Cell Biology, to recognise "an emerging leader in cell biology".
List of new memorials to Robert Hooke 2005–2009, erected on the occasion of the tercentenary of his death
The Boyle-Hooke plaque in Oxford
Works
Lectures de potentia restitutiva, or, Of spring explaining the power of springing bodies. London : Printed for John Martyn. 1678.
Micrographia:
includes An Attempt to prove the Annual Motion of the Earth, Animadversions on the Machina Coelestis of Mr. Hevelius, A Description of Helioscopes with other instruments, Mechanical Improvement of Lamps, Remarks about Comets 1677, Microscopium, Lectures on the Spring, etc.
Explanatory notes
References
Citations
Sources
(Published in the UK (2003) as The man who knew too much: the inventive life of Robert Hooke, 1635-1703, London, Pan Books, ISBN 978-0-330-48829-7, OCLC 59355860)
Further reading
Gunther's Early Science in Oxford devotes five of its fourteen volumes to Hooke.
See also
List of astronomical instrument makers
External links
Robert Hooke, hosted by Westminster School
Micrographia
Hooke's Micrographia, at Project Gutenberg (downloadable collections, including searchable ASCII text and book as complete html document with images)
Hooke's Micrographia, at Linda Hall Library
Digitized images of Micrographia housed at the University of Kentucky Libraries Special Collections Research Center
Lost manuscript of Robert Hooke discovered – from The Guardian
Manuscript bought for The Royal Society – from The Guardian
Robert Hooke's Books, a searchable database of books that belonged to or were annotated by Robert Hooke
– A 60-minute presentation by Prof. Michael Cooper, Gresham College, with links to slides, audio, video, and a transcript, with references
(A pair of letters exchanged between Hooke and Newton (9 December 1679 and 13 December 1679), omitted from Waller's The Posthumous Works of Robert Hooke, M.D. S.R.S.)
(Hooke's diary for March–July 1672 and January 1681 to May 1683, omitted by Robinson and Adams from The Diary of Robert Hooke, M.A., M.D., F.R.S., 1672–1680)
(A pair of letters exchanged between Hooke and Newton (9 December 1679 and 13 December 1679), omitted from Rowse's Essay on Newton's Principia.)
English Anglicans
17th-century English architects
17th-century English scientists
English inventors
English physicists
Natural philosophers
Original fellows of the Royal Society
People educated at Westminster School, London
People from Freshwater, Isle of Wight
Academics of Gresham College
British scientific instrument makers
Age of Enlightenment
1635 births
1703 deaths
Architects from the Isle of Wight | 0.778808 | 0.999141 | 0.778139 |
Laplace's demon | In the history of science, Laplace's demon was a notable published articulation of causal determinism on a scientific basis by Pierre-Simon Laplace in 1814. According to determinism, if someone (the demon) knows the precise location and momentum of every atom in the universe, their past and future values for any given time are entailed; they can be calculated from the laws of classical mechanics.
Discoveries and theories in the decades following suggest that some elements of Laplace's original writing are wrong or incompatible with our universe. For example, irreversible processes in thermodynamics suggest that Laplace's "demon" could not reconstruct past positions and momenta from the current state.
English translation
This intellect is often referred to as Laplace's demon (and sometimes Laplace's Superman, after Hans Reichenbach). Laplace himself did not use the word "demon", which was a later embellishment. As translated into English above, he simply referred to: "Une intelligence ... Rien ne serait incertain pour elle, et l'avenir, comme le passé, serait présent à ses yeux." This idea seems to have been widespread around the time that Laplace first expressed it in 1773, particularly in France. Variations can be found in Maupertuis (1756), Nicolas de Condorcet (1768), Baron D'Holbach (1770), and an undated fragment in the archives of Diderot. Recent scholarship suggests that the image of a super-powerful calculating intelligence was also proposed by Roger Joseph Boscovich in his 1758 Theoria philosophiae naturalis.
Arguments against Laplace's demon
Thermodynamic irreversibility
According to chemical engineer Robert Ulanowicz in his 1986 book Growth and Development, Laplace's demon met its end with early 19th century developments of the concepts of irreversibility, entropy, and the second law of thermodynamics. In other words, Laplace's demon was based on the premise of reversibility and classical mechanics; however, Ulanowicz points out that many thermodynamic processes are irreversible, so that if thermodynamic quantities are taken to be purely physical then no such demon is possible as one could not reconstruct past positions and momenta from the current state.
Maximum entropy thermodynamics takes a very different view, considering thermodynamic variables to have a statistical basis which is separate from the deterministic microscopic physics. However, this theory has met criticism regarding its ability to make predictions about physics; a number of physicists and mathematicians, including Yvan Velenik of the Department of Mathematics at the University of Geneva, have pointed out that maximum entropy thermodynamics essentially describes our knowledge about a system but does not describe the system itself.
Quantum mechanical irreversibility
Due to its canonical assumption of determinism, Laplace's demon is incompatible with the Copenhagen interpretation, which stipulates indeterminacy. The interpretation of quantum mechanics is still very much open for debate and there are many who take opposing views (such as the many worlds interpretation and the de Broglie–Bohm interpretation).
Chaos theory
Chaos theory is sometimes pointed out as a contradiction to Laplace's demon: it describes how a deterministic system can nonetheless exhibit behavior that is impossible to predict: as in the butterfly effect, minor variations between the starting conditions of two systems can result in major differences. While this explains unpredictability in practical cases, applying it to Laplace's case is questionable: under the strict demon hypothesis all details are known, to infinite precision, and therefore variations in starting conditions are non-existent. Put another way, chaos theory is applicable when knowledge of the system is imperfect, whereas Laplace's demon assumes perfect knowledge of the system; the variability that leads to chaos in chaos theory and the non-variability in the demon's knowledge of the world are therefore not comparable.
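The sensitivity at issue can be made concrete with a minimal Python sketch (an illustration added here, not drawn from the cited literature) that iterates the logistic map, a simple deterministic system, from two nearly identical starting values:

```python
# Logistic map x -> r*x*(1 - x) in its chaotic regime (r = 4).
# Two trajectories start a millionth apart; the update rule is fully deterministic.
r = 4.0
x, y = 0.400000, 0.400001

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}, y = {y:.6f}, |x - y| = {abs(x - y):.6f}")

# Within a few dozen iterations the trajectories bear no resemblance to each other,
# which is why finite-precision knowledge fails even though perfect knowledge would not.
```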
Cantor diagonalization
In 2008, David Wolpert used Cantor diagonalization to challenge the idea of Laplace's demon. He did this by assuming that the demon is a computational device and showed that no two such devices can completely predict each other. Wolpert's paper was cited in 2014 in a paper of Josef Rukavicka, where a significantly simpler argument is presented that disproves Laplace's demon using Turing machines, under the assumption of free will.
Additional context
In full context, Laplace's demon, as conceived, is infinitely removed from the human mind and thus could never assist humanity's efforts at prediction:
Despite this, the English physicist Stephen Hawking said in his book A Brief History of Time that "Laplace suggested that there should be a set of scientific laws that would allow us to predict everything that would happen in the universe."
Similarly, in James Gleick's book Chaos, the author appears to conflate Laplace's demon with a "dream" for human deterministic predictability, and even states that "Laplace seems almost buffoon-like in his optimism, but much of modern science has pursued his dream" (pg.14).
Loschmidt's paradox
Recently, Laplace's demon has been invoked to resolve a famous paradox of statistical physics, Loschmidt's paradox. The argument is that, in order to reverse all velocities in a gas system, measurements must be performed by what effectively becomes a Laplace's demon. This, in conjunction with Landauer's principle, allows a way out of the paradox.
Recent views
There has recently been proposed a limit on the computational power of the universe, i.e. the ability of Laplace's demon to process an infinite amount of information. The limit is based on the maximum entropy of the universe, the speed of light, and the minimum amount of time taken to move information across the Planck length, and the figure was shown to be about 10^120 bits. Accordingly, anything that requires more than this amount of data cannot be computed in the amount of time that has elapsed so far in the universe. A simple logical proof of the impossibility of Laplace's idea was advanced in 2012 by Iegor Reznikoff, who posits that the demon cannot predict his own future memory.
See also
Clockwork universe theory
Eudaemons
Evil demon
Laplace's Witch (film)
Maxwell's demon
Omniscience
Superdeterminism
References
Determinism
Thought experiments
Pierre-Simon Laplace
Fictional demons | 0.779861 | 0.99778 | 0.77813 |
Thermal energy | The term "thermal energy" is often used ambiguously in physics and engineering. It can denote several different physical concepts, including:
Internal energy: The total energy contained within a body of matter or radiation.
Heat: Energy in transfer between a system and its surroundings by mechanisms other than thermodynamic work and transfer of matter.
The characteristic energy kT associated with a single microscopic degree of freedom, where T denotes temperature and k denotes the Boltzmann constant.
Mark Zemansky (1970) has argued that the term “thermal energy” is best avoided due to its ambiguity. He suggests using more precise terms like “internal energy” and “heat” to avoid confusion. The term is, however, used in some textbooks.
Relation between heat and internal energy
In thermodynamics, heat is energy in transfer to or from a thermodynamic system by mechanisms other than thermodynamic work or transfer of matter, such as conduction, radiation, and friction. Heat refers to a quantity in transfer between systems, not to a property of any one system, or "contained" within it; on the other hand, internal energy and enthalpy are properties of a single system. Heat and work depend on the way in which an energy transfer occurs. In contrast, internal energy is a property of the state of a system and can thus be understood without knowing how the energy got there.
Macroscopic thermal energy
In addition to the microscopic kinetic energies of its molecules, the internal energy of a body includes chemical energy belonging to distinct molecules, and the global joint potential energy involved in the interactions between molecules and suchlike. Thermal energy may be viewed as contributing to internal energy or to enthalpy.
Chemical internal energy
The internal energy of a body can change in a process in which chemical potential energy is converted into non-chemical energy. In such a process, the thermodynamic system can change its internal energy by doing work on its surroundings, or by gaining or losing energy as heat. It is not quite lucid to merely say that "the converted chemical potential energy has simply become internal energy". It is, however, sometimes convenient to say that "the chemical potential energy has been converted into thermal energy". This is expressed in ordinary traditional language by talking of 'heat of reaction'.
Potential energy of internal interactions
In a body of material, especially in condensed matter, such as a liquid or a solid, in which the constituent particles, such as molecules or ions, interact strongly with one another, the energies of such interactions contribute strongly to the internal energy of the body. Still, they are not immediately apparent in the kinetic energies of molecules, as manifest in temperature. Such energies of interaction may be thought of as contributions to the global internal microscopic potential energies of the body.
Microscopic thermal energy
In a statistical mechanical account of an ideal gas, in which the molecules move independently between instantaneous collisions, the internal energy is just the sum total of the gas's independent particles' kinetic energies, and it is this kinetic motion that is the source and the effect of the transfer of heat across a system's boundary. For a gas that does not have particle interactions except for instantaneous collisions, the term "thermal energy" is effectively synonymous with "internal energy".
In many statistical physics texts, "thermal energy" refers to kT, the product of the Boltzmann constant and the absolute temperature, also written as k_BT.
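As a quick numerical illustration (the temperature value is an assumption chosen for the example, not a figure from the text), kT can be evaluated at room temperature:

```python
k_B = 1.380649e-23        # Boltzmann constant in J/K (exact in the 2019 SI)
T = 300.0                 # an assumed room temperature in kelvin

kT = k_B * T
print(f"kT = {kT:.3e} J = {kT / 1.602176634e-19:.4f} eV")
# Roughly 4.1e-21 J, or about 0.026 eV, per microscopic degree of freedom.
```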
Thermal current density
When there is no accompanying flow of matter, the term "thermal energy" is also applied to the energy carried by a heat flow.
See also
Geothermal energy
Geothermal heating
Geothermal power
Heat transfer
Ocean thermal energy conversion
Orders of magnitude (temperature)
Thermal energy storage
References
Thermodynamic properties
Forms of energy
Physics | 0.780386 | 0.997061 | 0.778092 |
Introduction to electromagnetism | Electromagnetism is one of the fundamental forces of nature. Early on, electricity and magnetism were studied separately and regarded as separate phenomena. Hans Christian Ørsted discovered that the two were related – electric currents give rise to magnetism. Michael Faraday discovered the converse, that magnetism could induce electric currents, and James Clerk Maxwell put the whole thing together in a unified theory of electromagnetism. Maxwell's equations further indicated that electromagnetic waves existed, and the experiments of Heinrich Hertz confirmed this, making radio possible. Maxwell also postulated, correctly, that light was a form of electromagnetic wave, thus making all of optics a branch of electromagnetism. Radio waves differ from light only in that the wavelength of the former is much longer than the latter. Albert Einstein showed that the magnetic field arises through the relativistic motion of the electric field and thus magnetism is merely a side effect of electricity. The modern theoretical treatment of electromagnetism is as a quantum field in quantum electrodynamics.
In many situations of interest to electrical engineering, it is not necessary to apply quantum theory to get correct results. Classical physics is still an accurate approximation in most situations involving macroscopic objects. With few exceptions, quantum theory is only necessary at the atomic scale and a simpler classical treatment can be applied. Further simplifications of treatment are possible in limited situations. Electrostatics deals only with stationary electric charges so magnetic fields do not arise and are not considered. Permanent magnets can be described without reference to electricity or electromagnetism. Circuit theory deals with electrical networks where the fields are largely confined around current carrying conductors. In such circuits, even Maxwell's equations can be dispensed with and simpler formulations used. On the other hand, a quantum treatment of electromagnetism is important in chemistry. Chemical reactions and chemical bonding are the result of quantum mechanical interactions of electrons around atoms. Quantum considerations are also necessary to explain the behaviour of many electronic devices, for instance the tunnel diode.
Electric charge
Electromagnetism is one of the fundamental forces of nature alongside gravity, the strong force and the weak force. Whereas gravity acts on all things that have mass, electromagnetism acts on all things that have electric charge. Furthermore, as there is the conservation of mass according to which mass cannot be created or destroyed, there is also the conservation of charge which means that the charge in a closed system (where no charges are leaving or entering) must remain constant. The fundamental law that describes the gravitational force on a massive object in classical physics is Newton's law of gravity. Analogously, Coulomb's law is the fundamental law that describes the force that charged objects exert on one another. It is given by the formula
F = ke q1 q2 / r^2,
where F is the force, ke is the Coulomb constant, q1 and q2 are the magnitudes of the two charges, and r^2 is the square of the distance between them. It describes the fact that like charges repel one another whereas opposite charges attract one another and that the stronger the charges of the particles, the stronger the force they exert on one another. The law is also an inverse square law which means that as the distance between two particles is doubled, the force on them is reduced by a factor of four.
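A short Python sketch (with illustrative charges and separation, not values from the text) shows both the magnitude given by Coulomb's law and its inverse-square behaviour:

```python
k_e = 8.9875517923e9      # Coulomb constant in N*m^2/C^2
e = 1.602176634e-19       # elementary charge in C

q1, q2 = e, e             # two protons (like charges, so the force is repulsive)
r = 1e-9                  # assumed separation of 1 nanometre

F = k_e * q1 * q2 / r**2
print(f"F = {F:.3e} N")                    # about 2.3e-10 N
print(F / (k_e * q1 * q2 / (2 * r)**2))    # doubling r divides the force by 4.0
```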
Electric and magnetic fields
In physics, fields are entities that interact with matter and can be described mathematically by assigning a value to each point in space and time. Vector fields are fields which are assigned both a numerical value and a direction at each point in space and time. Electric charges produce a vector field called the electric field. The numerical value of the electric field, also called the electric field strength, determines the strength of the electric force that a charged particle will feel in the field and the direction of the field determines which direction the force will be in. By convention, the direction of the electric field is the same as the direction of the force on positive charges and opposite to the direction of the force on negative charges. Because positive charges are repelled by other positive charges and are attracted to negative charges, this means the electric fields point away from positive charges and towards negative charges. These properties of the electric field are encapsulated in the equation for the electric force on a charge written in terms of the electric field:
F = qE,
where F is the force on a charge q in an electric field E.
As well as producing an electric field, charged particles will produce a magnetic field when they are in a state of motion that will be felt by other charges that are in motion (as well as permanent magnets). The direction of the force on a moving charge from a magnetic field is perpendicular to both the direction of motion and the direction of the magnetic field lines and can be found using the right-hand rule. The strength of the force is given by the equation
F = qvB sin(θ),
where F is the force on a charge q with speed v in a magnetic field B which is pointing in a direction of angle θ from the direction of motion of the charge.
The combination of the electric and magnetic forces on a charged particle is called the Lorentz force. Classical electromagnetism is fully described by the Lorentz force alongside a set of equations called Maxwell's equations. The first of these equations is known as Gauss's law. It describes the electric field produced by charged particles and by charge distributions. According to Gauss's law, the flux (or flow) of electric field through any closed surface is proportional to the amount of charge that is enclosed by that surface. This means that the greater the charge, the greater the electric field that is produced. It also has other important implications. For example, this law means that if there is no charge enclosed by the surface, then either there is no electric field at all or, if there is a charge near to but outside of the closed surface, the flow of electric field into the surface must exactly cancel with the flow out of the surface. The second of Maxwell's equations is known as Gauss's law for magnetism and, similarly to the first Gauss's law, it describes flux, but instead of electric flux, it describes magnetic flux. According to Gauss's law for magnetism, the flow of magnetic field through a closed surface is always zero. This means that if there is a magnetic field, the flow into the closed surface will always cancel out with the flow out of the closed surface. This law has also been called "no magnetic monopoles" because it means that any magnetic flux flowing out of a closed surface must flow back into it, meaning that positive and negative magnetic poles must come together as a magnetic dipole and can never be separated into magnetic monopoles. This is in contrast to electric charges which can exist as separate positive and negative charges.
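The Lorentz force combines both contributions as F = q(E + v × B); the following sketch evaluates it directly (the field and velocity values are illustrative assumptions):

```python
import numpy as np

q = 1.602176634e-19              # charge of a proton in coulombs
E = np.array([0.0, 0.0, 1.0e3])  # electric field in V/m (assumed)
B = np.array([0.0, 0.5, 0.0])    # magnetic field in teslas (assumed)
v = np.array([1.0e5, 0.0, 0.0])  # velocity in m/s (assumed)

F = q * (E + np.cross(v, B))     # Lorentz force in newtons
print(F)  # the magnetic part is perpendicular to both v and B, per the right-hand rule
```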
The third of Maxwell's equations is called the Ampère–Maxwell law. It states that a magnetic field can be generated by an electric current. The direction of the magnetic field is given by Ampère's right-hand grip rule. If the wire is straight, then the magnetic field is curled around it like the gripped fingers in the right-hand rule. If the wire is wrapped into coils, then the magnetic field inside the coils points in a straight line like the outstretched thumb in the right-hand grip rule. When electric currents are used to produce a magnet in this way, it is called an electromagnet. Electromagnets often use a wire curled up into solenoid around an iron core which strengthens the magnetic field produced because the iron core becomes magnetised. Maxwell's extension to the law states that a time-varying electric field can also generate a magnetic field. Similarly, Faraday's law of induction states that a magnetic field can produce an electric current. For example, a magnet pushed in and out of a coil of wires can produce an electric current in the coils which is proportional to the strength of the magnet as well as the number of coils and the speed at which the magnet is inserted and extracted from the coils. This principle is essential for transformers which are used to transform currents from high voltage to low voltage, and vice versa. They are needed to convert high voltage mains electricity into low voltage electricity which can be safely used in homes. Maxwell's formulation of the law is given in the Maxwell–Faraday equation—the fourth and final of Maxwell's equations—which states that a time-varying magnetic field produces an electric field.
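For the idealised case of a long solenoid, Ampère's law gives a field of B = μ0 n I inside the coil, where n is the number of turns per unit length; the sketch below (with assumed coil dimensions and current) evaluates it:

```python
import math

mu0 = 4e-7 * math.pi   # permeability of free space in T*m/A
turns = 500            # assumed number of turns
length = 0.10          # assumed solenoid length in metres
I = 2.0                # assumed current in amperes

n = turns / length     # turns per metre
B = mu0 * n * I
print(f"B = {B * 1e3:.2f} mT")  # about 12.6 mT; a magnetised iron core raises this further
```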
Together, Maxwell's equations provide a single uniform theory of the electric and magnetic fields and Maxwell's work in creating this theory has been called "the second great unification in physics" after the first great unification of Newton's law of universal gravitation. The solution to Maxwell's equations in free space (where there are no charges or currents) produces wave equations corresponding to electromagnetic waves (with both electric and magnetic components) travelling at the speed of light. The observation that these wave solutions had a wave speed exactly equal to the speed of light led Maxwell to hypothesise that light is a form of electromagnetic radiation and to posit that other electromagnetic radiation could exist with different wavelengths. The existence of electromagnetic radiation was proved by Heinrich Hertz in a series of experiments ranging from 1886 to 1889 in which he discovered the existence of radio waves. The full electromagnetic spectrum (in order of increasing frequency) consists of radio waves, microwaves, infrared radiation, visible light, ultraviolet light, X-rays and gamma rays.
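In free space that wave speed works out to 1/sqrt(μ0 ε0); a brief sketch (added here as an illustration) checks the number:

```python
import math

mu0 = 1.25663706212e-6    # permeability of free space in T*m/A
eps0 = 8.8541878128e-12   # permittivity of free space in F/m

c = 1.0 / math.sqrt(mu0 * eps0)
print(f"c = {c:.6e} m/s")  # about 2.998e8 m/s, matching the measured speed of light
```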
A further unification of electromagnetism came with Einstein's special theory of relativity. According to special relativity, observers moving at different speeds relative to one another occupy different observational frames of reference. If one observer is in motion relative to another observer then they experience length contraction where unmoving objects appear closer together to the observer in motion than to the observer at rest. Therefore, if an electron is moving at the same speed as the current in a neutral wire, then they experience the flowing electrons in the wire as standing still relative to it and the positive charges as contracted together. In the lab frame, the electron is moving and so feels a magnetic force from the current in the wire but because the wire is neutral it feels no electric force. But in the electron's rest frame, the positive charges seem closer together compared to the flowing electrons and so the wire seems positively charged. Therefore, in the electron's rest frame it feels no magnetic force (because it is not moving in its own frame) but it does feel an electric force due to the positively charged wire. This result from relativity proves that magnetic fields are just electric fields in a different reference frame (and vice versa) and so the two are different manifestations of the same underlying electromagnetic field.
Conductors, insulators and circuits
Conductors
A conductor is a material that allows electrons to flow easily. The most effective conductors are usually metals because they can be described fairly accurately by the free electron model in which electrons delocalize from the atomic nuclei, leaving positive ions surrounded by a cloud of free electrons. Examples of good conductors include copper, aluminum, and silver. Wires in electronics are often made of copper.
The main properties of conductors are:
The electric field is zero inside a perfect conductor. Because charges are free to move in a conductor, when they are disturbed by an external electric field they rearrange themselves such that the field that their configuration produces exactly cancels the external electric field inside the conductor.
The electric potential is the same everywhere inside the conductor and is constant across the surface of the conductor. This follows from the first statement because the field is zero everywhere inside the conductor and therefore the potential is constant within the conductor too.
The electric field is perpendicular to the surface of a conductor. If this were not the case, the field would have a nonzero component on the surface of the conductor, which would cause the charges in the conductor to move around until that component of the field is zero.
The net electric flux through a surface is proportional to the charge enclosed by the surface. This is a restatement of Gauss' law.
In some materials, the electrons are bound to the atomic nuclei and so are not free to move around but the energy required to set them free is low. In these materials, called semiconductors, the conductivity is low at low temperatures but as the temperature is increased the electrons gain more thermal energy and the conductivity increases. Silicon is an example of a semiconductor that can be used to create solar cells which become more conductive the more energy they receive from photons from the sun.
Superconductors are materials that exhibit little to no resistance to the flow of electrons when cooled below a certain critical temperature. Superconductivity can only be explained by the quantum mechanical Pauli exclusion principle which states that no two fermions (an electron is a type of fermion) can occupy exactly the same quantum state. In superconductors, below a certain temperature the electrons form bound pairs which behave as bosons and so do not follow this principle; this means that all the electrons can fall to the same energy level and move together uniformly in a current.
Insulators
Insulators are materials which are highly resistive to the flow of electrons and so are often used to cover conducting wires for safety. In insulators, electrons are tightly bound to atomic nuclei and the energy to free them is very high so they are not free to move and are resistive to induced movement by an external electric field. However, some insulators, called dielectrics, can be polarised under the influence of an external electric field so that the charges are minutely displaced forming dipoles that create a positive and negative side. Dielectrics are used in capacitors to allow them to store more electric potential energy in the electric field between the capacitor plates.
Capacitors
A capacitor is an electronic component that stores electrical potential energy in an electric field between two oppositely charged conducting plates. If one of the conducting plates has a charge density of +Q/A and the other has a charge density of -Q/A, where A is the area of the plates, then there will be an electric field between them. The potential difference V between the two parallel plates can be derived mathematically as
V = Qd / (ε0 A),
where d is the plate separation and ε0 is the permittivity of free space. The ability of the capacitor to store electrical potential energy is measured by the capacitance, which is defined as C = Q/V, and for a parallel plate capacitor this is
C = ε0 A / d
If a dielectric is placed between the plates then the permittivity of free space is multiplied by the relative permittivity of the dielectric and the capacitance increases. The maximum energy that can be stored by a capacitor is proportional to the capacitance and the square of the potential difference between the plates:
E = (1/2) C V^2
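A minimal sketch of the parallel-plate formulas above, assuming example plate dimensions, dielectric and voltage:

```python
eps0 = 8.8541878128e-12   # permittivity of free space in F/m
eps_r = 4.0               # assumed relative permittivity of the dielectric
A = 1e-2                  # assumed plate area in m^2 (10 cm x 10 cm)
d = 1e-4                  # assumed plate separation in metres
V = 12.0                  # assumed potential difference in volts

C = eps_r * eps0 * A / d          # capacitance with the dielectric in place
energy = 0.5 * C * V**2           # maximum stored energy, E = (1/2) C V^2
print(f"C = {C * 1e9:.2f} nF, stored energy = {energy * 1e6:.2f} uJ")
```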
Inductors
An inductor is an electronic component that stores energy in a magnetic field inside a coil of wire. A current-carrying coil of wire induces a magnetic field according to Ampère's circuital law. The greater the current I, the greater the energy stored in the magnetic field and the lower the inductance, which is defined as L = Φ/I, where Φ is the magnetic flux produced by the coil of wire. The inductance is a measure of the circuit's resistance to a change in current and so inductors with high inductances can also be used to oppose alternating current.
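A corresponding sketch for the inductor, using the definition L = Φ/I together with the standard expression (1/2) L I^2 for the energy held in the magnetic field (the flux and current values are assumptions):

```python
Phi = 2e-3   # assumed magnetic flux linked by the coil, in webers
I = 0.5      # assumed current, in amperes

L = Phi / I                 # inductance in henries
energy = 0.5 * L * I**2     # energy stored in the magnetic field, in joules
print(f"L = {L * 1e3:.1f} mH, stored energy = {energy * 1e3:.2f} mJ")
```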
Other circuit components
Circuit laws
Circuit theory deals with electrical networks where the fields are largely confined around current carrying conductors. In such circuits, simple circuit laws can be used instead of deriving all the behaviour of the circuits directly from electromagnetic laws. Ohm's law states the relationship between the current I and the voltage V of a circuit by introducing the quantity known as resistance R
Ohm's law: V = IR
Power is defined as P = IV, so Ohm's law can be used to tell us the power of the circuit in terms of other quantities:
P = IV = I^2 R = V^2 / R
Kirchhoff's junction rule states that the current going into a junction (or node) must equal the current that leaves the node. This comes from charge conservation, as current is defined as the flow of charge over time. If a current splits as it exits a junction, the sum of the resultant split currents is equal to the incoming current.
Kirchhoff's loop rule states that the sum of the voltage in a closed loop around a circuit equals zero. This comes from the fact that the electric field is conservative which means that no matter the path taken, the potential at a point does not change when you get back there.
Rules can also tell us how to add up quantities such as the current and voltage in series and parallel circuits.
For series circuits, the current remains the same for each component and the voltages and resistances add up:
V = V1 + V2 + ..., R = R1 + R2 + ...
For parallel circuits, the voltage remains the same for each component and the currents and resistances are related as shown:
I = I1 + I2 + ..., 1/R = 1/R1 + 1/R2 + ...
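These combination rules translate directly into code; the helper functions below are a small illustrative sketch, not part of any standard library:

```python
def series_resistance(resistances):
    # In series the same current flows through every component, so resistances add.
    return sum(resistances)

def parallel_resistance(resistances):
    # In parallel every component sees the same voltage, so the reciprocals add.
    return 1.0 / sum(1.0 / r for r in resistances)

print(series_resistance([100, 220, 330]))               # 650 (ohms)
print(round(parallel_resistance([100, 220, 330]), 1))   # about 56.9 (ohms)
```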
See also
List of textbooks on electromagnetism
References
Electromagnetism
electromagnetism | 0.795708 | 0.977782 | 0.778029 |
Vehicle dynamics | Vehicle dynamics is the study of vehicle motion, e.g., how a vehicle's forward movement changes in response to driver inputs, propulsion system outputs, ambient conditions, air/surface/water conditions, etc.
Vehicle dynamics is a part of engineering primarily based on classical mechanics.
It may be applied for motorized vehicles (such as automobiles), bicycles and motorcycles, aircraft, and watercraft.
Factors affecting vehicle dynamics
The aspects of a vehicle's design which affect the dynamics can be grouped into drivetrain and braking, suspension and steering, distribution of mass, aerodynamics and tires.
Drivetrain and braking
Automobile layout (i.e. location of engine and driven wheels)
Powertrain
Braking system
Suspension and steering
Some attributes relate to the geometry of the suspension, steering and chassis. These include:
Ackermann steering geometry
Axle track
Camber angle
Caster angle
Ride height
Roll center
Scrub radius
Steering ratio
Toe
Wheel alignment
Wheelbase
Distribution of mass
Some attributes or aspects of vehicle dynamics are purely due to mass and its distribution. These include:
Center of mass
Moment of inertia
Roll moment
Sprung mass
Unsprung mass
Weight distribution
Aerodynamics
Some attributes or aspects of vehicle dynamics are purely aerodynamic. These include:
Automobile drag coefficient
Automotive aerodynamics
Center of pressure
Downforce
Ground effect in cars
Tires
Some attributes or aspects of vehicle dynamics can be attributed directly to the tires. These include:
Camber thrust
Circle of forces
Contact patch
Cornering force
Ground pressure
Pacejka's Magic Formula
Pneumatic trail
Radial Force Variation
Relaxation length
Rolling resistance
Self aligning torque
Skid
Slip angle
Slip (vehicle dynamics)
Spinout
Steering ratio
Tire load sensitivity
Vehicle behaviours
Some attributes or aspects of vehicle dynamics are purely dynamic. These include:
Body flex
Body roll
Bump Steer
Bundorf analysis
Directional stability
Critical speed
Noise, vibration, and harshness
Pitch
Ride quality
Roll
Speed wobble
Understeer, oversteer, lift-off oversteer, and fishtailing
Weight transfer and load transfer
Yaw
Analysis and simulation
The dynamic behavior of vehicles can be analysed in several different ways. This can be as straightforward as a simple spring mass system, through a three-degree of freedom (DoF) bicycle model, to a large degree of complexity using a multibody system simulation package such as MSC ADAMS or Modelica. As computers have gotten faster, and software user interfaces have improved, commercial packages such as CarSim have become widely used in industry for rapidly evaluating hundreds of test conditions much faster than real time. Vehicle models are often simulated with advanced controller designs provided as software in the loop (SIL) with controller design software such as Simulink, or with physical hardware in the loop (HIL).
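As a toy example of the "simple spring mass system" end of that spectrum (all parameter values are illustrative assumptions, not data from any commercial package), a single-corner suspension can be integrated in a few lines of Python:

```python
m = 300.0     # assumed sprung mass per corner, kg
k = 30000.0   # assumed spring stiffness, N/m
c = 1500.0    # assumed damping coefficient, N*s/m

x, v = 0.05, 0.0   # initial displacement (m) and velocity (m/s)
dt = 0.001         # time step, s

for _ in range(2000):              # simulate two seconds
    a = (-k * x - c * v) / m       # Newton's second law for the spring-damper
    v += a * dt                    # semi-implicit Euler update
    x += v * dt

print(f"displacement after 2 s: {x:.5f} m")   # the oscillation has largely decayed
```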
Vehicle motions are largely due to the shear forces generated between the tires and road, and therefore the tire model is an essential part of the math model. In current vehicle simulator models, the tire model is the weakest and most difficult part to simulate. The tire model must produce realistic shear forces during braking, acceleration, cornering, and combinations, on a range of surface conditions. Many models are in use. Most are semi-empirical, such as the Pacejka Magic Formula model.
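The Pacejka Magic Formula itself is a compact curve fit of the form F = D·sin(C·arctan(Bx − E(Bx − arctan(Bx)))); the sketch below uses made-up coefficient values purely to show the shape of the curve, not fitted tyre data:

```python
import math

def magic_formula(slip_angle, B=10.0, C=1.9, D=4000.0, E=0.97):
    # slip_angle in radians; B, C, D, E are illustrative shape coefficients.
    x = slip_angle
    return D * math.sin(C * math.atan(B * x - E * (B * x - math.atan(B * x))))

for alpha in (0.02, 0.05, 0.1, 0.2, 0.3):
    print(f"slip angle {alpha:4.2f} rad -> lateral force ~ {magic_formula(alpha):6.0f} N")
# The force rises steeply with slip angle, peaks, then falls away slightly --
# the characteristic tyre curve the formula is designed to reproduce.
```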
Racing car games or simulators are also a form of vehicle dynamics simulation. In early versions many simplifications were necessary in order to get real-time performance with reasonable graphics. However, improvements in computer speed have combined with interest in realistic physics, leading to driving simulators that are used for vehicle engineering using detailed models such as CarSim.
It is important that the models agree with real-world test results; hence, many of the following tests are correlated against results from instrumented test vehicles.
Techniques include:
Linear range constant radius understeer
Fishhook
Frequency response
Lane change
Moose test
Sinusoidal steering
Skidpad
Swept path analysis
See also
Automotive suspension design
Automobile handling
Hunting oscillation
Multi-axis shaker table
Vehicular metrics
4-poster
7 post shaker
References
Further reading
A new way of representing tyre data obtained from measurements in pure cornering and pure braking conditions.
Mathematically oriented derivation of standard vehicle dynamics equations, and definitions of standard terms.
Vehicle dynamics as developed by Maurice Olley from the 1930s onwards. First comprehensive analytical synthesis of vehicle dynamics.
Latest and greatest, also the standard reference for automotive suspension engineers.
Vehicle dynamics and chassis design from a race car perspective.
Handling, Braking, and Ride of Road and Race Cars.
Lecture Notes to the MOOC Vehicle Dynamics of iversity
Automotive engineering
Automotive technologies
Dynamics (mechanics)
Vehicle technology | 0.789409 | 0.985464 | 0.777934 |
Jevons paradox | In economics, the Jevons paradox (; sometimes Jevons effect) occurs when technological progress increases the efficiency with which a resource is used (reducing the amount necessary for any one use), but the falling cost of use induces increases in demand enough that resource use is increased, rather than reduced. Governments, both historical and modern, typically expect that energy efficiency gains will lower energy consumption, rather than expecting the Jevons paradox.
In 1865, the English economist William Stanley Jevons observed that technological improvements that increased the efficiency of coal use led to the increased consumption of coal in a wide range of industries. He argued that, contrary to common intuition, technological progress could not be relied upon to reduce fuel consumption.
The issue has been re-examined by modern economists studying consumption rebound effects from improved energy efficiency. In addition to reducing the amount needed for a given use, improved efficiency also lowers the relative cost of using a resource, which increases the quantity demanded. This may counteract (to some extent) the reduction in use from improved efficiency. Additionally, improved efficiency increases real incomes and accelerates economic growth, further increasing the demand for resources. The Jevons paradox occurs when the effect from increased demand predominates, and the improved efficiency results in a faster rate of resource utilization.
Considerable debate exists about the size of the rebound in energy efficiency and the relevance of the Jevons paradox to energy conservation. Some dismiss the effect, while others worry that it may be self-defeating to pursue sustainability by increasing energy efficiency. Some environmental economists have proposed that efficiency gains be coupled with conservation policies that keep the cost of use the same (or higher) to avoid the Jevons paradox. Conservation policies that increase cost of use (such as cap and trade or green taxes) can be used to control the rebound effect.
History
The Jevons paradox was first described by the English economist William Stanley Jevons in his 1865 book The Coal Question. Jevons observed that England's consumption of coal soared after James Watt introduced the Watt steam engine, which greatly improved the efficiency of the coal-fired steam engine from Thomas Newcomen's earlier design. Watt's innovations made coal a more cost-effective power source, leading to the increased use of the steam engine in a wide range of industries. This in turn increased total coal consumption, even as the amount of coal required for any particular application fell. Jevons argued that improvements in fuel efficiency tend to increase (rather than decrease) fuel use, writing: "It is a confusion of ideas to suppose that the economical use of fuel is equivalent to diminished consumption. The very contrary is the truth."
At that time, many in Britain worried that coal reserves were rapidly dwindling, but some experts opined that improving technology would reduce coal consumption. Jevons argued that this view was incorrect, as further increases in efficiency would tend to increase the use of coal. Hence, improving technology would tend to increase the rate at which England's coal deposits were being depleted, and could not be relied upon to solve the problem.
Although Jevons originally focused on coal, the concept has since been extended to other resources, e.g., water usage. The Jevons paradox is also found in socio-hydrology, in the safe development paradox called the reservoir effect, where construction of a reservoir to reduce the risk of water shortage can instead exacerbate that risk, as increased water availability leads to more development and hence more water consumption.
Cause
Economists have observed that consumers tend to travel more when their cars are more fuel efficient, causing a 'rebound' in the demand for fuel. An increase in the efficiency with which a resource (e.g. fuel) is used causes a decrease in the cost of using that resource when measured in terms of what it can achieve (e.g. travel). Generally speaking, a decrease in the cost (or price) of a good or service will increase the quantity demanded (the law of demand). With a lower cost for travel, consumers will travel more, increasing the demand for fuel. This increase in demand is known as the rebound effect, and it may or may not be large enough to offset the original drop in fuel use from the increased efficiency. The Jevons paradox occurs when the rebound effect is greater than 100%, exceeding the original efficiency gains.
The size of the direct rebound effect is dependent on the price elasticity of demand for the good. In a perfectly competitive market where fuel is the sole input used, if the price of fuel remains constant but efficiency is doubled, the effective price of travel would be halved (twice as much travel can be purchased). If in response, the amount of travel purchased more than doubles (i.e. demand is price elastic), then fuel consumption would increase, and the Jevons paradox would occur. If demand is price inelastic, the amount of travel purchased would less than double, and fuel consumption would decrease. However, goods and services generally use more than one type of input (e.g. fuel, labour, machinery), and other factors besides input cost may also affect price. These factors tend to reduce the rebound effect, making the Jevons paradox less likely to occur.
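The logic of the previous two paragraphs can be sketched numerically under a deliberately simple assumption of constant price elasticity of demand for travel (the numbers are illustrative, not empirical estimates):

```python
def fuel_use_after_gain(efficiency_gain, elasticity, base_fuel=100.0):
    # efficiency_gain = 2.0 means twice the travel per unit of fuel,
    # so the effective price of travel falls by that factor.
    travel_multiplier = efficiency_gain ** elasticity   # constant-elasticity demand
    return base_fuel * travel_multiplier / efficiency_gain

for eps in (0.3, 1.0, 1.3):
    new_use = fuel_use_after_gain(2.0, eps)
    print(f"elasticity {eps}: fuel use 100 -> {new_use:.1f}")

# Below unit elasticity fuel use falls; above it, the rebound exceeds 100%
# and total fuel use rises -- the Jevons paradox.
```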
Khazzoom–Brookes postulate
In the 1980s, economists Daniel Khazzoom and Leonard Brookes revisited the Jevons paradox for the case of society's energy use. Brookes, then chief economist at the UK Atomic Energy Authority, argued that attempts to reduce energy consumption by increasing energy efficiency would simply raise demand for energy in the economy as a whole. Khazzoom focused on the narrower point that the potential for rebound was ignored in mandatory performance standards for domestic appliances being set by the California Energy Commission.
In 1992, the economist Harry Saunders dubbed the hypothesis that improvements in energy efficiency work to increase (rather than decrease) energy consumption the Khazzoom–Brookes postulate, and argued that the hypothesis is broadly supported by neoclassical growth theory (the mainstream economic theory of capital accumulation, technological progress and long-run economic growth). Saunders showed that the Khazzoom–Brookes postulate occurs in the neoclassical growth model under a wide range of assumptions.
According to Saunders, increased energy efficiency tends to increase energy consumption by two means. First, increased energy efficiency makes the use of energy relatively cheaper, thus encouraging increased use (the direct rebound effect). Second, increased energy efficiency increases real incomes and leads to increased economic growth, which pulls up energy use for the whole economy. At the microeconomic level (looking at an individual market), even with the rebound effect, improvements in energy efficiency usually result in reduced energy consumption. That is, the rebound effect is usually less than 100%. However, at the macroeconomic level, more efficient (and hence comparatively cheaper) energy leads to faster economic growth, which increases energy use throughout the economy. Saunders argued that taking into account both microeconomic and macroeconomic effects, the technological progress that improves energy efficiency will tend to increase overall energy use.
Energy conservation policy
Jevons warned that fuel efficiency gains tend to increase fuel use. However, this does not imply that improved fuel efficiency is worthless if the Jevons paradox occurs; higher fuel efficiency enables greater production and a higher material quality of life. For example, a more efficient steam engine allowed the cheaper transport of goods and people that contributed to the Industrial Revolution. Nonetheless, if the Khazzoom–Brookes postulate is correct, increased fuel efficiency, by itself, will not reduce the rate of depletion of fossil fuels.
There is considerable debate about whether the Khazzoom–Brookes postulate is correct, and about the relevance of the Jevons paradox to energy conservation policy. Most governments, environmentalists and NGOs pursue policies that improve efficiency, holding that these policies will lower resource consumption and reduce environmental problems. Others, including many environmental economists, doubt this 'efficiency strategy' towards sustainability, and worry that efficiency gains may in fact lead to higher production and consumption. They hold that for resource use to fall, efficiency gains should be coupled with other policies that limit resource use. However, other environmental economists argue that, while the Jevons paradox may occur in some situations, the empirical evidence for its widespread applicability is limited.
The Jevons paradox is sometimes used to argue that energy conservation efforts are futile, for example, that more efficient use of oil will lead to increased demand, and will not slow the arrival or the effects of peak oil. This argument is usually presented as a reason not to enact environmental policies or pursue fuel efficiency (e.g. if cars are more efficient, it will simply lead to more driving). Several points have been raised against this argument. First, in the context of a mature market such as for oil in developed countries, the direct rebound effect is usually small, and so increased fuel efficiency usually reduces resource use, other conditions remaining constant. Second, even if increased efficiency does not reduce the total amount of fuel used, there remain other benefits associated with improved efficiency. For example, increased fuel efficiency may mitigate the price increases, shortages and disruptions in the global economy associated with crude oil depletion. Third, environmental economists have pointed out that fuel use will unambiguously decrease if increased efficiency is coupled with an intervention (e.g. a fuel tax) that keeps the cost of fuel use the same or higher.
The Jevons paradox indicates that increased efficiency by itself may not reduce fuel use, and that sustainable energy policy must rely on other types of government interventions as well. As the imposition of conservation standards or other government interventions that increase cost-of-use do not display the Jevons paradox, they can be used to control the rebound effect. To ensure that efficiency-enhancing technological improvements reduce fuel use, efficiency gains can be paired with government intervention that reduces demand (e.g. green taxes, cap and trade, or higher emissions standards). The ecological economists Mathis Wackernagel and William Rees have suggested that any cost savings from efficiency gains be "taxed away or otherwise removed from further economic circulation. Preferably they should be captured for reinvestment in natural capital rehabilitation." By mitigating the economic effects of government interventions designed to promote ecologically sustainable activities, efficiency-improving technological progress may make the imposition of these interventions more palatable, and more likely to be implemented.
Other examples
Agriculture
Increasing the yield of a crop, such as wheat, for a given area will reduce the area required to achieve the same total yield. However, increasing efficiency may make it more profitable to grow wheat and lead farmers to convert land to the production of wheat, thereby increasing land use instead.
See also
Andy and Bill's law, new software will always consume any increase in computing power that new hardware can provide
Diminishing returns
Downs–Thomson paradox, increasing road capacity can make traffic congestion worse
Tragedy of the commons, a phenomenon in which common resources to which access is not regulated tend to become depleted
Wirth's law, faster hardware can trigger the development of less-efficient software
Dutch Disease, strong revenue from a dominant sector renders other sectors uncompetitive and starves them
References
Further reading
Eponymous paradoxes
Paradoxes in economics
Industrial ecology
Energy policy
Energy conservation
Environmental social science concepts
Sex in space | The conditions governing sex in space (intercourse, conception and procreation while weightless) have become a necessary area of study due to plans for long-duration space missions, as well as the potential future accommodation of sexual partners aboard the International Space Station (ISS). Issues explored include disrupted circadian rhythms, radiation, isolation, stress, and the physical acts of intercourse in zero or minimal gravity.
Sex in space is a part of space sexology.
Overview
Human sexual activity in the weightlessness of outer space presents difficulties due to Newton's third law. According to the law, if the couple remain attached, their movements will counter each other. Consequently, their actions will not change their velocity unless they are affected by another, unattached, object. Some difficulty could occur due to drifting into other objects. If the couple have a combined velocity relative to other objects, collisions could occur. The discussion of sex in space has also raised the issue of conception and pregnancy in space.
With NASA planning lunar outposts and possibly long-duration missions, the topic has taken a respectable place in the life sciences. Despite this, some researchers have argued that national and private space agencies have yet to develop any concrete research and plans to address human sexuality in space. Dubé and colleagues (2021) proposed that NASA should embrace the discipline of space sexology by integrating sex research into their Human Research Program. Santaguida and colleagues (2022) have further argued that space agencies and private companies should invest in this discipline to address the potential for sexual harassment and assault in space contexts.
Physiological issues
Numerous physiological changes have been noted during spaceflight, many of which may affect sex and procreation, notably circulation and the flow of blood within the body. Such potential effects would likely be caused by a culmination of factors, including gravitational changes, planetary and space radiation, noise, vibration, social isolation, disrupted circadian rhythms, or mental and physical stress.
Gravity and microgravity
The primary issue to be considered in off-Earth reproduction is the lack of gravitational acceleration. Life on Earth, and thus the reproductive and ontogenetic processes of all life, evolved under the constant influence of the Earth's 1g gravitational field. It is important to study how space environment affects critical phases of mammalian reproduction and development, as well as the events surrounding fertilization, embryogenesis, pregnancy, birth, postnatal maturation, and parental care.
Studies conducted on rats revealed that, although the fetus developed properly once exposed to normal gravity, rats raised in microgravity lacked the ability to right themselves. Another study examined mouse embryo fertilization in microgravity. Although this resulted in healthy mice once implanted at normal gravity, the fertilization rate was lower for the embryos fertilized in microgravity. To date, no mice or rats have developed in microgravity throughout the entire life cycle.
In 2006, American novelist Vanna Bonta invented the 2suit, a garment designed to facilitate sex in weightless environments such as outer space, or on planets with low gravity. The 2suit was made of a lightweight fabric, with a Velcro-lined exterior, which would enable two people to securely embrace. However, Bonta stressed that the 2suit was versatile, and was not intended for the sole purpose of sex. Functionality testing was conducted in 2008 by Bonta aboard G-Force One, a low gravity simulator. It took eight attempts for the two test participants (Bonta and her husband) to successfully embrace one another. According to science author Mark Thompson, the 2suit was cumbersome but moderately successful, and it is not clear whether or not it will have practical value for future space travelers. The 2suit has been covered in the TV series The Universe as well as a 2008 History Channel television documentary. It has also been discussed by online writers.
History of attempts
NASA has stated that it knows of no intercourse in space.
Planned attempts
In June 2015, Pornhub announced its plans to make the first pornographic film in space. It launched a crowdfunding campaign to fund the effort, dubbed Sexploration, with the goal of raising $3.4 million in 60 days. The campaign only received pledges for $236,086. If funded, the film would have been slated for a 2016 release, following six months of training for the two performers and six-person crew. Though it claimed to be in talks with multiple private spaceflight carriers, the company declined to name names "for fear that would risk unnecessary fallout" from the carriers. A Space.com article about the campaign mentioned that in 2008, Virgin Galactic received and rejected a $1 million offer from an undisclosed party to shoot a sex film on board SpaceShipTwo.
Adult film actress CoCo Brown had begun certifying for a co-pilot seat in the XCOR Lynx spaceplane, which would have launched in a suborbital flight in 2016 and spent a short amount of time in zero-gravity. However, XCOR Aerospace declared bankruptcy before ever flying a space tourist.
Short of actual space, the adult entertainment production company Private Media Group has filmed a movie called The Uranus Experiment: Part Two where an actual zero-gravity intercourse scene was accomplished with a reduced-gravity aircraft. The filming process was particularly difficult from a technical and logistical standpoint. Budget constraints allowed for only one shot, featuring the actors Sylvia Saint and Nick Lang.
In popular culture
Science fiction writer and futurist Isaac Asimov, in a 1973 article "Sex in a Spaceship", conjectured what sex would be like in the weightless environment of space, anticipating some of the benefits of engaging in sex in an environment of microgravity.
On July 23, 2006, a Sex in Space panel was held at the Space Frontier Foundation's annual conference. Speakers were science journalist-author Laura Woodmansee, who presented her book Sex in Space; Jim Logan, the first graduate of a new aerospace medicine residency program to be hired by NASA's Johnson Space Center in Houston; and Vanna Bonta, an American poet, novelist, and actress who had recently flown in zero gravity and had agreed to an interview for Woodmansee's book. The speakers made presentations that explored "the biological, emotional, and ... physical issues that will confront people moving [off Earth] into the space environment." NBC science journalist Alan Boyle reported on the panel, opening a world discussion of a topic previously considered taboo.
"Sex in Space" was the title of an episode of the History Channel documentary television series The Universe in 2008. The globally distributed show was dubbed into foreign languages, opening worldwide discussion about what had previously been avoided as a taboo subject. Sex in space became a topic of discussion for the long-term survival of the human species, colonization of other planets, inspired songs, and humanized reasons for space exploration.
The idea of sex in space appears frequently in science fiction. Arthur C. Clarke claimed to first address it in his 1973 novel Rendezvous with Rama.
In the pilot episode of The Expanse, "Dulcinea", a scene shows the first officer of the ice hauler Canterbury having sexual intercourse with the ship's navigator in zero gravity. The act is suddenly interrupted when the ship resumes thrust, slamming them both onto the bunk with the acceleration.
See also
References
Footnotes
General references
External links
Adventures in Space, The Zero-G Spot, by Michael Behar; OUTSIDE Magazine, December 2006
Outer-space sex carries complications By Alan Boyle, MSNBC July 24, 2006. Concept of "2suit" design of American writer Vanna Bonta.
Space sex hoax rises again by James Oberg
Pregnancy in Space Seems Possible
Astronauts test sex in space - but did the earth move? The Guardian, February 24, 2000
Virgin Galactic rejects $1 million space porn by Peter B. de Selding, MSNBC, October 2, 2008
Has anyone ever had sex in space? from The Straight Dope by Cecil Adams, February 28, 1997
Space Frontier Foundation's media archives for the SFF1484 panel "Sex in Space" from the 2006 "New Space Return to the Moon Conference" featuring authors Laura Woodmansee, and Vanna Bonta with NASA physician Dr. John Logan.
From Russia... with Love (propaganda-style interview with Russian "space procreation" specialist)
The Case for Space Sexology
Love and rockets: We need to figure out how to have sex in space for human survival and well-being
human sexuality
human spaceflight
Sexual intercourse
Poisson's equation | Poisson's equation is an elliptic partial differential equation of broad utility in theoretical physics. For example, the solution to Poisson's equation is the potential field caused by a given electric charge or mass density distribution; with the potential field known, one can then calculate the corresponding electrostatic or gravitational (force) field. It is a generalization of Laplace's equation, which is also frequently seen in physics. The equation is named after French mathematician and physicist Siméon Denis Poisson who published it in 1823.
Statement of the equation
Poisson's equation is
Δφ = f,
where Δ is the Laplace operator, and f and φ are real or complex-valued functions on a manifold. Usually, f is given, and φ is sought. When the manifold is Euclidean space, the Laplace operator is often denoted as ∇², and so Poisson's equation is frequently written as
∇²φ = f.
In three-dimensional Cartesian coordinates, it takes the form
(∂²/∂x² + ∂²/∂y² + ∂²/∂z²) φ(x, y, z) = f(x, y, z).
When f = 0 identically, we obtain Laplace's equation.
Poisson's equation may be solved using a Green's function:
φ(r) = −∭ f(r′) / (4π |r − r′|) d³r′,
where the integral is over all of space. A general exposition of the Green's function for Poisson's equation is given in the article on the screened Poisson equation. There are various methods for numerical solution, such as the relaxation method, an iterative algorithm.
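As an illustration of the relaxation method mentioned above, the sketch below solves a two-dimensional Poisson problem on the unit square with Jacobi iteration. The grid size, source term and iteration count are assumptions made for the example; the source term is chosen so that the exact solution is known and the numerical error can be checked.

```python
import numpy as np

# Jacobi relaxation for (d2/dx2 + d2/dy2) phi = f on the unit square,
# with phi = 0 on the boundary (illustrative sketch).
n = 65                                   # grid points per side (assumed)
h = 1.0 / (n - 1)                        # grid spacing
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
f = -2.0 * np.pi**2 * np.sin(np.pi * x) * np.sin(np.pi * y)  # chosen source term

phi = np.zeros((n, n))
for _ in range(20000):
    # Replace each interior value by the average of its four neighbours,
    # corrected by the source term; boundary values stay fixed at zero.
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                              + phi[1:-1, 2:] + phi[1:-1, :-2]
                              - h**2 * f[1:-1, 1:-1])

exact = np.sin(np.pi * x) * np.sin(np.pi * y)   # analytic solution for this f
print("max error:", np.abs(phi - exact).max())  # ~2e-4, the h**2 discretisation error
```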
Applications in physics and engineering
Newtonian gravity
In the case of a gravitational field g due to an attracting massive object of density ρ, Gauss's law for gravity in differential form can be used to obtain the corresponding Poisson equation for gravity. Gauss's law for gravity is
∇ · g = −4πGρ.
Since the gravitational field is conservative (and irrotational), it can be expressed in terms of a scalar potential φ:
g = −∇φ.
Substituting this into Gauss's law,
∇ · (−∇φ) = −4πGρ,
yields Poisson's equation for gravity:
∇²φ = 4πGρ.
If the mass density is zero, Poisson's equation reduces to Laplace's equation. The corresponding Green's function can be used to calculate the potential at distance r from a central point mass m (i.e., the fundamental solution). In three dimensions the potential is
φ(r) = −Gm/r,
which is equivalent to Newton's law of universal gravitation.
Electrostatics
Many problems in electrostatics are governed by the Poisson equation, which relates the electric potential φ to the free charge density ρ_f, such as those found in conductors.
The mathematical details of Poisson's equation, commonly expressed in SI units (as opposed to Gaussian units), describe how the distribution of free charges generates the electrostatic potential in a given region.
Starting with Gauss's law for electricity (also one of Maxwell's equations) in differential form, one has
∇ · D = ρ_f,
where ∇ · is the divergence operator, D is the electric displacement field, and ρ_f is the free-charge density (describing charges brought from outside).
Assuming the medium is linear, isotropic, and homogeneous (see polarization density), we have the constitutive equation
D = εE,
where ε is the permittivity of the medium, and E is the electric field.
Substituting this into Gauss's law and assuming that ε is spatially constant in the region of interest yields
∇ · E = ρ_f / ε.
In electrostatics, we assume that there is no magnetic field (the argument that follows also holds in the presence of a constant magnetic field).
Then, we have that
∇ × E = 0,
where ∇ × is the curl operator. This equation means that we can write the electric field as the gradient of a scalar function φ (called the electric potential), since the curl of any gradient is zero. Thus we can write
E = −∇φ,
where the minus sign is introduced so that φ is identified as the electric potential energy per unit charge.
The derivation of Poisson's equation under these circumstances is straightforward. Substituting the potential gradient for the electric field,
∇ · E = ∇ · (−∇φ) = −∇²φ = ρ_f / ε,
directly produces Poisson's equation for electrostatics, which is
∇²φ = −ρ_f / ε.
Specifying the Poisson's equation for the potential requires knowing the charge density distribution. If the charge density is zero, then Laplace's equation results. If the charge density follows a Boltzmann distribution, then the Poisson–Boltzmann equation results. The Poisson–Boltzmann equation plays a role in the development of the Debye–Hückel theory of dilute electrolyte solutions.
Using a Green's function, the potential at distance r from a central point charge Q (i.e., the fundamental solution) is
φ(r) = Q / (4πεr),
which is Coulomb's law of electrostatics. (For historical reasons, and unlike gravity's model above, the factor 4π appears here and not in Gauss's law.)
The above discussion assumes that the magnetic field is not varying in time. The same Poisson equation arises even if it does vary in time, as long as the Coulomb gauge is used. In this more general class of cases, computing φ is no longer sufficient to calculate E, since E also depends on the magnetic vector potential A, which must be independently computed. See Maxwell's equation in potential formulation for more on φ and A in Maxwell's equations and how an appropriate Poisson's equation is obtained in this case.
Potential of a Gaussian charge density
If there is a static spherically symmetric Gaussian charge density
ρ(r) = (Q / (σ³ (2π)^(3/2))) e^(−r²/(2σ²)),
where Q is the total charge, then the solution φ(r) of Poisson's equation
∇²φ = −ρ / ε
is given by
φ(r) = (1 / (4πε)) (Q / r) erf(r / (√2 σ)),
where erf is the error function.
This solution can be checked explicitly by evaluating ∇²φ.
Note that for r much greater than σ, the error function approaches unity, and the potential approaches the point-charge potential
φ(r) ≈ (1 / (4πε)) Q / r,
as one would expect. Furthermore, the error function approaches 1 extremely quickly as its argument increases; in practice, the relative error is smaller than one part in a thousand once r exceeds a few times σ.
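The claim that the Gaussian-cloud potential merges into the point-charge potential can be checked numerically. The sketch below uses units in which 4πε = 1 and Q = 1; these unit choices and the sampled radii are assumptions for the example, not part of the source.

```python
import math

# Potential of a spherical Gaussian charge cloud versus a point charge,
# in units with 4*pi*eps = 1, Q = 1 and sigma = 1 (illustrative sketch).
sigma = 1.0

def phi_gaussian(r):
    return math.erf(r / (math.sqrt(2.0) * sigma)) / r   # (Q/r) erf(r / (sqrt(2) sigma))

def phi_point(r):
    return 1.0 / r                                       # point-charge potential Q/r

for r in (0.5, 1.0, 2.0, 3.0, 5.0):
    rel_diff = abs(phi_gaussian(r) - phi_point(r)) / phi_point(r)
    print(f"r = {r:3.1f} sigma : relative difference {rel_diff:.1e}")
```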
Surface reconstruction
Surface reconstruction is an inverse problem. The goal is to digitally reconstruct a smooth surface based on a large number of points pi (a point cloud) where each point also carries an estimate of the local surface normal ni. Poisson's equation can be utilized to solve this problem with a technique called Poisson surface reconstruction.
The goal of this technique is to reconstruct an implicit function f whose value is zero at the points pi and whose gradient at the points pi equals the normal vectors ni. The set of (pi, ni) is thus modeled as a continuous vector field V. The implicit function f is found by integrating the vector field V. Since not every vector field is the gradient of a function, the problem may or may not have a solution: the necessary and sufficient condition for a smooth vector field V to be the gradient of a function f is that the curl of V must be identically zero. In case this condition is difficult to impose, it is still possible to perform a least-squares fit to minimize the difference between V and the gradient of f.
In order to effectively apply Poisson's equation to the problem of surface reconstruction, it is necessary to find a good discretization of the vector field V. The basic approach is to bound the data with a finite-difference grid. For a function valued at the nodes of such a grid, its gradient can be represented as valued on staggered grids, i.e. on grids whose nodes lie in between the nodes of the original grid. It is convenient to define three staggered grids, each shifted in one and only one direction corresponding to the components of the normal data. On each staggered grid we perform trilinear interpolation on the set of points. The interpolation weights are then used to distribute the magnitude of the associated component of ni onto the nodes of the particular staggered grid cell containing pi. Kazhdan and coauthors give a more accurate method of discretization using an adaptive finite-difference grid, i.e. the cells of the grid are smaller (the grid is more finely divided) where there are more data points. They suggest implementing this technique with an adaptive octree.
Fluid dynamics
For the incompressible Navier–Stokes equations, given by
∂v/∂t + (v · ∇)v = −(1/ρ)∇p + ν∇²v,   ∇ · v = 0,
the equation for the pressure field p is an example of a nonlinear Poisson equation:
∇²p = −ρ ∇ · ((v · ∇)v) = −ρ Tr((∇v)(∇v)).
Notice that the above trace is not sign-definite.
See also
Discrete Poisson equation
Poisson–Boltzmann equation
Helmholtz equation
Uniqueness theorem for Poisson's equation
Weak formulation
Harmonic function
Heat equation
Potential theory
References
Further reading
External links
Poisson Equation at EqWorld: The World of Mathematical Equations
Eponymous equations of physics
Potential theory
Partial differential equations
Electrostatics
Mathematical physics
Electromagnetism
Flow velocity | In continuum mechanics the flow velocity in fluid dynamics, also macroscopic velocity in statistical mechanics, or drift velocity in electromagnetism, is a vector field used to mathematically describe the motion of a continuum. The length of the flow velocity vector is a scalar, the flow speed.
It is also called velocity field; when evaluated along a line, it is called a velocity profile (as in, e.g., law of the wall).
Definition
The flow velocity u of a fluid is a vector field
u = u(x, t),
which gives the velocity of an element of fluid at a position x and time t.
The flow speed q is the length of the flow velocity vector,
q = |u|,
and is a scalar field.
Uses
The flow velocity of a fluid effectively describes everything about the motion of a fluid. Many physical properties of a fluid can be expressed mathematically in terms of the flow velocity. Some common examples follow:
Steady flow
The flow of a fluid is said to be steady if u does not vary with time, that is, if
∂u/∂t = 0.
Incompressible flow
If a fluid is incompressible the divergence of u is zero:
∇ · u = 0.
That is, if u is a solenoidal vector field.
Irrotational flow
A flow is irrotational if the curl of u is zero:
∇ × u = 0.
That is, if u is an irrotational vector field.
A flow in a simply-connected domain which is irrotational can be described as a potential flow, through the use of a velocity potential Φ, with u = ∇Φ. If the flow is both irrotational and incompressible, the Laplacian of the velocity potential must be zero: ∇²Φ = 0.
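The chain of statements above (a gradient field is automatically irrotational, and incompressibility then forces the potential to be harmonic) can be verified symbolically. The particular potential below, a uniform stream plus a two-dimensional line source, is an assumed example chosen only because its derivatives are easy to check.

```python
import sympy as sp

# Symbolic check: a velocity field u = grad(Phi) is irrotational, and
# div(u) = 0 reduces to Laplace's equation for Phi (illustrative sketch).
x, y = sp.symbols("x y", real=True)
U, m = sp.symbols("U m", positive=True)

Phi = U * x + m * sp.log(sp.sqrt(x**2 + y**2))             # assumed potential
u = [sp.diff(Phi, x), sp.diff(Phi, y)]                     # u = grad(Phi)

curl_z = sp.simplify(sp.diff(u[1], x) - sp.diff(u[0], y))  # 2D curl of u
div = sp.simplify(sp.diff(u[0], x) + sp.diff(u[1], y))     # divergence of u
lap = sp.simplify(sp.diff(Phi, x, 2) + sp.diff(Phi, y, 2)) # Laplacian of Phi

print("curl:", curl_z)   # 0: irrotational by construction
print("div :", div)      # 0 away from the origin, equal to the Laplacian below
print("lap :", lap)      # 0: Phi is harmonic, so the flow is incompressible
```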
Vorticity
The vorticity ω of a flow can be defined in terms of its flow velocity by
ω = ∇ × u.
If the vorticity is zero, the flow is irrotational.
The velocity potential
If an irrotational flow occupies a simply-connected fluid region then there exists a scalar field Φ such that
u = ∇Φ.
The scalar field Φ is called the velocity potential for the flow. (See Irrotational vector field.)
Bulk velocity
In many engineering applications the local flow velocity vector field is not known in every point and the only accessible velocity is the bulk velocity or average flow velocity ū (with the usual dimension of length per time), defined as the quotient between the volume flow rate Q (with dimension of cubed length per time) and the cross sectional area A (with dimension of square length):
ū = Q / A.
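A quick worked example of the bulk-velocity quotient; the flow rate and pipe diameter below are assumed values chosen only to show the arithmetic.

```python
import math

# Bulk velocity u_bulk = Q / A for an assumed circular pipe (illustrative).
Q = 0.005                     # volume flow rate, m^3/s (assumed)
d = 0.08                      # pipe diameter, m (assumed)
A = math.pi * d**2 / 4.0      # cross-sectional area, m^2
print(f"A = {A:.5f} m^2, bulk velocity = {Q / A:.2f} m/s")   # about 0.99 m/s
```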
See also
Displacement field (mechanics)
Drift velocity
Enstrophy
Group velocity
Particle velocity
Pressure gradient
Strain rate
Strain-rate tensor
Stream function
Velocity potential
Vorticity
Wind velocity
References
Fluid dynamics
Continuum mechanics
Vector calculus
Velocity
Spatial gradient
Vector physical quantities
Table of thermodynamic equations | Common thermodynamic equations and quantities in thermodynamics, using mathematical notation, are as follows:
Definitions
Many of the definitions below are also used in the thermodynamics of chemical reactions.
General basic quantities
General derived quantities
Thermal properties of matter
Thermal transfer
Equations
The equations in this article are classified by subject.
Thermodynamic processes
Kinetic theory
Ideal gas
Entropy
S = k_B ln Ω, where k_B is the Boltzmann constant, and Ω denotes the volume of macrostate in the phase space or otherwise called thermodynamic probability.
dS = δQ/T, for reversible processes only.
Statistical physics
Below are useful results from the Maxwell–Boltzmann distribution for an ideal gas, and the implications of the Entropy quantity. The distribution is valid for atoms or molecules constituting ideal gases.
Corollaries of the non-relativistic Maxwell–Boltzmann distribution are below.
Quasi-static and reversible processes
For quasi-static and reversible processes, the first law of thermodynamics is:
dU = δQ − δW,
where δQ is the heat supplied to the system and δW is the work done by the system.
Thermodynamic potentials
The following energies are called the thermodynamic potentials: the internal energy U, the Helmholtz free energy F = U − TS, the enthalpy H = U + pV, and the Gibbs free energy G = U + pV − TS,
and the corresponding fundamental thermodynamic relations or "master equations" are:
dU = T dS − p dV + Σᵢ μᵢ dNᵢ,
dF = −S dT − p dV + Σᵢ μᵢ dNᵢ,
dH = T dS + V dp + Σᵢ μᵢ dNᵢ,
dG = −S dT + V dp + Σᵢ μᵢ dNᵢ.
Maxwell's relations
The four most common Maxwell's relations are:
(∂T/∂V)_S = −(∂p/∂S)_V,
(∂T/∂p)_S = (∂V/∂S)_p,
(∂S/∂V)_T = (∂p/∂T)_V,
(∂S/∂p)_T = −(∂V/∂T)_p.
More relations include the following.
Other differential equations are:
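One of the Maxwell relations listed above, (∂S/∂V)_T = (∂p/∂T)_V, can be verified symbolically for a concrete equation of state. The Helmholtz free energy used below is an assumed form for one mole of a monatomic ideal gas, written only to make the check self-contained.

```python
import sympy as sp

# Check (dS/dV)_T = (dP/dT)_V from an assumed Helmholtz free energy
# F(T, V) = -R*T*ln(V) - (3/2)*R*T*ln(T) + c*T (monatomic ideal gas, 1 mol).
T, V, R, c = sp.symbols("T V R c", positive=True)
F = -R * T * sp.log(V) - sp.Rational(3, 2) * R * T * sp.log(T) + c * T

S = -sp.diff(F, T)            # entropy:  S = -(dF/dT)_V
P = -sp.diff(F, V)            # pressure: P = -(dF/dV)_T, gives R*T/V

lhs = sp.simplify(sp.diff(S, V))   # (dS/dV)_T
rhs = sp.simplify(sp.diff(P, T))   # (dP/dT)_V
print("P =", sp.simplify(P))                          # R*T/V, the ideal gas law
print("Maxwell relation holds:", sp.simplify(lhs - rhs) == 0)
```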
Quantum properties
Indistinguishable Particles
where N is the number of particles, h is the Planck constant, I is the moment of inertia, and Z is the partition function, in various forms:
Thermal properties of matter
Thermal transfer
Thermal efficiencies
See also
List of thermodynamic properties
Antoine equation
Bejan number
Bowen ratio
Bridgman's equations
Clausius–Clapeyron relation
Departure functions
Duhem–Margules equation
Ehrenfest equations
Gibbs–Helmholtz equation
Phase rule
Kopp's law
Noro–Frenkel law of corresponding states
Onsager reciprocal relations
Stefan number
Thermodynamics
Timeline of thermodynamics
Triple product rule
Exact differential
References
External links
Thermodynamic equation calculator
Thermodynamic equations
Thermodynamics
Chemical engineering
Velocity-addition formula | In relativistic physics, a velocity-addition formula is an equation that specifies how to combine the velocities of objects in a way that is consistent with the requirement that no object's speed can exceed the speed of light. Such formulas apply to successive Lorentz transformations, so they also relate different frames. Accompanying velocity addition is a kinematic effect known as Thomas precession, whereby successive non-collinear Lorentz boosts become equivalent to the composition of a rotation of the coordinate system and a boost.
Standard applications of velocity-addition formulas include the Doppler shift, Doppler navigation, the aberration of light, and the dragging of light in moving water observed in the 1851 Fizeau experiment.
The notation employs u as the velocity of a body within a Lorentz frame S, v as the velocity of a second frame S′, as measured in S, and u′ as the transformed velocity of the body within the second frame.
History
The speed of light in a fluid is slower than the speed of light in vacuum, and it changes if the fluid is moving along with the light. In 1851, Fizeau measured the speed of light in a fluid moving parallel to the light using an interferometer. Fizeau's results were not in accord with the then-prevalent theories. Fizeau experimentally correctly determined the zeroth term of an expansion of the relativistically correct addition law in terms of v/c, as is described below. Fizeau's result led physicists to accept the empirical validity of the rather unsatisfactory theory by Fresnel that a fluid moving with respect to the stationary aether partially drags light with it, i.e. the speed is c/n + (1 − 1/n²)v instead of c/n + v, where c is the speed of light in the aether, n is the refractive index of the fluid, and v is the speed of the fluid with respect to the aether.
The aberration of light, of which the easiest explanation is the relativistic velocity addition formula, together with Fizeau's result, triggered the development of theories like Lorentz aether theory of electromagnetism in 1892. In 1905 Albert Einstein, with the advent of special relativity, derived the standard configuration formula ( in the ) for the addition of relativistic velocities. The issues involving aether were, gradually over the years, settled in favor of special relativity.
Galilean relativity
It was observed by Galileo that a person on a uniformly moving ship has the impression of being at rest and sees a heavy body falling vertically downward. This observation is now regarded as the first clear statement of the principle of mechanical relativity. Galileo saw that from the point of view of a person standing on the shore, the motion of falling downwards on the ship would be combined with, or added to, the forward motion of the ship. In terms of velocities, it can be said that the velocity of the falling body relative to the shore equals the velocity of that body relative to ship plus the velocity of the ship relative to the shore.
In general for three objects A (e.g. Galileo on the shore), B (e.g. ship), C (e.g. falling body on ship) the velocity vector of C relative to A (velocity of falling object as Galileo sees it) is the sum of the velocity of C relative to B (velocity of falling object relative to ship) plus the velocity of B relative to A (ship's velocity away from the shore). The addition here is the vector addition of vector algebra and the resulting velocity is usually represented in the form
The cosmos of Galileo consists of absolute space and time and the addition of velocities corresponds to composition of Galilean transformations. The relativity principle is called Galilean relativity. It is obeyed by Newtonian mechanics.
Special relativity
According to the theory of special relativity, the frame of the ship has a different clock rate and distance measure, and the notion of simultaneity in the direction of motion is altered, so the addition law for velocities is changed. This change is not noticeable at low velocities but as the velocity increases towards the speed of light it becomes important. The addition law is also called a composition law for velocities. For collinear motions, the speed of the object, u, e.g. a cannonball fired horizontally out to sea, as measured from the ship moving at speed v, would be measured by someone standing on the shore and watching the whole scene through a telescope as
s = (v + u) / (1 + vu/c²).
The composition formula can take an algebraically equivalent form, which can be easily derived by using only the principle of constancy of the speed of light:
(c − s)/(c + s) = ((c − v)/(c + v)) ((c − u)/(c + u)).
The cosmos of special relativity consists of Minkowski spacetime and the addition of velocities corresponds to composition of Lorentz transformations. In the special theory of relativity Newtonian mechanics is modified into relativistic mechanics.
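The collinear composition law quoted above is easy to evaluate numerically; the speeds below are arbitrary test values, and the only point of the sketch is that composed speeds never exceed c.

```python
# Collinear relativistic velocity composition s = (v + u) / (1 + v*u/c**2).
c = 299_792_458.0                    # speed of light, m/s

def compose(v, u):
    """Speed of an object moving at u in a frame that itself moves at v."""
    return (v + u) / (1.0 + v * u / c**2)

print(compose(0.5 * c, 0.5 * c) / c)   # 0.8, not 1.0
print(compose(0.9 * c, 0.9 * c) / c)   # ~0.9945, still below 1
print(compose(c, 0.9 * c) / c)         # exactly 1.0: light stays at c
```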
Standard configuration
The formulas for boosts in the standard configuration follow most straightforwardly from taking differentials of the inverse Lorentz boost in standard configuration. If the primed frame is travelling with speed v, with Lorentz factor γ_v, in the positive x-direction relative to the unprimed frame, then the differentials are
dx = γ_v (dx′ + v dt′),  dy = dy′,  dz = dz′,  dt = γ_v (dt′ + (v/c²) dx′).
Divide the first three equations by the fourth,
dx/dt = (dx′ + v dt′) / (dt′ + (v/c²) dx′),  dy/dt = dy′ / (γ_v (dt′ + (v/c²) dx′)),  dz/dt = dz′ / (γ_v (dt′ + (v/c²) dx′)),
or
u_x = (u′_x + v) / (1 + (v/c²) u′_x),  u_y = u′_y / (γ_v (1 + (v/c²) u′_x)),  u_z = u′_z / (γ_v (1 + (v/c²) u′_x)),
which is the velocity-addition law. The expressions for the primed velocities,
u′_x = (u_x − v) / (1 − (v/c²) u_x),  u′_y = u_y / (γ_v (1 − (v/c²) u_x)),  u′_z = u_z / (γ_v (1 − (v/c²) u_x)),
were obtained using the standard recipe by replacing v by −v and swapping primed and unprimed coordinates. If coordinates are chosen so that all velocities lie in a (common) x–y plane, then velocities may be expressed as
u_x = u cos θ,  u_y = u sin θ,  u′_x = u′ cos θ′,  u′_y = u′ sin θ′
(see polar coordinates) and one finds
u = √((u′ cos θ′ + v)² + (u′ sin θ′ / γ_v)²) / (1 + (v/c²) u′ cos θ′),  tan θ = u′ sin θ′ / (γ_v (u′ cos θ′ + v)).
The proof as given is highly formal. There are other more involved proofs that may be more enlightening, such as the one below.
General configuration
Starting from the expression in coordinates for v parallel to the x-axis, expressions for the perpendicular and parallel components can be cast in vector form as follows, a trick which also works for Lorentz transformations of other 3d physical quantities originally set up in standard configuration. Introduce the velocity vector u in the unprimed frame and u′ in the primed frame, and split them into components parallel (∥) and perpendicular (⊥) to the relative velocity vector v, thus
then with the usual Cartesian standard basis vectors e_x, e_y, e_z, set the velocity in the unprimed frame to be
u = u∥ + u⊥,  u∥ = u_x e_x,  u⊥ = u_y e_y + u_z e_z,
which gives, using the results for the standard configuration,
u∥ = (u′∥ + v) / (1 + (v · u′)/c²),  u⊥ = u′⊥ / (γ_v (1 + (v · u′)/c²)),
where · is the dot product. Since these are vector equations, they still have the same form for v in any direction. The only difference from the coordinate expressions is that the above expressions refer to vectors, not components.
One obtains
u = v ⊕ u′ = (1 / (1 + (v · u′)/c²)) [α_v u′ + v + (1 − α_v) ((v · u′)/v²) v],
where α_v = 1/γ_v is the reciprocal of the Lorentz factor. The ordering of operands in the definition is chosen to coincide with that of the standard configuration from which the formula is derived.
Either the parallel or the perpendicular component for each vector needs to be found, since the other component will be eliminated by substitution of the full vectors.
The parallel component of u′ can be found by projecting the full vector into the direction of the relative motion,
u′∥ = ((u′ · v)/v²) v,
and the perpendicular component of u′ can be found by the geometric properties of the cross product,
u′⊥ = −(v × (v × u′))/v².
In each case, v/v is a unit vector in the direction of relative motion.
The expressions for u∥ and u⊥ can be found in the same way. Substituting the parallel component into
results in the above equation.
Using an identity in α_v and γ_v (these formulae follow from inverting α_v for v² and applying the difference of two squares to obtain v² = c²(1 − α_v²) = c²(1 − α_v)(1 + α_v), so that (1 − α_v)/v² = 1/(c²(1 + α_v)) = γ_v/(c²(1 + γ_v))), one obtains
u = v ⊕ u′ = (1 / (1 + (v · u′)/c²)) [v + u′/γ_v + (1/c²)(γ_v/(1 + γ_v))(v · u′) v]
  = (1 / (1 + (v · u′)/c²)) [v + u′ + (1/c²)(γ_v/(1 + γ_v)) v × (v × u′)],
where the last expression is by the standard vector analysis formula v × (v × u′) = (v · u′)v − (v · v)u′. The first expression extends to any number of spatial dimensions, but the cross product is defined in three dimensions only. The objects A, B, C with B having velocity v relative to A and C having velocity u′ relative to B can be anything. In particular, they can be three frames, or they could be the laboratory, a decaying particle and one of the decay products of the decaying particle.
Properties
The relativistic addition of 3-velocities is non-linear, so in general
(λu) ⊕ (λv) ≠ λ(u ⊕ v)
for real number λ, although it is true that
(−u) ⊕ (−v) = −(u ⊕ v).
Also, due to the last terms, velocity addition is in general neither commutative,
u ⊕ v ≠ v ⊕ u,
nor associative,
u ⊕ (v ⊕ w) ≠ (u ⊕ v) ⊕ w.
It deserves special mention that if and refer to velocities of pairwise parallel frames (primed parallel to unprimed and doubly primed parallel to primed), then, according to Einstein's velocity reciprocity principle, the unprimed frame moves with velocity relative to the primed frame, and the primed frame moves with velocity relative to the doubly primed frame hence is the velocity of the unprimed frame relative to the doubly primed frame, and one might expect to have by naive application of the reciprocity principle. This does not hold, though the magnitudes are equal. The unprimed and doubly primed frames are not parallel, but related through a rotation. This is related to the phenomenon of Thomas precession, and is not dealt with further here.
The norms are given by
|u ⊕ v|² = (1 / (1 + (u · v)/c²)²) [(u + v)² − (1/c²)(u × v)²]
and
|v ⊕ u|² = |u ⊕ v|².
It is clear that the non-commutativity manifests itself as an additional rotation of the coordinate frame when two boosts are involved, since the norm squared is the same for both orders of boosts.
The gamma factors for the combined velocities are computed as
γ_(u⊕v) = γ_u γ_v (1 + (u · v)/c²) = γ_(v⊕u).
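The general composition law and the two properties just stated (equal norms, and hence equal gamma factors, for both operand orders, but different directions) can be checked numerically. The sketch below uses units with c = 1 and the vector form of the formula as reconstructed above; the test velocities are arbitrary.

```python
import numpy as np

# General velocity addition v (+) u in units with c = 1 (illustrative sketch).
def gamma(v):
    return 1.0 / np.sqrt(1.0 - np.dot(v, v))

def add(v, u):
    """Velocity of an object moving at u in a frame that moves at v."""
    g = gamma(v)
    return (v + u / g + (g / (1.0 + g)) * np.dot(v, u) * v) / (1.0 + np.dot(v, u))

v = np.array([0.6, 0.0, 0.0])
u = np.array([0.0, 0.7, 0.0])
a, b = add(v, u), add(u, v)

print(a, b)                                      # different directions (rotation)
print(np.linalg.norm(a), np.linalg.norm(b))      # equal speeds
print(gamma(a), gamma(u) * gamma(v) * (1.0 + np.dot(u, v)))   # equal gamma factors
```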
Notational conventions
Notations and conventions for the velocity addition vary from author to author. Different symbols may be used for the operation, or for the velocities involved, and the operands may be switched for the same expression, or the symbols may be switched for the same velocity. A completely separate symbol may also be used for the transformed velocity, rather than the prime used here. Since the velocity addition is non-commutative, one cannot switch the operands or symbols without changing the result.
Examples of alternative notation include:
No specific operand
(using units where c = 1)
Left-to-right ordering of operands
Right-to-left ordering of operands
Applications
Some classical applications of velocity-addition formulas, to the Doppler shift, to the aberration of light, and to the dragging of light in moving water, yielding relativistically valid expressions for these phenomena are detailed below. It is also possible to use the velocity addition formula, assuming conservation of momentum (by appeal to ordinary rotational invariance), the correct form of the -vector part of the momentum four-vector, without resort to electromagnetism, or a priori not known to be valid, relativistic versions of the Lagrangian formalism. This involves experimentalist bouncing off relativistic billiard balls from each other. This is not detailed here, but see for reference Wikisource version (primary source) and .
Fizeau experiment
When light propagates in a medium, its speed is reduced, in the rest frame of the medium, to c_m = c/n_m, where n_m is the index of refraction of the medium. The speed of light in a medium uniformly moving with speed V in the positive x-direction as measured in the lab frame is given directly by the velocity addition formulas. For the forward direction (standard configuration, drop index m on n) one gets,
(c/n + V) / (1 + V/(cn)).
Collecting the largest contributions explicitly,
c/n + V(1 − 1/n²) − (V²/(cn))(1 − 1/n²) + ⋯.
Fizeau found the first three terms. The classical result is the first two terms.
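The expansion can be checked numerically. The refractive index and flow speed below are assumed values (roughly water moving at 10 m/s); the sketch compares the exact composition with the Fresnel drag approximation and with naive Galilean addition.

```python
# Speed of light in moving water: exact composition versus approximations
# (illustrative values; units with c = 1).
n = 1.333                          # refractive index of water (assumed)
v = 10.0 / 299_792_458.0           # 10 m/s flow speed as a fraction of c

exact = (1.0 / n + v) / (1.0 + v / n)        # relativistic composition
fresnel = 1.0 / n + v * (1.0 - 1.0 / n**2)   # first two terms (drag coefficient)
galilean = 1.0 / n + v                       # naive addition

print(f"exact    : {exact:.15f}")
print(f"fresnel  : {fresnel:.15f}")   # differs from exact only at order v**2
print(f"galilean : {galilean:.15f}")  # already wrong at first order in v
```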
Aberration of light
Another basic application is to consider the deviation of light, i.e. change of its direction, when transforming to a new reference frame with parallel axes, called aberration of light. In this case, , and insertion in the formula for yields
For this case one may also compute and from the standard formulae,
the trigonometric manipulations essentially being identical in the case to the manipulations in the case. Consider the difference,
correct to order . Employ in order to make small angle approximations a trigonometric formula,
where were used.
Thus the quantity
the classical aberration angle, is obtained in the limit.
Relativistic Doppler shift
Here velocity components will be used as opposed to speed for greater generality, and in order to avoid perhaps seemingly ad hoc introductions of minus signs. Minus signs occurring here will instead serve to illuminate features when speeds less than that of light are considered.
For light waves in vacuum, time dilation together with a simple geometrical observation alone suffices to calculate the Doppler shift in standard configuration (collinear relative velocity of emitter and observer as well of observed light wave).
All velocities in what follows are parallel to the common positive , so subscripts on velocity components are dropped. In the observers frame, introduce the geometrical observation
as the spatial distance, or wavelength, between two pulses (wave crests), where is the time elapsed between the emission of two pulses. The time elapsed between the passage of two pulses at the same point in space is the time period , and its inverse is the observed (temporal) frequency. The corresponding quantities in the emitters frame are endowed with primes.
For light waves
and the observed frequency is
where is standard time dilation formula.
Suppose instead that the wave is not composed of light waves with speed , but instead, for easy visualization, bullets fired from a relativistic machine gun, with velocity in the frame of the emitter. Then, in general, the geometrical observation is precisely the same. But now, , and is given by velocity addition,
The calculation is then essentially the same, except that here it is easier carried out upside down with instead of . One finds
Observe that in the typical case, the that enters is negative. The formula has general validity though. When , the formula reduces to the formula calculated directly for light waves above,
If the emitter is not firing bullets in empty space, but emitting waves in a medium, then the formula still applies, but now, it may be necessary to first calculate from the velocity of the emitter relative to the medium.
Returning to the case of a light emitter, in the case the observer and emitter are not collinear, the result has little modification,
where θ is the angle between the light emitter and the observer. This reduces to the previous result for collinear motion when θ = 0, but for transverse motion corresponding to θ = π/2, the frequency is shifted by the Lorentz factor. This does not happen in the classical optical Doppler effect.
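The angular dependence described in this paragraph can be evaluated with a short sketch. The formula below, f_obs/f_src = 1/(γ(1 − β cos θ)), is the standard relativistic Doppler factor written in one common sign convention (θ = 0 means the emitter approaches the observer); the convention and the test values are assumptions for the example, not necessarily the source's.

```python
import math

# Relativistic Doppler factor in one common convention (illustrative sketch).
def doppler_ratio(beta, theta):
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(theta)))

beta = 0.5
print(doppler_ratio(beta, 0.0))           # approach: blueshift, sqrt(3) ~ 1.73
print(doppler_ratio(beta, math.pi / 2))   # transverse: 1/gamma, pure time dilation
print(doppler_ratio(beta, math.pi))       # recession: redshift, 1/sqrt(3) ~ 0.58
```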
Hyperbolic geometry
Associated to the relativistic velocity of an object is a quantity whose norm is called rapidity. These are related through
where the vector is thought of as being Cartesian coordinates on a 3-dimensional subspace of the Lie algebra of the Lorentz group spanned by the boost generators . This space, call it rapidity space, is isomorphic to as a vector space, and is mapped to the open unit ball,
, velocity space, via the above relation. The addition law on collinear form coincides with the law of addition of hyperbolic tangents
with
The line element in velocity space follows from the expression for the relativistic relative velocity in any frame,
v_r² = [(v₁ − v₂)² − (1/c²)(v₁ × v₂)²] / (1 − (v₁ · v₂)/c²)²,
where the speed of light is set to unity so that velocities and their β values agree. In this expression, v₁ and v₂ are velocities of two objects in any one given frame. The quantity v_r is the speed of one or the other object relative to the other object as seen in the given frame. The expression is Lorentz invariant, i.e. independent of which frame is the given frame, but the quantity it calculates is not. For instance, if the given frame is the rest frame of object one, then v_r = |v₂|.
The line element is found by putting v₂ = v + dv (or equivalently v₁ = v + dv),
dl² = dv² / (1 − v²)² + (v² / (1 − v²)) (dθ² + sin²θ dφ²),
with θ and φ the usual spherical angle coordinates for v taken in the z-direction. Now introduce ζ through
v = tanh ζ,
and the line element on rapidity space becomes
dl² = dζ² + sinh²ζ (dθ² + sin²θ dφ²).
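The statement that collinear composition is ordinary addition of rapidities can be confirmed in a couple of lines; the two speeds below are arbitrary test values in units with c = 1.

```python
import math

# Collinear composition as addition of rapidities, zeta = artanh(v) (c = 1).
u, v = 0.6, 0.7
via_rapidity = math.tanh(math.atanh(u) + math.atanh(v))   # add rapidities, map back
via_formula = (u + v) / (1.0 + u * v)                      # composition law
print(via_rapidity, via_formula)                           # both ~0.915493
```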
Relativistic particle collisions
In scattering experiments the primary objective is to measure the invariant scattering cross section. This enters the formula for scattering of two particle types into a final state assumed to have two or more particles,
or, in most textbooks,
where
is spacetime volume. It is an invariant under Lorentz transformations.
is the total number of reactions resulting in final state in spacetime volume . Being a number, it is invariant when the same spacetime volume is considered.
is the number of reactions resulting in final state per unit spacetime, or reaction rate. This is invariant.
is called the incident flux. This is required to be invariant, but isn't in the most general setting.
is the scattering cross section. It is required to be invariant.
are the particle densities in the incident beams. These are not invariant as is clear due to length contraction.
is the relative speed of the two incident beams. This cannot be invariant since is required to be so.
The objective is to find a correct expression for relativistic relative speed and an invariant expression for the incident flux.
Non-relativistically, one has for the relative speed v = |v₁ − v₂|. If the system in which velocities are measured is the rest frame of particle type 1, it is required that v = |v₂|. Setting the speed of light c = 1, the expression for the relative speed v_r follows immediately from the formula for the norm (second formula) in the general configuration as
v_r = √((v₁ − v₂)² − (v₁ × v₂)²) / (1 − v₁ · v₂).
The formula reduces in the classical limit to v_r = |v₁ − v₂|, as it should, and gives the correct result in the rest frames of the particles. The relative velocity is incorrectly given in most, perhaps all books on particle physics and quantum field theory. This is mostly harmless, since if either one particle type is stationary or the relative motion is collinear, then the right result is obtained from the incorrect formulas. The formula is invariant, but not manifestly so. It can be rewritten in terms of four-velocities as
The correct expression for the flux, published by Christian Møller in 1945, is given by
F = n₁n₂ √((v₁ − v₂)² − (v₁ × v₂)²) ≡ n₁n₂ v̄.
One notes that for collinear velocities, F = n₁n₂|v₁ − v₂| = n₁n₂ v_r. In order to get a manifestly Lorentz invariant expression one writes nᵢ = γᵢ nᵢ⁰, with nᵢ⁰ the density in the rest frame, for the individual particle fluxes, and arrives at
F = (n₁⁰ n₂⁰ / (m₁ m₂)) √((p₁ · p₂)² − m₁² m₂²).
In the literature the quantity v̄ as well as v_r are both referred to as the relative velocity. In some cases (statistical physics and dark matter literature), v̄ is referred to as the Møller velocity, in which case v_r means relative velocity. The true relative velocity is at any rate v_r. The discrepancy between v_r and v̄ is relevant though in most cases velocities are collinear. At the LHC the crossing angle is small, around 300 μrad, but at the old Intersecting Storage Rings at CERN, it was about 18°.
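The difference between the relative speed v_r and the Møller velocity v̄ discussed here is easy to see numerically. The sketch below uses units with c = 1 and two assumed beam configurations; note that v̄ can exceed 1, which is consistent with it being a flux factor rather than the speed of anything.

```python
import numpy as np

# Relative speed v_r versus the Moller velocity (units with c = 1; sketch).
def v_r(v1, v2):
    num = np.dot(v1 - v2, v1 - v2) - np.dot(np.cross(v1, v2), np.cross(v1, v2))
    return np.sqrt(num) / (1.0 - np.dot(v1, v2))

def v_moller(v1, v2):
    return np.sqrt(np.dot(v1 - v2, v1 - v2)
                   - np.dot(np.cross(v1, v2), np.cross(v1, v2)))

beams = {"head-on": (np.array([0.9, 0.0, 0.0]), np.array([-0.9, 0.0, 0.0])),
         "crossed": (np.array([0.9, 0.0, 0.0]), np.array([0.0, 0.9, 0.0]))}
for name, (v1, v2) in beams.items():
    print(f"{name:8s}: v_r = {v_r(v1, v2):.4f}, v_Moller = {v_moller(v1, v2):.4f}")
```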
See also
Hyperbolic law of cosines
Biquaternion
Relative velocity
Remarks
Notes
References
Historical
Wikisource version
External links
Special relativity
Equations
Addition formula
Kinematics
Elastic scattering | Elastic scattering is a form of particle scattering in scattering theory, nuclear physics and particle physics. In this process, the internal states of the particles involved stay the same. In the non-relativistic case, where the relative velocities of the particles are much less than the speed of light, elastic scattering simply means that the total kinetic energy of the system is conserved. At relativistic velocities, elastic scattering also requires the final state to have the same number of particles as the initial state and for them to be of the same kind.
Rutherford scattering
When the incident particle, such as an alpha particle or electron, is diffracted in the Coulomb potential of atoms and molecules, the elastic scattering process is called Rutherford scattering. In many electron diffraction techniques like reflection high energy electron diffraction (RHEED), transmission electron diffraction (TED), and gas electron diffraction (GED), where the incident electrons have sufficiently high energy (>10 keV), the elastic electron scattering becomes the main component of the scattering process and the scattering intensity is expressed as a function of the momentum transfer defined as the difference between the momentum vector of the incident electron and that of the scattered electron.
Optical elastic scattering
In Thomson scattering light interacts with electrons (this is the low-energy limit of Compton scattering).
In Rayleigh scattering a medium composed of particles whose sizes are much smaller than the wavelength scatters light sideways. In this scattering process, the energy (and therefore the wavelength) of the incident light is conserved and only its direction is changed. In this case, the scattering intensity is inversely proportional to the fourth power of the reciprocal wavelength of the light.
Nuclear particle physics
For particles with the mass of a proton or greater, elastic scattering is one of the main methods by which the particles interact with matter. At relativistic energies, protons, neutrons, helium ions, and HZE ions will undergo numerous elastic collisions before they are dissipated. This is a major concern with many types of ionizing radiation, including galactic cosmic rays, solar proton events, free neutrons in nuclear weapon design and nuclear reactor design, spaceship design, and the study of the Earth's magnetic field. In designing an effective biological shield, proper attention must be made to the linear energy transfer of the particles as they propagate through the shield. In nuclear reactors, the neutron's mean free path is critical as it undergoes elastic scattering on its way to becoming a slow-moving thermal neutron.
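The role of elastic scattering in slowing reactor neutrons can be illustrated with standard two-body kinematics. The sketch below uses the textbook mean logarithmic energy decrement ξ = 1 + α ln α/(1 − α) with α = ((A − 1)/(A + 1))²; the starting and final energies are assumed representative values (a 2 MeV fission neutron thermalizing to 0.025 eV), and the moderators listed are just examples.

```python
import math

# Average number of elastic collisions needed to thermalize a fission neutron,
# from the mean logarithmic energy decrement xi (illustrative sketch).
def xi(A):
    if A == 1:
        return 1.0                                # hydrogen: limiting value
    a = ((A - 1) / (A + 1)) ** 2                  # minimum energy fraction kept
    return 1.0 + a * math.log(a) / (1.0 - a)

E0, E_th = 2.0e6, 0.025                           # eV: fission and thermal energies
for name, A in (("hydrogen", 1), ("deuterium", 2),
                ("carbon-12", 12), ("uranium-238", 238)):
    n_collisions = math.log(E0 / E_th) / xi(A)
    print(f"{name:12s}: xi = {xi(A):.3f}, ~{n_collisions:.0f} elastic collisions")
```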
Besides elastic scattering, charged particles also undergo effects from their elementary charge, which repels them away from nuclei and causes their path to be curved inside an electric field. Particles can also undergo inelastic scattering and capture due to nuclear reactions. Protons and neutrons do this more often than heavier particles. Neutrons are also capable of causing fission in an incident nucleus. Light nuclei like deuterium and lithium can combine in nuclear fusion.
See also
Elastic collision
Inelastic scattering
Scattering theory
Thomson scattering
References
Particle physics
Scattering
Principle of relativity | In physics, the principle of relativity is the requirement that the equations describing the laws of physics have the same form in all admissible frames of reference.
For example, in the framework of special relativity, the Maxwell equations have the same form in all inertial frames of reference. In the framework of general relativity, the Maxwell equations or the Einstein field equations have the same form in arbitrary frames of reference.
Several principles of relativity have been successfully applied throughout science, whether implicitly (as in Newtonian mechanics) or explicitly (as in Albert Einstein's special relativity and general relativity).
Basic concepts
Certain principles of relativity have been widely assumed in most scientific disciplines. One of the most widespread is the belief that any law of nature should be the same at all times; and scientific investigations generally assume that laws of nature are the same regardless of the person measuring them. These sorts of principles have been incorporated into scientific inquiry at the most fundamental of levels.
Any principle of relativity prescribes a symmetry in natural law: that is, the laws must look the same to one observer as they do to another. According to a theoretical result called Noether's theorem, any such symmetry will also imply a conservation law alongside. For example, if two observers at different times see the same laws, then a quantity called energy will be conserved. In this light, relativity principles make testable predictions about how nature behaves.
Special principle of relativity
According to the first postulate of the special theory of relativity:
"If a system of coordinates K is chosen so that, in relation to it, physical laws hold good in their simplest form, the same laws hold good in relation to any other system of coordinates K′ moving in uniform translation relatively to K." (Albert Einstein, 1916)
This postulate defines an inertial frame of reference.
The special principle of relativity states that physical laws should be the same in every inertial frame of reference, but that they may vary across non-inertial ones. This principle is used in both Newtonian mechanics and the theory of special relativity. Its influence in the latter is so strong that Max Planck named the theory after the principle.
The principle requires physical laws to be the same for any body moving at constant velocity as they are for a body at rest. A consequence is that an observer in an inertial reference frame cannot determine an absolute speed or direction of travel in space, and may only speak of speed or direction relative to some other object.
The principle does not extend to non-inertial reference frames, because those frames do not, in everyday experience, seem to abide by the same laws of physics. In classical physics, fictitious forces are used to describe acceleration in non-inertial reference frames.
In Newtonian mechanics
The special principle of relativity was first explicitly enunciated by Galileo Galilei in 1632 in his Dialogue Concerning the Two Chief World Systems, using the metaphor of Galileo's ship.
Newtonian mechanics added to the special principle several other concepts, including laws of motion, gravitation, and an assertion of an absolute time. When formulated in the context of these laws, the special principle of relativity states that the laws of mechanics are invariant under a Galilean transformation.
In special relativity
Joseph Larmor and Hendrik Lorentz discovered that Maxwell's equations, used in the theory of electromagnetism, were invariant only by a certain change of time and length units. This left some confusion among physicists, many of whom thought that a luminiferous aether was incompatible with the relativity principle, in the way it was defined by Henri Poincaré:
In their 1905 papers on electrodynamics, Henri Poincaré and Albert Einstein explained that with the Lorentz transformations the relativity principle holds perfectly. Einstein elevated the (special) principle of relativity to a postulate of the theory and derived the Lorentz transformations from this principle combined with the principle of the independence of the speed of light (in vacuum) from the motion of the source. These two principles were reconciled with each other by a re-examination of the fundamental meanings of space and time intervals.
The strength of special relativity lies in its use of simple, basic principles, including the invariance of the laws of physics under a shift of inertial reference frames and the invariance of the speed of light in vacuum. (See also: Lorentz covariance.)
It is possible to derive the form of the Lorentz transformations from the principle of relativity alone. Using only the isotropy of space and the symmetry implied by the principle of special relativity, one can show that the space-time transformations between inertial frames are either Galilean or Lorentzian. Whether the transformation is actually Galilean or Lorentzian must be determined with physical experiments. It is not possible to conclude that the speed of light c is invariant by mathematical logic alone. In the Lorentzian case, one can then obtain relativistic interval conservation and the constancy of the speed of light.
General principle of relativity
The general principle of relativity states:
That is, physical laws are the same in reference frames—inertial or non-inertial. An accelerated charged particle might emit synchrotron radiation, though a particle at rest does not. If we consider now the same accelerated charged particle in its non-inertial rest frame, it emits radiation at rest.
Physics in non-inertial reference frames was historically treated by a coordinate transformation, first, to an inertial reference frame, performing the necessary calculations therein, and then using another coordinate transformation to return to the non-inertial reference frame. In most such situations, the same laws of physics can be used if certain predictable fictitious forces are added into consideration; an example is a uniformly rotating reference frame, which can be treated as an inertial reference frame if one adds a fictitious centrifugal force and Coriolis force into consideration.
The problems involved are not always so trivial. Special relativity predicts that an observer in an inertial reference frame does not see objects he would describe as moving faster than the speed of light. However, in the non-inertial reference frame of Earth, treating a spot on the Earth as a fixed point, the stars are observed to move in the sky, circling once about the Earth per day. Since the stars are light years away, this observation means that, in the non-inertial reference frame of the Earth, anybody who looks at the stars is seeing objects which appear, to them, to be moving faster than the speed of light.
Since non-inertial reference frames do not abide by the special principle of relativity, such situations are not self-contradictory.
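The size of this apparent speed follows from simple arithmetic; a sketch with assumed figures (a star 4 light-years away carried once around the sky per day):

```python
import math

distance_ly = 4.0                        # assumed distance of a nearby star, light-years
circumference_ly = 2 * math.pi * distance_ly   # path length of one daily circuit
day_in_years = 1.0 / 365.25
apparent_speed_in_c = circumference_ly / day_in_years   # light-years per year equals c
print(f"apparent speed ~ {apparent_speed_in_c:.0f} c")  # roughly 9,000 times the speed of light
```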
General relativity
General relativity was developed by Einstein in the years 1907–1915. General relativity postulates that the global Lorentz covariance of special relativity becomes a local Lorentz covariance in the presence of matter. The presence of matter "curves" spacetime, and this curvature affects the path of free particles (and even the path of light). General relativity uses the mathematics of differential geometry and tensors in order to describe gravitation as an effect of the geometry of spacetime. Einstein based this new theory on the general principle of relativity, and he named the theory after the underlying principle.
See also
Background independence
Conjugate diameters
Cosmic microwave background radiation
Equivalence principle
Galilean relativity
General relativity including Introduction to general relativity
Invariant
List of important publications in physics: Relativity
Newton's Laws
Preferred frame
Principle of covariance
Principle of uniformity
Special relativity
Notes and references
Further reading
See the special relativity references and the general relativity references.
External links
Wikibooks: Special Relativity
Living Reviews in Relativity – An open access, peer-refereed, solely online physics journal publishing invited reviews covering all areas of relativity research.
MathPages – Reflections on Relativity – A complete online course on Relativity.
Special Relativity Simulator
A Relativity Tutorial at Caltech – A basic introduction to concepts of Special and General Relativity, as well as astrophysics.
Relativity Gravity and Cosmology – A short course offered at MIT.
Relativity in film clips and animations from the University of New South Wales.
Animation clip visualizing the effects of special relativity on fast moving objects.
Relativity Calculator – Learn Special Relativity Mathematics – The mathematics of special relativity presented in as simple and comprehensive a manner as possible within philosophical and historical contexts.
Theory of relativity
Theories
Simple harmonic motion
In mechanics and physics, simple harmonic motion (sometimes abbreviated as SHM) is a special type of periodic motion an object experiences by means of a restoring force whose magnitude is directly proportional to the distance of the object from an equilibrium position and acts towards the equilibrium position. It results in an oscillation that is described by a sinusoid which continues indefinitely (if uninhibited by friction or any other dissipation of energy).
Simple harmonic motion can serve as a mathematical model for a variety of motions, but is typified by the oscillation of a mass on a spring when it is subject to the linear elastic restoring force given by Hooke's law. The motion is sinusoidal in time and demonstrates a single resonant frequency. Other phenomena can be modeled by simple harmonic motion, including the motion of a simple pendulum, although for it to be an accurate model, the net force on the object at the end of the pendulum must be proportional to the displacement (and even so, it is only a good approximation when the angle of the swing is small; see small-angle approximation). Simple harmonic motion can also be used to model molecular vibration.
Simple harmonic motion provides a basis for the characterization of more complicated periodic motion through the techniques of Fourier analysis.
Introduction
The motion of a particle moving along a straight line with an acceleration whose direction is always towards a fixed point on the line and whose magnitude is proportional to the displacement from the fixed point is called simple harmonic motion.
In the diagram, a simple harmonic oscillator, consisting of a weight attached to one end of a spring, is shown. The other end of the spring is connected to a rigid support such as a wall. If the system is left at rest at the equilibrium position then there is no net force acting on the mass. However, if the mass is displaced from the equilibrium position, the spring exerts a restoring elastic force that obeys Hooke's law.
Mathematically, the restoring force F is given by
F = −kx,
where F is the restoring elastic force exerted by the spring (in SI units: N), k is the spring constant (N·m⁻¹), and x is the displacement from the equilibrium position (in metres).
For any simple mechanical harmonic oscillator:
When the system is displaced from its equilibrium position, a restoring force that obeys Hooke's law tends to restore the system to equilibrium.
Once the mass is displaced from its equilibrium position, it experiences a net restoring force. As a result, it accelerates and starts going back to the equilibrium position. When the mass moves closer to the equilibrium position, the restoring force decreases. At the equilibrium position, the net restoring force vanishes. However, at x = 0, the mass has momentum because of the acceleration that the restoring force has imparted. Therefore, the mass continues past the equilibrium position, compressing the spring. A net restoring force then slows it down until its velocity reaches zero, whereupon it is accelerated back to the equilibrium position again.
As long as the system has no energy loss, the mass continues to oscillate. Thus simple harmonic motion is a type of periodic motion. If energy is lost in the system, then the mass exhibits damped oscillation.
Note that if the real-space and phase-space plots are not co-linear, the phase-space motion becomes elliptical. The area enclosed depends on the amplitude and the maximum momentum.
Dynamics
In Newtonian mechanics, for one-dimensional simple harmonic motion, the equation of motion, which is a second-order linear ordinary differential equation with constant coefficients, can be obtained by means of Newton's second law and Hooke's law for a mass on a spring:
F_net = m d²x/dt² = −kx,
where m is the inertial mass of the oscillating body, x is its displacement from the equilibrium (or mean) position, and k is a constant (the spring constant for a mass on a spring).
Therefore,
d²x/dt² = −(k/m)x.
Solving the differential equation above produces a solution that is a sinusoidal function:
x(t) = c1 cos(ωt) + c2 sin(ωt),
where ω = √(k/m).
The meaning of the constants c1 and c2 can be easily found: setting t = 0 in the equation above we see that x(0) = c1, so that c1 is the initial position of the particle, c1 = x0; taking the derivative of that equation and evaluating at t = 0 we get that ẋ(0) = ωc2, so that c2 is the initial speed of the particle divided by the angular frequency, c2 = v0/ω. Thus we can write:
x(t) = x0 cos(ωt) + (v0/ω) sin(ωt).
This equation can also be written in the form:
x(t) = A cos(ωt − φ),
where
A = √(c1² + c2²) and tan φ = c2/c1,
or equivalently
A = √(x0² + (v0/ω)²) and tan φ = v0/(ω x0).
In the solution, c1 and c2 are two constants determined by the initial conditions (specifically, the initial position at time t = 0 is c1 = x0, while the initial velocity is c2ω = v0), and the origin is set to be the equilibrium position. Each of the constants A, ω, and φ carries a physical meaning of the motion: A is the amplitude (maximum displacement from the equilibrium position), ω is the angular frequency, and φ is the initial phase.
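The following Python sketch shows how the constants follow from assumed initial conditions: c1 = x0, c2 = v0/ω, A = √(c1² + c2²), and tan φ = c2/c1. The mass, spring constant, and initial conditions are illustrative values, not taken from the text.

```python
import math

m, k = 0.5, 200.0            # assumed mass (kg) and spring constant (N/m)
x0, v0 = 0.02, 0.3           # assumed initial position (m) and initial velocity (m/s)

omega = math.sqrt(k / m)     # angular frequency
c1, c2 = x0, v0 / omega
A = math.hypot(c1, c2)       # amplitude
phi = math.atan2(c2, c1)     # initial phase

def x(t):
    return A * math.cos(omega * t - phi)

print(f"omega = {omega:.2f} rad/s, A = {A:.4f} m, phi = {phi:.4f} rad")
print(f"x(0) = {x(0):.4f} m, which reproduces x0 = {x0}")
```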
Using the techniques of calculus, the velocity and acceleration as a function of time can be found:
v(t) = dx/dt = −Aω sin(ωt − φ),
a(t) = d²x/dt² = −Aω² cos(ωt − φ).
Speed: |v| = ω√(A² − x²)
Maximum speed: v_max = Aω (at equilibrium point)
Maximum acceleration: a_max = Aω² (at extreme points)
By definition, if a mass m is under SHM its acceleration is directly proportional to displacement:
a(x) = −ω²x,
where
ω² = k/m.
Since ω = 2πf,
f = (1/2π)√(k/m),
and, since T = 1/f where T is the time period,
T = 2π√(m/k).
These equations demonstrate that the simple harmonic motion is isochronous (the period and frequency are independent of the amplitude and the initial phase of the motion).
Energy
Substituting ω² with k/m, the kinetic energy K of the system at time t is
K(t) = ½mv²(t) = ½kA² sin²(ωt − φ),
and the potential energy is
U(t) = ½kx²(t) = ½kA² cos²(ωt − φ).
In the absence of friction and other energy loss, the total mechanical energy has a constant value
E = K + U = ½kA².
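A quick numerical spot-check of this statement, using assumed sample values for the mass, spring constant, amplitude, and phase: at every instant the kinetic and potential energies sum to ½kA².

```python
import math

m, k = 1.0, 40.0                     # assumed mass (kg) and spring constant (N/m)
A, phi = 0.1, 0.4                    # assumed amplitude (m) and initial phase (rad)
omega = math.sqrt(k / m)

for t in (0.0, 0.13, 0.51, 1.7):
    x = A * math.cos(omega * t - phi)
    v = -A * omega * math.sin(omega * t - phi)
    K = 0.5 * m * v**2               # kinetic energy
    U = 0.5 * k * x**2               # potential energy
    print(f"t = {t:4.2f} s: K + U = {K + U:.6f} J, (1/2)kA^2 = {0.5 * k * A**2:.6f} J")
```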
Examples
The following physical systems are some examples of simple harmonic oscillators.
Mass on a spring
A mass m attached to a spring of spring constant k exhibits simple harmonic motion in closed space. The equation describing the period,
T = 2π√(m/k),
shows that the period of oscillation is independent of the amplitude, though in practice the amplitude should be small. The above equation is also valid in the case when an additional constant force is being applied on the mass, i.e. the additional constant force cannot change the period of oscillation.
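A short numerical sketch of the two claims above, with assumed values for m, k, the starting amplitude, and an optional constant force F0: the period measured from a direct integration of m x'' = −k x + F0 closely matches 2π√(m/k) regardless of amplitude or F0.

```python
import math

def measured_period(m, k, amplitude, F0=0.0, dt=1e-4):
    """Velocity-Verlet integration of m*x'' = -k*x + F0, released from rest a given
    distance from the (shifted) equilibrium; returns the time between two successive
    downward crossings of that equilibrium."""
    x_eq = F0 / k                                # a constant force only shifts the equilibrium
    x, v = x_eq + amplitude, 0.0
    a = (-k * x + F0) / m
    t, first_cross = 0.0, None
    while True:
        x_new = x + v * dt + 0.5 * a * dt**2
        a_new = (-k * x_new + F0) / m
        v += 0.5 * (a + a_new) * dt
        t += dt
        if x - x_eq > 0.0 >= x_new - x_eq:       # downward crossing of the equilibrium
            if first_cross is None:
                first_cross = t
            else:
                return t - first_cross
        x, a = x_new, a_new

m, k = 2.0, 50.0                                 # assumed sample values
print("formula T =", round(2 * math.pi * math.sqrt(m / k), 4))
for amp, F0 in [(0.01, 0.0), (0.5, 0.0), (0.1, 30.0)]:
    print(f"amplitude {amp} m, F0 {F0} N:", round(measured_period(m, k, amp, F0), 4))
```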
Uniform circular motion
Simple harmonic motion can be considered the one-dimensional projection of uniform circular motion. If an object moves with angular speed ω around a circle of radius r centered at the origin of the xy-plane, then its motion along each coordinate is simple harmonic motion with amplitude r and angular frequency ω.
Oscillatory motion
The motion of a body in which it moves to and fro about a definite point is also called oscillatory motion or vibratory motion. The time period can be calculated by
T = 2π√(l/g),
where l is the distance from the axis of rotation to the center of mass of the object undergoing SHM and g is the gravitational acceleration. This is analogous to the mass-spring system.
Mass of a simple pendulum
In the small-angle approximation, the motion of a simple pendulum is approximated by simple harmonic motion. The period of a mass attached to a pendulum of length l with gravitational acceleration g is given by
T = 2π√(l/g).
This shows that the period of oscillation is independent of the amplitude and mass of the pendulum but not of the acceleration due to gravity, g; therefore a pendulum of the same length on the Moon would swing more slowly due to the Moon's lower gravitational field strength. Because the value of g varies slightly over the surface of the Earth, the time period will vary slightly from place to place and will also vary with height above sea level.
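A quick comparison using the formula above with commonly quoted values g ≈ 9.81 m/s² for the Earth and g ≈ 1.62 m/s² for the Moon (both assumed here for illustration) for the same 1 m pendulum:

```python
import math

l = 1.0                                   # pendulum length, m
for body, g in (("Earth", 9.81), ("Moon", 1.62)):
    T = 2 * math.pi * math.sqrt(l / g)
    print(f"{body}: T = {T:.2f} s")
# Earth: ~2.01 s, Moon: ~4.94 s -- the weaker field gives the slower swing.
```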
This approximation is accurate only for small angles because of the expression for angular acceleration α being proportional to the sine of the displacement angle θ:
−mgl sin θ = Iα,
where I is the moment of inertia. When θ is small, sin θ ≈ θ and therefore the expression becomes
−mglθ = Iα,
which makes angular acceleration directly proportional and opposite to θ, satisfying the definition of simple harmonic motion (that net force is directly proportional to the displacement from the mean position and is directed towards the mean position).
Scotch yoke
A Scotch yoke mechanism can be used to convert between rotational motion and linear reciprocating motion. The linear motion can take various forms depending on the shape of the slot, but the basic yoke with a constant rotation speed produces a linear motion that is simple harmonic in form.
See also
Notes
References
External links
Simple Harmonic Motion from HyperPhysics
Java simulation of spring-mass oscillator
Geogebra applet for spring-mass, with 3 attached PDFs on SHM, driven/damped oscillators, spring-mass with friction
Classical mechanics
Pendulums
Motion (physics)
Magnetohydrodynamics
In physics and engineering, magnetohydrodynamics (MHD; also called magneto-fluid dynamics or hydromagnetics) is a model of electrically conducting fluids that treats all interpenetrating particle species together as a single continuous medium. It is primarily concerned with the low-frequency, large-scale, magnetic behavior in plasmas and liquid metals and has applications in multiple fields including space physics, geophysics, astrophysics, and engineering.
The word magnetohydrodynamics is derived from magneto- meaning magnetic field, hydro- meaning water, and dynamics meaning movement. The field of MHD was initiated by Hannes Alfvén, for which he received the Nobel Prize in Physics in 1970.
History
The MHD description of electrically conducting fluids was first developed by Hannes Alfvén in a 1942 paper published in Nature titled "Existence of Electromagnetic–Hydrodynamic Waves" which outlined his discovery of what are now referred to as Alfvén waves. Alfvén initially referred to these waves as "electromagnetic–hydrodynamic waves"; however, in a later paper he noted, "As the term 'electromagnetic–hydrodynamic waves' is somewhat complicated, it may be convenient to call this phenomenon 'magneto–hydrodynamic' waves."
Equations
In MHD, motion in the fluid is described using linear combinations of the mean motions of the individual species: the current density J and the center of mass velocity v. In a given fluid, each species σ has a number density n_σ, mass m_σ, electric charge q_σ, and a mean velocity u_σ. The fluid's total mass density is then ρ = Σ_σ m_σ n_σ, and the motion of the fluid can be described by the current density expressed as
J = Σ_σ n_σ q_σ u_σ
and the center of mass velocity expressed as:
v = (1/ρ) Σ_σ m_σ n_σ u_σ.
MHD can be described by a set of equations consisting of a continuity equation, an equation of motion, an equation of state, Ampère's law, Faraday's law, and Ohm's law. As with any fluid description of a kinetic system, a closure approximation must be applied to the highest moment of the particle distribution equation. This is often accomplished with approximations to the heat flux through a condition of adiabaticity or isothermality.
In the adiabatic limit, that is, the assumption of an isotropic pressure p and isotropic temperature, a fluid with an adiabatic index γ, electrical resistivity η, magnetic field B, and electric field E can be described by the continuity equation
∂ρ/∂t + ∇·(ρv) = 0,
the equation of state
d/dt (p/ρ^γ) = 0,
the equation of motion
ρ dv/dt = J × B − ∇p,
the low-frequency Ampère's law
μ₀J = ∇ × B,
Faraday's law
∂B/∂t = −∇ × E,
and Ohm's law
E + v × B = ηJ.
Taking the curl of this equation and using Ampère's law and Faraday's law results in the induction equation,
∂B/∂t = ∇ × (v × B) + (η/μ₀)∇²B,
where η/μ₀ is the magnetic diffusivity.
In the equation of motion, the Lorentz force term J × B can be expanded using Ampère's law and a vector calculus identity to give
J × B = (B·∇)B/μ₀ − ∇(B²/2μ₀),
where the first term on the right hand side is the magnetic tension force and the second term is the magnetic pressure force.
Ideal MHD
The simplest form of MHD, ideal MHD, assumes that the resistive term in Ohm's law is small relative to the other terms such that it can be taken to be equal to zero. This occurs in the limit of large magnetic Reynolds numbers during which magnetic induction dominates over magnetic diffusion at the velocity and length scales under consideration. Consequently, processes in ideal MHD that convert magnetic energy into kinetic energy, referred to as ideal processes, cannot generate heat and raise entropy.
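An order-of-magnitude sketch of this criterion: the magnetic Reynolds number can be estimated as Rm = UL/(η/μ₀) for a flow speed U, length scale L, and resistivity η. The parameter values below are assumed, illustrative figures rather than data from the article; they are meant only to show how laboratory and astrophysical scales differ.

```python
import math

mu0 = 4e-7 * math.pi                       # vacuum permeability, H/m

def magnetic_reynolds(U, L, eta):
    """U: flow speed (m/s), L: length scale (m), eta: resistivity (Ohm m)."""
    eta_m = eta / mu0                      # magnetic diffusivity, m^2/s
    return U * L / eta_m

# (speed, length, resistivity) -- assumed, order-of-magnitude inputs
cases = {
    "small liquid-sodium experiment": (1.0, 0.1, 1e-7),
    "solar convection zone":          (1e3, 1e8, 1e-6),
}
for name, (U, L, eta) in cases.items():
    print(f"{name}: Rm ~ {magnetic_reynolds(U, L, eta):.1e}")
```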
A fundamental concept underlying ideal MHD is the frozen-in flux theorem which states that the bulk fluid and embedded magnetic field are constrained to move together such that one can be said to be "tied" or "frozen" to the other. Therefore, any two points that move with the bulk fluid velocity and lie on the same magnetic field line will continue to lie on the same field line even as the points are advected by fluid flows in the system. The connection between the fluid and magnetic field fixes the topology of the magnetic field in the fluid—for example, if a set of magnetic field lines are tied into a knot, then they will remain so as long as the fluid has negligible resistivity. This difficulty in reconnecting magnetic field lines makes it possible to store energy by moving the fluid or the source of the magnetic field. The energy can then become available if the conditions for ideal MHD break down, allowing magnetic reconnection that releases the stored energy from the magnetic field.
Ideal MHD equations
In ideal MHD, the resistive term vanishes in Ohm's law giving the ideal Ohm's law,
E + v × B = 0.
Similarly, the magnetic diffusion term in the induction equation vanishes giving the ideal induction equation,
∂B/∂t = ∇ × (v × B).
Applicability of ideal MHD to plasmas
Ideal MHD is only strictly applicable when:
The plasma is strongly collisional, so that the time scale of collisions is shorter than the other characteristic times in the system, and the particle distributions are therefore close to Maxwellian.
The resistivity due to these collisions is small. In particular, the typical magnetic diffusion times over any scale length present in the system must be longer than any time scale of interest.
Interest in length scales much longer than the ion skin depth and Larmor radius perpendicular to the field, long enough along the field to ignore Landau damping, and time scales much longer than the ion gyration time (system is smooth and slowly evolving).
Importance of resistivity
In an imperfectly conducting fluid the magnetic field can generally move through the fluid following a diffusion law with the resistivity of the plasma serving as a diffusion constant. This means that solutions to the ideal MHD equations are only applicable for a limited time for a region of a given size before diffusion becomes too important to ignore. One can estimate the diffusion time across a solar active region (from collisional resistivity) to be hundreds to thousands of years, much longer than the actual lifetime of a sunspot—so it would seem reasonable to ignore the resistivity. By contrast, a meter-sized volume of seawater has a magnetic diffusion time measured in milliseconds.
Even in physical systems—which are large and conductive enough that simple estimates of the Lundquist number suggest that the resistivity can be ignored—resistivity may still be important: many instabilities exist that can increase the effective resistivity of the plasma by factors of more than 109. The enhanced resistivity is usually the result of the formation of small scale structure like current sheets or fine scale magnetic turbulence, introducing small spatial scales into the system over which ideal MHD is broken and magnetic diffusion can occur quickly. When this happens, magnetic reconnection may occur in the plasma to release stored magnetic energy as waves, bulk mechanical acceleration of material, particle acceleration, and heat.
Magnetic reconnection in highly conductive systems is important because it concentrates energy in time and space, so that gentle forces applied to a plasma for long periods of time can cause violent explosions and bursts of radiation.
When the fluid cannot be considered as completely conductive, but the other conditions for ideal MHD are satisfied, it is possible to use an extended model called resistive MHD. This includes an extra term in Ohm's Law which models the collisional resistivity. Generally MHD computer simulations are at least somewhat resistive because their computational grid introduces a numerical resistivity.
Structures in MHD systems
In many MHD systems most of the electric current is compressed into thin nearly-two-dimensional ribbons termed current sheets. These can divide the fluid into magnetic domains, inside of which the currents are relatively weak. Current sheets in
the solar corona are thought to be between a few meters and a few kilometers in thickness, which is quite thin compared to the magnetic domains (which are thousands to hundreds of thousands of kilometers across). Another example is in the Earth's magnetosphere, where current sheets separate topologically distinct domains, isolating most of the Earth's ionosphere from the solar wind.
Waves
The wave modes derived using the MHD equations are called magnetohydrodynamic waves or MHD waves. There are three MHD wave modes that can be derived from the linearized ideal-MHD equations for a fluid with a uniform and constant magnetic field:
Alfvén waves
Slow magnetosonic waves
Fast magnetosonic waves
These modes have phase velocities that are independent of the magnitude of the wavevector, so they experience no dispersion. The phase velocity depends on the angle between the wave vector k and the magnetic field B. An MHD wave propagating at an arbitrary angle θ with respect to the time independent or bulk field B₀ will satisfy the dispersion relation
ω/k = v_A cos θ,
where
v_A = B₀/√(μ₀ρ)
is the Alfvén speed. This branch corresponds to the shear Alfvén mode. Additionally the dispersion equation gives
ω/k = [ ½(v_A² + c_s²) ± ½√((v_A² + c_s²)² − 4v_A²c_s² cos²θ) ]^(1/2),
where
c_s = √(γp/ρ)
is the ideal gas speed of sound. The plus branch corresponds to the fast-MHD wave mode and the minus branch corresponds to the slow-MHD wave mode. A summary of the properties of these waves is provided below.
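The sketch below evaluates these three phase speeds for an assumed set of plasma parameters (field strength, density, pressure, and adiabatic index chosen purely for illustration) at several propagation angles.

```python
import math

mu0 = 4e-7 * math.pi
B0, rho, p, gamma = 1e-3, 1e-12, 1e-2, 5.0 / 3.0   # assumed: T, kg/m^3, Pa, adiabatic index

vA = B0 / math.sqrt(mu0 * rho)                # Alfven speed
cs = math.sqrt(gamma * p / rho)               # ideal gas sound speed

for deg in (0, 30, 60, 90):
    th = math.radians(deg)
    alfven = vA * math.cos(th)
    disc = math.sqrt((vA**2 + cs**2) ** 2 - 4 * vA**2 * cs**2 * math.cos(th) ** 2)
    fast = math.sqrt(0.5 * (vA**2 + cs**2 + disc))
    slow = math.sqrt(max(0.0, 0.5 * (vA**2 + cs**2 - disc)))   # guard against round-off at 90 deg
    print(f"theta = {deg:2d} deg: Alfven = {alfven:.3e}, fast = {fast:.3e}, slow = {slow:.3e} m/s")
```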
The MHD oscillations will be damped if the fluid is not perfectly conducting but has a finite conductivity, or if viscous effects are present.
MHD waves and oscillations are a popular tool for the remote diagnostics of laboratory and astrophysical plasmas, for example, the corona of the Sun (Coronal seismology).
Extensions
Resistive
Resistive MHD describes magnetized fluids with finite electron diffusivity. This diffusivity leads to a breaking in the magnetic topology; magnetic field lines can 'reconnect' when they collide. Usually this term is small and reconnections can be handled by thinking of them as not dissimilar to shocks; this process has been shown to be important in the Earth-Solar magnetic interactions.
Extended
Extended MHD describes a class of phenomena in plasmas that are higher order than resistive MHD, but which can adequately be treated with a single fluid description. These include the effects of Hall physics, electron pressure gradients, finite Larmor Radii in the particle gyromotion, and electron inertia.
Two-fluid
Two-fluid MHD describes plasmas that include a non-negligible Hall electric field. As a result, the electron and ion momenta must be treated separately. This description is more closely tied to Maxwell's equations as an evolution equation for the electric field exists.
Hall
In 1960, M. J. Lighthill criticized the applicability of ideal or resistive MHD theory for plasmas. It concerned the neglect of the "Hall current term" in Ohm's law, a frequent simplification made in magnetic fusion theory. Hall-magnetohydrodynamics (HMHD) takes into account this electric field description of magnetohydrodynamics, and Ohm's law takes the form
E + v × B = ηJ + (J × B)/(n_e e),
where n_e is the electron number density and e is the elementary charge. The most important difference is that in the absence of field line breaking, the magnetic field is tied to the electrons and not to the bulk fluid.
Electron MHD
Electron Magnetohydrodynamics (EMHD) describes small-scale plasmas in which the electron motion is much faster than that of the ions. The main effects are changes in conservation laws, additional resistivity, and the importance of electron inertia. Many effects of Electron MHD are similar to effects of the Two fluid MHD and the Hall MHD. EMHD is especially important for z-pinch, magnetic reconnection, ion thrusters, neutron stars, and plasma switches.
Collisionless
MHD is also often used for collisionless plasmas. In that case the MHD equations are derived from the Vlasov equation.
Reduced
By using a multiscale analysis the (resistive) MHD equations can be reduced to a set of four closed scalar equations. This allows for, amongst other things, more efficient numerical calculations.
Limitations
Importance of kinetic effects
Another limitation of MHD (and fluid theories in general) is that they depend on the assumption that the plasma is strongly collisional (this is the first criterion listed above), so that the time scale of collisions is shorter than the other characteristic times in the system, and the particle distributions are Maxwellian. This is usually not the case in fusion, space and astrophysical plasmas. When this is not the case, or the interest is in smaller spatial scales, it may be necessary to use a kinetic model which properly accounts for the non-Maxwellian shape of the distribution function. However, because MHD is relatively simple and captures many of the important properties of plasma dynamics it is often qualitatively accurate and is therefore often the first model tried.
Effects which are essentially kinetic and not captured by fluid models include double layers, Landau damping, a wide range of instabilities, chemical separation in space plasmas and electron runaway. In the case of ultra-high intensity laser interactions, the incredibly short timescales of energy deposition mean that hydrodynamic codes fail to capture the essential physics.
Applications
Geophysics
Beneath the Earth's mantle lies the core, which is made up of two parts: the solid inner core and liquid outer core. Both have significant quantities of iron. The liquid outer core moves in the presence of the magnetic field, and eddies are set up in it by the Coriolis effect. These eddies develop a magnetic field which boosts Earth's original magnetic field—a process which is self-sustaining and is called the geomagnetic dynamo.
Based on the MHD equations, Glatzmaier and Paul Roberts have made a supercomputer model of the Earth's interior. After running the simulations for thousands of years in virtual time, the changes in Earth's magnetic field can be studied. The simulation results are in good agreement with the observations as the simulations have correctly predicted that the Earth's magnetic field flips every few hundred thousand years. During the flips, the magnetic field does not vanish altogether—it just gets more complex.
Earthquakes
Some monitoring stations have reported that earthquakes are sometimes preceded by a spike in ultra low frequency (ULF) activity. A remarkable example of this occurred before the 1989 Loma Prieta earthquake in California, although a subsequent study indicates that this was little more than a sensor malfunction. On December 9, 2010, geoscientists announced that the DEMETER satellite observed a dramatic increase in ULF radio waves over Haiti in the month before the magnitude 7.0 Mw 2010 earthquake. Researchers are attempting to learn more about this correlation to find out whether this method can be used as part of an early warning system for earthquakes.
Space Physics
The study of space plasmas near Earth and throughout the Solar System is known as space physics. Areas researched within space physics encompass a large number of topics, ranging from the ionosphere to auroras, Earth's magnetosphere, the Solar wind, and coronal mass ejections.
MHD forms the framework for understanding how populations of plasma interact within the local geospace environment. Researchers have developed global models using MHD to simulate phenomena within Earth's magnetosphere, such as the location of Earth's magnetopause (the boundary between the Earth's magnetic field and the solar wind), the formation of the ring current, auroral electrojets, and geomagnetically induced currents.
One prominent use of global MHD models is in space weather forecasting. Intense solar storms have the potential to cause extensive damage to satellites and infrastructure, thus it is crucial that such events are detected early. The Space Weather Prediction Center (SWPC) runs MHD models to predict the arrival and impacts of space weather events at Earth.
Astrophysics
MHD applies to astrophysics, including stars, the interplanetary medium (space between the planets), and possibly within the interstellar medium (space between the stars) and jets. Most astrophysical systems are not in local thermal equilibrium, and therefore require an additional kinematic treatment to describe all the phenomena within the system (see Astrophysical plasma).
Sunspots are caused by the Sun's magnetic fields, as Joseph Larmor theorized in 1919. The solar wind is also governed by MHD. The differential solar rotation may be the long-term effect of magnetic drag at the poles of the Sun, an MHD phenomenon due to the Parker spiral shape assumed by the extended magnetic field of the Sun.
Previously, theories describing the formation of the Sun and planets could not explain how the Sun has 99.87% of the mass, yet only 0.54% of the angular momentum in the Solar System. In a closed system such as the cloud of gas and dust from which the Sun was formed, mass and angular momentum are both conserved. That conservation would imply that as the mass concentrated in the center of the cloud to form the Sun, it would spin faster, much like a skater pulling their arms in. The high speed of rotation predicted by early theories would have flung the proto-Sun apart before it could have formed. However, magnetohydrodynamic effects transfer the Sun's angular momentum into the outer solar system, slowing its rotation.
Breakdown of ideal MHD (in the form of magnetic reconnection) is known to be the likely cause of solar flares. The magnetic field in a solar active region over a sunspot can store energy that is released suddenly as a burst of motion, X-rays, and radiation when the main current sheet collapses, reconnecting the field.
Magnetic confinement fusion
MHD describes a wide range of physical phenomena occurring in fusion plasmas in devices such as tokamaks or stellarators.
The Grad-Shafranov equation derived from ideal MHD describes the equilibrium of axisymmetric toroidal plasma in a tokamak. In tokamak experiments, the equilibrium during each discharge is routinely calculated and reconstructed, which provides information on the shape and position of the plasma controlled by currents in external coils.
MHD stability theory is known to govern the operational limits of tokamaks. For example, the ideal MHD kink modes provide hard limits on the achievable plasma beta (Troyon limit) and plasma current (set by the requirement of the safety factor).
Sensors
Magnetohydrodynamic sensors are used for precision measurements of angular velocities in inertial navigation systems such as in aerospace engineering. Accuracy improves with the size of the sensor. The sensor is capable of surviving in harsh environments.
Engineering
MHD is related to engineering problems such as plasma confinement, liquid-metal cooling of nuclear reactors, and electromagnetic casting (among others).
A magnetohydrodynamic drive or MHD propulsor is a method for propelling seagoing vessels using only electric and magnetic fields with no moving parts, using magnetohydrodynamics. The working principle involves electrification of the propellant (gas or water) which can then be directed by a magnetic field, pushing the vehicle in the opposite direction. Although some working prototypes exist, MHD drives remain impractical.
The first prototype of this kind of propulsion was built and tested in 1965 by Steward Way, a professor of mechanical engineering at the University of California, Santa Barbara. Way, on leave from his job at Westinghouse Electric, assigned his senior-year undergraduate students to develop a submarine with this new propulsion system. In the early 1990s, a foundation in Japan (Ship & Ocean Foundation (Minato-ku, Tokyo)) built an experimental boat, the Yamato-1, which used a magnetohydrodynamic drive incorporating a superconductor cooled by liquid helium, and could travel at 15 km/h.
MHD power generation fueled by potassium-seeded coal combustion gas showed potential for more efficient energy conversion (the absence of solid moving parts allows operation at higher temperatures), but failed due to cost-prohibitive technical difficulties. One major engineering problem was the failure of the wall of the primary-coal combustion chamber due to abrasion.
In microfluidics, MHD is studied as a fluid pump for producing a continuous, nonpulsating flow in a complex microchannel design.
MHD can be implemented in the continuous casting process of metals to suppress instabilities and control the flow.
Industrial MHD problems can be modeled using the open-source software EOF-Library. Two simulation examples are 3D MHD with a free surface for electromagnetic levitation melting, and liquid metal stirring by rotating permanent magnets.
Magnetic drug targeting
An important task in cancer research is developing more precise methods for delivery of medicine to affected areas. One method involves the binding of medicine to biologically compatible magnetic particles (such as ferrofluids), which are guided to the target via careful placement of permanent magnets on the external body. Magnetohydrodynamic equations and finite element analysis are used to study the interaction between the magnetic fluid particles in the bloodstream and the external magnetic field.
See also
Computational magnetohydrodynamics
Electrohydrodynamics
Electromagnetic pump
Ferrofluid
Lorentz force velocity meter
Magnetic flow meter
Magnetohydrodynamic generator
Magnetohydrodynamic turbulence
Molten salt
Plasma stability
Shocks and discontinuities (magnetohydrodynamics)
List of textbooks in electromagnetism
Further reading
References
Plasma theory and modeling
Modeling and simulation
Modeling and simulation (M&S) is the use of models (e.g., physical, mathematical, behavioral, or logical representation of a system, entity, phenomenon, or process) as a basis for simulations to develop data utilized for managerial or technical decision making.
In the computer application of modeling and simulation a computer is used to build a mathematical model which contains key parameters of the physical model. The mathematical model represents the physical model in virtual form, and conditions are applied that set up the experiment of interest. The simulation starts – i.e., the computer calculates the results of those conditions on the mathematical model – and outputs results in a format that is either machine- or human-readable, depending upon the implementation.
The use of M&S within engineering is well recognized. Simulation technology belongs to the tool set of engineers of all application domains and has been included in the body of knowledge of engineering management. M&S helps to reduce costs, increase the quality of products and systems, and document and archive lessons learned. Because the results of a simulation are only as good as the underlying model(s), engineers, operators, and analysts must pay particular attention to its construction. To ensure that the results of the simulation are applicable to the real world, the user must understand the assumptions, conceptualizations, and constraints of its implementation. Additionally, models may be updated and improved using results of actual experiments. M&S is a discipline on its own. Its many application domains often lead to the assumption that M&S is a pure application. This is not the case and needs to be recognized by engineering management in the application of M&S.
The use of such mathematical models and simulations avoids actual experimentation, which can be costly and time-consuming. Instead, mathematical knowledge and computational power is used to solve real-world problems cheaply and in a time efficient manner. As such, M&S can facilitate understanding a system's behavior without actually testing the system in the real world. For example, to determine which type of spoiler would improve traction the most while designing a race car, a computer simulation of the car could be used to estimate the effect of different spoiler shapes on the coefficient of friction in a turn. Useful insights about different decisions in the design could be gleaned without actually building the car. In addition, simulation can support experimentation that occurs totally in software, or in human-in-the-loop environments where simulation represents systems or generates data needed to meet experiment objectives. Furthermore, simulation can be used to train persons using a virtual environment that would otherwise be difficult or expensive to produce.
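A deliberately simplified Python sketch of this workflow: a made-up surrogate model stands in for the physics of the car, and the "experiment" is a parameter sweep over candidate spoiler angles run entirely in software. The model and every number in it are illustrative assumptions, not real aerodynamics or values from the text.

```python
def cornering_grip(spoiler_angle_deg):
    """Toy stand-in model: downforce improves grip up to a point, then drag losses dominate."""
    downforce_gain = 0.004 * spoiler_angle_deg
    drag_penalty = 0.0001 * spoiler_angle_deg ** 2
    return 1.0 + downforce_gain - drag_penalty        # relative grip in a turn

candidates = range(0, 41, 5)                          # spoiler angles to try, degrees
results = {angle: cornering_grip(angle) for angle in candidates}
for angle, grip in results.items():
    print(f"{angle:2d} deg -> relative grip {grip:.3f}")
print("best candidate under this model:", max(results, key=results.get), "deg")
```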
Interest in simulations
Technically, simulation is well accepted. The 2006 National Science Foundation (NSF) Report on "Simulation-based Engineering Science" showed the potential of using simulation technology and methods to revolutionize the engineering science. Among the reasons for the steadily increasing interest in simulation applications are the following:
Using simulations is generally cheaper, safer and sometimes more ethical than conducting real-world experiments. For example, supercomputers are sometimes used to simulate the detonation of nuclear devices and their effects in order to support better preparedness in the event of a nuclear explosion. Similar efforts are conducted to simulate hurricanes and other natural catastrophes.
Simulations can often be even more realistic than traditional experiments, as they allow the free configuration of the realistic range of environment parameters found in the operational application field of the final product. Examples are supporting deep water operation of the US Navy or the simulating the surface of neighbored planets in preparation of NASA missions.
Simulations can often be conducted faster than real time. This allows using them for efficient if-then-else analyses of different alternatives, in particular when the necessary data to initialize the simulation can easily be obtained from operational data. This use of simulation adds decision support simulation systems to the tool box of traditional decision support systems.
Simulations allow setting up a coherent synthetic environment that allows for integration of simulated systems in the early analysis phase via mixed virtual systems with first prototypical components to a virtual test environment for the final system. If managed correctly, the environment can be migrated from the development and test domain to the training and education domain in follow-on life cycle phases for the systems (including the option to train and optimize a virtual twin of the real system under realistic constraints even before first components are being built).
The military and defense domain, in particular within the United States, has been the main M&S champion, in form of funding as well as application of M&S. E.g., M&S in modern military organizations is part of the acquisition/procurement strategy. Specifically, M&S is used to conduct Events and Experiments that influence requirements and training for military systems. As such, M&S is considered an integral part of systems engineering of military systems. Other application domains, however, are currently catching up. M&S in the fields of medicine, transportation, and other industries is poised to rapidly outstrip DoD's use of M&S in the years ahead, if it hasn't already happened.
Simulation in science
Modeling and simulation are important in research. Representing the real systems either via physical reproductions at smaller scale, or via mathematical models that allow representing the dynamics of the system via simulation, allows exploring system behavior in an articulated way which is often either not possible, or too risky in the real world.
As an emerging discipline
"The emerging discipline of M&S is based on developments in diverse computer science areas as well as influenced by developments in Systems Theory, Systems Engineering, Software Engineering, Artificial Intelligence, and more. This foundation is as diverse as that of engineering management and brings elements of art, engineering, and science together in a complex and unique way that requires domain experts to enable appropriate decisions when it comes to application or development of M&S technology in the context of this paper. The diversity and application-oriented nature of this new discipline sometimes result in the challenge, that the supported application domains themselves already have vocabularies in place that are not necessarily aligned between disjunctive domains. A comprehensive and concise representation of concepts, terms, and activities is needed that make up a professional Body of Knowledge for the M&S discipline. Due to the broad variety of contributors, this process is still ongoing."
Padilla et al. recommend in "Do we Need M&S Science" to distinguish between M&S Science, Engineering, and Applications.
M&S Science contributes to the Theory of M&S, defining the academic foundations of the discipline.
M&S Engineering is rooted in Theory but looks for applicable solution patterns. The focus is general methods that can be applied in various problem domains.
M&S Applications solve real world problems by focusing on solutions using M&S. Often, the solution results from applying a method, but many solutions are very problem domain specific and are derived from problem domain expertise and not from any general M&S theory or method.
Models can be composed of different units (models at finer granularity) linked to achieving a specific goal; for this reason they can be also called modeling solutions.
More generally, modeling and simulation is a key enabler for systems engineering activities, as the system representation in a computer readable (and possibly executable) model enables engineers to reproduce the system (or Systems of System) behavior. A collection of applicative modeling and simulation methods to support systems engineering activities is provided in the literature.
Application domains
There are many categorizations possible, but the following taxonomy has been very successfully used in the defense domain, and is currently applied to medical simulation and transportation simulation as well.
Analyses Support is conducted in support of planning and experimentation. Very often, the search for an optimal solution that shall be implemented is driving these efforts. What-if analyses of alternatives fall into this category as well. This style of work is often accomplished by simulysts - those having skills in both simulation and analysis. This blending of simulation and analysis is well noted in Kleijnen.
Systems Engineering Support is applied for the procurement, development, and testing of systems. This support can start in early phases and include topics like executable system architectures, and it can support testing by providing a virtual environment in which tests are conducted. This style of work is often accomplished by engineers and architects.
Training and Education Support provides simulators, virtual training environments, and serious games to train and educate people. This style of work is often accomplished by trainers working in concert with computer scientists.
A special use of Analyses Support is applied to ongoing business operations. Traditionally, decision support systems provide this functionality. Simulation systems improve their functionality by adding the dynamic element and allow to compute estimates and predictions, including optimization and what-if analyses.
Individual concepts
Although the terms "modeling" and "simulation" are often used as synonyms within disciplines applying M&S exclusively as a tool, within the discipline of M&S both are treated as individual and equally important concepts. Modeling is understood as the purposeful abstraction of reality, resulting in the formal specification of a conceptualization and underlying assumptions and constraints. M&S is in particular interested in models that are used to support the implementation of an executable version on a computer. The execution of a model over time is understood as the simulation. While modeling targets the conceptualization, simulation challenges mainly focus on implementation, in other words, modeling resides on the abstraction level, whereas simulation resides on the implementation level.
Conceptualization and implementation – modeling and simulation – are two activities that are mutually dependent, but can nonetheless be conducted by separate individuals. Management and engineering knowledge and guidelines are needed to ensure that they are well connected. Just as an engineering management professional in systems engineering needs to make sure that the systems design captured in a systems architecture is aligned with the systems development, this task needs to be conducted with the same level of professionalism for the model that has to be implemented as well. As the role of big data and analytics continues to grow, the role of combined simulation and analysis is the realm of yet another professional called a simulyst – someone who blends algorithmic and analytic techniques through visualizations available directly to decision makers. A study designed for the Bureau of Labor Statistics by Lee et al. provides an interesting look at how bootstrap techniques (statistical analysis) were used with simulation to generate population data where there existed none.
Academic programs
Modeling and Simulation has only recently become an academic discipline of its own. Formerly, those working in the field usually had a background in engineering.
The following institutions offer degrees in Modeling and Simulation:
Ph D. Programs
University of Pennsylvania (Philadelphia, PA)
Old Dominion University (Norfolk, VA)
University of Alabama in Huntsville (Huntsville, AL)
University of Central Florida (Orlando, FL)
Naval Postgraduate School (Monterey, CA)
University of Genoa (Genoa, Italy)
Masters Programs
National University of Science and Technology, Pakistan (Islamabad, Pakistan)
Arizona State University (Tempe, AZ)
Old Dominion University (Norfolk, VA)
University of Central Florida (Orlando, FL)
the University of Alabama in Huntsville (Huntsville, AL)
Middle East Technical University (Ankara, Turkey)
University of New South Wales (Australia)
Naval Postgraduate School (Monterey, CA)
Department of Scientific Computing, Modeling and Simulation (M.Tech (Modelling & Simulation)) (Savitribai Phule Pune University, India)
Columbus State University (Columbus, GA)
Purdue University Calumet (Hammond, IN)
Delft University of Technology (Delft, The Netherlands)
University of Genoa (Genoa, Italy)
Hamburg University of Applied Sciences (Hamburg, Germany)
Professional Science Masters Programs
University of Central Florida (Orlando, FL)
Graduate Certificate Programs
Portland State University Systems Science
Columbus State University (Columbus, GA)
the University of Alabama in Huntsville (Huntsville, AL)
Undergraduate Programs
Old Dominion University (Norfolk, VA)
Ghulam Ishaq Khan Institute of Engineering Sciences and Technology (Swabi, Pakistan)
Modeling and Simulation Body of Knowledge
The Modeling and Simulation Body of Knowledge (M&S BoK) is the domain of knowledge (information) and capability (competency) that identifies the modeling and simulation community of practice and the M&S profession, industry, and market.
The M&S BoK Index is a set of pointers providing handles so that subject information content can be denoted, identified, accessed, and manipulated.
Summary
Three activities have to be conducted and orchestrated to ensure success:
a model must be produced that captures formally the conceptualization,
a simulation must implement this model, and
management must ensure that model and simulation are interconnected and on the current state (which means that normally the model needs to be updated in case the simulation is changed as well).
See also
Computational science
Computational engineering
Defense Technical Information Center
Glossary of military modeling and simulation
Interservice/Industry Training, Simulation and Education Conference (I/ITSEC)
Microscale and macroscale models
Military Operations Research Society (MORS)
Military simulation
Modeling and Simulation Coordination Office
Operations research
Orbit modeling
Power system simulation
Rule-based modeling
Simulation Interoperability Standards Organization (SISO)
Society for Modeling and Simulation International (SCS)
References
Further reading
The Springer Publishing House publishes the Simulation Foundations, Methods, and Applications Series.
Recently, Wiley started their own Series on Modeling and Simulation.
External links
US Department of Defense (DoD) Modeling and Simulation Coordination Office (M&SCO)
MODSIM World Conference
Society for Modeling and Simulation
Association for Computing Machinery (ACM) Special Interest Group (SIG) on SImulation and Modeling (SIM)
US Congressional Modeling and Simulation Caucus
Example of an M&S BoK Index developed by Tuncer Ören
SimSummit collaborative environment supporting an M&S BoK
Military terminology
Astronautics
Astronautics (or cosmonautics) is the practice of sending spacecraft beyond Earth's atmosphere into outer space. Spaceflight is one of its main applications and space science is its overarching field.
The term astronautics (originally astronautique in French) was coined in the 1920s by J.-H. Rosny, president of the Goncourt academy, in analogy with aeronautics. Because there is a degree of technical overlap between the two fields, the term aerospace is often used to describe both at once. In 1930, Robert Esnault-Pelterie published the first book on the new research field.
The term cosmonautics (originally cosmonautique in French) was introduced in the 1930s by Ary Sternfeld with his book Initiation à la Cosmonautique (Introduction to cosmonautics) (the book brought him the Prix REP-Hirsch, later known as the Prix d'Astronautique, of the French Astronomical Society in 1934.)
As with aeronautics, the restrictions of mass, temperatures, and external forces require that applications in space survive extreme conditions: high-grade vacuum, the radiation bombardment of interplanetary space and the magnetic belts of low Earth orbit. Space launch vehicles must withstand titanic forces, while satellites can experience huge variations in temperature in very brief periods. Extreme constraints on mass cause astronautical engineers to face the constant need to save mass in the design in order to maximize the actual payload that reaches orbit.
History
The early history of astronautics is theoretical: the fundamental mathematics of space travel was established by Isaac Newton in his 1687 treatise Philosophiæ Naturalis Principia Mathematica. Other mathematicians, such as Swiss Leonhard Euler and Franco-Italian Joseph Louis Lagrange also made essential contributions in the 18th and 19th centuries. In spite of this, astronautics did not become a practical discipline until the mid-20th century. On the other hand, the question of spaceflight puzzled the literary imaginations of such figures as Jules Verne and H. G. Wells. At the beginning of the 20th century, Russian cosmist Konstantin Tsiolkovsky derived the rocket equation, the governing equation for a rocket-based propulsion, enabling computation of the final velocity of a rocket from the mass of spacecraft, combined mass of propellant and spacecraft and exhaust velocity of the propellant.
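A worked example of the rocket equation mentioned above, Δv = v_e ln(m0/mf), with assumed illustrative numbers for the exhaust velocity and masses:

```python
import math

v_e = 4400.0          # assumed effective exhaust velocity, m/s
m0 = 500_000.0        # assumed initial mass (spacecraft plus propellant), kg
mf = 120_000.0        # assumed final mass after the burn, kg

delta_v = v_e * math.log(m0 / mf)
print(f"delta-v = {delta_v:.0f} m/s")   # about 6,280 m/s for these assumed values
```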
By the early 1920s, Robert H. Goddard was developing liquid-propellant rockets, which would in a few brief decades become a critical component in the designs of such famous rockets as the V-2 and Saturn V.
The Prix d'Astronautique (Astronautics Prize) awarded by the Société astronomique de France, the French astronomical society, was the first prize on this subject. The international award, established by aviation and astronautical pioneer Robert Esnault-Pelterie and André-Louis Hirsch, was given from 1929 to 1939 in recognition of the study of interplanetary travel and astronautics.
By the mid-1950s, the Space Race between the USSR and the US had begun.
Subdisciplines
Although many regard astronautics itself as a rather specialized subject, engineers and scientists working in this area must be knowledgeable in many distinct fields.
Astrodynamics – the study of orbital motion. Those specializing in this field examine topics such as spacecraft trajectories, ballistics and celestial mechanics.
Spacecraft propulsion – how spacecraft change orbits, and how they are launched. Most spacecraft have some variety of rocket engine, and thus most research efforts focus on some variety of rocket propulsion, such as chemical, nuclear or electric.
Spacecraft design – a specialized form of systems engineering that centers on combining all the necessary subsystems for a particular launch vehicle or satellite.
Controls – keeping a satellite or rocket in its desired orbit (as in spacecraft navigation) and orientation (as in attitude control).
Space environment – although more a sub-discipline of physics rather than astronautics, the effects of space weather and other environmental issues constitute an increasingly important field of study for spacecraft designers.
Bioastronautics
Related fields of study
Aeronautics and aerospace
Mechanical engineering
Physics
See also
Atmospheric reentry
Spaceflight
Frank Malina
French space program
Hermann Oberth
Sergei Korolev
Wernher von Braun
References
Further reading
Linear elasticity
Linear elasticity is a mathematical model of how solid objects deform and become internally stressed by prescribed loading conditions. It is a simplification of the more general nonlinear theory of elasticity and a branch of continuum mechanics.
The fundamental "linearizing" assumptions of linear elasticity are: infinitesimal strains or "small" deformations (or strains) and linear relationships between the components of stress and strain. In addition linear elasticity is valid only for stress states that do not produce yielding.
These assumptions are reasonable for many engineering materials and engineering design scenarios. Linear elasticity is therefore used extensively in structural analysis and engineering design, often with the aid of finite element analysis.
Mathematical formulation
Equations governing a linear elastic boundary value problem are based on three tensor partial differential equations for the balance of linear momentum and six infinitesimal strain-displacement relations. The system of differential equations is completed by a set of linear algebraic constitutive relations.
Direct tensor form
In direct tensor form that is independent of the choice of coordinate system, these governing equations are:
Cauchy momentum equation, which is an expression of Newton's second law. In convective form it is written as:
∇·σ + F = ρü
Strain-displacement equations:
ε = ½[∇u + (∇u)ᵀ]
Constitutive equations. For elastic materials, Hooke's law represents the material behavior and relates the unknown stresses and strains. The general equation for Hooke's law is
σ = C : ε,
where σ is the Cauchy stress tensor, ε is the infinitesimal strain tensor, u is the displacement vector, C is the fourth-order stiffness tensor, F is the body force per unit volume, ρ is the mass density, ∇ represents the nabla operator, (•)ᵀ represents a transpose, ü represents the second material derivative of displacement with respect to time, and : is the inner product of two second-order tensors (summation over repeated indices is implied).
Cartesian coordinate form
Expressed in terms of components with respect to a rectangular Cartesian coordinate system, the governing equations of linear elasticity are:
Equation of motion:
σ_ji,j + F_i = ρ ∂²u_i/∂t²,
where the subscript ,j is a shorthand for ∂/∂x_j, σ_ij = σ_ji is the Cauchy stress tensor, F_i is the body force density, ρ is the mass density, and u_i is the displacement. These are 3 independent equations with 6 independent unknowns (stresses). In engineering notation, they are:
∂σ_x/∂x + ∂τ_yx/∂y + ∂τ_zx/∂z + F_x = ρ ∂²u_x/∂t²
∂τ_xy/∂x + ∂σ_y/∂y + ∂τ_zy/∂z + F_y = ρ ∂²u_y/∂t²
∂τ_xz/∂x + ∂τ_yz/∂y + ∂σ_z/∂z + F_z = ρ ∂²u_z/∂t²
Strain-displacement equations:
ε_ij = ½(u_i,j + u_j,i),
where ε_ij = ε_ji is the strain. These are 6 independent equations relating strains and displacements with 9 independent unknowns (strains and displacements). In engineering notation, they are:
ε_x = ∂u_x/∂x, ε_y = ∂u_y/∂y, ε_z = ∂u_z/∂z
γ_xy = ∂u_x/∂y + ∂u_y/∂x, γ_yz = ∂u_y/∂z + ∂u_z/∂y, γ_zx = ∂u_z/∂x + ∂u_x/∂z
Constitutive equations. The equation for Hooke's law is:
σ_ij = C_ijkl ε_kl,
where C_ijkl is the stiffness tensor. These are 6 independent equations relating stresses and strains. The requirement of the symmetry of the stress and strain tensors leads to equality of many of the elastic constants, reducing the number of different elements to 21.
An elastostatic boundary value problem for an isotropic-homogeneous media is a system of 15 independent equations and equal number of unknowns (3 equilibrium equations, 6 strain-displacement equations, and 6 constitutive equations). Specifying the boundary conditions, the boundary value problem is completely defined. To solve the system two approaches can be taken according to boundary conditions of the boundary value problem: a displacement formulation, and a stress formulation.
Cylindrical coordinate form
In cylindrical coordinates the equations of motion are
The strain-displacement relations are
and the constitutive relations are the same as in Cartesian coordinates, except that the indices x, y, z now stand for r, θ, z, respectively.
Spherical coordinate form
In spherical coordinates the equations of motion are
The strain tensor in spherical coordinates is
(An)isotropic (in)homogeneous media
In isotropic media, the stiffness tensor gives the relationship between the stresses (resulting internal stresses) and the strains (resulting deformations). For an isotropic medium, the stiffness tensor has no preferred direction: an applied force will give the same displacements (relative to the direction of the force) no matter the direction in which the force is applied. In the isotropic case, the stiffness tensor may be written: $C_{ijkl} = K\,\delta_{ij}\,\delta_{kl} + \mu\left(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} - \tfrac{2}{3}\delta_{ij}\delta_{kl}\right),$ where $\delta_{ij}$ is the Kronecker delta, K is the bulk modulus (or incompressibility), and $\mu$ is the shear modulus (or rigidity), two elastic moduli. If the medium is inhomogeneous, the isotropic model is sensible if either the medium is piecewise-constant or weakly inhomogeneous; in the strongly inhomogeneous smooth model, anisotropy has to be accounted for. If the medium is homogeneous, then the elastic moduli will be independent of the position in the medium. The constitutive equation may now be written as: $\sigma_{ij} = K\,\delta_{ij}\,\varepsilon_{kk} + 2\mu\left(\varepsilon_{ij} - \tfrac{1}{3}\delta_{ij}\varepsilon_{kk}\right).$
This expression separates the stress into a scalar part on the left which may be associated with a scalar pressure, and a traceless part on the right which may be associated with shear forces. A simpler expression is: $\sigma_{ij} = \lambda\,\delta_{ij}\,\varepsilon_{kk} + 2\mu\,\varepsilon_{ij},$
where $\lambda$ is Lamé's first parameter. Since the constitutive equation is simply a set of linear equations, the strain may be expressed as a function of the stresses as: $\varepsilon_{ij} = \frac{1}{9K}\,\delta_{ij}\,\sigma_{kk} + \frac{1}{2\mu}\left(\sigma_{ij} - \tfrac{1}{3}\delta_{ij}\sigma_{kk}\right),$
which is again a scalar part on the left and a traceless shear part on the right. More simply: $\varepsilon_{ij} = \frac{1}{E}\left[(1+\nu)\,\sigma_{ij} - \nu\,\delta_{ij}\,\sigma_{kk}\right],$
where $\nu$ is Poisson's ratio and $E$ is Young's modulus.
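As a concrete illustration of the isotropic form of Hooke's law above, the following sketch builds the stiffness tensor from assumed bulk and shear moduli (the steel-like numbers are placeholders, not values from the text) and checks the volumetric/deviatoric split numerically:

```python
import numpy as np

# Hypothetical, roughly steel-like material constants, used only for illustration.
K, mu = 160e9, 79e9          # bulk and shear moduli [Pa]
d = np.eye(3)                # Kronecker delta

# Isotropic stiffness tensor C_ijkl = K d_ij d_kl + mu (d_ik d_jl + d_il d_jk - 2/3 d_ij d_kl)
C = (K * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d)
             + np.einsum('il,jk->ijkl', d, d)
             - 2.0 / 3.0 * np.einsum('ij,kl->ijkl', d, d)))

# A small symmetric strain state, and the stress from Hooke's law sigma_ij = C_ijkl eps_kl
eps = np.array([[1e-4, 2e-5, 0.0],
                [2e-5, -5e-5, 0.0],
                [0.0,  0.0,  3e-5]])
sigma = np.einsum('ijkl,kl->ij', C, eps)

# Check the split sigma = K tr(eps) I + 2 mu dev(eps)
dev = eps - np.trace(eps) / 3.0 * d
assert np.allclose(sigma, K * np.trace(eps) * d + 2.0 * mu * dev)
print(sigma)
```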
Elastostatics
Elastostatics is the study of linear elasticity under the conditions of equilibrium, in which all forces on the elastic body sum to zero, and the displacements are not a function of time. The equilibrium equations are then $\sigma_{ji,j} + F_i = 0.$
In engineering notation (with tau as shear stress), they read $\frac{\partial \sigma_x}{\partial x} + \frac{\partial \tau_{yx}}{\partial y} + \frac{\partial \tau_{zx}}{\partial z} + F_x = 0,\qquad \frac{\partial \tau_{xy}}{\partial x} + \frac{\partial \sigma_y}{\partial y} + \frac{\partial \tau_{zy}}{\partial z} + F_y = 0,\qquad \frac{\partial \tau_{xz}}{\partial x} + \frac{\partial \tau_{yz}}{\partial y} + \frac{\partial \sigma_z}{\partial z} + F_z = 0.$
This section will discuss only the isotropic homogeneous case.
Displacement formulation
In this case, the displacements are prescribed everywhere in the boundary. In this approach, the strains and stresses are eliminated from the formulation, leaving the displacements as the unknowns to be solved for in the governing equations.
First, the strain-displacement equations are substituted into the constitutive equations (Hooke's law), eliminating the strains as unknowns: $\sigma_{ij} = \lambda\,\delta_{ij}\,u_{k,k} + \mu\left(u_{i,j} + u_{j,i}\right).$
Differentiating (assuming $\lambda$ and $\mu$ are spatially uniform) yields: $\sigma_{ij,j} = \lambda\,u_{k,ki} + \mu\left(u_{i,jj} + u_{j,ij}\right).$
Substituting into the equilibrium equation yields: $\lambda\,u_{k,ki} + \mu\left(u_{i,jj} + u_{j,ij}\right) + F_i = 0,$
or (replacing the double (dummy, i.e. summation) indices $k,k$ by $j,j$ and interchanging the indices $ij$ to $ji$ after the comma, by virtue of Schwarz's theorem) $(\lambda + \mu)\,u_{j,ji} + \mu\,u_{i,jj} + F_i = 0,$
where $\lambda$ and $\mu$ are Lamé parameters.
In this way, the only unknowns left are the displacements, hence the name for this formulation. The governing equations obtained in this manner are called the elastostatic equations, the special case of the steady Navier–Cauchy equations given below.
Once the displacement field has been calculated, the displacements can be replaced into the strain-displacement equations to solve for strains, which later are used in the constitutive equations to solve for stresses.
The biharmonic equation
The elastostatic equation may be written:
Taking the divergence of both sides of the elastostatic equation and assuming the body force has zero divergence (homogeneous in domain), we have
Noting that summed indices need not match, and that the partial derivatives commute, the two differential terms are seen to be the same and we have: from which we conclude that:
Taking the Laplacian of both sides of the elastostatic equation, and assuming in addition , we have
From the divergence equation, the first term on the left is zero (Note: again, the summed indices need not match) and we have:
from which we conclude that:
or, in coordinate-free notation, $\nabla^4 \mathbf{u} = 0,$ which is just the biharmonic equation in $\mathbf{u}$.
Stress formulation
In this case, the surface tractions are prescribed everywhere on the surface boundary. In this approach, the strains and displacements are eliminated leaving the stresses as the unknowns to be solved for in the governing equations. Once the stress field is found, the strains are then found using the constitutive equations.
There are six independent components of the stress tensor which need to be determined, yet in the displacement formulation, there are only three components of the displacement vector which need to be determined. This means that there are some constraints which must be placed upon the stress tensor, to reduce the number of degrees of freedom to three. Using the constitutive equations, these constraints are derived directly from corresponding constraints which must hold for the strain tensor, which also has six independent components. The constraints on the strain tensor are derivable directly from the definition of the strain tensor as a function of the displacement vector field, which means that these constraints introduce no new concepts or information. It is the constraints on the strain tensor that are most easily understood. If the elastic medium is visualized as a set of infinitesimal cubes in the unstrained state, then after the medium is strained, an arbitrary strain tensor must yield a situation in which the distorted cubes still fit together without overlapping. In other words, for a given strain, there must exist a continuous vector field (the displacement) from which that strain tensor can be derived. The constraints on the strain tensor that are required to assure that this is the case were discovered by Saint Venant, and are called the "Saint Venant compatibility equations". These are 81 equations, 6 of which are independent non-trivial equations, which relate the different strain components. These are expressed in index notation as:
In engineering notation, they are:
The strains in this equation are then expressed in terms of the stresses using the constitutive equations, which yields the corresponding constraints on the stress tensor. These constraints on the stress tensor are known as the Beltrami-Michell equations of compatibility:
In the special situation where the body force is homogeneous, the above equations reduce to
A necessary, but insufficient, condition for compatibility under this situation is $\nabla^4\boldsymbol{\sigma} = \mathbf{0}$, or $\sigma_{ij,kk\ell\ell} = 0$.
These constraints, along with the equilibrium equation (or equation of motion for elastodynamics) allow the calculation of the stress tensor field. Once the stress field has been calculated from these equations, the strains can be obtained from the constitutive equations, and the displacement field from the strain-displacement equations.
An alternative solution technique is to express the stress tensor in terms of stress functions which automatically yield a solution to the equilibrium equation. The stress functions then obey a single differential equation which corresponds to the compatibility equations.
Solutions for elastostatic cases
Thomson's solution - point force in an infinite isotropic medium
The most important solution of the Navier–Cauchy or elastostatic equation is that of a force acting at a point in an infinite isotropic medium. This solution was found by William Thomson (later Lord Kelvin) in 1848 (Thomson 1848). This solution is the analog of Coulomb's law in electrostatics. A derivation is given in Landau & Lifshitz. Defining $a = 1 - 2\nu$ and $b = 2(1 - \nu) = a + 1$,
where $\nu$ is Poisson's ratio, the solution may be expressed as $u_i = G_{ik}(\mathbf{x})\,F_k,$ where $F_k$ is the force vector being applied at the point, and $G_{ik}(\mathbf{x})$ is a tensor Green's function which may be written in Cartesian coordinates as:
It may be also compactly written as:
and it may be explicitly written as:
In cylindrical coordinates it may be written as:
where is total distance to point.
It is particularly helpful to write the displacement in cylindrical coordinates for a point force directed along the z-axis. Defining and as unit vectors in the and directions respectively yields:
It can be seen that there is a component of the displacement in the direction of the force, which diminishes, as is the case for the potential in electrostatics, as 1/r for large r. There is also an additional ρ-directed component.
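For a numerical cross-check, the sketch below assumes the standard Kelvin closed form for the displacement field of a point force in an infinite isotropic medium, $u_i = \frac{1}{16\pi\mu(1-\nu)\,r}\left[(3-4\nu)F_i + \frac{(\mathbf{F}\cdot\mathbf{x})\,x_i}{r^2}\right]$, with illustrative material constants; it is a sketch of that standard result rather than a reproduction of the article's own notation:

```python
import numpy as np

def kelvin_displacement(x, F, mu, nu):
    """Displacement at position x due to a point force F at the origin of an
    infinite isotropic medium, using the standard Kelvin form (assumed here)."""
    x = np.asarray(x, dtype=float)
    F = np.asarray(F, dtype=float)
    r = np.linalg.norm(x)
    pref = 1.0 / (16.0 * np.pi * mu * (1.0 - nu) * r)
    return pref * ((3.0 - 4.0 * nu) * F + np.dot(F, x) * x / r**2)

# Illustrative values: a 1 kN force along z in a steel-like medium.
mu, nu = 79e9, 0.3
F = np.array([0.0, 0.0, 1e3])
print(kelvin_displacement([0.0, 0.0, 1.0], F, mu, nu))   # on the force axis
print(kelvin_displacement([1.0, 0.0, 0.0], F, mu, nu))   # perpendicular to it
```

Evaluating the field at increasing distances shows the 1/r decay discussed above.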
Boussinesq–Cerruti solution - point force at the origin of an infinite isotropic half-space
Another useful solution is that of a point force acting on the surface of an infinite half-space. It was derived by Boussinesq for the normal force and Cerruti for the tangential force and a derivation is given in Landau & Lifshitz. In this case, the solution is again written as a Green's tensor which goes to zero at infinity, and the component of the stress tensor normal to the surface vanishes. This solution may be written in Cartesian coordinates as [recall: $a = 1 - 2\nu$ and $b = 2(1 - \nu)$, with $\nu$ = Poisson's ratio]:
Other solutions
Point force inside an infinite isotropic half-space.
Point force on a surface of an isotropic half-space.
Contact of two elastic bodies: the Hertz solution. See also the page on Contact mechanics.
Elastodynamics in terms of displacements
Elastodynamics is the study of elastic waves and involves linear elasticity with variation in time. An elastic wave is a type of mechanical wave that propagates in elastic or viscoelastic materials. The elasticity of the material provides the restoring force of the wave. When they occur in the Earth as the result of an earthquake or other disturbance, elastic waves are usually called seismic waves.
The linear momentum equation is simply the equilibrium equation with an additional inertial term: $\sigma_{ji,j} + F_i = \rho\,\ddot{u}_i.$
If the material is governed by anisotropic Hooke's law (with the stiffness tensor homogeneous throughout the material), one obtains the displacement equation of elastodynamics: $C_{ijkl}\,u_{k,lj} + F_i = \rho\,\ddot{u}_i.$
If the material is isotropic and homogeneous, one obtains the (general, or transient) Navier–Cauchy equation: $\mu\,u_{i,jj} + (\mu + \lambda)\,u_{j,ij} + F_i = \rho\,\ddot{u}_i.$
The elastodynamic wave equation can also be expressed as $\left(\delta_{kl}\,\partial_{tt} - A_{kl}[\nabla]\right)\,u_l = \frac{1}{\rho}\,F_k,$
where
$A_{kl}[\nabla] = \frac{1}{\rho}\,\partial_i\,C_{iklj}\,\partial_j$ is the acoustic differential operator, and $\delta_{kl}$ is the Kronecker delta.
In isotropic media, the stiffness tensor has the form $C_{ijkl} = K\,\delta_{ij}\,\delta_{kl} + \mu\left(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk} - \tfrac{2}{3}\delta_{ij}\delta_{kl}\right),$
where
K is the bulk modulus (or incompressibility), and $\mu$ is the shear modulus (or rigidity), two elastic moduli. If the material is homogeneous (i.e. the stiffness tensor is constant throughout the material), the acoustic operator becomes: $A_{kl}[\nabla] = \alpha^2\,\partial_k\,\partial_l + \beta^2\left(\partial_m\,\partial_m\,\delta_{kl} - \partial_k\,\partial_l\right).$
For plane waves, the above differential operator becomes the acoustic algebraic operator: $A_{kl}[\mathbf{k}] = \alpha^2\,k_k\,k_l + \beta^2\left(k_m\,k_m\,\delta_{kl} - k_k\,k_l\right),$
where
$\alpha^2 = \left(K + \tfrac{4}{3}\mu\right)/\rho$ and $\beta^2 = \mu/\rho$ are the eigenvalues of $A[\hat{\mathbf{k}}]$ with eigenvectors $\hat{\mathbf{u}}$ parallel and orthogonal to the propagation direction $\hat{\mathbf{k}}$, respectively. The associated waves are called longitudinal and shear elastic waves. In the seismological literature, the corresponding plane waves are called P-waves and S-waves (see Seismic wave).
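A minimal numerical sketch of the two eigenvalues quoted above, using rough, steel-like values for the moduli and density (assumptions chosen only for illustration):

```python
# Longitudinal (P) and shear (S) wave speeds from the eigenvalues of the
# acoustic operator; the material values are illustrative, not from the text.
K, mu, rho = 160e9, 79e9, 7850.0   # bulk modulus [Pa], shear modulus [Pa], density [kg/m^3]

v_p = ((K + 4.0 * mu / 3.0) / rho) ** 0.5   # longitudinal (P-wave) speed
v_s = (mu / rho) ** 0.5                     # shear (S-wave) speed
print(f"v_p = {v_p:.0f} m/s, v_s = {v_s:.0f} m/s")   # roughly 5.8 km/s and 3.2 km/s
```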
Elastodynamics in terms of stresses
Elimination of displacements and strains from the governing equations leads to the Ignaczak equation of elastodynamics
In the case of local isotropy, this reduces to
The principal characteristics of this formulation include: (1) it avoids gradients of compliance but introduces gradients of mass density; (2) it is derivable from a variational principle; (3) it is advantageous for handling traction initial-boundary value problems; (4) it allows a tensorial classification of elastic waves; (5) it offers a range of applications in elastic wave propagation problems; (6) it can be extended to dynamics of classical or micropolar solids with interacting fields of diverse types (thermoelastic, fluid-saturated porous, piezoelectro-elastic...) as well as nonlinear media.
Anisotropic homogeneous media
For anisotropic media, the stiffness tensor $C_{ijkl}$ is more complicated. The symmetry of the stress tensor $\sigma_{ij}$ means that there are at most 6 different elements of stress. Similarly, there are at most 6 different elements of the strain tensor $\varepsilon_{ij}$. Hence the fourth-order stiffness tensor may be written as a $6 \times 6$ matrix $C_{\alpha\beta}$ (a tensor of second order). Voigt notation is the standard mapping for tensor indices, $(11, 22, 33, 23, 13, 12) \rightarrow (1, 2, 3, 4, 5, 6)$.
With this notation, one can write the elasticity matrix for any linearly elastic medium as:
As shown, the matrix $C_{\alpha\beta}$ is symmetric; this is a result of the existence of a strain energy density function which satisfies $\sigma_{ij} = \partial W / \partial \varepsilon_{ij}$. Hence, there are at most 21 different elements of $C_{\alpha\beta}$.
The isotropic special case has 2 independent elements:
The simplest anisotropic case, that of cubic symmetry has 3 independent elements:
The case of transverse isotropy, also called polar anisotropy, (with a single axis (the 3-axis) of symmetry) has 5 independent elements:
When the transverse isotropy is weak (i.e. close to isotropy), an alternative parametrization utilizing Thomsen parameters is convenient for the formulas for wave speeds.
The case of orthotropy (the symmetry of a brick) has 9 independent elements:
Elastodynamics
The elastodynamic wave equation for anisotropic media can be expressed as $\left(\delta_{kl}\,\partial_{tt} - A_{kl}[\nabla]\right)\,u_l = \frac{1}{\rho}\,F_k,$
where
$A_{kl}[\nabla] = \frac{1}{\rho}\,\partial_i\,C_{iklj}\,\partial_j$ is the acoustic differential operator, and $\delta_{kl}$ is the Kronecker delta.
Plane waves and Christoffel equation
A plane wave has the form $\mathbf{u}(\mathbf{x}, t) = U\!\left[\mathbf{k}\cdot\mathbf{x} - \omega\,t\right]\hat{\mathbf{u}},$
with $\hat{\mathbf{u}}$ of unit length.
It is a solution of the wave equation with zero forcing, if and only if $\omega^2$ and $\hat{\mathbf{u}}$ constitute an eigenvalue/eigenvector pair of the acoustic algebraic operator $A_{kl}[\mathbf{k}]$.
This propagation condition (also known as the Christoffel equation) may be written as $A[\hat{\mathbf{k}}]\,\hat{\mathbf{u}} = c^2\,\hat{\mathbf{u}},$
where
$\hat{\mathbf{k}} = \mathbf{k}/|\mathbf{k}|$ denotes the propagation direction and $c = \omega/|\mathbf{k}|$ is the phase velocity.
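The propagation condition can be checked numerically. The sketch below forms the Christoffel matrix $\Gamma_{ik} = C_{ijkl}\,n_j\,n_l/\rho$ for an assumed isotropic stiffness tensor and recovers one P speed and two equal S speeds in any direction; the material values are illustrative:

```python
import numpy as np

def christoffel_speeds(C, rho, n):
    """Phase speeds along direction n from the Christoffel equation:
    the eigenvalues of Gamma_ik = C_ijkl n_j n_l / rho are the squared speeds."""
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)
    Gamma = np.einsum('ijkl,j,l->ik', C, n, n) / rho
    return np.sqrt(np.linalg.eigvalsh(Gamma))

# Isotropic stiffness tensor from Lame parameters (illustrative, steel-like values).
lam, mu, rho = 107.3e9, 79e9, 7850.0
d = np.eye(3)
C = (lam * np.einsum('ij,kl->ijkl', d, d)
     + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

# For an isotropic medium the three speeds are {v_s, v_s, v_p} in every direction.
print(christoffel_speeds(C, rho, [1.0, 0.0, 0.0]))
print(christoffel_speeds(C, rho, [1.0, 1.0, 1.0]))
```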
See also
Castigliano's method
Cauchy momentum equation
Clapeyron's theorem
Contact mechanics
Deformation
Elasticity (physics)
GRADELA
Hooke's law
Infinitesimal strain theory
Michell solution
Plasticity (physics)
Signorini problem
Spring system
Stress (mechanics)
Stress functions
References
Elasticity (physics)
Solid mechanics
Sound | 0.783068 | 0.992782 | 0.777416 |
Langevin equation | In physics, a Langevin equation (named after Paul Langevin) is a stochastic differential equation describing how a system evolves when subjected to a combination of deterministic and fluctuating ("random") forces. The dependent variables in a Langevin equation typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. One application is to Brownian motion, which models the fluctuating motion of a small particle in a fluid.
Brownian motion as a prototype
The original Langevin equation describes Brownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid, $m\,\frac{d\mathbf{v}}{dt} = -\lambda\,\mathbf{v} + \boldsymbol{\eta}(t).$
Here, $\mathbf{v}$ is the velocity of the particle, $\lambda$ is its damping coefficient, and $m$ is its mass. The force acting on the particle is written as a sum of a viscous force proportional to the particle's velocity (Stokes' law), and a noise term $\boldsymbol{\eta}(t)$ representing the effect of the collisions with the molecules of the fluid. The force $\boldsymbol{\eta}(t)$ has a Gaussian probability distribution with correlation function $\langle \eta_i(t)\,\eta_j(t')\rangle = 2\,\lambda\,k_{\mathrm B}T\,\delta_{i,j}\,\delta(t - t'),$
where $k_{\mathrm B}$ is the Boltzmann constant, $T$ is the temperature and $\eta_i(t)$ is the i-th component of the vector $\boldsymbol{\eta}(t)$. The $\delta$-function form of the time correlation means that the force at a time $t$ is uncorrelated with the force at any other time. This is an approximation: the actual random force has a nonzero correlation time corresponding to the collision time of the molecules. However, the Langevin equation is used to describe the motion of a "macroscopic" particle at a much longer time scale, and in this limit the $\delta$-correlation and the Langevin equation become virtually exact.
Another common feature of the Langevin equation is the occurrence of the damping coefficient in the correlation function of the random force, which in an equilibrium system is an expression of the Einstein relation.
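A minimal Euler–Maruyama integration of the Langevin equation above, checking that the stationary velocity variance approaches $k_{\mathrm B}T/m$ as required by equipartition; the particle parameters are assumed for illustration:

```python
import numpy as np

# Euler-Maruyama integration of m dv = -lambda_ v dt + sqrt(2 lambda_ kB T) dW,
# checking that the stationary velocity variance approaches kB*T/m.
rng = np.random.default_rng(0)
kB, T = 1.380649e-23, 300.0
m, lambda_ = 1e-15, 1e-8            # illustrative particle mass [kg] and damping [kg/s]
dt, n_steps, n_particles = 1e-9, 20000, 2000

v = np.zeros(n_particles)
noise_amp = np.sqrt(2.0 * lambda_ * kB * T * dt) / m
for _ in range(n_steps):
    v += (-lambda_ / m) * v * dt + noise_amp * rng.standard_normal(n_particles)

print(np.var(v), kB * T / m)        # the two numbers should be comparable
```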
Mathematical aspects
A strictly $\delta$-correlated fluctuating force $\boldsymbol{\eta}(t)$ is not a function in the usual mathematical sense and even the derivative $d\mathbf{v}/dt$ is not defined in this limit. This problem disappears when the Langevin equation is written in integral form $m\,\mathbf{v}(t) - m\,\mathbf{v}(0) = -\lambda\int_0^t \mathbf{v}(t')\,dt' + \int_0^t \boldsymbol{\eta}(t')\,dt'.$
Therefore, the differential form is only an abbreviation for its time integral. The general mathematical term for equations of this type is "stochastic differential equation".
Another mathematical ambiguity occurs for Langevin equations with multiplicative noise, which refers to noise terms that are multiplied by a non-constant function of the dependent variables, e.g., a term of the form $g(\mathbf{v})\,\boldsymbol{\eta}(t)$ with non-constant $g$. If a multiplicative noise is intrinsic to the system, its definition is ambiguous, as it is equally valid to interpret it according to the Stratonovich or Itô scheme (see Itô calculus). Nevertheless, physical observables are independent of the interpretation, provided the latter is applied consistently when manipulating the equation. This is necessary because the symbolic rules of calculus differ depending on the interpretation scheme. If the noise is external to the system, the appropriate interpretation is the Stratonovich one.
Generic Langevin equation
There is a formal derivation of a generic Langevin equation from classical mechanics. This generic equation plays a central role in the theory of critical dynamics, and other areas of nonequilibrium statistical mechanics. The equation for Brownian motion above is a special case.
An essential step in the derivation is the division of the degrees of freedom into the categories slow and fast. For example, local thermodynamic equilibrium in a liquid is reached within a few collision times, but it takes much longer for densities of conserved quantities like mass and energy to relax to equilibrium. Thus, densities of conserved quantities, and in particular their long wavelength components, are slow variable candidates. This division can be expressed formally with the Zwanzig projection operator. Nevertheless, the derivation is not completely rigorous from a mathematical physics perspective because it relies on assumptions that lack rigorous proof, and instead are justified only as plausible approximations of physical systems.
Let denote the slow variables. The generic Langevin equation then reads
The fluctuating force obeys a Gaussian probability distribution with correlation function
This implies the Onsager reciprocity relation $\lambda_{i,j} = \lambda_{j,i}$ for the damping coefficients $\lambda$. The dependence of $\lambda$ on $A$ is negligible in most cases. The symbol $\mathcal{H} = -\ln p_0(A)$ denotes the Hamiltonian of the system, where $p_0(A)$ is the equilibrium probability distribution of the variables $A$. Finally, $[A_i, A_j]$ is the projection of the Poisson bracket of the slow variables $A_i$ and $A_j$ onto the space of slow variables.
In the Brownian motion case one would have $\mathcal{H} = \mathbf{p}^2/\left(2 m k_{\mathrm B}T\right)$, $A = \{\mathbf{p}\}$ or $A = \{\mathbf{x}, \mathbf{p}\}$, and $[x_i, p_j] = \delta_{i,j}$. The equation of motion $d\mathbf{x}/dt = \mathbf{p}/m$ for $\mathbf{x}$ is exact: there is no fluctuating force $\eta_x$ and no damping coefficient $\lambda_{x,p}$.
Examples
Thermal noise in an electrical resistor
There is a close analogy between the paradigmatic Brownian particle discussed above and Johnson noise, the electric voltage generated by thermal fluctuations in a resistor. Consider an electric circuit consisting of a resistance R and a capacitance C. The slow variable is the voltage U between the ends of the resistor. The Hamiltonian reads $\mathcal{H} = E/(k_{\mathrm B}T) = C U^2/\left(2 k_{\mathrm B}T\right)$, and the Langevin equation becomes $\frac{dU}{dt} = -\frac{U}{RC} + \eta(t),\qquad \langle \eta(t)\,\eta(t')\rangle = \frac{2 k_{\mathrm B}T}{R\,C^2}\,\delta(t - t').$
This equation may be used to determine the correlation function $\langle U(t)\,U(t')\rangle = \frac{k_{\mathrm B}T}{C}\,\exp\!\left(-\frac{|t - t'|}{RC}\right),$
which becomes white noise (Johnson noise) when the capacitance C becomes negligibly small.
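A small numerical sketch of the result above: the equal-time correlation gives an RMS voltage of $\sqrt{k_{\mathrm B}T/C}$ with correlation time $RC$ (the component values are assumptions):

```python
import numpy as np

# Thermal (Johnson) noise of an RC circuit: stationary voltage fluctuations obey
# <U^2> = kB*T/C, and the correlation decays with time constant R*C.
kB, T = 1.380649e-23, 300.0
R, C = 1e3, 1e-12            # illustrative 1 kOhm resistor and 1 pF capacitor

u_rms = np.sqrt(kB * T / C)  # RMS voltage across the capacitor
tau = R * C                  # correlation time of the voltage fluctuations
print(f"U_rms = {u_rms*1e6:.1f} microvolt, correlation time = {tau*1e9:.2f} ns")
```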
Critical dynamics
The dynamics of the order parameter of a second order phase transition slows down near the critical point and can be described with a Langevin equation. The simplest case is the universality class "model A" with a non-conserved scalar order parameter, realized for instance in axial ferromagnets,
Other universality classes (the nomenclature is "model A",..., "model J") contain a diffusing order parameter, order parameters with several components, other critical variables and/or contributions from Poisson brackets.
Harmonic oscillator in a fluid
A particle in a fluid is described by a Langevin equation with a potential energy function, a damping force, and thermal fluctuations given by the fluctuation dissipation theorem. If the potential is quadratic, then the constant energy curves are ellipses. If there is dissipation but no thermal noise, a particle continually loses energy to the environment, and its time-dependent phase portrait (velocity vs position) corresponds to an inward spiral toward 0 velocity. By contrast, thermal fluctuations continually add energy to the particle and prevent it from reaching exactly 0 velocity. Rather, the initial ensemble of stochastic oscillators approaches a steady state in which the velocity and position are distributed according to the Maxwell–Boltzmann distribution. At long times, the velocity and position distributions in a harmonic potential coincide with the corresponding Boltzmann probabilities; in particular, the late-time behavior depicts thermal equilibrium.
Trajectories of free Brownian particles
Consider a free particle of mass $m$ with equation of motion described by $m\,\frac{d\mathbf{v}}{dt} = -\frac{\mathbf{v}}{\mu} + \boldsymbol{\eta}(t),$
where $\mathbf{v} = d\mathbf{r}/dt$ is the particle velocity, $\mu$ is the particle mobility, and $\boldsymbol{\eta}(t)$ is a rapidly fluctuating force whose time-average vanishes over a characteristic timescale $t_c$ of particle collisions, i.e. $\overline{\boldsymbol{\eta}(t)} = 0$. The general solution to the equation of motion is $\mathbf{v}(t) = \mathbf{v}(0)\,e^{-t/\tau} + \frac{1}{m}\int_0^t e^{-(t-t')/\tau}\,\boldsymbol{\eta}(t')\,dt',$
where $\tau = m\mu$ is the relaxation time of the particle velocity, much longer than the correlation time $t_c$ of the noise term. It can also be shown that the autocorrelation function of the particle velocity is given by
where we have used the property that the variables and become uncorrelated for time separations . Besides, the value of is set to be equal to such that it obeys the equipartition theorem. If the system is initially at thermal equilibrium already with , then for all , meaning that the system remains at equilibrium at all times.
The velocity of the Brownian particle can be integrated to yield its trajectory . If it is initially located at the origin with probability 1, then the result is
Hence, the average displacement asymptotes to as the system relaxes. The mean squared displacement can be determined similarly:
This expression implies that , indicating that the motion of Brownian particles at timescales much shorter than the relaxation time of the system is (approximately) time-reversal invariant. On the other hand, , which indicates an irreversible, dissipative process.
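A short check of the two regimes discussed above, using the standard closed form for the mean squared displacement of the Ornstein–Uhlenbeck velocity process (assumed here, since the article's own expression is not reproduced):

```python
import numpy as np

# Standard closed-form MSD of a free Brownian particle (assumed result):
#   MSD(t) = 2 (kB T / (m gamma)) * [ t - (1 - exp(-gamma t)) / gamma ],
# with gamma = 1/tau. It is ballistic, ~ (kB T/m) t^2, for t << tau and
# diffusive, ~ 2 D t with D = kB T / (m gamma), for t >> tau.
kB, T, m, gamma = 1.380649e-23, 300.0, 1e-15, 1e7   # illustrative values, tau = 0.1 microseconds

def msd(t):
    return 2.0 * kB * T / (m * gamma) * (t - (1.0 - np.exp(-gamma * t)) / gamma)

tau = 1.0 / gamma
for t in (1e-3 * tau, 1e3 * tau):
    ballistic = (kB * T / m) * t**2
    diffusive = 2.0 * kB * T / (m * gamma) * t
    print(t, msd(t), ballistic, diffusive)
```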
Recovering Boltzmann statistics
If the external potential is conservative and the noise term derives from a reservoir in thermal equilibrium, then the long-time solution to the Langevin equation must reduce to the Boltzmann distribution, which is the probability distribution function for particles in thermal equilibrium. In the special case of overdamped dynamics, the inertia of the particle is negligible in comparison to the damping force, and the trajectory $x(t)$ is described by the overdamped Langevin equation $\lambda\,\frac{dx}{dt} = -\frac{\partial V(x)}{\partial x} + \eta(t),$
where $\lambda$ is the damping constant. The term $\eta(t)$ is white noise, characterized by $\langle \eta(t)\,\eta(t')\rangle = 2\,k_{\mathrm B}T\,\lambda\,\delta(t - t')$ (formally, the Wiener process). One way to solve this equation is to introduce a test function $f$ and calculate its average. The average of $f(x(t))$ should be time-independent for finite $x(t)$, leading to $\frac{d}{dt}\left\langle f(x(t))\right\rangle = 0.$
Itô's lemma for the Itô drift-diffusion process says that the differential of a twice-differentiable function is given by
Applying this to the calculation of gives
This average can be written using the probability density function $p(x)$: $0 = \int \left(-\frac{V'(x)}{\lambda}\,f'(x) + \frac{k_{\mathrm B}T}{\lambda}\,f''(x)\right) p(x)\,dx = \int f'(x)\left(-\frac{V'(x)}{\lambda}\,p(x) - \frac{k_{\mathrm B}T}{\lambda}\,\frac{\partial p(x)}{\partial x}\right) dx,$
where the second term was integrated by parts (hence the negative sign). Since this is true for arbitrary functions $f$, it follows that $V'(x)\,p(x) + k_{\mathrm B}T\,\frac{\partial p(x)}{\partial x} = 0,$
thus recovering the Boltzmann distribution $p(x) \propto \exp\!\left(-\frac{V(x)}{k_{\mathrm B}T}\right).$
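A minimal sketch of the overdamped dynamics above in a quadratic potential $V(x) = kx^2/2$, verifying that the sampled positions approach the Boltzmann variance $k_{\mathrm B}T/k$; all parameter values are illustrative and in reduced units:

```python
import numpy as np

# Overdamped Langevin dynamics, lambda_ dx = -V'(x) dt + sqrt(2 kB T lambda_) dW,
# in the quadratic potential V(x) = k x^2 / 2. The long-time position histogram
# should follow the Boltzmann (Gaussian) distribution with variance kB*T/k.
rng = np.random.default_rng(1)
kB, T = 1.0, 1.0                  # reduced units
k, lambda_ = 4.0, 1.0
dt, n_steps, n_walkers = 1e-3, 50000, 5000

x = np.zeros(n_walkers)
noise_amp = np.sqrt(2.0 * kB * T * dt / lambda_)
for _ in range(n_steps):
    x += (-k * x / lambda_) * dt + noise_amp * rng.standard_normal(n_walkers)

print(np.var(x), kB * T / k)      # both should be close to 0.25
```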
Equivalent techniques
In some situations, one is primarily interested in the noise-averaged behavior of the Langevin equation, as opposed to the solution for particular realizations of the noise. This section describes techniques for obtaining this averaged behavior that are distinct from—but also equivalent to—the stochastic calculus inherent in the Langevin equation.
Fokker–Planck equation
A Fokker–Planck equation is a deterministic equation for the time dependent probability density of stochastic variables . The Fokker–Planck equation corresponding to the generic Langevin equation described in this article is the following:
The equilibrium distribution is a stationary solution.
Klein–Kramers equation
The Fokker–Planck equation for an underdamped Brownian particle is called the Klein–Kramers equation. If the Langevin equations are written as
where is the momentum, then the corresponding Fokker–Planck equation is
Here $\nabla_r$ and $\nabla_p$ are the gradient operators with respect to $\mathbf{r}$ and $\mathbf{p}$, and $\Delta_p$ is the Laplacian with respect to $\mathbf{p}$.
In -dimensional free space, corresponding to on , this equation can be solved using Fourier transforms. If the particle is initialized at with position and momentum , corresponding to initial condition , then the solution is
where
In three spatial dimensions, the mean squared displacement is
Path integral
A path integral equivalent to a Langevin equation can be obtained from the corresponding Fokker–Planck equation or by transforming the Gaussian probability distribution of the fluctuating force to a probability distribution of the slow variables, schematically .
The functional determinant and associated mathematical subtleties drop out if the Langevin equation is discretized in the natural (causal) way, where depends on but not on . It turns out to be convenient to introduce auxiliary response variables . The path integral equivalent to the generic Langevin equation then reads
where is a normalization factor and
The path integral formulation allows for the use of tools from quantum field theory, such as perturbation and renormalization group methods. This formulation is typically referred to as either the Martin-Siggia-Rose formalism or the Janssen-De Dominicis formalism after its developers. The mathematical formalism for this representation can be developed on abstract Wiener space.
See also
Grote–Hynes theory
Langevin dynamics
Stochastic thermodynamics
References
Further reading
W. T. Coffey (Trinity College, Dublin, Ireland) and Yu P. Kalmykov (Université de Perpignan, France), The Langevin Equation: With Applications to Stochastic Problems in Physics, Chemistry and Electrical Engineering (Third edition), World Scientific Series in Contemporary Chemical Physics – Vol 27.
Reif, F. Fundamentals of Statistical and Thermal Physics, McGraw Hill New York, 1965. See section 15.5 Langevin Equation
R. Friedrich, J. Peinke and Ch. Renner. How to Quantify Deterministic and Random Influences on the Statistics of the Foreign Exchange Market, Phys. Rev. Lett. 84, 5224–5227 (2000)
L.C.G. Rogers and D. Williams. Diffusions, Markov Processes, and Martingales, Cambridge Mathematical Library, Cambridge University Press, Cambridge, reprint of 2nd (1994) edition, 2000.
Statistical mechanics
Stochastic differential equations | 0.781445 | 0.994825 | 0.777401 |
Invariant (physics) | In theoretical physics, an invariant is an observable of a physical system which remains unchanged under some transformation. Invariance, as a broader term, also applies to the no change of form of physical laws under a transformation, and is closer in scope to the mathematical definition. Invariants of a system are deeply tied to the symmetries imposed by its environment.
Invariance is an important concept in modern theoretical physics, and many theories are expressed in terms of their symmetries and invariants.
Examples
In classical and quantum mechanics, invariance of space under translation results in momentum being an invariant and the conservation of momentum, whereas invariance of the origin of time, i.e. translation in time, results in energy being an invariant and the conservation of energy. In general, by Noether's theorem, any invariance of a physical system under a continuous symmetry leads to a fundamental conservation law.
In crystals, the electron density is periodic and invariant with respect to discrete translations by unit cell vectors. In very few materials, this symmetry can be broken due to enhanced electron correlations.
Other examples of physical invariants are the speed of light, and the charge and mass of a particle observed from two reference frames moving with respect to one another (invariance under a spacetime Lorentz transformation), and the invariance of time and acceleration under a Galilean transformation between two such frames moving at low velocities.
Quantities can be invariant under some common transformations but not under others. For example, the velocity of a particle is invariant when switching coordinate representations from rectangular to curvilinear coordinates, but is not invariant when transforming between frames of reference that are moving with respect to each other. Other quantities, like the speed of light, are always invariant.
Physical laws are said to be invariant under transformations when their predictions remain unchanged. This generally means that the form of the law (e.g. the type of differential equations used to describe the law) is unchanged in transformations so that no additional or different solutions are obtained.
Covariance and contravariance generalize the mathematical properties of invariance in tensor mathematics, and are frequently used in electromagnetism, special relativity, and general relativity.
Informal usage
In the field of physics, the adjective covariant (as in covariance and contravariance of vectors) is often used informally as a synonym for "invariant". For example, the Schrödinger equation does not keep its written form under the coordinate transformations of special relativity. Thus, a physicist might say that the Schrödinger equation is not covariant. In contrast, the Klein–Gordon equation and the Dirac equation do keep their written form under these coordinate transformations. Thus, a physicist might say that these equations are covariant.
Despite this usage of "covariant", it is more accurate to say that the Klein–Gordon and Dirac equations are invariant, and that the Schrödinger equation is not invariant. Additionally, to remove ambiguity, the transformation by which the invariance is evaluated should be indicated.
See also
Casimir operator
Charge (physics)
Conservation law
Conserved quantity
General covariance
Eigenvalues and eigenvectors
Invariants of tensors
Killing form
Physical constant
Poincaré group
Scalar (physics)
Symmetry (physics)
Uniformity of nature
Weyl transformation
References
Conservation laws
Physical quantities | 0.795445 | 0.977273 | 0.777367 |
Heat transfer | Heat transfer is a discipline of thermal engineering that concerns the generation, use, conversion, and exchange of thermal energy (heat) between physical systems. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes. Engineers also consider the transfer of mass of differing chemical species (mass transfer in the form of advection), either cold or hot, to achieve heat transfer. While these mechanisms have distinct characteristics, they often occur simultaneously in the same system.
Heat conduction, also called diffusion, is the direct microscopic exchanges of kinetic energy of particles (such as molecules) or quasiparticles (such as lattice waves) through the boundary between two systems. When an object is at a different temperature from another body or its surroundings, heat flows so that the body and the surroundings reach the same temperature, at which point they are in thermal equilibrium. Such spontaneous heat transfer always occurs from a region of high temperature to another region of lower temperature, as described in the second law of thermodynamics.
Heat convection occurs when the bulk flow of a fluid (gas or liquid) carries its heat through the fluid. All convective processes also move heat partly by diffusion, as well. The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". The former process is often called "forced convection." In this case, the fluid is forced to flow by use of a pump, fan, or other mechanical means.
Thermal radiation occurs through a vacuum or any transparent medium (solid or fluid or gas). It is the transfer of energy by means of photons or electromagnetic waves governed by the same laws.
Overview
Heat transfer is the energy exchanged between materials (solid/liquid/gas) as a result of a temperature difference. The thermodynamic free energy is the amount of work that a thermodynamic system can perform. Enthalpy is a thermodynamic potential, designated by the letter "H", that is the sum of the internal energy of the system (U) plus the product of pressure (P) and volume (V). Joule is a unit to quantify energy, work, or the amount of heat.
Heat transfer is a process function (or path function), as opposed to functions of state; therefore, the amount of heat transferred in a thermodynamic process that changes the state of a system depends on how that process occurs, not only the net difference between the initial and final states of the process.
Thermodynamic and mechanical heat transfer is calculated with the heat transfer coefficient, the proportionality between the heat flux and the thermodynamic driving force for the flow of heat. Heat flux is a quantitative, vectorial representation of heat flow through a surface.
In engineering contexts, the term heat is taken as synonymous with thermal energy. This usage has its origin in the historical interpretation of heat as a fluid (caloric) that can be transferred by various causes, and that is also common in the language of laymen and everyday life.
The transport equations for thermal energy (Fourier's law), mechanical momentum (Newton's law for fluids), and mass transfer (Fick's laws of diffusion) are similar, and analogies among these three transport processes have been developed to facilitate the prediction of conversion from any one to the others.
Thermal engineering concerns the generation, use, conversion, storage, and exchange of heat transfer. As such, heat transfer is involved in almost every sector of the economy. Heat transfer is classified into various mechanisms, such as thermal conduction, thermal convection, thermal radiation, and transfer of energy by phase changes.
Mechanisms
The fundamental modes of heat transfer are:
Advection
Advection is the transport mechanism of a fluid from one location to another, and is dependent on motion and momentum of that fluid.
Conduction or diffusion
The transfer of energy between objects that are in physical contact. Thermal conductivity is the property of a material to conduct heat and is evaluated primarily in terms of Fourier's law for heat conduction.
Convection
The transfer of energy between an object and its environment, due to fluid motion. The average temperature is a reference for evaluating properties related to convective heat transfer.
Radiation
The transfer of energy by the emission of electromagnetic radiation.
Advection
By transferring matter, energy—including thermal energy—is moved by the physical transfer of a hot or cold object from one place to another. This can be as simple as placing hot water in a bottle and heating a bed, or the movement of an iceberg in changing ocean currents. A practical example is thermal hydraulics. This can be described by the formula: $\phi_q = v\,\rho\,c_p\,\Delta T,$
where
$\phi_q$ is heat flux (W/m2),
$\rho$ is density (kg/m3),
$c_p$ is heat capacity at constant pressure (J/kg·K),
$\Delta T$ is the difference in temperature (K),
$v$ is velocity (m/s).
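A one-line numerical illustration of the advective flux formula above, with assumed values for water flowing through a heated pipe section:

```python
# Advective heat flux phi_q = v * rho * c_p * dT; the values are illustrative.
rho = 1000.0      # density of water [kg/m^3]
c_p = 4184.0      # specific heat at constant pressure [J/(kg*K)]
dT = 20.0         # temperature difference [K]
v = 0.5           # flow velocity [m/s]

phi_q = v * rho * c_p * dT
print(f"phi_q = {phi_q/1e6:.1f} MW/m^2")   # about 41.8 MW per square metre of flow area
```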
Conduction
On a microscopic scale, heat conduction occurs as hot, rapidly moving or vibrating atoms and molecules interact with neighboring atoms and molecules, transferring some of their energy (heat) to these neighboring particles. In other words, heat is transferred by conduction when adjacent atoms vibrate against one another, or as electrons move from one atom to another. Conduction is the most significant means of heat transfer within a solid or between solid objects in thermal contact. Fluids—especially gases—are less conductive. Thermal contact conductance is the study of heat conduction between solid bodies in contact. The process of heat transfer from one place to another place without the movement of particles is called conduction, such as when placing a hand on a cold glass of water—heat is conducted from the warm skin to the cold glass, but if the hand is held a few inches from the glass, little conduction would occur since air is a poor conductor of heat.
Steady-state conduction is an idealized model of conduction that happens when the temperature difference driving the conduction is constant, so that after a time the spatial distribution of temperatures in the conducting object does not change any further (see Fourier's law). In steady state conduction, the amount of heat entering a section is equal to the amount of heat coming out, since the temperature change (a measure of heat energy) is zero. An example of steady state conduction is the heat flow through the walls of a warm house on a cold day—inside the house is maintained at a high temperature and, outside, the temperature stays low, so the transfer of heat per unit time stays near a constant rate determined by the insulation in the wall, and the spatial distribution of temperature in the walls will be approximately constant over time.
Transient conduction (see Heat equation) occurs when the temperature within an object changes as a function of time. Analysis of transient systems is more complex, and analytic solutions of the heat equation are only valid for idealized model systems. Practical applications are generally investigated using numerical methods, approximation techniques, or empirical study.
Convection
The flow of fluid may be forced by external processes, or sometimes (in gravitational fields) by buoyancy forces caused when thermal energy expands the fluid (for example in a fire plume), thus influencing its own transfer. The latter process is often called "natural convection". All convective processes also move heat partly by diffusion, as well. Another form of convection is forced convection. In this case, the fluid is forced to flow by using a pump, fan, or other mechanical means.
Convective heat transfer, or simply, convection, is the transfer of heat from one place to another by the movement of fluids, a process that is essentially the transfer of heat via mass transfer. The bulk motion of fluid enhances heat transfer in many physical situations, such as between a solid surface and the fluid. Convection is usually the dominant form of heat transfer in liquids and gases. Although sometimes discussed as a third method of heat transfer, convection is usually used to describe the combined effects of heat conduction within the fluid (diffusion) and heat transference by bulk fluid flow streaming. The process of transport by fluid streaming is known as advection, but pure advection is a term that is generally associated only with mass transport in fluids, such as advection of pebbles in a river. In the case of heat transfer in fluids, where transport by advection in a fluid is always also accompanied by transport via heat diffusion (also known as heat conduction) the process of heat convection is understood to refer to the sum of heat transport by advection and diffusion/conduction.
Free, or natural, convection occurs when bulk fluid motions (streams and currents) are caused by buoyancy forces that result from density variations due to variations of temperature in the fluid. Forced convection is a term used when the streams and currents in the fluid are induced by external means—such as fans, stirrers, and pumps—creating an artificially induced convection current.
Convection-cooling
Convective cooling is sometimes described as Newton's law of cooling: the rate of heat loss of a body is proportional to the difference in temperatures between the body and its surroundings.
However, by definition, the validity of Newton's law of cooling requires that the rate of heat loss from convection be a linear function of ("proportional to") the temperature difference that drives heat transfer, and in convective cooling this is sometimes not the case. In general, convection is not linearly dependent on temperature gradients, and in some cases is strongly nonlinear. In these cases, Newton's law does not apply.
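When the linear form does apply, with a constant heat transfer coefficient, the law reduces to $Q = h\,A\,(T_{\text{surface}} - T_{\text{fluid}})$; the sketch below evaluates it with an assumed order-of-magnitude free-convection coefficient for air:

```python
# Newton's law of cooling with a constant convective coefficient.
# The coefficient h is a typical order-of-magnitude value for free convection
# in air, chosen only for illustration.
h = 10.0                          # convective heat transfer coefficient [W/(m^2*K)]
A = 1.5                           # surface area [m^2]
T_surface, T_fluid = 60.0, 20.0   # surface and fluid temperatures [deg C]

Q = h * A * (T_surface - T_fluid)
print(f"Q = {Q:.0f} W")           # about 600 W lost from the surface
```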
Convection vs. conduction
In a body of fluid that is heated from underneath its container, conduction, and convection can be considered to compete for dominance. If heat conduction is too great, fluid moving down by convection is heated by conduction so fast that its downward movement will be stopped due to its buoyancy, while fluid moving up by convection is cooled by conduction so fast that its driving buoyancy will diminish. On the other hand, if heat conduction is very low, a large temperature gradient may be formed and convection might be very strong.
The Rayleigh number is the product of the Grashof and Prandtl numbers. It is a measure that determines the relative strength of conduction and convection: $\mathrm{Ra}_L = \frac{g\,\Delta\rho\,L^3}{\mu\,\alpha} = \frac{g\,\beta\,\Delta T\,L^3}{\nu\,\alpha},$
where
g is the acceleration due to gravity,
ρ is the density, with $\Delta\rho$ being the density difference between the lower and upper ends,
μ is the dynamic viscosity,
α is the thermal diffusivity,
β is the volume thermal expansivity (sometimes denoted α elsewhere),
T is the temperature, with $\Delta T$ the temperature difference,
ν is the kinematic viscosity, and
L is the characteristic length.
The Rayleigh number can be understood as the ratio between the rate of heat transfer by convection to the rate of heat transfer by conduction; or, equivalently, the ratio between the corresponding timescales (i.e. conduction timescale divided by convection timescale), up to a numerical factor. This can be seen as follows, where all calculations are up to numerical factors depending on the geometry of the system.
The buoyancy force driving the convection is roughly $g\,\Delta\rho\,L^3$, so the corresponding pressure is roughly $g\,\Delta\rho\,L$. In steady state, this is canceled by the shear stress due to viscosity, and therefore roughly equals $\mu V / L = \mu / T_{\mathrm{conv}}$, where V is the typical fluid velocity due to convection and $T_{\mathrm{conv}}$ the order of its timescale. The conduction timescale, on the other hand, is of the order of $T_{\mathrm{cond}} = L^2/\alpha$.
Convection occurs when the Rayleigh number is above 1,000–2,000.
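A back-of-the-envelope evaluation of the Rayleigh number for a thin water layer heated from below, using rough room-temperature property values (assumptions chosen only for illustration):

```python
# Rayleigh number Ra = g * beta * dT * L^3 / (nu * alpha) for a water layer.
g = 9.81          # gravitational acceleration [m/s^2]
beta = 2.1e-4     # volumetric thermal expansion coefficient [1/K]
dT = 5.0          # temperature difference across the layer [K]
L = 0.02          # layer thickness [m]
nu = 1.0e-6       # kinematic viscosity [m^2/s]
alpha = 1.4e-7    # thermal diffusivity [m^2/s]

Ra = g * beta * dT * L**3 / (nu * alpha)
print(f"Ra = {Ra:.2e}")   # well above ~1e3-2e3, so convection is expected
```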
Radiation
Radiative heat transfer is the transfer of energy via thermal radiation, i.e., electromagnetic waves. It occurs across vacuum or any transparent medium (solid or fluid or gas). Thermal radiation is emitted by all objects at temperatures above absolute zero, due to random movements of atoms and molecules in matter. Since these atoms and molecules are composed of charged particles (protons and electrons), their movement results in the emission of electromagnetic radiation which carries away energy. Radiation is typically only important in engineering applications for very hot objects, or for objects with a large temperature difference.
When the objects and the distances separating them are large in size compared to the wavelength of thermal radiation, the rate of transfer of radiant energy is best described by the Stefan-Boltzmann equation. For an object in vacuum, the equation is: $\phi_q = \epsilon\,\sigma\,T^4.$
For radiative transfer between two objects, the equation is as follows: $\phi_q = \epsilon\,\sigma\,F\left(T_a^4 - T_b^4\right),$
where
$\phi_q$ is the heat flux,
$\epsilon$ is the emissivity (unity for a black body),
$\sigma$ is the Stefan–Boltzmann constant,
$F$ is the view factor between two surfaces a and b, and
$T_a$ and $T_b$ are the absolute temperatures (in kelvins or degrees Rankine) for the two objects.
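A short numerical example of the two-surface form above, with an assumed emissivity and view factor:

```python
# Radiative exchange phi_q = eps * sigma * F_ab * (T_a^4 - T_b^4); the emissivity,
# view factor and temperatures are illustrative values.
sigma = 5.670374419e-8    # Stefan-Boltzmann constant [W/(m^2*K^4)]
eps = 0.9                 # emissivity (1 for a black body)
F_ab = 1.0                # view factor between the surfaces
T_a, T_b = 600.0, 300.0   # absolute temperatures [K]

phi_q = eps * sigma * F_ab * (T_a**4 - T_b**4)
print(f"phi_q = {phi_q:.0f} W/m^2")   # about 6.2 kW per square metre
```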
The blackbody limit established by the Stefan-Boltzmann equation can be exceeded when the objects exchanging thermal radiation or the distances separating them are comparable in scale or smaller than the dominant thermal wavelength. The study of these cases is called near-field radiative heat transfer.
Radiation from the sun, or solar radiation, can be harvested for heat and power. Unlike conductive and convective forms of heat transfer, thermal radiation – arriving within a narrow angle, i.e. coming from a source much smaller than its distance – can be concentrated in a small spot by using reflecting mirrors, which is exploited in concentrating solar power generation or a burning glass. For example, the sunlight reflected from mirrors heats the PS10 solar power tower and during the day it can heat water to .
The temperature reachable at the target is limited by the temperature of the hot source of radiation (the T4 law lets the reverse flow of radiation back to the source rise). The roughly 4000 K hot surface of the sun allows roughly 3000 °C (about 3273 K) to be reached at a small probe in the focus spot of the large concave, concentrating mirror of the Mont-Louis Solar Furnace in France.
Phase transition
Phase transition, or phase change, takes place in a thermodynamic system from one phase or state of matter to another one by heat transfer.
The Mason equation explains the growth of a water droplet based on the effects of heat transport on evaporation and condensation.
Phase transitions involve the four fundamental states of matter:
Solid – Deposition, freezing, and solid-to-solid transformation.
Liquid – Condensation and melting / fusion.
Gas – Boiling / evaporation, recombination/ deionization, and sublimation.
Plasma – Ionization.
Boiling
The boiling point of a substance is the temperature at which the vapor pressure of the liquid equals the pressure surrounding the liquid and the liquid evaporates resulting in an abrupt change in vapor volume.
In a closed system, saturation temperature and boiling point mean the same thing. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy. Any addition of thermal energy results in a phase transition.
At standard atmospheric pressure and low temperatures, no boiling occurs and the heat transfer rate is controlled by the usual single-phase mechanisms. As the surface temperature is increased, local boiling occurs and vapor bubbles nucleate, grow into the surrounding cooler fluid, and collapse. This is sub-cooled nucleate boiling, and is a very efficient heat transfer mechanism. At high bubble generation rates, the bubbles begin to interfere and the heat flux no longer increases rapidly with surface temperature (this is the departure from nucleate boiling, or DNB).
At similar standard atmospheric pressure and high temperatures, the hydrodynamically quieter regime of film boiling is reached. Heat fluxes across the stable vapor layers are low but rise slowly with temperature. Any contact between the fluid and the surface that may be seen probably leads to the extremely rapid nucleation of a fresh vapor layer ("spontaneous nucleation"). At higher temperatures still, a maximum in the heat flux is reached (the critical heat flux, or CHF).
The Leidenfrost Effect demonstrates how nucleate boiling slows heat transfer due to gas bubbles on the heater's surface. As mentioned, gas-phase thermal conductivity is much lower than liquid-phase thermal conductivity, so the outcome is a kind of "gas thermal barrier".
Condensation
Condensation occurs when a vapor is cooled and changes its phase to a liquid. During condensation, the latent heat of vaporization must be released. The amount of heat is the same as that absorbed during vaporization at the same fluid pressure.
There are several types of condensation:
Homogeneous condensation, as during the formation of fog.
Condensation in direct contact with subcooled liquid.
Condensation on direct contact with a cooling wall of a heat exchanger: This is the most common mode used in industry: Dropwise condensation is difficult to sustain reliably; therefore, industrial equipment is normally designed to operate in filmwise condensation mode.
Melting
Melting is a thermal process that results in the phase transition of a substance from a solid to a liquid. The internal energy of a substance is increased, typically through heat or pressure, resulting in a rise of its temperature to the melting point, at which the ordering of ionic or molecular entities in the solid breaks down to a less ordered state and the solid liquefies. Molten substances generally have reduced viscosity with elevated temperature; an exception to this maxim is the element sulfur, whose viscosity increases to a point due to polymerization and then decreases with higher temperatures in its molten state.
Modeling approaches
Heat transfer can be modeled in various ways.
Heat equation
The heat equation is an important partial differential equation that describes the distribution of heat (or temperature variation) in a given region over time. In some cases, exact solutions of the equation are available; in other cases the equation must be solved numerically using computational methods such as DEM-based models for thermal/reacting particulate systems (as critically reviewed by Peng et al.).
Lumped system analysis
Lumped system analysis often reduces the complexity of the equations to one first-order linear differential equation, in which case heating and cooling are described by a simple exponential solution, often referred to as Newton's law of cooling.
System analysis by the lumped capacitance model is a common approximation in transient conduction that may be used whenever heat conduction within an object is much faster than heat conduction across the boundary of the object. This is a method of approximation that reduces one aspect of the transient conduction system—that within the object—to an equivalent steady-state system. That is, the method assumes that the temperature within the object is completely uniform, although its value may change over time.
In this method, the ratio of the conductive heat resistance within the object to the convective heat transfer resistance across the object's boundary, known as the Biot number, is calculated. For small Biot numbers, the approximation of spatially uniform temperature within the object can be used: it can be presumed that heat transferred into the object has time to uniformly distribute itself, due to the lower resistance to doing so, as compared with the resistance to heat entering the object.
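A minimal sketch of the lumped-capacitance model described above for a hypothetical small steel sphere, including the Biot-number check that justifies the approximation; all property values are assumptions:

```python
import numpy as np

# Lumped-capacitance (Newton) cooling of a small solid object:
#   T(t) = T_inf + (T0 - T_inf) * exp(-h*A/(rho*V*c) * t),
# valid when the Biot number Bi = h*Lc/k is small (roughly Bi < 0.1).
h = 25.0                  # convective coefficient [W/(m^2*K)]
k = 45.0                  # thermal conductivity of the solid [W/(m*K)]
rho, c = 7850.0, 490.0    # density [kg/m^3] and specific heat [J/(kg*K)]
r = 0.01                  # sphere radius [m]
T0, T_inf = 200.0, 25.0   # initial and ambient temperatures [deg C]

V = 4.0 / 3.0 * np.pi * r**3
A = 4.0 * np.pi * r**2
Lc = V / A                        # characteristic length (r/3 for a sphere)
Bi = h * Lc / k
tau = rho * V * c / (h * A)       # time constant of the exponential decay

print(f"Bi = {Bi:.3f} (lumped model reasonable if << 1), tau = {tau:.0f} s")
print("T after 10 min =", T_inf + (T0 - T_inf) * np.exp(-600.0 / tau), "deg C")
```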
Climate models
Climate models study the radiant heat transfer by using quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice.
Engineering
Heat transfer has broad application to the functioning of numerous devices and systems. Heat-transfer principles may be used to preserve, increase, or decrease temperature in a wide variety of circumstances. Heat transfer methods are used in numerous disciplines, such as automotive engineering, thermal management of electronic devices and systems, climate control, insulation, materials processing, chemical engineering and power station engineering.
Insulation, radiance and resistance
Thermal insulators are materials specifically designed to reduce the flow of heat by limiting conduction, convection, or both. Thermal resistance is a heat property and the measurement by which an object or material resists to heat flow (heat per time unit or thermal resistance) to temperature difference.
Radiance, or spectral radiance, is a measure of the quantity of radiation that passes through or is emitted from a surface. Radiant barriers are materials that reflect radiation, and therefore reduce the flow of heat from radiation sources. Good insulators are not necessarily good radiant barriers, and vice versa. Metal, for instance, is an excellent reflector and a poor insulator.
The effectiveness of a radiant barrier is indicated by its reflectivity, which is the fraction of radiation reflected. A material with a high reflectivity (at a given wavelength) has a low emissivity (at that same wavelength), and vice versa. At any specific wavelength, reflectivity=1 - emissivity. An ideal radiant barrier would have a reflectivity of 1, and would therefore reflect 100 percent of incoming radiation. Vacuum flasks, or Dewars, are silvered to approach this ideal. In the vacuum of space, satellites use multi-layer insulation, which consists of many layers of aluminized (shiny) Mylar to greatly reduce radiation heat transfer and control satellite temperature.
Devices
A heat engine is a system that performs the conversion of a flow of thermal energy (heat) to mechanical energy to perform mechanical work.
A thermocouple is a temperature-measuring device and a widely used type of temperature sensor for measurement and control, and can also be used to convert heat into electric power.
A thermoelectric cooler is a solid-state electronic device that pumps (transfers) heat from one side of the device to the other when an electric current is passed through it. It is based on the Peltier effect.
A thermal diode or thermal rectifier is a device that causes heat to flow preferentially in one direction.
Heat exchangers
A heat exchanger is used for more efficient heat transfer or to dissipate heat. Heat exchangers are widely used in refrigeration, air conditioning, space heating, power generation, and chemical processing. One common example of a heat exchanger is a car's radiator, in which the hot coolant fluid is cooled by the flow of air over the radiator's surface.
Common types of heat exchanger flows include parallel flow, counter flow, and cross flow. In parallel flow, both fluids move in the same direction while transferring heat; in counter flow, the fluids move in opposite directions; and in cross flow, the fluids move at right angles to each other. Common types of heat exchangers include shell and tube, double pipe, extruded finned pipe, spiral fin pipe, u-tube, and stacked plate. Each type has certain advantages and disadvantages over other types.
A heat sink is a component that transfers heat generated within a solid material to a fluid medium, such as air or a liquid. Examples of heat sinks are the heat exchangers used in refrigeration and air conditioning systems or the radiator in a car. A heat pipe is another heat-transfer device that combines thermal conductivity and phase transition to efficiently transfer heat between two solid interfaces.
Applications
Architecture
Efficient energy use is the goal of reducing the amount of energy required in heating or cooling. In architecture, condensation and air currents can cause cosmetic or structural damage. An energy audit can help to assess the implementation of recommended corrective procedures, for instance insulation improvements, air sealing of structural leaks, or the addition of energy-efficient windows and doors.
A smart meter is a device that records electric energy consumption in intervals.
Thermal transmittance is the rate of transfer of heat through a structure divided by the difference in temperature across the structure. It is expressed in watts per square meter per kelvin, or W/(m2K). Well-insulated parts of a building have a low thermal transmittance, whereas poorly-insulated parts of a building have a high thermal transmittance.
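A one-line example of using the transmittance: the steady heat flow through an element is $Q = U\,A\,\Delta T$; the U-value below is an assumed typical figure for an insulated wall:

```python
# Steady heat loss through a building element with thermal transmittance U.
U = 0.3           # thermal transmittance [W/(m^2*K)], illustrative value
A = 12.0          # wall area [m^2]
dT = 21.0 - 1.0   # inside minus outside temperature [K]

Q = U * A * dT
print(f"Q = {Q:.0f} W")   # about 72 W through this wall
```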
A thermostat is a device to monitor and control temperature.
Climate engineering
Climate engineering consists of carbon dioxide removal and solar radiation management. Since the amount of carbon dioxide determines the radiative balance of Earth's atmosphere, carbon dioxide removal techniques can be applied to reduce the radiative forcing. Solar radiation management is the attempt to absorb less solar radiation to offset the effects of greenhouse gases.
An alternative method is passive daytime radiative cooling, which enhances terrestrial heat flow to outer space through the infrared window (8–13 μm). Rather than merely blocking solar radiation, this method increases outgoing longwave infrared (LWIR) thermal radiation heat transfer with the extremely cold temperature of outer space (~2.7 K) to lower ambient temperatures while requiring zero energy input.
Greenhouse effect
The greenhouse effect is a process by which thermal radiation from a planetary surface is absorbed by atmospheric greenhouse gases and clouds, and is re-radiated in all directions, resulting in a reduction in the amount of thermal radiation reaching space relative to what would reach space in the absence of absorbing materials. This reduction in outgoing radiation leads to a rise in the temperature of the surface and troposphere until the rate of outgoing radiation again equals the rate at which heat arrives from the Sun.
Heat transfer in the human body
The principles of heat transfer in engineering systems can be applied to the human body to determine how the body transfers heat. Heat is produced in the body by the continuous metabolism of nutrients which provides energy for the systems of the body. The human body must maintain a consistent internal temperature to maintain healthy bodily functions. Therefore, excess heat must be dissipated from the body to keep it from overheating. When a person engages in elevated levels of physical activity, the body requires additional fuel which increases the metabolic rate and the rate of heat production. The body must then use additional methods to remove the additional heat produced to keep the internal temperature at a healthy level.
Heat transfer by convection is driven by the movement of fluids over the surface of the body. This convective fluid can be either a liquid or a gas. For heat transfer from the outer surface of the body, the convection mechanism is dependent on the surface area of the body, the velocity of the air, and the temperature gradient between the surface of the skin and the ambient air. The normal temperature of the body is approximately 37 °C. Heat transfer occurs more readily when the temperature of the surroundings is significantly less than the normal body temperature. This concept explains why a person feels cold when not enough covering is worn when exposed to a cold environment. Clothing can be considered an insulator which provides thermal resistance to heat flow over the covered portion of the body. This thermal resistance causes the temperature on the surface of the clothing to be less than the temperature on the surface of the skin. This smaller temperature gradient between the surface temperature and the ambient temperature will cause a lower rate of heat transfer than if the skin were not covered.
To ensure that one portion of the body is not significantly hotter than another portion, heat must be distributed evenly through the bodily tissues. Blood flowing through blood vessels acts as a convective fluid and helps to prevent any buildup of excess heat inside the tissues of the body. This flow of blood through the vessels can be modeled as pipe flow in an engineering system. The heat carried by the blood is determined by the temperature of the surrounding tissue, the diameter of the blood vessel, the thickness of the fluid, the velocity of the flow, and the heat transfer coefficient of the blood. The velocity, blood vessel diameter, and fluid thickness can all be related to the Reynolds Number, a dimensionless number used in fluid mechanics to characterize the flow of fluids.
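The Reynolds number mentioned above is simply Re = ρvD/μ. The sketch below evaluates it for two assumed, representative sets of blood-flow values (density, viscosity, vessel diameter, speed); the numbers are illustrative only.

```python
# Reynolds number for flow in a vessel, Re = rho * v * D / mu.
# The blood properties and vessel dimensions below are rough, assumed values for illustration.

def reynolds_number(density, velocity, diameter, dynamic_viscosity):
    return density * velocity * diameter / dynamic_viscosity

rho = 1060.0      # kg/m^3, approximate density of blood (assumed)
mu = 3.5e-3       # Pa.s, approximate dynamic viscosity of blood (assumed)

for name, d, v in [("aorta", 0.025, 0.30), ("arteriole", 3.0e-5, 0.005)]:
    re = reynolds_number(rho, v, d, mu)
    print(f"{name}: Re ~ {re:.3g}")
```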
Latent heat loss, also known as evaporative heat loss, accounts for a large fraction of heat loss from the body. When the core temperature of the body increases, the body triggers sweat glands in the skin to bring additional moisture to the surface of the skin. The liquid is then transformed into vapor, which removes heat from the surface of the body. The rate of evaporative heat loss is directly related to the vapor pressure at the skin surface and the amount of moisture present on the skin. Therefore, the maximum rate of heat transfer occurs when the skin is completely wet. The body continuously loses water by evaporation, but the most significant amount of heat loss occurs during periods of increased physical activity.
Cooling techniques
Evaporative cooling
Evaporative cooling happens when water vapor is added to the surrounding air. The energy needed to evaporate the water is taken from the air in the form of sensible heat and converted into latent heat, while the air remains at a constant enthalpy. Latent heat describes the amount of heat that is needed to evaporate the liquid; this heat comes from the liquid itself and the surrounding gas and surfaces. The greater the difference between the air's dry-bulb temperature and its wet-bulb temperature, the greater the evaporative cooling effect. When the two temperatures are the same, no net evaporation of water into the air occurs; thus, there is no cooling effect.
Laser cooling
In quantum physics, laser cooling is used to bring atomic and molecular samples to temperatures near absolute zero (−273.15 °C, −459.67 °F) in order to observe unique quantum effects that can occur only at such low temperatures.
Doppler cooling is the most common method of laser cooling.
Sympathetic cooling is a process in which particles of one type cool particles of another type. Typically, atomic ions that can be directly laser-cooled are used to cool nearby ions or atoms. This technique allows the cooling of ions and atoms that cannot be laser-cooled directly.
Magnetic cooling
Magnetic evaporative cooling is a process for lowering the temperature of a group of atoms after they have been pre-cooled by methods such as laser cooling. Magnetic refrigeration cools below 0.3 K by making use of the magnetocaloric effect.
Radiative cooling
Radiative cooling is the process by which a body loses heat by radiation. Outgoing energy is an important effect in the Earth's energy budget. In the case of the Earth-atmosphere system, it refers to the process by which long-wave (infrared) radiation is emitted to balance the absorption of short-wave (visible) energy from the Sun. The thermosphere (top of atmosphere) cools to space primarily by infrared energy radiated by carbon dioxide at 15 μm and by nitric oxide (NO) at 5.3 μm. Convective transport of heat and evaporative transport of latent heat both remove heat from the surface and redistribute it in the atmosphere.
Thermal energy storage
Thermal energy storage includes technologies for collecting and storing energy for later use. It may be employed to balance energy demand between day and nighttime. The thermal reservoir may be maintained at a temperature above or below that of the ambient environment. Applications include space heating, domestic or process hot water systems, or generating electricity.
History
Newton's law of cooling
In 1701, Isaac Newton anonymously published an article in Philosophical Transactions noting (in modern terms) that the rate of temperature change of a body is proportional to the difference in temperature (ΔT, "degrees of heat") between the body and its surroundings. The phrase "temperature change" was later replaced with "heat loss", and the relationship was named Newton's law of cooling. In general, the law is valid only if the temperature difference is small and the heat transfer mechanism remains the same.
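For a body whose surroundings stay at a fixed temperature, Newton's law of cooling integrates to an exponential decay of the temperature difference. The following sketch evaluates that solution; the initial temperature, ambient temperature, and rate constant are assumed example values.

```python
import math

# Newton's law of cooling: dT/dt = -k (T - T_env), whose solution is
# T(t) = T_env + (T0 - T_env) * exp(-k t).  All numerical values below are assumed.

def temperature(t, t0, t_env, k):
    return t_env + (t0 - t_env) * math.exp(-k * t)

t0, t_env, k = 90.0, 20.0, 0.05   # deg C, deg C, 1/min (illustrative)
for t in (0, 10, 30, 60):
    print(f"t = {t:3d} min: T = {temperature(t, t0, t_env, k):.1f} C")
```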
Thermal conduction
In heat conduction, the law is valid only if the thermal conductivity of the warmer body is independent of temperature. The thermal conductivity of most materials is only weakly dependent on temperature, so in general the law holds true.
Thermal convection
In convective heat transfer, the law is valid for forced air or pumped fluid cooling, where the properties of the fluid do not vary strongly with temperature, but it is only approximately true for buoyancy-driven convection, where the velocity of the flow increases with temperature difference.
Thermal radiation
In the case of heat transfer by thermal radiation, Newton's law of cooling holds only for very small temperature differences.
Thermal conductivity of different metals
In a 1780 letter to Benjamin Franklin, the Dutch-born British scientist Jan Ingenhousz relates an experiment which enabled him to rank seven different metals according to their thermal conductivities.
Benjamin Thompson's experiments on heat transfer
During the years 1784–1798, the British physicist Benjamin Thompson (Count Rumford) lived in Bavaria, reorganizing the Bavarian army for the Prince-elector Charles Theodore among other official and charitable duties. The Elector gave Thompson access to the facilities of the Electoral Academy of Sciences in Mannheim. During his years in Mannheim and later in Munich, Thompson made a large number of discoveries and inventions related to heat.
Conductivity experiments
"New Experiments upon Heat"
In 1785 Thompson performed a series of thermal conductivity experiments, which he describes in great detail in the Philosophical Transactions article "New Experiments upon Heat" from 1786. The fact that good electrical conductors are often also good heat conductors and vice versa must have been well known at the time, for Thompson mentions it in passing. He intended to measure the relative conductivities of mercury, water, moist air, "common air" (dry air at normal atmospheric pressure), dry air of various rarefication, and a "Torricellian vacuum".
For these experiments, Thompson employed a thermometer inside a large, closed glass tube. Under the circumstances described, heat may—unbeknownst to Thompson—have been transferred more by radiation than by conduction.
After the experiments, Thompson was surprised to observe that a vacuum was a significantly poorer heat conductor than air, "which of itself is reckoned among the worst", but that there was only a very small difference between common air and rarefied air. He also noted the great difference between dry air and moist air, and the great benefit this affords.
Temperature vs. sensible heat
Thompson concluded with some comments on the important difference between temperature and sensible heat.
Coining of the term "convection"
In the 1830s, in The Bridgewater Treatises, the term convection is attested in a scientific sense. In treatise VIII by William Prout, in the book on chemistry, it says:

This motion of heat takes place in three ways, which a common fire-place very well illustrates. If, for instance, we place a thermometer directly before a fire, it soon begins to rise, indicating an increase of temperature. In this case the heat has made its way through the space between the fire and the thermometer, by the process termed radiation. If we place a second thermometer in contact with any part of the grate, and away from the direct influence of the fire, we shall find that this thermometer also denotes an increase of temperature; but here the heat must have travelled through the metal of the grate, by what is termed conduction. Lastly, a third thermometer placed in the chimney, away from the direct influence of the fire, will also indicate a considerable increase of temperature; in this case a portion of the air, passing through and near the fire, has become heated, and has carried up the chimney the temperature acquired from the fire. There is at present no single term in our language employed to denote this third mode of the propagation of heat; but we venture to propose for that purpose, the term convection, [in footnote: [Latin] Convectio, a carrying or conveying] which not only expresses the leading fact, but also accords very well with the two other terms.

Later, in the same treatise VIII, in the book on meteorology, the concept of convection is also applied to "the process by which heat is communicated through water".
See also
Combined forced and natural convection
Heat capacity
Heat transfer enhancement
Heat transfer physics
Stefan–Boltzmann law
Thermal contact conductance
Thermal physics
Thermal resistance
Citations
References
External links
A Heat Transfer Textbook - (free download).
Thermal-FluidsPedia - An online thermal fluids encyclopedia.
Hyperphysics Article on Heat Transfer - Overview
Interseasonal Heat Transfer - a practical example of how heat transfer is used to heat buildings without burning fossil fuels.
Aspects of Heat Transfer, Cambridge University
Thermal-Fluids Central
Energy2D: Interactive Heat Transfer Simulations for Everyone
Chemical engineering
Mechanical engineering
Unit operations
Transport phenomena
Boltzmann constant
The Boltzmann constant (kB or k) is the proportionality factor that relates the average relative thermal energy of particles in a gas with the thermodynamic temperature of the gas. It occurs in the definitions of the kelvin (K) and the gas constant, in Planck's law of black-body radiation and Boltzmann's entropy formula, and is used in calculating thermal noise in resistors. The Boltzmann constant has dimensions of energy divided by temperature, the same as entropy and heat capacity. It is named after the Austrian scientist Ludwig Boltzmann.
As part of the 2019 revision of the SI, the Boltzmann constant is one of the seven "defining constants" that have been given exact definitions. They are used in various combinations to define the seven SI base units. The Boltzmann constant is defined to be exactly 1.380649×10⁻²³ joules per kelvin.
Roles of the Boltzmann constant
Macroscopically, the ideal gas law states that, for an ideal gas, the product of pressure p and volume V is proportional to the product of amount of substance n and absolute temperature T:
pV = nRT,
where R is the molar gas constant. Introducing the Boltzmann constant as the gas constant per molecule, kB = R/NA (NA being the Avogadro constant), transforms the ideal gas law into an alternative form:
pV = NkBT,
where N is the number of molecules of gas.
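As a quick numerical illustration of the molecular form of the ideal gas law, the sketch below estimates how many molecules occupy one litre of gas at roughly room conditions; the pressure and temperature are assumed example values.

```python
# Alternative form of the ideal gas law, p V = N k_B T, used here to estimate the number
# of gas molecules in one litre of air at roughly room conditions (assumed p and T).

K_B = 1.380649e-23      # J/K, exact by definition since the 2019 SI revision

p = 101_325.0           # Pa  (assumed: standard atmospheric pressure)
v = 1.0e-3              # m^3 (one litre)
t = 293.15              # K   (assumed: 20 deg C)

n_molecules = p * v / (K_B * t)
print(f"N = pV/(k_B T) ~ {n_molecules:.3e} molecules")   # on the order of 2.5e22
```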
Role in the equipartition of energy
Given a thermodynamic system at an absolute temperature T, the average thermal energy carried by each microscopic degree of freedom in the system is kBT/2 (i.e., about 2.07×10⁻²¹ J, or 0.013 eV, at room temperature). This is generally true only for classical systems with a large number of particles, and in which quantum effects are negligible.
In classical statistical mechanics, this average is predicted to hold exactly for homogeneous ideal gases. Monatomic ideal gases (the six noble gases) possess three degrees of freedom per atom, corresponding to the three spatial directions. According to the equipartition of energy this means that there is a thermal energy of (3/2)kBT per atom. This corresponds very well with experimental data. The thermal energy can be used to calculate the root-mean-square speed of the atoms, which turns out to be inversely proportional to the square root of the atomic mass. The root mean square speeds found at room temperature accurately reflect this, ranging from about 1370 m/s for helium, down to about 240 m/s for xenon.
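The root-mean-square speed follows from equating the average kinetic energy to (3/2)kBT, giving v_rms = sqrt(3kBT/m). The sketch below evaluates it for helium and xenon at an assumed room temperature of 298 K, reproducing the order of the speeds quoted above.

```python
import math

# Root-mean-square speed of a monatomic ideal gas, v_rms = sqrt(3 k_B T / m),
# evaluated at an assumed room temperature of 298 K for helium and xenon.

K_B = 1.380649e-23          # J/K
AMU = 1.66053906660e-27     # kg, atomic mass unit

def v_rms(temperature_k, mass_kg):
    return math.sqrt(3.0 * K_B * temperature_k / mass_kg)

for name, mass_amu in [("helium", 4.0026), ("xenon", 131.293)]:
    print(f"{name}: v_rms ~ {v_rms(298.0, mass_amu * AMU):.0f} m/s")
```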
Kinetic theory gives the average pressure p for an ideal gas as
p = (1/3)(N/V) m v̄²,
where v̄² is the mean squared speed of the molecules. Combination with the ideal gas law
pV = NkBT
shows that the average translational kinetic energy is
½ m v̄² = (3/2) kBT.
Considering that the translational motion velocity vector has three degrees of freedom (one for each dimension) gives the average energy per degree of freedom equal to one third of that, i.e. kBT/2.
The ideal gas equation is also obeyed closely by molecular gases; but the form for the heat capacity is more complicated, because the molecules possess additional internal degrees of freedom, as well as the three degrees of freedom for movement of the molecule as a whole. Diatomic gases, for example, possess a total of six degrees of simple freedom per molecule that are related to atomic motion (three translational, two rotational, and one vibrational). At lower temperatures, not all these degrees of freedom may fully participate in the gas heat capacity, due to quantum mechanical limits on the availability of excited states at the relevant thermal energy per molecule.
Role in Boltzmann factors
More generally, systems in equilibrium at temperature T have probability Pi of occupying a state i with energy Ei weighted by the corresponding Boltzmann factor:
Pi = exp(−Ei/kBT) / Z,
where Z is the partition function. Again, it is the energy-like quantity kBT that takes central importance.
Consequences of this include (in addition to the results for ideal gases above) the Arrhenius equation in chemical kinetics.
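A minimal numerical illustration of the Boltzmann factor: the sketch below evaluates the relative occupation exp(−ΔE/kBT) for an assumed energy gap of 0.1 eV at a few temperatures.

```python
import math

# Boltzmann factor: the relative probability of occupying a state that lies an energy
# delta_e above another state scales as exp(-delta_e / (k_B T)).  The 0.1 eV gap is assumed.

K_B = 1.380649e-23      # J/K
EV = 1.602176634e-19    # J per electronvolt

delta_e = 0.1 * EV      # assumed energy gap of 0.1 eV
for t in (100.0, 300.0, 1000.0):
    ratio = math.exp(-delta_e / (K_B * t))
    print(f"T = {t:6.0f} K: relative occupation ~ {ratio:.3e}")
```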
Role in the statistical definition of entropy
In statistical mechanics, the entropy S of an isolated system at thermodynamic equilibrium is defined as the natural logarithm of W, the number of distinct microscopic states available to the system given the macroscopic constraints (such as a fixed total energy E):
S = kB ln W.
This equation, which relates the microscopic details, or microstates, of the system (via W) to its macroscopic state (via the entropy S), is the central idea of statistical mechanics. Such is its importance that it is inscribed on Boltzmann's tombstone.
The constant of proportionality kB serves to make the statistical mechanical entropy equal to the classical thermodynamic entropy of Clausius:
ΔS = ∫ dQ/T.
One could choose instead a rescaled dimensionless entropy in microscopic terms such that
S′ = ln W,   ΔS′ = ∫ dQ/(kBT).
This is a more natural form and this rescaled entropy exactly corresponds to Shannon's subsequent information entropy.
The characteristic energy kBT is thus the energy required to increase the rescaled entropy by one nat.
Thermal voltage
In semiconductors, the Shockley diode equation—the relationship between the flow of electric current and the electrostatic potential across a p–n junction—depends on a characteristic voltage called the thermal voltage, denoted by VT. The thermal voltage depends on absolute temperature T as
VT = kBT/q,
where q is the magnitude of the electrical charge on the electron, with a value of 1.602176634×10⁻¹⁹ C. Equivalently,
VT/T = kB/q ≈ 8.61733×10⁻⁵ V/K.
At room temperature (300 K), VT is approximately 25.85 mV, which can be derived by plugging in the values as follows:
VT = kBT/q = (1.380649×10⁻²³ J/K × 300 K) / (1.602176634×10⁻¹⁹ C) ≈ 25.85 mV.
At the standard state temperature of 25 °C (298.15 K), it is approximately 25.69 mV. The thermal voltage is also important in plasmas and electrolyte solutions (e.g. the Nernst equation); in both cases it provides a measure of how much the spatial distribution of electrons or ions is affected by a boundary held at a fixed voltage.
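The thermal voltage is just kBT/q, so the two figures quoted above are easy to reproduce, as in the sketch below.

```python
# Thermal voltage V_T = k_B T / q, evaluated at two common reference temperatures.

K_B = 1.380649e-23      # J/K
Q_E = 1.602176634e-19   # C, elementary charge

def thermal_voltage(temperature_k):
    return K_B * temperature_k / Q_E

for t in (300.0, 298.15):
    print(f"T = {t:.2f} K: V_T = {thermal_voltage(t) * 1e3:.2f} mV")
```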
History
The Boltzmann constant is named after its 19th-century Austrian discoverer, Ludwig Boltzmann. Although Boltzmann first linked entropy and probability in 1877, the relation was never expressed with a specific constant until Max Planck first introduced k, and gave a more precise value for it (1.346×10⁻²³ J/K, about 2.5% lower than today's figure), in his derivation of the law of black-body radiation in 1900–1901. Before 1900, equations involving Boltzmann factors were not written using the energies per molecule and the Boltzmann constant, but rather using a form of the gas constant R, and macroscopic energies for macroscopic quantities of the substance. The iconic terse form of the equation S = k log W on Boltzmann's tombstone is in fact due to Planck, not Boltzmann. Planck actually introduced it in the same work as his eponymous h.
In 1920, Planck wrote in his Nobel Prize lecture:
This "peculiar state of affairs" is illustrated by reference to one of the great scientific debates of the time. There was considerable disagreement in the second half of the nineteenth century as to whether atoms and molecules were real or whether they were simply a heuristic tool for solving problems. There was no agreement whether chemical molecules, as measured by atomic weights, were the same as physical molecules, as measured by kinetic theory. Planck's 1920 lecture continued:
In versions of the SI prior to the 2019 revision, the Boltzmann constant was a measured quantity rather than a fixed value. Its exact definition also varied over the years due to redefinitions of the kelvin and other SI base units.
In 2017, the most accurate measures of the Boltzmann constant were obtained by acoustic gas thermometry, which determines the speed of sound of a monatomic gas in a triaxial ellipsoid chamber using microwave and acoustic resonances. This decade-long effort was undertaken with different techniques by several laboratories; it is one of the cornerstones of the 2019 revision of the SI. Based on these measurements, the CODATA recommended 1.380649×10⁻²³ J/K to be the final fixed value of the Boltzmann constant to be used for the International System of Units.
As a precondition for redefining the Boltzmann constant, there must be one experimental value with a relative uncertainty below 1 ppm, and at least one measurement from a second technique with a relative uncertainty below 3 ppm. The acoustic gas thermometry reached 0.2 ppm, and Johnson noise thermometry reached 2.8 ppm.
Value in different units
Since kB is a proportionality factor between temperature and energy, its numerical value depends on the choice of units for energy and temperature. The small numerical value of the Boltzmann constant in SI units means a change in temperature by 1 K only changes a particle's energy by a small amount. A change of 1 °C is defined to be the same as a change of 1 K. The characteristic energy kBT is a term encountered in many physical relationships.
The Boltzmann constant sets up a relationship between wavelength and temperature (dividing hc/kB by a wavelength gives a temperature), with one micrometer being related to about 14388 K, and also a relationship between voltage and temperature (kBT in units of eV corresponds to a voltage), with one volt being related to about 11605 K. The ratio of these two temperatures, 14388 K / 11605 K ≈ 1.239842, is the numerical value of hc in units of eV⋅μm.
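These wavelength-temperature and voltage-temperature mappings can be checked directly from the defining constants, as in the sketch below.

```python
# Temperature related to a 1 um wavelength (hc / (k_B * lambda)) and to 1 V (q V / k_B).

H = 6.62607015e-34      # J.s, Planck constant (exact)
C = 299_792_458.0       # m/s, speed of light (exact)
K_B = 1.380649e-23      # J/K, Boltzmann constant (exact)
Q_E = 1.602176634e-19   # C, elementary charge (exact)

t_per_micrometre = H * C / (K_B * 1.0e-6)   # ~14388 K
t_per_volt = Q_E * 1.0 / K_B                # ~11605 K

print(f"1 um  <-> {t_per_micrometre:.1f} K")
print(f"1 V   <-> {t_per_volt:.1f} K")
print(f"ratio ~ {t_per_micrometre / t_per_volt:.6f}  (= hc in eV.um)")
```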
Natural units
The Boltzmann constant provides a mapping from the characteristic microscopic energy E to the macroscopic temperature scale T = E/kB. In fundamental physics, this mapping is often simplified by setting kB to unity, as is done in natural units. This convention means that temperature and energy quantities have the same dimensions. In particular, the SI unit kelvin becomes superfluous, being defined in terms of joules as 1 K = 1.380649×10⁻²³ J. With this convention, temperature is always given in units of energy, and the Boltzmann constant is not explicitly needed in formulas.
This convention simplifies many physical relationships and formulas. For example, the equipartition formula for the energy associated with each classical degree of freedom (kBT/2 above) becomes
Edof = T/2.
As another example, the definition of thermodynamic entropy coincides with the form of information entropy:
S = −Σi Pi ln Pi,
where Pi is the probability of each microstate.
See also
Committee on Data of the International Science Council
Thermodynamic beta
List of scientists whose names are used in physical constants
Notes
References
External links
Draft Chapter 2 for SI Brochure, following redefinitions of the base units (prepared by the Consultative Committee for Units)
Big step towards redefining the kelvin: Scientists find new way to determine Boltzmann constant
Constant
Fundamental constants
Statistical mechanics
Thermodynamics
Classical central-force problem
In classical mechanics, the central-force problem is to determine the motion of a particle in a single central potential field. A central force is a force (possibly negative) that points from the particle directly towards a fixed point in space, the center, and whose magnitude only depends on the distance of the object to the center. In a few important cases, the problem can be solved analytically, i.e., in terms of well-studied functions such as trigonometric functions.
The solution of this problem is important to classical mechanics, since many naturally occurring forces are central. Examples include gravity and electromagnetism as described by Newton's law of universal gravitation and Coulomb's law, respectively. The problem is also important because some more complicated problems in classical physics (such as the two-body problem with forces along the line connecting the two bodies) can be reduced to a central-force problem. Finally, the solution to the central-force problem often makes a good initial approximation of the true motion, as in calculating the motion of the planets in the Solar System.
Basics
The essence of the central-force problem is to solve for the position r of a particle moving under the influence of a central force F, either as a function of time t or as a function of the angle φ relative to the center of force and an arbitrary axis.
Definition of a central force
A conservative central force F has two defining properties. First, it must drive particles either directly towards or directly away from a fixed point in space, the center of force, which is often labeled O. In other words, a central force must act along the line joining O with the present position of the particle. Second, a conservative central force depends only on the distance r between O and the moving particle; it does not depend explicitly on time or other descriptors of position.
This two-fold definition may be expressed mathematically as follows. The center of force O can be chosen as the origin of a coordinate system. The vector r joining O to the present position of the particle is known as the position vector. Therefore, a central force must have the mathematical form
F = F(r) r̂,
where r is the vector magnitude |r| (the distance to the center of force) and r̂ = r/r is the corresponding unit vector. According to Newton's second law of motion, the central force F generates a parallel acceleration a scaled by the mass m of the particle:
F = F(r) r̂ = m a.
For attractive forces, F(r) is negative, because it works to reduce the distance r to the center. Conversely, for repulsive forces, F(r) is positive.
Potential energy
If the central force is a conservative force, then the magnitude F(r) of a central force can always be expressed as the derivative of a time-independent potential energy function U(r):
F(r) = −dU/dr.
Thus, the total energy of the particle—the sum of its kinetic energy and its potential energy U—is a constant; energy is said to be conserved. To show this, it suffices that the work W done by the force depends only on initial and final positions, not on the path taken between them.
Equivalently, it suffices that the curl of the force field F is zero; using the formula for the curl in spherical coordinates,
because the partial derivatives are zero for a central force; the magnitude F does not depend on the angular spherical coordinates θ and φ.
Since the scalar potential V(r) depends only on the distance r to the origin, it has spherical symmetry. In this respect, the central-force problem is analogous to the Schwarzschild geodesics in general relativity and to the quantum mechanical treatments of particles in potentials of spherical symmetry.
One-dimensional problem
If the initial velocity v of the particle is aligned with position vector r, then the motion remains forever on the line defined by r. This follows because the force—and by Newton's second law, also the acceleration a—is also aligned with r. To determine this motion, it suffices to solve the equation
m (d²r/dt²) = F(r).
One solution method is to use the conservation of total energy
½ m (dr/dt)² + U(r) = Etot,   i.e.   |dr/dt| = √( (2/m)(Etot − U(r)) ).
Taking the reciprocal and integrating we get:
t = t0 ± ∫ dr / √( (2/m)(Etot − U(r)) ).
For the remainder of the article, it is assumed that the initial velocity v of the particle is not aligned with position vector r, i.e., that the angular momentum vector L = r × m v is not zero.
Uniform circular motion
Every central force can produce uniform circular motion, provided that the initial radius r and speed v satisfy the equation for the centripetal force
m v²/r = |F(r)|.
If this equation is satisfied at the initial moments, it will be satisfied at all later times; the particle will continue to move in a circle of radius r at speed v forever.
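As one concrete instance of the circular-motion condition mv²/r = |F(r)|, an attractive inverse-square force gives the familiar circular-orbit speed v = sqrt(GM/r). The sketch below evaluates it for an assumed low Earth orbit; the constants are approximate.

```python
import math

# Circular orbit under an inverse-square attraction: m v^2 / r = G M m / r^2, so
# v = sqrt(G M / r).  Earth values below are approximate, assumed figures.

G = 6.674e-11           # m^3 kg^-1 s^-2 (approximate)
M_EARTH = 5.972e24      # kg (approximate)
R_ORBIT = 6.771e6       # m, roughly a 400 km altitude orbit (assumed)

v_circ = math.sqrt(G * M_EARTH / R_ORBIT)
print(f"circular-orbit speed ~ {v_circ / 1e3:.2f} km/s")   # about 7.7 km/s
```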
Relation to the classical two-body problem
The central-force problem concerns an ideal situation (a "one-body problem") in which a single particle is attracted or repelled from an immovable point O, the center of force. However, physical forces are generally between two bodies; and by Newton's third law, if the first body applies a force on the second, the second body applies an equal and opposite force on the first. Therefore, both bodies are accelerated if a force is present between them; there is no perfectly immovable center of force. However, if one body is overwhelmingly more massive than the other, its acceleration relative to the other may be neglected; the center of the more massive body may be treated as approximately fixed. For example, the Sun is overwhelmingly more massive than the planet Mercury; hence, the Sun may be approximated as an immovable center of force, reducing the problem to the motion of Mercury in response to the force applied by the Sun. In reality, however, the Sun also moves (albeit only slightly) in response to the force applied by the planet Mercury.
Such approximations are unnecessary, however. Newton's laws of motion allow any classical two-body problem to be converted into a corresponding exact one-body problem. To demonstrate this, let x1 and x2 be the positions of the two particles, and let r = x1 − x2 be their relative position. Then, by Newton's second law,
The final equation derives from Newton's third law; the force of the second body on the first body (F21) is equal and opposite to the force of the first body on the second (F12). Thus, the equation of motion for r can be written in the form
μ (d²r/dt²) = F(r) r̂,
where μ is the reduced mass
μ = m1m2 / (m1 + m2).
As a special case, the problem of two bodies interacting by a central force can be reduced to a central-force problem of one body.
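A small numerical aside on the reduced mass μ = m1m2/(m1 + m2): when one body is much heavier, μ is nearly the lighter mass, which is why treating the Sun as fixed is such a good approximation. The masses below are approximate values used only for illustration.

```python
# Reduced mass mu = m1 m2 / (m1 + m2) for the equivalent one-body problem.
# The Sun and Mercury masses are approximate, assumed values.

def reduced_mass(m1, m2):
    return m1 * m2 / (m1 + m2)

m_sun = 1.989e30        # kg (approximate)
m_mercury = 3.301e23    # kg (approximate)

mu = reduced_mass(m_sun, m_mercury)
print(f"mu ~ {mu:.4e} kg")
print(f"fractional difference from Mercury's mass ~ {1 - mu / m_mercury:.2e}")
```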
Qualitative properties
Planar motion
The motion of a particle under a central force F always remains in the plane defined by its initial position and velocity. This may be seen by symmetry. Since the position r, velocity v and force F all lie in the same plane, there is never an acceleration perpendicular to that plane, because that would break the symmetry between "above" the plane and "below" the plane.
To demonstrate this mathematically, it suffices to show that the angular momentum of the particle is constant. This angular momentum L is defined by the equation
L = r × p = r × m v,
where m is the mass of the particle and p is its linear momentum. In this equation the times symbol × indicates the vector cross product, not multiplication. Therefore, the angular momentum vector L is always perpendicular to the plane defined by the particle's position vector r and velocity vector v.
In general, the rate of change of the angular momentum L equals the net torque r × F:
dL/dt = m (dr/dt) × v + r × m a = m v × v + r × F.
The first term m v × v is always zero, because the vector cross product is always zero for any two vectors pointing in the same or opposite directions. However, when F is a central force, the remaining term r × F is also zero because the vectors r and F point in the same or opposite directions. Therefore, the angular momentum vector L is constant. Then
r · L = r · (r × m v) = 0.
Consequently, the particle's position r (and hence velocity v) always lies in a plane perpendicular to L.
Polar coordinates
Since the motion is planar and the force radial, it is customary to switch to polar coordinates. In these coordinates, the position vector r is represented in terms of the radial distance r and the azimuthal angle φ:
r = r (cos φ, sin φ).
Taking the first derivative with respect to time yields the particle's velocity vector v:
v = dr/dt = (dr/dt) (cos φ, sin φ) + r (dφ/dt) (−sin φ, cos φ).
Similarly, the second derivative of the particle's position r equals its acceleration a:
a = [d²r/dt² − r (dφ/dt)²] (cos φ, sin φ) + [r d²φ/dt² + 2 (dr/dt)(dφ/dt)] (−sin φ, cos φ).
The velocity v and acceleration a can be expressed in terms of the radial and azimuthal unit vectors. The radial unit vector is obtained by dividing the position vector r by its magnitude r, as described above:
r̂ = (cos φ, sin φ).
The azimuthal unit vector is given by
φ̂ = (−sin φ, cos φ).
Thus, the velocity can be written as
v = (dr/dt) r̂ + r (dφ/dt) φ̂,
whereas the acceleration equals
a = [d²r/dt² − r (dφ/dt)²] r̂ + [r d²φ/dt² + 2 (dr/dt)(dφ/dt)] φ̂.
Specific angular momentum
Since F = ma by Newton's second law of motion and since F is a central force, only the radial component of the acceleration a can be non-zero; the angular component aφ must be zero:
aφ = r d²φ/dt² + 2 (dr/dt)(dφ/dt) = (1/r) d(r² dφ/dt)/dt = 0.
Therefore,
r² (dφ/dt) = constant.
This expression in parentheses is usually denoted h:
h = r² (dφ/dt) = r vφ = |r × v| = v r⊥ = L/m,
which equals the speed v times r⊥, the component of the radius vector perpendicular to the velocity. h is the magnitude of the specific angular momentum because it equals the magnitude L of the angular momentum divided by the mass m of the particle.
For brevity, the angular speed dφ/dt is sometimes written ω:
ω = dφ/dt.
However, it should not be assumed that ω is constant. Since h is constant, ω varies with the radius r according to the formula
ω = h/r².
Since h is constant and r² is positive, the angle φ changes monotonically in any central-force problem, either continuously increasing (h positive) or continuously decreasing (h negative).
Constant areal velocity
The magnitude of h also equals twice the areal velocity, which is the rate at which area is being swept out by the particle relative to the center. Thus, the areal velocity is constant for a particle acted upon by any type of central force; this is Kepler's second law. Conversely, if the motion under a conservative force F is planar and has constant areal velocity for all initial conditions of the radius r and velocity v, then the azimuthal acceleration aφ is always zero. Hence, by Newton's second law, F = ma, the force is a central force.
The constancy of areal velocity may be illustrated by uniform circular and linear motion. In uniform circular motion, the particle moves with constant speed v around the circumference of a circle of radius r. Since the angular velocity ω = v/r is constant, the area swept out in a time Δt equals ½ ω r² Δt; hence, equal areas are swept out in equal times Δt. In uniform linear motion (i.e., motion in the absence of a force, by Newton's first law of motion), the particle moves with constant velocity, that is, with constant speed v along a line. In a time Δt, the particle sweeps out an area ½ v Δt r⊥. The distance r⊥ does not change as the particle moves along the line; it represents the distance of closest approach of the line to the center O (the impact parameter). Since the speed v is likewise unchanging, the areal velocity ½ v r⊥ is a constant of motion; the particle sweeps out equal areas in equal times.
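Constancy of the areal velocity is also easy to check numerically. The sketch below integrates planar motion under an attractive inverse-square force (arbitrary units and assumed initial conditions) with a leapfrog scheme and prints (x·vy − y·vx)/2, which stays fixed because both the radial kicks and the drifts of this scheme leave the angular momentum unchanged.

```python
import math

# Numerical check of the constant areal velocity for a central force: integrate planar
# motion under an attractive inverse-square acceleration (unit mass, arbitrary units,
# assumed initial conditions) with a leapfrog (kick-drift-kick) scheme.

k = 1.0                      # force constant: acceleration magnitude k / r^2, attractive
x, y = 1.0, 0.0              # initial position (assumed)
vx, vy = 0.0, 0.8            # initial velocity (assumed; gives a bound, eccentric orbit)
dt = 1.0e-3

def accel(px, py):
    """Acceleration components of the attractive inverse-square force (per unit mass)."""
    r = math.hypot(px, py)
    a = -k / r**2
    return a * px / r, a * py / r

ax, ay = accel(x, y)
for step in range(100_000):
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay    # half kick
    x += dt * vx;        y += dt * vy           # drift
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay    # half kick
    if step % 25_000 == 0:
        print(f"step {step:6d}: areal velocity = {0.5 * (x * vy - y * vx):.9f}")
```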
Equivalent parallel force field
By a transformation of variables, any central-force problem can be converted into an equivalent parallel-force problem. In place of the ordinary x and y Cartesian coordinates, two new position variables ξ = x/y and η = 1/y are defined, as is a new time coordinate τ
The corresponding equations of motion for ξ and η are given by
Since the rate of change of ξ is constant, its second derivative is zero
Since this is the acceleration in the ξ direction and since F=ma by Newton's second law, it follows that the force in the ξ direction is zero. Hence the force is only along the η direction, which is the criterion for a parallel-force problem. Explicitly, the acceleration in the η direction equals
because the acceleration in the y-direction equals
Here, Fy denotes the y-component of the central force, and y/r equals the cosine of the angle between the y-axis and the radial vector r.
General solution
Binet equation
Since a central force F acts only along the radius, only the radial component of the acceleration is nonzero. By Newton's second law of motion, the magnitude of F equals the mass m of the particle times the magnitude of its radial acceleration:
F(r) = m [d²r/dt² − r (dφ/dt)²] = m d²r/dt² − m h²/r³.
This equation has integration factor dr/dt:
F(r) (dr/dt) = m (dr/dt)(d²r/dt²) − (m h²/r³)(dr/dt) = (d/dt)[ ½ m (dr/dt)² + ½ m h²/r² ].
Integrating yields
∫ F(r) dr = ½ m (dr/dt)² + ½ m h²/r² + constant.
If h is not zero, the independent variable can be changed from t to φ,
d/dt = (h/r²) d/dφ,
giving the new equation of motion
∫ F(r) dr = ½ m (h²/r⁴)(dr/dφ)² + ½ m h²/r² + constant.
Making the change of variables to the inverse radius u = 1/r yields
(du/dφ)² = C − u² − G(u),
where C is a constant of integration and the function G(u) is defined by
G(u) = (2/(m h²)) ∫ F(1/u) du/u².
This equation becomes quasilinear on differentiating by φ:
d²u/dφ² + u = −F(1/u)/(m h² u²).
This is known as the Binet equation. Integrating yields the solution for φ:
φ = φ0 + ∫ du / √( C − u² − G(u) ),
where φ0 is another constant of integration. A central-force problem is said to be "integrable" if this final integration can be solved in terms of known functions.
where ϕ0 is another constant of integration. A central-force problem is said to be "integrable" if this final integration can be solved in terms of known functions.
Orbit of the particle
Take the scalar product of Newton's second law of motion with the particle's velocity, where the force is obtained from the potential energy, Fi = −∂U/∂xi:
m ẍi ẋi = Fi ẋi = −(∂U/∂xi) ẋi,
gives
(d/dt)(½ m ẋi ẋi) = −dU/dt,
where summation is assumed over the spatial Cartesian index i, and we have used the fact that (d/dt)(ẋi ẋi) = 2 ẋi ẍi and used the chain rule
dU/dt = (∂U/∂xi)(dxi/dt).
Rearranging,
(d/dt)(½ m ẋi ẋi + U) = 0.
The term in parentheses on the left hand side is a constant; label this Etot, the total mechanical energy. Clearly, this is the sum of the kinetic energy and the potential energy.
Furthermore, if the potential is central, U = U(r), then the force is along the radial direction. In this case, the cross product of Newton's second law of motion with the particle's position vector must vanish since the cross product of two parallel vectors is zero:
m r × (d²r/dt²) = r × F = 0,
but r × (d²r/dt²) = (d/dt)(r × dr/dt) − (dr/dt) × (dr/dt), and (dr/dt) × (dr/dt) = 0 (cross product of parallel vectors), so
(d/dt)(m r × dr/dt) = 0.
The term in parentheses on the left hand side is a constant; label this with the angular momentum,
L = m r × dr/dt.
In particular, in polar coordinates,
L = m r² (dφ/dt),   or   dφ/dt = L/(m r²).
Further, v² = (dr/dt)² + r² (dφ/dt)², so the energy equation may be simplified with the angular momentum as
Etot = ½ m (dr/dt)² + L²/(2 m r²) + U(r).
This indicates that the angular momentum contributes an effective potential energy
Ueff(r) = L²/(2 m r²) + U(r).
Solve this equation for dr/dt:
dr/dt = √( (2/m)(Etot − U(r)) − L²/(m² r²) ),
which may be converted to the derivative of r with respect to the azimuthal angle φ as
dr/dφ = (dr/dt)/(dφ/dt) = (m r²/L)(dr/dt).
This is a separable first order differential equation. Integrating it yields the formula
φ = φ0 + ∫ (L/r²) dr / √( 2m(Etot − U(r)) − L²/r² ).
Changing the variable of integration to the inverse radius u = 1/r yields the integral
φ = φ0 + ∫ du / √( 2mEtot/L² − 2mU(1/u)/L² − u² ),
which expresses the above constants C = 2mEtot/L² and G(u) = 2mU(1/u)/L² in terms of the total energy Etot and the potential energy U(r).
Turning points and closed orbits
The rate of change of r is zero whenever the effective potential energy equals the total energy:
Etot = U(r) + L²/(2 m r²).
The points where this equation is satisfied are known as turning points. The orbit on either side of a turning point is symmetrical; in other words, if the azimuthal angle is defined such that φ = 0 at the turning point, then the orbit is the same in opposite directions, r(φ) = r(−φ).
If there are two turning points such that the radius r is bounded between rmin and rmax, then the motion is contained within an annulus of those radii. As the radius varies from the one turning point to the other, the change in azimuthal angle φ equals
Δφ = ∫ from rmin to rmax of (L/r²) dr / √( 2m(Etot − U(r)) − L²/r² ).
The orbit will close upon itself provided that Δφ equals a rational fraction of 2π, i.e.,
Δφ = 2π (n/2m) = π n/m,
where m and n are integers. In that case, the radius oscillates exactly m times while the azimuthal angle φ makes exactly n revolutions. In general, however, Δφ/2π will not be such a rational number, and thus the orbit will not be closed. In that case, the particle will eventually pass arbitrarily close to every point within the annulus. Two types of central force always produce closed orbits: F(r) = αr (a linear force) and F(r) = α/r² (an inverse-square law). As shown by Bertrand, these two central forces are the only ones that guarantee closed orbits.
In general, if the angular momentum L is nonzero, the L²/(2mr²) term prevents the particle from falling into the origin, unless the effective potential energy goes to negative infinity in the limit of r going to zero. Therefore, if there is a single turning point, the orbit generally goes to infinity; the turning point corresponds to a point of minimum radius.
Specific solutions
Kepler problem
In classical physics, many important forces follow an inverse-square law, such as gravity or electrostatics. The general mathematical form of such inverse-square central forces is
F(r) = α/r²,
for a constant α, which is negative for an attractive force and positive for a repulsive one.
This special case of the classical central-force problem is called the Kepler problem. For an inverse-square force, the Binet equation derived above is linear:
d²u/dφ² + u = −αm/L².
The solution of this equation is
u(φ) = −(αm/L²) [1 + e cos(φ − φ0)],
which shows that the orbit is a conic section of eccentricity e; here, φ0 is the initial angle, and the center of force is at the focus of the conic section. Using the half-angle formula for sine, this solution can also be written as
where u1 and u2 are constants, with u2 larger than u1. The two versions of the solution are related by the equations
and
Since the sin² function is always greater than zero, u2 is the largest possible value of u and the inverse of the smallest possible value of r, i.e., the distance of closest approach (periapsis). Since the radial distance r cannot be a negative number, neither can its inverse u; therefore, u2 must be a positive number. If u1 is also positive, it is the smallest possible value of u, which corresponds to the largest possible value of r, the distance of furthest approach (apoapsis). If u1 is zero or negative, then the smallest possible value of u is zero (the orbit goes to infinity); in this case, the only relevant values of φ are those that make u positive.
For an attractive force (α < 0), the orbit is an ellipse, a hyperbola or parabola, depending on whether u1 is positive, negative, or zero, respectively; this corresponds to an eccentricity e less than one, greater than one, or equal to one. For a repulsive force (α > 0), u1 must be negative, since u2 is positive by definition and their sum is negative; hence, the orbit is a hyperbola. Naturally, if no force is present (α=0), the orbit is a straight line.
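This classification can be expressed compactly through the eccentricity written in terms of the constants of motion, e = sqrt(1 + 2EL²/(mα²)), together with the semi-latus rectum p = L²/(m|α|). The sketch below evaluates both for a few assumed energies and prints the resulting orbit type; the numerical values are arbitrary and in consistent units.

```python
import math

# Classifying a Kepler orbit from its constants of motion, for an attractive
# inverse-square force F(r) = alpha / r^2 with alpha < 0 (potential U = alpha / r):
#   semi-latus rectum  p = L^2 / (m |alpha|)
#   eccentricity       e = sqrt(1 + 2 E L^2 / (m alpha^2))

def kepler_orbit(m, alpha, energy, angular_momentum):
    p = angular_momentum**2 / (m * abs(alpha))
    e = math.sqrt(max(0.0, 1.0 + 2.0 * energy * angular_momentum**2 / (m * alpha**2)))
    if math.isclose(e, 1.0):
        kind = "parabola"
    elif e < 1.0:
        kind = "ellipse"
    else:
        kind = "hyperbola"
    return p, e, kind

m, alpha, L = 1.0, -1.0, 0.8            # assumed illustrative values
for energy in (-0.68, 0.0, 0.5):
    p, e, kind = kepler_orbit(m, alpha, energy, L)
    print(f"E = {energy:+.2f}: p = {p:.3f}, e = {e:.3f} -> {kind}")
```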
Central forces with exact solutions
The Binet equation for u(φ) can be solved numerically for nearly any central force F(1/u). However, only a handful of forces result in formulae for u in terms of known functions. As derived above, the solution for φ can be expressed as an integral over u
A central-force problem is said to be "integrable" if this integration can be solved in terms of known functions.
If the force is a power law, i.e., if F(r) = αrⁿ, then u can be expressed in terms of circular functions and/or elliptic functions if n equals 1, −2, −3 (circular functions) and −7, −5, −4, 0, 3, 5, −3/2, −5/2, −1/3, −5/3 and −7/3 (elliptic functions). Similarly, only six possible linear combinations of power laws give solutions in terms of circular and elliptic functions.
The following special cases of the first two force types always result in circular functions.
The special case was mentioned by Newton, in corollary 1 to proposition VII of the principia, as the force implied by circular orbits passing through the point of attraction.
Revolving orbits
The term r⁻³ occurs in all the force laws above, indicating that the addition of the inverse-cube force does not influence the solubility of the problem in terms of known functions. Newton showed that, with adjustments in the initial conditions, the addition of such a force does not affect the radial motion of the particle, but multiplies its angular motion by a constant factor k. An extension of Newton's theorem was discovered in 2000 by Mahomed and Vawda.
Assume that a particle is moving under an arbitrary central force F1(r), and let its radius r and azimuthal angle φ be denoted as r(t) and φ1(t) as a function of time t. Now consider a second particle with the same mass m that shares the same radial motion r(t), but one whose angular speed is k times faster than that of the first particle. In other words, the azimuthal angles of the two particles are related by the equation φ2(t) = k φ1(t). Newton showed that the force acting on the second particle equals the force F1(r) acting on the first particle, plus an inverse-cube central force:
F2(r) = F1(r) + (L1²/(m r³)) (1 − k²),
where L1 is the magnitude of the first particle's angular momentum.
If k² is greater than one, F2 − F1 is a negative number; thus, the added inverse-cube force is attractive. Conversely, if k² is less than one, F2 − F1 is a positive number; the added inverse-cube force is repulsive. If k is an integer such as 3, the orbit of the second particle is said to be a harmonic of the first particle's orbit; by contrast, if k is the inverse of an integer, such as 1/3, the second orbit is said to be a subharmonic of the first orbit.
Historical development
Newton's derivation
The classical central-force problem was solved geometrically by Isaac Newton in his Philosophiæ Naturalis Principia Mathematica, in which Newton introduced his laws of motion. Newton used an equivalent of leapfrog integration to convert the continuous motion to a discrete one, so that geometrical methods may be applied. In this approach, the position of the particle is considered only at evenly spaced time points. For illustration, the particle in Figure 10 is located at point A at time t = 0, at point B at time t = Δt, at point C at time t = 2Δt, and so on for all times t = nΔt, where n is an integer. The velocity is assumed to be constant between these time points. Thus, the vector rAB = rB − rA equals Δt times the velocity vector vAB (red line), whereas rBC = rC − rB equals vBCΔt (blue line). Since the velocity is constant between points, the force is assumed to act instantaneously at each new position; for example, the force acting on the particle at point B instantly changes the velocity from vAB to vBC. The difference vector Δr = rBC − rAB equals ΔvΔt (green line), where Δv = vBC − vAB is the change in velocity resulting from the force at point B. Since the acceleration a is parallel to Δv and since F = ma, the force F must be parallel to Δv and Δr. If F is a central force, it must be parallel to the vector rB from the center O to the point B (dashed green line); in that case, Δr is also parallel to rB.
If no force acts at point B, the velocity is unchanged, and the particle arrives at point K at time t = 2Δt. The areas of the triangles OAB and OBK are equal, because they share the same base (rAB) and height (r⊥). If Δr is parallel to rB, the triangles OBK and OBC are likewise equal, because they share the same base (rB) and the height is unchanged. In that case, the areas of the triangles OAB and OBC are the same, and the particle sweeps out equal areas in equal time. Conversely, if the areas of all such triangles are equal, then Δr must be parallel to rB, from which it follows that F is a central force. Thus, a particle sweeps out equal areas in equal times if and only if F is a central force.
Alternative derivations of the equations of motion
Lagrangian mechanics
The formula for the radial force may also be obtained using Lagrangian mechanics. In polar coordinates, the Lagrangian L of a single particle in a potential energy field U(r) is given by
L = ½ m (dr/dt)² + ½ m r² (dφ/dt)² − U(r).
Then Lagrange's equations of motion
(d/dt)(∂L/∂(dr/dt)) = ∂L/∂r
take the form
m d²r/dt² = m r (dφ/dt)² − dU/dr = m r (dφ/dt)² + F(r),
since the magnitude F(r) of the radial force equals the negative derivative of the potential energy U(r) in the radial direction.
Hamiltonian mechanics
The radial force formula may also be derived using Hamiltonian mechanics. In polar coordinates, the Hamiltonian can be written as
H = (1/(2m)) (pr² + pφ²/r²) + U(r).
Since the azimuthal angle φ does not appear in the Hamiltonian, its conjugate momentum pφ is a constant of the motion. This conjugate momentum is the magnitude L of the angular momentum, as shown by the Hamiltonian equation of motion for φ:
dφ/dt = ∂H/∂pφ = pφ/(m r²) = L/(m r²).
The corresponding equation of motion for r is
dr/dt = ∂H/∂pr = pr/m.
Taking the second derivative of r with respect to time and using Hamilton's equation of motion for pr, dpr/dt = −∂H/∂r = pφ²/(m r³) − dU/dr, yields the radial-force equation
d²r/dt² = L²/(m² r³) − (1/m) dU/dr = L²/(m² r³) + F(r)/m.
Hamilton-Jacobi equation
The orbital equation can be derived directly from the Hamilton–Jacobi equation. Adopting the radial distance r and the azimuthal angle φ as the coordinates, the Hamilton-Jacobi equation for a central-force problem can be written
(1/(2m)) (dSr/dr)² + (1/(2m r²)) (dSφ/dφ)² + U(r) = Etot,
where S = Sφ(φ) + Sr(r) − Etot t is Hamilton's principal function, and Etot and t represent the total energy and time, respectively. This equation may be solved by successive integrations of ordinary differential equations, beginning with the φ equation
dSφ/dφ = pφ = L,
where pφ is a constant of the motion equal to the magnitude of the angular momentum L. Thus, Sφ(φ) = Lφ and the Hamilton–Jacobi equation becomes
(1/(2m)) (dSr/dr)² + L²/(2m r²) + U(r) = Etot.
Integrating this equation for Sr yields
Sr(r) = ∫ √( 2m(Etot − U(r)) − L²/r² ) dr.
Taking the derivative of S with respect to L yields the orbital equation derived above:
φ0 = ∂S/∂L = φ − ∫ (L/r²) dr / √( 2m(Etot − U(r)) − L²/r² ).
See also
Schwarzschild geodesics, the analog in general relativity
Particle in a spherically symmetric potential, the analog in quantum mechanics
Hydrogen-like atom, the Kepler problem in quantum mechanics
Inverse square potential
Notes
References
Bibliography
External links
Two-body Central Force Problems by D. E. Gary of the New Jersey Institute of Technology
Motion in a Central-Force Field by A. Brizard of Saint Michael's College
Motion under the Influence of a Central Force by G. W. Collins, II of Case Western Reserve University
Video lecture by W. H. G. Lewin of the Massachusetts Institute of Technology
Classical mechanics
Articles containing video clips
Linkage (mechanical)
A mechanical linkage is an assembly of systems connected so as to manage forces and movement. The movement of a body, or link, is studied using geometry so the link is considered to be rigid. The connections between links are modeled as providing ideal movement, pure rotation or sliding for example, and are called joints. A linkage modeled as a network of rigid links and ideal joints is called a kinematic chain.
Linkages may be constructed from open chains, closed chains, or a combination of open and closed chains. Each link in a chain is connected by a joint to one or more other links. Thus, a kinematic chain can be modeled as a graph in which the links are paths and the joints are vertices, which is called a linkage graph.
The movement of an ideal joint is generally associated with a subgroup of the group of Euclidean displacements. The number of parameters in the subgroup is called the degrees of freedom (DOF) of the joint.
Mechanical linkages are usually designed to transform a given input force and movement into a desired output force and movement. The ratio of the output force to the input force is known as the mechanical advantage of the linkage, while the ratio of the input speed to the output speed is known as the speed ratio. The speed ratio and mechanical advantage are defined so they yield the same number in an ideal linkage.
A kinematic chain, in which one link is fixed or stationary, is called a mechanism, and a linkage designed to be stationary is called a structure.
History
Archimedes applied geometry to the study of the lever. Into the 1500s the work of Archimedes and Hero of Alexandria were the primary sources of machine theory. It was Leonardo da Vinci who brought an inventive energy to machines and mechanism.
In the mid-1700s the steam engine was of growing importance, and James Watt realized that efficiency could be increased by using different cylinders for expansion and condensation of the steam. This drove his search for a linkage that could transform rotation of a crank into a linear slide, and resulted in his discovery of what is called Watt's linkage. This led to the study of linkages that could generate straight lines, even if only approximately; and inspired the mathematician J. J. Sylvester, who lectured on the Peaucellier linkage, which generates an exact straight line from a rotating crank.
The work of Sylvester inspired A. B. Kempe, who showed that linkages for addition and multiplication could be assembled into a system that traced a given algebraic curve. Kempe's design procedure has inspired research at the intersection of geometry and computer science.
In the late 1800s F. Reuleaux, A. B. W. Kennedy, and L. Burmester formalized the analysis and synthesis of linkage systems using descriptive geometry, and P. L. Chebyshev introduced analytical techniques for the study and invention of linkages.
In the mid-1900s F. Freudenstein and G. N. Sandor used the newly developed digital computer to solve the loop equations of a linkage and determine its dimensions for a desired function, initiating the computer-aided design of linkages. Within two decades these computer techniques were integral to the analysis of complex machine systems and the control of robot manipulators.
R. E. Kaufman combined the computer's ability to rapidly compute the roots of polynomial equations with a graphical user interface to unite Freudenstein's techniques with the geometrical methods of Reuleaux and Burmester and form KINSYN, an interactive computer graphics system for linkage design.
The modern study of linkages includes the analysis and design of articulated systems that appear in robots, machine tools, and cable driven and tensegrity systems. These techniques are also being applied to biological systems and even the study of proteins.
Mobility
The configuration of a system of rigid links connected by ideal joints is defined by a set of configuration parameters, such as the angles around a revolute joint and the slides along prismatic joints measured between adjacent links. The geometric constraints of the linkage allow calculation of all of the configuration parameters in terms of a minimum set, which are the input parameters. The number of input parameters is called the mobility, or degree of freedom, of the linkage system.
A system of n rigid bodies moving in space has 6n degrees of freedom measured relative to a fixed frame. Including this frame in the count of bodies, so that mobility is independent of the choice of the fixed frame, gives M = 6(N − 1), where N = n + 1 is the number of moving bodies plus the fixed body.
Joints that connect bodies in this system remove degrees of freedom and reduce mobility. Specifically, hinges and sliders each impose five constraints and therefore remove five degrees of freedom. It is convenient to define the number of constraints c that a joint imposes in terms of the joint's freedom f, where c = 6 − f. In the case of a hinge or slider, which are one degree of freedom joints, we have f = 1 and therefore c = 6 − 1 = 5.
Thus, the mobility of a linkage system formed from n moving links and j joints each with fi, i = 1, ..., j, degrees of freedom can be computed as
M = 6(N − 1 − j) + Σi fi,
where N includes the fixed link. This is known as Kutzbach–Grübler's equation.
There are two important special cases: (i) a simple open chain, and (ii) a simple closed chain. A simple open chain consists of n moving links connected end to end by j joints, with one end connected to a ground link. Thus, in this case N = j + 1 and the mobility of the chain is
M = Σi fi.
For a simple closed chain, n moving links are connected end-to-end by n + 1 joints such that the two ends are connected to the ground link forming a loop. In this case, we have N = j and the mobility of the chain is
M = Σi fi − 6.
An example of a simple open chain is a serial robot manipulator. These robotic systems are constructed from a series of links connected by six one degree-of-freedom revolute or prismatic joints, so the system has six degrees of freedom.
An example of a simple closed chain is the RSSR (revolute-spherical-spherical-revolute) spatial four-bar linkage. The sum of the freedom of these joints is eight, so the mobility of the linkage is two, where one of the degrees of freedom is the rotation of the coupler around the line joining the two S joints.
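A minimal sketch of the Kutzbach–Grübler count used above; the function name is arbitrary, and the two checks reproduce the serial six-joint manipulator (M = 6) and the RSSR chain (M = 2) just discussed.

```python
# Mobility (degrees of freedom) of a spatial linkage from the Kutzbach-Gruebler equation,
# M = 6 (N - 1 - j) + sum(f_i), where N counts the fixed link.

def mobility_spatial(n_links_including_ground, joint_freedoms):
    j = len(joint_freedoms)
    return 6 * (n_links_including_ground - 1 - j) + sum(joint_freedoms)

# A serial robot arm: 6 moving links + ground, six 1-DOF joints -> M = 6.
print(mobility_spatial(7, [1, 1, 1, 1, 1, 1]))

# The RSSR spatial four-bar: 3 moving links + ground, joints R(1), S(3), S(3), R(1) -> M = 2.
print(mobility_spatial(4, [1, 3, 3, 1]))
```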
Planar and spherical movement
It is common practice to design the linkage system so that the movement of all of the bodies are constrained to lie on parallel planes, to form what is known as a planar linkage. It is also possible to construct the linkage system so that all of the bodies move on concentric spheres, forming a spherical linkage. In both cases, the degrees of freedom of the link is now three rather than six, and the constraints imposed by joints are now c = 3 − f.
In this case, the mobility formula is given by
M = 3(N − 1 − j) + Σi fi,
and we have the special cases:
planar or spherical simple open chain, M = Σi fi;
planar or spherical simple closed chain, M = Σi fi − 3.
An example of a planar simple closed chain is the planar four-bar linkage, which is a four-bar loop with four one degree-of-freedom joints and therefore has mobility M = 1.
Joints
The most familiar joints for linkage systems are the revolute, or hinged, joint denoted by an R, and the prismatic, or sliding, joint denoted by a P. Most other joints used for spatial linkages are modeled as combinations of revolute and prismatic joints. For example,
the cylindric joint consists of an RP or PR serial chain constructed so that the axes of the revolute and prismatic joints are parallel,
the universal joint consists of an RR serial chain constructed such that the axes of the revolute joints intersect at a 90° angle;
the spherical joint consists of an RRR serial chain for which each of the hinged joint axes intersect in the same point;
the planar joint can be constructed either as a planar RRR, RPR, and PPR serial chain that has three degrees-of-freedom.
Analysis and synthesis of linkages
The primary mathematical tool for the analysis of a linkage is known as the kinematic equations of the system. This is a sequence of rigid body transformation along a serial chain within the linkage that locates a floating link relative to the ground frame. Each serial chain within the linkage that connects this floating link to ground provides a set of equations that must be satisfied by the configuration parameters of the system. The result is a set of non-linear equations that define the configuration parameters of the system for a set of values for the input parameters.
Freudenstein introduced a method to use these equations for the design of a planar four-bar linkage to achieve a specified relation between the input parameters and the configuration of the linkage. Another approach to planar four-bar linkage design was introduced by L. Burmester, and is called Burmester theory.
Planar one degree-of-freedom linkages
The mobility formula provides a way to determine the number of links and joints in a planar linkage that yields a one degree-of-freedom linkage. If we require the mobility of a planar linkage to be M = 1 and fi = 1, the result is
M = 3(N − 1 − j) + j = 1,
or
j = (3/2)N − 2.
This formula shows that the linkage must have an even number of links, so we have the following link and joint counts (a short numerical check appears after this list):
N = 2, j = 1: this is a two-bar linkage known as the lever;
N = 4, j = 4: this is the four-bar linkage;
N = 6, j = 7: this is a six-bar linkage; it has two links that have three joints, called ternary links, and there are two topologies of this linkage depending on how these links are connected. In the Watt topology, the two ternary links are connected by a joint. In the Stephenson topology, the two ternary links are connected by binary links;
N = 8, j = 10: the eight-bar linkage has 16 different topologies;
N = 10, j = 13: the 10-bar linkage has 230 different topologies,
N = 12, j = 16: the 12-bar has 6856 topologies.
See Sunkari and Schmidt for the number of 14- and 16-bar topologies, as well as the number of linkages that have two, three and four degrees-of-freedom.
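The even-link pattern in the list above follows directly from j = (3N − 4)/2; the brief check below (helper name arbitrary) reproduces the link and joint counts.

```python
# Link/joint counts for planar one-DOF linkages with only 1-DOF joints:
# setting M = 3(N - 1 - j) + j = 1 gives j = (3 N - 4) / 2, so N must be even.

def joints_for_one_dof(n_links):
    if n_links % 2:
        raise ValueError("a planar one-DOF linkage with 1-DOF joints needs an even link count")
    return (3 * n_links - 4) // 2

for n in (2, 4, 6, 8, 10, 12):
    print(f"N = {n:2d} links -> j = {joints_for_one_dof(n)} joints")
```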
The planar four-bar linkage is probably the simplest and most common linkage. It is a one degree-of-freedom system that transforms an input crank rotation or slider displacement into an output rotation or slide.
Examples of four-bar linkages are:
the crank-rocker, in which the input crank fully rotates and the output link rocks back and forth;
the slider-crank, in which the input crank rotates and the output slide moves back and forth;
drag-link mechanisms, in which the input crank fully rotates and drags the output crank in a fully rotational movement.
Biological linkages
Linkage systems are widely distributed in animals. The most thorough overview of the different types of linkages in animals has been provided by Mees Muller, who also designed a new classification system which is especially well suited for biological systems. A well-known example is the cruciate ligaments of the knee.
An important difference between biological and engineering linkages is that revolving bars are rare in biology and that usually only a small part of the theoretically possible range of motion is used, due to additional functional constraints (especially the necessity to deliver blood). Biological linkages frequently are compliant. Often one or more bars are formed by ligaments, and often the linkages are three-dimensional. Coupled linkage systems are known, as well as five-, six-, and even seven-bar linkages. Four-bar linkages are by far the most common though.
Linkages can be found in joints, such as the knee of tetrapods, the hock of sheep, and the cranial mechanism of birds and reptiles. The latter is responsible for the upward motion of the upper bill in many birds.
Linkage mechanisms are especially frequent and manifold in the head of bony fishes, such as wrasses, which have evolved many specialized feeding mechanisms. Especially advanced are the linkage mechanisms of jaw protrusion. For suction feeding a system of linked four-bar linkages is responsible for the coordinated opening of the mouth and 3-D expansion of the buccal cavity. Other linkages are responsible for protrusion of the premaxilla.
Linkages are also present as locking mechanisms, such as in the knee of the horse, which enables the animal to sleep standing, without active muscle contraction. In pivot feeding, used by certain bony fishes, a four-bar linkage at first locks the head in a ventrally bent position by the alignment of two bars. The release of the locking mechanism jets the head up and moves the mouth toward the prey within 5–10 ms.
Examples
Pantograph (four-bar, two DOF)
Five bar linkages often have meshing gears for two of the links, creating a one DOF linkage. They can provide greater power transmission with more design flexibility than four-bar linkages.
Jansen's linkage is an eight-bar leg mechanism that was invented by kinetic sculptor Theo Jansen.
Klann linkage is a six-bar linkage that forms a leg mechanism;
Toggle mechanisms are four-bar linkages that are dimensioned so that they can fold and lock. The toggle positions are determined by the collinearity of two of the moving links. The linkage is dimensioned so that the linkage reaches a toggle position just before it folds. The high mechanical advantage allows the input crank to deform the linkage just enough to push it beyond the toggle position. This locks the input in place. Toggle mechanisms are used as clamps.
Straight line mechanisms
James Watt's parallel motion and Watt's linkage
Peaucellier–Lipkin linkage, the first planar linkage to create a perfect straight line output from rotary input; eight-bar, one DOF.
A Scott Russell linkage, which converts linear motion to (almost) linear motion along a line perpendicular to the input.
Chebyshev linkage, which provides nearly straight motion of a point with a four-bar linkage.
Hoekens linkage, which provides nearly straight motion of a point with a four-bar linkage.
Sarrus linkage, which provides motion of one surface in a direction normal to another.
Hart's inversor, which provides a perfect straight line motion without sliding guides.
Gallery
See also
Assur Groups
Dwell mechanism
Deployable structure
Engineering mechanics
Four-bar linkage
Mechanical function generator
Kinematics
Kinematic coupling
Kinematic pair
Kinematic synthesis
Kinematic models in Mathcad
Leg mechanism
Lever
Machine
Outline of machines
Overconstrained mechanism
Parallel motion
Reciprocating motion
Slider-crank linkage
Three-point hitch
References
Further reading
— Connections between mathematical and real-world mechanical models, historical development of precision machining, some practical advice on fabricating physical models, with ample illustrations and photographs
Hartenberg, R.S. & J. Denavit (1964) Kinematic synthesis of linkages, New York: McGraw-Hill — Online link from Cornell University.
— "Linkages: a peculiar fascination" (Chapter 14) is a discussion of mechanical linkage usage in American mathematical education, includes extensive references
How to Draw a Straight Line — Historical discussion of linkage design from Cornell University
Parmley, Robert. (2000). "Section 23: Linkage." Illustrated Sourcebook of Mechanical Components. New York: McGraw Hill. Drawings and discussion of various linkages.
Sclater, Neil. (2011). "Linkages: Drives and Mechanisms." Mechanisms and Mechanical Devices Sourcebook. 5th ed. New York: McGraw Hill. pp. 89–129. Drawings and designs of various linkages.
External links
Kinematic Models for Design Digital Library (KMODDL) — Major web resource for kinematics. Movies and photos of hundreds of working mechanical-systems models in the Reuleaux Collection of Mechanisms and Machines at Cornell University, plus 5 other major collections. Includes an e-book library of dozens of classic texts on mechanical design and engineering. Includes CAD models and stereolithographic files for selected mechanisms.
Digital Mechanism and Gear Library (DMG-Lib) (in German: Digitale Mechanismen- und Getriebebibliothek) — Online library about linkages and cams (mostly in German)
Linkage calculations
Introductory linkage lecture
Virtual Mechanisms Animated by Java
Linkage-based Drawing Apparatus by Robert Howsare
(ASOM) Analysis, synthesis and optimization of multibar linkages
Linkage animations on mechanicaldesign101.com include planar and spherical four-bar and six-bar linkages.
Animations of planar and spherical four-bar linkages.
Animation of Bennett's linkage.
Example of a six-bar function generator that computes the elevation angle for a given range.
Animations of six-bar linkage for a bicycle suspension.
A variety of six-bar linkage designs.
Introduction to Linkages
An open source planar linkage mechanism simulation and mechanical synthesis system.
Mechanisms (engineering)
CRC Handbook of Chemistry and Physics
The CRC Handbook of Chemistry and Physics is a comprehensive one-volume reference resource for science research. First published in 1914, it is currently in its 104th edition, published in 2023. It is known colloquially among chemists as the "Rubber Bible", as CRC originally stood for "Chemical Rubber Company".
As late as the 1962–1963 edition (3604 pages), the Handbook contained myriad information for every branch of science and engineering. Sections in that edition include: Mathematics, Properties and Physical Constants, Chemical Tables, Properties of Matter, Heat, Hygrometric and Barometric Tables, Sound, Quantities and Units, and Miscellaneous. Mathematical Tables from Handbook of Chemistry and Physics was originally published as a supplement to the handbook up to the 9th edition (1952); afterwards, the 10th edition (1956) was published separately as CRC Standard Mathematical Tables. Earlier editions included sections such as "Antidotes of Poisons", "Rules for Naming Organic Compounds", "Surface Tension of Fused Salts", "Percent Composition of Anti-Freeze Solutions", "Spark-gap Voltages", "Greek Alphabet", "Musical Scales", "Pigments and Dyes", "Comparison of Tons and Pounds", "Twist Drill and Steel Wire Gauges" and "Properties of the Earth's Atmosphere at Elevations up to 160 Kilometers". Later editions focus almost exclusively on chemistry and physics topics and eliminated much of the more "common" information.
CRC Press is a leading publisher of engineering handbooks, references, and textbooks across virtually all scientific disciplines.
Contents by edition
7th edition
Mathematical Tables
General Chemical Tables
Properties of Matter
Heat
Hygrometric and Barometric Tables
Sound
Electricity and Magnetism
Light
Miscellaneous Tables
Definitions and Formulae
Laboratory Arts and Recipes
Photographic Formulae
Measures and Units
Wire Tables
Apparatus Lists
Problems
Index
22nd–44th editions
Section A: Mathematical Tables
Section B: Properties and Physical Constants
Section C: General Chemical Tables/Specific Gravity and Properties of Matter
Section D: Heat and Hygrometry/Sound/Electricity and Magnetism/Light
Section E: Quantities and Units/Miscellaneous
Index
45th–70th editions
Section A: Mathematical Tables
Section B: Elements and Inorganic Compounds
Section C: Organic Compounds
Section D: General Chemical
Section E: General Physical Constants
Section F: Miscellaneous
Index
71st–102nd editions
Section 1: Basic Constants, Units, and Conversion Factors
Section 2: Symbols, Terminology, and Nomenclature
Section 3: Physical Constants of Organic Compounds
Section 4: Properties of the Elements and Inorganic Compounds
Section 5: Thermochemistry, Electrochemistry, and Kinetics (or Thermo, Electro & Solution Chemistry)
Section 6: Fluid Properties
Section 7: Biochemistry
Section 8: Analytical Chemistry
Section 9: Molecular Structure and Spectroscopy
Section 10: Atomic, Molecular, and Optical Physics
Section 11: Nuclear and Particle Physics
Section 12: Properties of Solids
Section 13: Polymer Properties
Section 14: Geophysics, Astronomy, and Acoustics
Section 15: Practical Laboratory Data
Section 16: Health and Safety Information
Appendix A: Mathematical Tables
Appendix B: CAS Registry Numbers and Molecular Formulas of Inorganic Substances (72nd–75th)
Appendix C: Sources of Physical and Chemical Data (83rd–)
Index
See also
CRC Standard Mathematical Tables
References
External links
PDF copy of the 8th edition, published in 1920
Handbook of Chemistry and Physics online
Tables Relocated or Removed from CRC Handbook of Chemistry and Physics, 71st through 87th Editions
Handbooks and manuals
Chemistry books
Physics books
Encyclopedias of science
CRC Press books
1914 non-fiction books
Phase space
The phase space of a physical system is the set of all possible physical states of the system when described by a given parameterization. Each possible state corresponds uniquely to a point in the phase space. For mechanical systems, the phase space usually consists of all possible values of the position and momentum parameters. It is the direct product of direct space and reciprocal space. The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré, and Josiah Willard Gibbs.
Principles
In a phase space, every degree of freedom or parameter of the system is represented as an axis of a multidimensional space; a one-dimensional system is called a phase line, while a two-dimensional system is called a phase plane. For every possible state of the system or allowed combination of values of the system's parameters, a point is included in the multidimensional space. The system's evolving state over time traces a path (a phase-space trajectory for the system) through the high-dimensional space. The phase-space trajectory represents the set of states compatible with starting from one particular initial condition, located in the full phase space that represents the set of states compatible with starting from any initial condition. As a whole, the phase diagram represents all that the system can be, and its shape can easily elucidate qualities of the system that might not be obvious otherwise. A phase space may contain a great number of dimensions. For instance, a gas containing many molecules may require a separate dimension for each particle's x, y and z positions and momenta (6 dimensions for an idealized monatomic gas), and for more complex molecular systems additional dimensions are required to describe vibrational modes of the molecular bonds, as well as spin around 3 axes. Phase spaces are easier to use when analyzing the behavior of mechanical systems restricted to motion around and along various axes of rotation or translation e.g. in robotics, like analyzing the range of motion of a robotic arm or determining the optimal path to achieve a particular position/momentum result.
Conjugate momenta
In classical mechanics, any choice of generalized coordinates qi for the position (i.e. coordinates on configuration space) defines conjugate generalized momenta pi, which together define co-ordinates on phase space. More abstractly, in classical mechanics phase space is the cotangent bundle of configuration space, and in this interpretation the procedure above expresses that a choice of local coordinates on configuration space induces a choice of natural local Darboux coordinates for the standard symplectic structure on a cotangent space.
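As a brief illustration of the preceding statement (this is standard Lagrangian mechanics rather than anything specific to a particular system), for a Lagrangian L(q, q̇, t) the conjugate momenta are

    p_i = \frac{\partial L}{\partial \dot{q}_i},

and the 2n quantities (q_1, ..., q_n, p_1, ..., p_n) then serve as the coordinates on phase space.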
Statistical ensembles in phase space
The motion of an ensemble of systems in this space is studied by classical statistical mechanics. The local density of points in such systems obeys Liouville's theorem, and so can be taken as constant. Within the context of a model system in classical mechanics, the phase-space coordinates of the system at any given time are composed of all of the system's dynamic variables. Because of this, it is possible to calculate the state of the system at any given time in the future or the past, through integration of Hamilton's or Lagrange's equations of motion.
In low dimensions
For simple systems, there may be as few as one or two degrees of freedom. One degree of freedom occurs when one has an autonomous ordinary differential equation in a single variable, with the resulting one-dimensional system being called a phase line, and the qualitative behaviour of the system being immediately visible from the phase line. The simplest non-trivial examples are the exponential growth model/decay (one unstable/stable equilibrium) and the logistic growth model (two equilibria, one stable, one unstable).
The phase space of a two-dimensional system is called a phase plane, which occurs in classical mechanics for a single particle moving in one dimension, and where the two variables are position and velocity. In this case, a sketch of the phase portrait may give qualitative information about the dynamics of the system, such as the limit cycle of the Van der Pol oscillator shown in the diagram.
Here the horizontal axis gives the position, and vertical axis the velocity. As the system evolves, its state follows one of the lines (trajectories) on the phase diagram.
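As an illustration of how such a phase portrait can be produced numerically, the following sketch integrates the Van der Pol oscillator ẍ − μ(1 − x²)ẋ + x = 0 and collects (position, velocity) pairs; the parameter value, initial condition, and function names are arbitrary demonstration choices, not taken from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, state, mu=1.0):
    """Van der Pol oscillator written as a first-order system in (x, v)."""
    x, v = state
    return [v, mu * (1 - x**2) * v - x]

# Integrate from an arbitrary initial condition; the trajectory spirals
# onto the limit cycle in the (x, v) phase plane.
sol = solve_ivp(van_der_pol, (0.0, 50.0), [0.1, 0.0], max_step=0.01)
x, v = sol.y

# Each pair (x[i], v[i]) is one point of the phase-space trajectory.
print(x[-1], v[-1])
```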
Related concepts
Phase plot
A plot of position and momentum variables as a function of time is sometimes called a phase plot or a phase diagram. However the latter expression, "phase diagram", is more usually reserved in the physical sciences for a diagram showing the various regions of stability of the thermodynamic phases of a chemical system, which consists of pressure, temperature, and composition.
Phase portrait
Phase integral
In classical statistical mechanics (continuous energies) the concept of phase space provides a classical analog to the partition function (sum over states) known as the phase integral. Instead of summing the Boltzmann factor over discretely spaced energy states (defined by appropriate integer quantum numbers for each degree of freedom), one may integrate over continuous phase space. Such integration essentially consists of two parts: integration of the momentum component of all degrees of freedom (momentum space) and integration of the position component of all degrees of freedom (configuration space). Once the phase integral is known, it may be related to the classical partition function by multiplication of a normalization constant representing the number of quantum energy states per unit phase space. This normalization constant is simply the inverse of the Planck constant raised to a power equal to the number of degrees of freedom for the system.
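For concreteness, a commonly quoted form of this relation (stated here as a standard textbook result rather than something taken from the text) for N identical particles in three dimensions is

    Z = \frac{1}{N!\,h^{3N}} \int e^{-\beta H(q,p)}\, \mathrm{d}^{3N}q\, \mathrm{d}^{3N}p,

where the factor 1/h^{3N} is the normalization constant mentioned above (the inverse of the Planck constant raised to the number of degrees of freedom) and the 1/N! accounts for the indistinguishability of the particles.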
Applications
Chaos theory
Classic examples of phase diagrams from chaos theory are:
the Lorenz attractor
population growth (i.e. logistic map)
parameter plane of complex quadratic polynomials with Mandelbrot set.
Quantum mechanics
In quantum mechanics, the coordinates p and q of phase space normally become Hermitian operators in a Hilbert space.
But they may alternatively retain their classical interpretation, provided functions of them compose in novel algebraic ways (through Groenewold's 1946 star product). This is consistent with the uncertainty principle of quantum mechanics.
Every quantum mechanical observable corresponds to a unique function or distribution on phase space, and conversely, as specified by Hermann Weyl (1927) and supplemented by John von Neumann (1931); Eugene Wigner (1932); and, in a grand synthesis, by H. J. Groenewold (1946).
With J. E. Moyal (1949), these completed the foundations of the phase-space formulation of quantum mechanics, a complete and logically autonomous reformulation of quantum mechanics. (Its modern abstractions include deformation quantization and geometric quantization.)
Expectation values in phase-space quantization are obtained isomorphically to tracing operator observables with the density matrix in Hilbert space: they are obtained by phase-space integrals of observables, with the Wigner quasi-probability distribution effectively serving as a measure.
Thus, by expressing quantum mechanics in phase space (the same ambit as for classical mechanics), the Weyl map facilitates recognition of quantum mechanics as a deformation (generalization) of classical mechanics, with deformation parameter ħ/S, where S is the action of the relevant process. (Other familiar deformations in physics involve the deformation of classical Newtonian into relativistic mechanics, with deformation parameter v/c; or the deformation of Newtonian gravity into general relativity, with deformation parameter Schwarzschild radius/characteristic dimension.)
Classical expressions, observables, and operations (such as Poisson brackets) are modified by ħ-dependent quantum corrections, as the conventional commutative multiplication applying in classical mechanics is generalized to the noncommutative star-multiplication characterizing quantum mechanics and underlying its uncertainty principle.
Thermodynamics and statistical mechanics
In thermodynamics and statistical mechanics contexts, the term "phase space" has two meanings: for one, it is used in the same sense as in classical mechanics. If a thermodynamic system consists of N particles, then a point in the 6N-dimensional phase space describes the dynamic state of every particle in that system, as each particle is associated with 3 position variables and 3 momentum variables. In this sense, as long as the particles are distinguishable, a point in phase space is said to be a microstate of the system. (For indistinguishable particles a microstate consists of a set of N! points, corresponding to all possible exchanges of the N particles.) N is typically on the order of the Avogadro number, thus describing the system at a microscopic level is often impractical. This leads to the use of phase space in a different sense.
The phase space can also refer to the space that is parameterized by the macroscopic states of the system, such as pressure, temperature, etc. For instance, one may view the pressure–volume diagram or temperature–entropy diagram as describing part of this phase space. A point in this phase space is correspondingly called a macrostate. There may easily be more than one microstate with the same macrostate. For example, for a fixed temperature, the system could have many dynamic configurations at the microscopic level. When used in this sense, a phase is a region of phase space where the system in question is in, for example, the liquid phase, or solid phase, etc.
Since there are many more microstates than macrostates, the phase space in the first sense is usually a manifold of much larger dimensions than in the second sense. Clearly, many more parameters are required to register every detail of the system down to the molecular or atomic scale than to simply specify, say, the temperature or the pressure of the system.
Optics
Phase space is extensively used in nonimaging optics, the branch of optics devoted to illumination. It is also an important concept in Hamiltonian optics.
Medicine
In medicine and bioengineering, the phase space method is used to visualize multidimensional physiological responses.
See also
Configuration space (mathematics)
Minisuperspace
Phase line, 1-dimensional case
Phase plane, 2-dimensional case
Phase portrait
Phase space method
Parameter space
Separatrix
Applications
Optical phase space
State space (controls) for information about state space (similar to phase space) in control engineering.
State space for information about state space with discrete states in computer science.
Molecular dynamics
Mathematics
Cotangent bundle
Dynamic system
Symplectic manifold
Wigner–Weyl transform
Physics
Classical mechanics
Hamiltonian mechanics
Lagrangian mechanics
State space (physics) for information about state space in physics
Phase-space formulation of quantum mechanics
Characteristics in phase space of quantum mechanics
References
Further reading
External links
Concepts in physics
Dynamical systems
Dimensional analysis
Hamiltonian mechanics
Spacetime diagram
A spacetime diagram is a graphical illustration of locations in space at various times, especially in the special theory of relativity. Spacetime diagrams can show the geometry underlying phenomena like time dilation and length contraction without mathematical equations.
The history of an object's location through time traces out a line or curve on a spacetime diagram, referred to as the object's world line. Each point in a spacetime diagram represents a unique position in space and time and is referred to as an event.
The most well-known class of spacetime diagrams are known as Minkowski diagrams, developed by Hermann Minkowski in 1908. Minkowski diagrams are two-dimensional graphs that depict events as happening in a universe consisting of one space dimension and one time dimension. Unlike a regular distance-time graph, the distance is displayed on the horizontal axis and time on the vertical axis. Additionally, the time and space units of measurement are chosen in such a way that an object moving at the speed of light is depicted as following a 45° angle to the diagram's axes.
Introduction to kinetic diagrams
Position versus time graphs
In the study of 1-dimensional kinematics, position vs. time graphs (called x-t graphs for short) provide a useful means to describe motion. Kinematic features besides the object's position are visible by the slope and shape of the lines. In Fig 1-1, the plotted object moves away from the origin at a positive constant velocity (1.66 m/s) for 6 seconds, halts for 5 seconds, then returns to the origin over a period of 7 seconds at a non-constant speed (but negative velocity).
At its most basic level, a spacetime diagram is merely a time vs. position graph, with the directions of the axes of the usual x-t graph exchanged; that is, the vertical axis refers to temporal and the horizontal axis to spatial coordinate values. Especially when used in special relativity (SR), the temporal axis of a spacetime diagram is often scaled with the speed of light c, and thus is often labeled by ct. This changes the dimension of the addressed physical quantity from <Time> to <Length>, in accordance with the dimension associated with the spatial axis, which is frequently labeled x.
Standard configuration of reference frames
To ease insight into how spacetime coordinates, measured by observers in different reference frames, compare with each other, it is useful to standardize and simplify the setup. Two Galilean reference frames (i.e., conventional 3-space frames), S and S′ (pronounced "S prime"), each with observers O and O′ at rest in their respective frames, but measuring the other as moving with speeds ±v are said to be in standard configuration, when:
The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame S′.
The origins of frames S and S′ coincide at time t = 0 in frame S and also at t′ = 0 in frame S′.
Frame S′ moves in the x-direction of frame S with velocity v as measured in frame S.
This spatial setting is displayed in Fig 1-2, in which the temporal coordinates are separately annotated as quantities t and t′.
In a further step of simplification it is often sufficient to consider just the direction of the observed motion and ignore the other two spatial components, allowing x and ct to be plotted in 2-dimensional spacetime diagrams, as introduced above.
Non-relativistic "spacetime diagrams"
The black axes labelled and on Fig 1-3 are the coordinate system of an observer, referred to as at rest, and who is positioned at . This observer's world line is identical with the time axis. Each parallel line to this axis would correspond also to an object at rest but at another position. The blue line describes an object moving with constant speed to the right, such as a moving observer.
This blue line labelled may be interpreted as the time axis for the second observer. Together with the axis, which is identical for both observers, it represents their coordinate system. Since the reference frames are in standard configuration, both observers agree on the location of the origin of their coordinate systems. The axes for the moving observer are not perpendicular to each other and the scale on their time axis is stretched. To determine the coordinates of a certain event, two lines, each parallel to one of the two axes, must be constructed passing through the event, and their intersections with the axes read off.
Determining position and time of the event A as an example in the diagram leads to the same time for both observers, as expected. Only for the position different values result, because the moving observer has approached the position of the event A since . Generally stated, all events on a line parallel to the axis happen simultaneously for both observers. There is only one universal time , modelling the existence of one common position axis. On the other hand, due to two different time axes the observers usually measure different coordinates for the same event. This graphical translation from and to and and vice versa is described mathematically by the so-called Galilean transformation.
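A minimal numerical sketch of this graphical translation, assuming the standard Galilean transformation x′ = x − vt, t′ = t (the function and the sample event below are illustrative choices, not taken from the text):

```python
def galilean(x, t, v):
    """Coordinates of an event in a frame moving with velocity v: x' = x - v*t, t' = t."""
    return x - v * t, t

# An event observed at x = 10 m, t = 2 s, seen from a frame moving at v = 3 m/s:
x_prime, t_prime = galilean(10.0, 2.0, 3.0)
print(x_prime, t_prime)   # 4.0 2.0 -> same time, different position
```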
Minkowski diagrams
Overview
The term Minkowski diagram refers to a specific form of spacetime diagram frequently used in special relativity. A Minkowski diagram is a two-dimensional graphical depiction of a portion of Minkowski space, usually where space has been curtailed to a single dimension. The units of measurement in these diagrams are taken such that the light cone at an event consists of the lines of slope plus or minus one through that event. The horizontal lines correspond to the usual notion of simultaneous events for a stationary observer at the origin.
A particular Minkowski diagram illustrates the result of a Lorentz transformation. The Lorentz transformation relates two inertial frames of reference, where an observer stationary at the event makes a change of velocity along the -axis. As shown in Fig 2-1, the new time axis of the observer forms an angle with the previous time axis, with . In the new frame of reference the simultaneous events lie parallel to a line inclined by to the previous lines of simultaneity. This is the new -axis. Both the original set of axes and the primed set of axes have the property that they are orthogonal with respect to the Minkowski inner product or relativistic dot product.
Whatever the magnitude of the boost, the light world line ct = x forms the universal bisector of each pair of space and time axes, as shown in Fig 2-2.
One frequently encounters Minkowski diagrams where the time units of measurement are scaled by a factor of c such that one unit of x equals one unit of ct. Such a diagram may have units of
Approximately 30 centimetres length and nanoseconds
Astronomical units and intervals of about 8 minutes and 19 seconds (499 seconds)
Light years and years
Light-second and second
With that, light paths are represented by lines parallel to the bisector between the axes.
Mathematical details
The angle α between the x and x′ axes will be identical with that between the time axes ct and ct′. This follows from the second postulate of special relativity, which says that the speed of light is the same for all observers, regardless of their relative motion (see below). The angle α is given by
tan α = v/c = β.
The corresponding boost from x and ct to x′ and ct′ and vice versa is described mathematically by the Lorentz transformation, which can be written
ct′ = γ(ct − βx), x′ = γ(x − βct),
where γ = 1/√(1 − β²) is the Lorentz factor. By applying the Lorentz transformation, the spacetime axes obtained for a boosted frame will always correspond to conjugate diameters of a pair of hyperbolas.
As illustrated in Fig 2-3, the boosted and unboosted spacetime axes will in general have unequal unit lengths. If U is the unit length on the axes of ct and x respectively, the unit length on the axes of ct′ and x′ is:
U′ = U √((1 + β²)/(1 − β²)).
The ct′-axis represents the worldline of a clock resting in S′, with U′ representing the duration between two events happening on this worldline, also called the proper time between these events. Length U′ upon the x′-axis represents the rest length or proper length of a rod resting in S′. The same interpretation can also be applied to distance U upon the ct- and x-axes for clocks and rods resting in S.
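A small numerical sketch of these relations, assuming the standard Lorentz boost along x with β = v/c (the particular β and the event coordinates below are arbitrary demonstration values):

```python
import math

def lorentz_boost(ct, x, beta):
    """Boost an event (ct, x) into a frame moving with velocity beta = v/c."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (ct - beta * x), gamma * (x - beta * ct)

beta = 0.6
print(lorentz_boost(1.0, 0.0, beta))             # one tick of the clock resting at the origin
print(math.degrees(math.atan(beta)))             # tilt of the ct' axis: about 31 degrees
print(math.sqrt((1 + beta**2) / (1 - beta**2)))  # stretch of the primed unit length, about 1.46
```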
History
Albert Einstein announced his theory of special relativity in 1905, with Hermann Minkowski providing his graphical representation in 1908.
In Minkowski's 1908 paper there were three diagrams, first to illustrate the Lorentz transformation, then the partition of the plane by the light-cone, and finally illustration of worldlines. The first diagram used a branch of the unit hyperbola to show the locus of a unit of proper time depending on velocity, thus illustrating time dilation. The second diagram showed the conjugate hyperbola to calibrate space, where a similar stretching leaves the impression of FitzGerald contraction. In 1914 Ludwik Silberstein included a diagram of "Minkowski's representation of the Lorentz transformation". This diagram included the unit hyperbola, its conjugate, and a pair of conjugate diameters. Since the 1960s a version of this more complete configuration has been referred to as The Minkowski Diagram, and used as a standard illustration of the transformation geometry of special relativity. E. T. Whittaker has pointed out that the principle of relativity is tantamount to the arbitrariness of what hyperbola radius is selected for time in the Minkowski diagram. In 1912 Gilbert N. Lewis and Edwin B. Wilson applied the methods of synthetic geometry to develop the properties of the non-Euclidean plane that has Minkowski diagrams (Synthetic Spacetime, a digest of the axioms used, and theorems proved, by Wilson and Lewis; archived by WebCite).
When Taylor and Wheeler composed Spacetime Physics (1966), they did not use the term Minkowski diagram for their spacetime geometry. Instead they included an acknowledgement of Minkowski's contribution to philosophy by the totality of his innovation of 1908.
Loedel diagrams
While a frame at rest in a Minkowski diagram has orthogonal spacetime axes, a frame moving relative to the rest frame in a Minkowski diagram has spacetime axes which form an acute angle. This asymmetry of Minkowski diagrams can be misleading, since special relativity postulates that any two inertial reference frames must be physically equivalent. The Loedel diagram is an alternative spacetime diagram that makes the symmetry of inertial references frames much more manifest.
Formulation via median frame
Several authors showed that there is a frame of reference between the resting and moving ones where their symmetry would be apparent ("median frame"). In this frame, the two other frames are moving in opposite directions with equal speed. Using such coordinates makes the units of length and time the same for both axes. If the relative velocity v is given between S and S′, then these expressions are connected with the speed ±v0 of S and S′ in their median frame S0 as follows:
v = 2v0/(1 + v0²/c²),   (1)
v0 = (c²/v)(1 − √(1 − v²/c²)).   (2)
For instance, if between and , then by (2) they are moving in their median frame S0 with approximately each in opposite directions. On the other hand, if in S0, then by (1) the relative velocity between and in their own rest frames is . The construction of the axes of and is done in accordance with the ordinary method using with respect to the orthogonal axes of the median frame (Fig. 3-1).
However, it turns out that when drawing such a symmetric diagram, it is possible to derive the diagram's relations even without mentioning the median frame and at all. Instead, the relative velocity between and can directly be used in the following construction, providing the same result:
If is the angle between the axes of and (or between and ), and between the axes of and , it is given:
Two methods of construction are obvious from Fig. 3-2: the -axis is drawn perpendicular to the -axis, the and -axes are added at angle ; and the x′-axis is drawn at angle with respect to the -axis, the -axis is added perpendicular to the -axis and the -axis perpendicular to the -axis.
In a Minkowski diagram, lengths on the page cannot be directly compared to each other, due to the warping factor between the axes' unit lengths in a Minkowski diagram. In particular, if U and U′ are the unit lengths of the rest frame axes and moving frame axes, respectively, in a Minkowski diagram, then the two unit lengths are warped relative to each other via the formula:
U′ = U √((1 + β²)/(1 − β²)).
By contrast, in a symmetric Loedel diagram, both the and frame axes are warped by the same factor relative to the median frame and hence have identical unit lengths. This implies that, for a Loedel spacetime diagram, we can directly compare spacetime lengths between different frames as they appear on the page; no unit length scaling/conversion between frames is necessary due to the symmetric nature of the Loedel diagram.
History
Max Born (1920) drew Minkowski diagrams by placing the -axis almost perpendicular to the -axis, as well as the -axis to the -axis, in order to demonstrate length contraction and time dilation in the symmetric case of two rods and two clocks moving in opposite direction.
Dmitry Mirimanoff (1921) showed that there is always a median frame with respect to two relatively moving frames, and derived the relations between them from the Lorentz transformation. However, he didn't give a graphical representation in a diagram.
Symmetric diagrams were systematically developed by Paul Gruner in collaboration with Josef Sauter in two papers in 1921. Relativistic effects such as length contraction and time dilation and some relations to covariant and contravariant vectors were demonstrated by them. (translation: An elementary geometrical representation of the transformation formulas of the special theory of relativity) Gruner extended this method in subsequent papers (1922–1924), and gave credit to Mirimanoff's treatment as well. (translation: Graphical representation of the four-dimensional space-time universe)
The construction of symmetric Minkowski diagrams was later independently rediscovered by several authors. For instance, starting in 1948, Enrique Loedel Palumbo published a series of papers in Spanish language, presenting the details of such an approach.Fisica relativista, Kapelusz Editorial, Buenos Aires, Argentina (1955). In 1955, Henri Amar also published a paper presenting such relations, and gave credit to Loedel in a subsequent paper in 1957. Some authors of textbooks use symmetric Minkowski diagrams, denoting as Loedel diagrams.
Relativistic phenomena in diagrams
Time dilation
Relativistic time dilation refers to the fact that a clock (indicating its proper time in its rest frame) that moves relative to an observer is observed to run slower. The situation is depicted in the symmetric Loedel diagrams of Fig 4-1. Note that we can compare spacetime lengths on page directly with each other, due to the symmetric nature of the Loedel diagram.
In Fig 4-2, the observer whose reference frame is given by the black axes is assumed to move from the origin O towards A. The moving clock has the reference frame given by the blue axes and moves from O to B. For the black observer, all events happening simultaneously with the event at A are located on a straight line parallel to its space axis. This line passes through A and B, so A and B are simultaneous from the reference frame of the observer with black axes. However, the clock that is moving relative to the black observer marks off time along the blue time axis. This is represented by the distance from O to B. Therefore, the observer at A with the black axes notices their clock as reading the distance from O to A while they observe the clock moving relative to them to read the distance from O to B. Due to the distance from O to B being smaller than the distance from O to A, they conclude that the time passed on the clock moving relative to them is smaller than that passed on their own clock.
A second observer, having moved together with the clock from O to B, will argue that the black axis clock has only reached C and therefore runs slower. The reason for these apparently paradoxical statements is the different determination of the events happening synchronously at different locations. Due to the principle of relativity, the question of who is right has no answer and does not make sense.
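Algebraically, the effect read off from the diagram is the standard time-dilation relation (quoted here as the usual textbook result, not derived in the text above):

    \Delta t = \gamma\,\Delta\tau, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},

where Δτ is the proper time marked off along the moving clock's worldline and Δt is the corresponding coordinate time in the observer's rest frame.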
Length contraction
Relativistic length contraction refers to the fact that a ruler (indicating its proper length in its rest frame) that moves relative to an observer is observed to contract/shorten. The situation is depicted in symmetric Loedel diagrams in Fig 4-3. Note that we can compare spacetime lengths on page directly with each other, due to the symmetric nature of the Loedel diagram.
In Fig 4-4, the observer is assumed again to move along the ct-axis. The world lines of the endpoints of an object moving relative to him are assumed to move along the ct′-axis and the parallel line passing through A and B. For this observer the endpoints of the object at t = 0 are O and A. For a second observer moving together with the object, so that for him the object is at rest, it has the proper length OB at t′ = 0. Due to OA < OB, the object is contracted for the first observer.
The second observer will argue that the first observer has evaluated the endpoints of the object at O and A respectively and therefore at different times, leading to a wrong result due to his motion in the meantime. If the second observer investigates the length of another object with endpoints moving along the -axis and a parallel line passing through C and D he concludes the same way this object to be contracted from OD to OC. Each observer estimates objects moving with the other observer to be contracted. This apparently paradoxical situation is again a consequence of the relativity of simultaneity as demonstrated by the analysis via Minkowski diagram.
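The corresponding algebraic statement, again quoted as the standard result rather than derived in the text, is

    L = \frac{L_0}{\gamma} = L_0 \sqrt{1 - v^2/c^2},

where L0 is the proper length (here OB) and L the length measured in the frame relative to which the rod moves.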
For all these considerations it was assumed that both observers take into account the speed of light and their distance to all events they see in order to determine the actual times at which these events happen from their point of view.
Constancy of the speed of light
Another postulate of special relativity is the constancy of the speed of light. It says that any observer in an inertial reference frame measuring the vacuum speed of light relative to themselves obtains the same value regardless of their own motion and that of the light source. This statement seems paradoxical, but it follows immediately from the differential equation yielding this, and the Minkowski diagram agrees. It also explains the result of the Michelson–Morley experiment, which was considered to be a mystery before the theory of relativity was discovered, when photons were thought to be waves through an undetectable medium.
For world lines of photons passing the origin in different directions, x = ct and x = −ct holds. That means any position on such a world line corresponds with steps on the x- and ct-axes of equal absolute value. From the rule for reading off coordinates in a coordinate system with tilted axes it follows that the two world lines are the angle bisectors of the x- and ct-axes. As shown in Fig 4-5, the Minkowski diagram illustrates them as being angle bisectors of the x′- and ct′-axes as well. That means both observers measure the same speed c for both photons.
Further coordinate systems corresponding to observers with arbitrary velocities can be added to this Minkowski diagram. For all these systems both photon world lines represent the angle bisectors of the axes. The more the relative speed approaches the speed of light the more the axes approach the corresponding angle bisector. The space axis is always flatter and the time axis steeper than the photon world lines. The scales on both axes are always identical, but usually different from those of the other coordinate systems.
Speed of light and causality
Straight lines passing the origin which are steeper than both photon world lines correspond with objects moving more slowly than the speed of light. If this applies to an object, then it applies from the viewpoint of all observers, because the world lines of these photons are the angle bisectors for any inertial reference frame. Therefore, any point above the origin and between the world lines of both photons can be reached with a speed smaller than that of the light and can have a cause-and-effect relationship with the origin. This area is the absolute future, because any event there happens later compared to the event represented by the origin regardless of the observer, which is obvious graphically from the Minkowski diagram in Fig 4-6.
Following the same argument the range below the origin and between the photon world lines is the absolute past relative to the origin. Any event there belongs definitely to the past and can be the cause of an effect at the origin.
The relationship between any such pairs of event is called timelike, because they have a time distance greater than zero for all observers. A straight line connecting these two events is always the time axis of a possible observer for whom they happen at the same place. Two events which can be connected just with the speed of light are called lightlike.
In principle a further dimension of space can be added to the Minkowski diagram leading to a three-dimensional representation. In this case the ranges of future and past become cones with apexes touching each other at the origin. They are called light cones.
The speed of light as a limit
Following the same argument, all straight lines passing through the origin and which are more nearly horizontal than the photon world lines, would correspond to objects or signals moving faster than light regardless of the speed of the observer. Therefore, no event outside the light cones can be reached from the origin, even by a light-signal, nor by any object or signal moving with less than the speed of light. Such pairs of events are called spacelike because they have a finite spatial distance different from zero for all observers. On the other hand, a straight line connecting such events is always the space coordinate axis of a possible observer for whom they happen at the same time. By a slight variation of the velocity of this coordinate system in both directions it is always possible to find two inertial reference frames whose observers estimate the chronological order of these events to be different.
Given an object moving faster than light, say from O to A in Fig 4-7, then for any observer watching the object moving from O to A, another observer can be found (moving at less than the speed of light with respect to the first) for whom the object moves from A to O. The question of which observer is right has no unique answer, and therefore makes no physical sense. Any such moving object or signal would violate the principle of causality.
Also, any general technical means of sending signals faster than light would permit information to be sent into the originator's own past. In the diagram, an observer at O in the system sends a message moving faster than light to A. At A, it is received by another observer, moving so as to be in the system, who sends it back, again faster than light, arriving at B. But B is in the past relative to O. The absurdity of this process becomes obvious when both observers subsequently confirm that they received no message at all, but all messages were directed towards the other observer as can be seen graphically in the Minkowski diagram. Furthermore, if it were possible to accelerate an observer to the speed of light, their space and time axes would coincide with their angle bisector. The coordinate system would collapse, in concordance with the fact that due to time dilation, time would effectively stop passing for them.
These considerations show that the speed of light as a limit is a consequence of the properties of spacetime, and not of the properties of objects such as technologically imperfect space ships. The prohibition of faster-than-light motion, therefore, has nothing in particular to do with electromagnetic waves or light, but comes as a consequence of the structure of spacetime.
Accelerating observers
It is often, incorrectly, asserted that special relativity cannot handle accelerating particles or accelerating reference frames. In reality, accelerating particles present no difficulty at all in special relativity. On the other hand, accelerating frames do require some special treatment. However, as long as one is dealing with flat, Minkowskian spacetime, special relativity can handle the situation. It is only in the presence of gravitation that general relativity is required.
An accelerating particle's 4-vector acceleration is the derivative with respect to proper time of its 4-velocity. This is not a difficult situation to handle. Accelerating frames require that one understand the concept of a momentarily comoving reference frame (MCRF), which is to say, a frame traveling at the same instantaneous velocity as the particle at any given instant.
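In symbols (a standard definition, stated here for completeness rather than taken from the text), the 4-acceleration is

    A^\mu = \frac{\mathrm{d}U^\mu}{\mathrm{d}\tau},

the derivative of the 4-velocity U^μ with respect to the particle's proper time τ.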
Consider the animation in Fig 5-1. The curved line represents the world line of a particle that undergoes continuous acceleration, including complete changes of direction in the positive and negative x-directions. The red axes are the axes of the MCRF for each point along the particle's trajectory. The coordinates of events in the unprimed (stationary) frame can be related to their coordinates in any momentarily co-moving primed frame using the Lorentz transformations.
Fig 5-2 illustrates the changing views of spacetime along the world line of a rapidly accelerating particle. The time axis (not drawn) is vertical, while the space axis (not drawn) is horizontal. The dashed line is the spacetime trajectory ("world line") of the particle. The balls are placed at regular intervals of proper time along the world line. The solid diagonal lines are the light cones for the observer's current event, and they intersect at that event. The small dots are other arbitrary events in the spacetime.
The slope of the world line (deviation from being vertical) is the velocity of the particle on that section of the world line. Bends in the world line represent particle acceleration. As the particle accelerates, its view of spacetime changes. These changes in view are governed by the Lorentz transformations. Also note that:
the balls on the world line before/after future/past accelerations are more spaced out due to time dilation.
events which were simultaneous before an acceleration (horizontally spaced events) are at different times afterwards due to the relativity of simultaneity,
events pass through the light cone lines due to the progression of proper time, but not due to the change of views caused by the accelerations, and
the world line always remains within the future and past light cones of the current event.
If one imagines each event to be the flashing of a light, then the events that are within the past light cone of the observer are the events visible to the observer. The slope of the world line (deviation from being vertical) gives the velocity relative to the observer.
Case of non-inertial reference frames
The photon world lines are determined using the metric with ds² = 0. The light cones are deformed according to the position. In an inertial reference frame a free particle has a straight world line. In a non-inertial reference frame the world line of a free particle is curved.
Let's take the example of the fall of an object dropped without initial velocity from a rocket. The rocket has a uniformly accelerated motion with respect to an inertial reference frame. As can be seen from Fig 6-2 of a Minkowski diagram in a non-inertial reference frame, the object once dropped, gains speed, reaches a maximum, and then sees its speed decrease and asymptotically cancel on the horizon where its proper time freezes at . The velocity is measured by an observer at rest in the accelerated rocket.
See also
Minkowski space
Penrose diagram
Rapidity
References
External links
Special relativity
Geometry
Diagrams
Galilean invariance
Galilean invariance or Galilean relativity states that the laws of motion are the same in all inertial frames of reference. Galileo Galilei first described this principle in 1632 in his Dialogue Concerning the Two Chief World Systems using the example of a ship travelling at constant velocity, without rocking, on a smooth sea; any observer below the deck would not be able to tell whether the ship was moving or stationary.
Formulation
Specifically, the term Galilean invariance today usually refers to this principle as applied to Newtonian mechanics, that is, Newton's laws of motion hold in all frames related to one another by a Galilean transformation. In other words, all frames related to one another by such a transformation are inertial (meaning, Newton's equation of motion is valid in these frames). In this context it is sometimes called Newtonian relativity.
Among the axioms from Newton's theory are:
There exists an absolute space, in which Newton's laws are true. An inertial frame is a reference frame in relative uniform motion to absolute space.
All inertial frames share a universal time.
Galilean relativity can be shown as follows. Consider two inertial frames S and S'. A physical event in S will have position coordinates r = (x, y, z) and time t in S, and r' = (x', y', z') and time t' in S'. By the second axiom above, one can synchronize the clock in the two frames and assume t = t'. Suppose S' is in relative uniform motion to S with velocity v. Consider a point object whose position is given by functions r'(t) in S' and r(t) in S. We see that
r'(t) = r(t) − v t.
The velocity of the particle is given by the time derivative of the position:
u'(t) = dr'(t)/dt = dr(t)/dt − v = u(t) − v.
Another differentiation gives the acceleration in the two frames:
a'(t) = du'(t)/dt = du(t)/dt = a(t).
It is this simple but crucial result that implies Galilean relativity. Assuming that mass is invariant in all inertial frames, the above equation shows Newton's laws of mechanics, if valid in one frame, must hold for all frames. But it is assumed to hold in absolute space, therefore Galilean relativity holds.
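A small numerical sketch of this result, using a finite-difference check that the acceleration of a sample trajectory is the same in both frames (the trajectory, boost velocity and step size below are arbitrary demonstration choices):

```python
import numpy as np

v = 2.0                     # boost velocity of S' relative to S
t = np.linspace(0.0, 5.0, 1001)
x = 0.5 * 3.0 * t**2        # sample trajectory in S: constant acceleration of 3
x_prime = x - v * t         # Galilean transformation to S' (t' = t)

a = np.gradient(np.gradient(x, t), t)              # acceleration measured in S
a_prime = np.gradient(np.gradient(x_prime, t), t)  # acceleration measured in S'

print(np.allclose(a, a_prime))   # True: the acceleration is the same in both frames
```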
Newton's theory versus special relativity
A comparison can be made between Newtonian relativity and special relativity.
Some of the assumptions and properties of Newton's theory are:
The existence of infinitely many inertial frames. Each frame is of infinite size (the entire universe may be covered by many linearly equivalent frames). Any two frames may be in relative uniform motion. (The relativistic nature of mechanics derived above shows that the absolute space assumption is not necessary.)
The inertial frames may move in all possible relative forms of uniform motion.
There is a universal, or absolute, notion of elapsed time.
Two inertial frames are related by a Galilean transformation.
In all inertial frames, Newton's laws, and gravity, hold.
In comparison, the corresponding statements from special relativity are as follows:
The existence, as well, of infinitely many non-inertial frames, each of which referenced to (and physically determined by) a unique set of spacetime coordinates. Each frame may be of infinite size, but its definition is always determined locally by contextual physical conditions. Any two frames may be in relative non-uniform motion (as long as it is assumed that this condition of relative motion implies a relativistic dynamical effect – and later, mechanical effect in general relativity – between both frames).
Rather than freely allowing all conditions of relative uniform motion between frames of reference, the relative velocity between two inertial frames becomes bounded above by the speed of light.
Instead of universal elapsed time, each inertial frame possesses its own notion of elapsed time.
The Galilean transformations are replaced by Lorentz transformations.
In all inertial frames, all laws of physics are the same.
Both theories assume the existence of inertial frames. In practice, the size of the frames in which they remain valid differ greatly, depending on gravitational tidal forces.
In the appropriate context, a local Newtonian inertial frame, where Newton's theory remains a good model, extends to roughly 10⁷ light years.
In special relativity, one considers Einstein's cabins, cabins that fall freely in a gravitational field. According to Einstein's thought experiment, a man in such a cabin experiences (to a good approximation) no gravity and therefore the cabin is an approximate inertial frame. However, one has to assume that the size of the cabin is sufficiently small so that the gravitational field is approximately parallel in its interior. This can greatly reduce the sizes of such approximate frames, in comparison to Newtonian frames. For example, an artificial satellite orbiting the Earth can be viewed as a cabin. However, reasonably sensitive instruments could detect "microgravity" in such a situation because the "lines of force" of the Earth's gravitational field converge.
In general, the convergence of gravitational fields in the universe dictates the scale at which one might consider such (local) inertial frames. For example, a spaceship falling into a black hole or neutron star would (at a certain distance) be subjected to tidal forces strong enough to crush it in width and tear it apart in length. In comparison, however, such forces might only be uncomfortable for the astronauts inside (compressing their joints, making it difficult to extend their limbs in any direction perpendicular to the gravity field of the star). Reducing the scale further, the forces at that distance might have almost no effects at all on a mouse. This illustrates the idea that all freely falling frames are locally inertial (acceleration and gravity-free) if the scale is chosen correctly.
Electromagnetism
There are two consistent Galilean transformations that may be used with electromagnetic fields in certain situations.
A transformation is not consistent if where and are velocities. A consistent transformation will produce the same results when transforming to a new velocity in one step or multiple steps. It is not possible to have a consistent Galilean transformation that transforms both the magnetic and electric fields. There are useful consistent Galilean transformations that may be applied whenever either the magnetic field or the electric field is dominant.
Magnetic field system
Magnetic field systems are those systems in which the electric field in the initial frame of reference is insignificant, but the magnetic field is strong. When the magnetic field is dominant and the relative velocity, , is low, then the following transformation may be useful:
where J is the free current density and M is the magnetization density. The electric field is transformed under this transformation when changing frames of reference, but the magnetic field and related quantities are unchanged. An example of this situation is a wire moving in a magnetic field, such as would occur in an ordinary generator or motor. The transformed electric field in the moving frame of reference could induce current in the wire.
Electric field system
Electric field systems are those systems in which the magnetic field in the initial frame of reference is insignificant, but the electric field is strong. When the electric field is dominant and the relative velocity v is low, then the following transformation may be useful:

E′ = E,  D′ = D,  P′ = P,  ρ_f′ = ρ_f,  H′ = H − v × D,  J_f′ = J_f − ρ_f v,

where ρ_f is the free charge density and P is the polarization density. The magnetic field and the free current density are transformed under this transformation when changing frames of reference, but the electric field and related quantities are unchanged.
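The following short Python sketch illustrates the electric-limit transformation written above (again assuming the standard quasistatic form; all numerical values are invented): a charge cloud drifting with the frame velocity carries a convection current ρ_f·v in the original frame, and the transformed free current density vanishes in the co-moving frame.

import numpy as np

def transform_electric_limit(H, D, J_f, rho_f, v):
    # Galilean electric-limit frame change: E, D, P and rho_f are unchanged,
    # while H and the free current density transform.
    return H - np.cross(v, D), J_f - rho_f * v

rho_f = 1e-6                        # free charge density (C/m^3)
v = np.array([100.0, 0.0, 0.0])     # drift / frame velocity (m/s)
J_lab = rho_f * v                   # convection current in the original frame (A/m^2)
H = np.zeros(3)
D = np.array([0.0, 0.0, 1e-9])      # displacement field (C/m^2)

H_prime, J_prime = transform_electric_limit(H, D, J_lab, rho_f, v)
print(J_prime)                      # ~[0, 0, 0]: no free current in the co-moving frame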
Work, kinetic energy, and momentum
Because the distance covered while applying a force to an object depends on the inertial frame of reference, so does the work done. By Newton's law of reciprocal actions there is a reaction force, which does work that depends on the inertial frame of reference in an opposite way; the total work done is therefore independent of the inertial frame of reference.
Correspondingly, the kinetic energy of an object, and even the change in this energy due to a change in velocity, depends on the inertial frame of reference. The total kinetic energy of an isolated system also depends on the inertial frame of reference: it is the sum of the total kinetic energy in a center-of-momentum frame and the kinetic energy the total mass would have if it were concentrated at, and moving with, the center of mass. Due to the conservation of momentum the latter does not change with time, so the way the total kinetic energy changes with time does not depend on the inertial frame of reference.
By contrast, while the momentum of an object also depends on the inertial frame of reference, its change due to a change in velocity does not.
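A minimal numerical check of the last two points (all values are hypothetical) can be written in a few lines of Python: shifting every velocity by the same frame velocity u leaves the change in momentum unchanged, but not the change in kinetic energy.

m = 2.0                      # mass of the object (kg)
v_before, v_after = 3.0, 5.0 # velocities measured in frame S (m/s)
u = 10.0                     # velocity of frame S relative to frame S' (m/s)

def delta_p(m, v1, v2):
    # change in momentum produced by the velocity change v1 -> v2
    return m * v2 - m * v1

def delta_ke(m, v1, v2):
    # change in kinetic energy produced by the same velocity change
    return 0.5 * m * v2**2 - 0.5 * m * v1**2

# The same event described in frame S and in frame S':
print(delta_p(m, v_before, v_after), delta_p(m, v_before + u, v_after + u))    # 4.0 4.0
print(delta_ke(m, v_before, v_after), delta_ke(m, v_before + u, v_after + u))  # 16.0 56.0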
See also
Absolute space and time
Faster-than-light
Galilei-covariant tensor formulation (no relation to Galileo)
Superluminal motion
Notes and references
Classical mechanics
Invariance
Kinetic energy weapon | A kinetic energy weapon (also known as kinetic weapon, kinetic energy warhead, kinetic warhead, kinetic projectile, kinetic kill vehicle) is a projectile weapon based solely on a projectile's kinetic energy to inflict damage to a target, instead of using any explosive, incendiary/thermal, chemical or radiological payload. All kinetic weapons work by attaining a high flight speed — generally supersonic or even up to hypervelocity — and collide with their targets, converting its kinetic energy and relative impulse into destructive shock waves, heat and cavitation. In kinetic weapons with unpowered flight, the muzzle velocity or launch velocity often determines the effective range and potential damage of the kinetic projectile.
Kinetic weapons are the oldest and most common ranged weapons used in human history, with the projectiles varying from blunt projectiles such as rocks and round shots, pointed missiles such as arrows, bolts, darts, and javelins, to modern tapered high-velocity impactors such as bullets, flechettes, and penetrators. Typical kinetic weapons accelerate their projectiles mechanically (by muscle power, mechanical advantage devices, elastic energy or pneumatics) or chemically (by propellant combustion, as with firearms), but newer technologies are enabling the development of potential weapons using electromagnetically launched projectiles, such as railguns, coilguns and mass drivers. There are also concept weapons that are accelerated by gravity, as in the case of kinetic bombardment weapons designed for space warfare.
The term hit-to-kill, or kinetic kill, is also used in the military aerospace field to describe kinetic energy weapons accelerated by a rocket engine. It has been used primarily in the anti-ballistic missile (ABM) and anti-satellite weapon (ASAT) fields, but some modern anti-aircraft missiles are also kinetic kill vehicles. Hit-to-kill systems are part of the wider class of kinetic projectiles, a class that has widespread use in the anti-tank field.
Basic concept
Kinetic energy is a function of mass and the velocity of an object. For a kinetic energy weapon in the aerospace field, both objects are moving and it is the relative velocity that is important. In the case of the interception of a reentry vehicle (RV) from an intercontinental ballistic missile (ICBM) during the terminal phase of the approach, the RV will be traveling at approximately while the interceptor will be on the order of . Because the interceptor may not be approaching head-on, a lower bound on the relative velocity on the order of can be assumed, or converting to SI units, approximately 7150 meters per second.
At that speed, every kilogram of the interceptor will have an energy of:

E = ½mv² = ½ × (1 kg) × (7150 m/s)² ≈ 25.6 MJ
TNT has an explosive energy of about 4853 joules per gram, or about 5 MJ per kilogram. That means the impact energy of each kilogram of the interceptor is over five times the energy released by a detonating warhead of the same mass.
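As a quick sanity check of the figures quoted above (treating the 7150 m/s closing speed and the 4853 J/g TNT figure as given), the specific kinetic energy and the ratio to TNT can be computed directly in Python:

v_rel = 7150.0                  # closing speed quoted above (m/s)
ke_per_kg = 0.5 * v_rel**2      # specific kinetic energy (J/kg)
tnt_per_kg = 4.853e6            # explosive energy of TNT (J/kg, i.e. about 4853 J/g)

print(ke_per_kg / 1e6)          # ~25.6 MJ per kilogram of interceptor
print(ke_per_kg / tnt_per_kg)   # ~5.3, i.e. over five times the same mass of TNT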
It may seem like this makes a warhead superfluous, but a hit-to-kill system has to actually hit the target, which may be on the order of half a meter wide, while a conventional warhead releases numerous small fragments that increase the possibility of impact over a much larger area, albeit with a much smaller impact mass. This has led to alternative concepts that attempt to spread out the potential impact zone without explosives. The SPAD concept of the 1960s used a metal net with small steel balls that would be released from the interceptor missile, while the Homing Overlay Experiment of the 1980s used a fan-like metal disk.
As the accuracy and speed of modern surface-to-air missiles (SAMs) improved, and their targets began to include theatre ballistic missiles (TBMs), many existing systems have moved to hit-to-kill attacks as well. This includes the MIM-104 Patriot, whose PAC-3 version removed the warhead and upgraded the solid fuel rocket motor to produce an interceptor missile that is much smaller overall, as well as the RIM-161 Standard Missile 3, which is dedicated to the anti-missile role.
Delivery
Some kinetic weapons for targeting objects in spaceflight are anti-satellite weapons and anti-ballistic missiles. Since reaching an object in orbit requires attaining an extremely high velocity, their released kinetic energy alone is enough to destroy their target; explosives are not necessary. For example: the energy of TNT is 4.6 MJ/kg, and the energy of a kinetic kill vehicle with a closing speed of is 50 MJ/kg. For comparison, 50 MJ is equivalent to the kinetic energy of a school bus weighing 5 metric tons, traveling at . This saves costly weight and there is no detonation to be precisely timed. This method, however, requires direct contact with the target, which requires a more accurate trajectory. Some hit-to-kill warheads are additionally equipped with an explosive directional warhead to enhance the kill probability (e.g. Israeli Arrow missile or U.S. Patriot PAC-3).
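The speeds elided in the paragraph above can be backed out from the energy figures it does give; the short calculation below treats them purely as arithmetic consequences of the quoted 50 MJ/kg and 5-tonne numbers, not as values taken from the original source.

import math

ke_per_kg = 50e6                              # J/kg, the figure quoted above
closing_speed = math.sqrt(2 * ke_per_kg)      # speed implied by that figure (m/s)

bus_mass = 5000.0                             # kg (5 metric tons)
bus_speed = math.sqrt(2 * 50e6 / bus_mass)    # speed at which the bus carries 50 MJ

print(closing_speed)                # ~10,000 m/s (10 km/s)
print(bus_speed, bus_speed * 3.6)   # ~141 m/s, i.e. roughly 500 km/h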
With regard to anti-missile weapons, the Arrow missile and MIM-104 Patriot PAC-2 have explosives, while the Kinetic Energy Interceptor (KEI), Lightweight Exo-Atmospheric Projectile (LEAP, used in Aegis BMDS), and THAAD do not (see Missile Defense Agency).
A kinetic projectile can also be dropped from aircraft. This is applied by replacing the explosives of a regular bomb with a non-explosive material (e.g. concrete), for a precision hit with less collateral damage; these are called concrete bombs. A typical bomb has a mass of and a speed of impact of . It is also applied for training the act of dropping a bomb with explosives. This method has been used in Operation Iraqi Freedom and the subsequent military operations in Iraq by mating concrete-filled training bombs with JDAM GPS guidance kits, to attack vehicles and other relatively "soft" targets located too close to civilian structures for the use of conventional high explosive bombs.
Advantages and disadvantages
The primary advantage of kinetic energy weapons is that they minimize the launch mass of the weapon, as no weight has to be set aside for a separate warhead. Every part of the weapon, including the airframe, electronics and even the unburned maneuvering fuel, contributes to the destruction of the target. Lowering the total mass of the vehicle offers advantages in terms of the launch vehicle needed to reach the required performance, and also reduces the mass that needs to be accelerated during maneuvering.
Another advantage of kinetic energy weapons is that any impact will almost certainly guarantee the destruction of the target. In contrast, a weapon using a blast fragmentation warhead will produce a large cloud of small fragments that will not cause as much destruction on impact. Both will produce effects that can easily be seen at long distance using radar or infrared detectors, but such a signal will generally indicate complete destruction in the case of a kinetic energy weapon, while the fragmentation case does not guarantee a "kill".
The absence of chemical munitions also means that a kinetic weapon causes far less pollution of the target area.
The main disadvantage of kinetic energy weapons is that they require extremely high accuracy in the guidance system, on the order of .
See also
Explanatory notes
References
Bibliography
External links
Anti-ballistic missiles
Projectiles
Collision | 0.783568 | 0.991336 | 0.776779 |
Effective accelerationism | Effective accelerationism, often abbreviated as "e/acc", is a 21st-century philosophical movement that advocates for an explicitly pro-technology stance. Its proponents believe that unrestricted technological progress (especially driven by artificial intelligence) is a solution to universal human problems like poverty, war and climate change. They see themselves as a counterweight to more cautious views on technological innovation, often giving their opponents the derogatory labels of "doomers" or "decels" (short for deceleration).
The movement carries utopian undertones and argues that humans need to develop and build faster to ensure their survival and propagate consciousness throughout the universe. Its founders Guillaume Verdon and the pseudonymous Bayeslord see it as a way to "usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms."
Although effective accelerationism has been described as a fringe movement and as cult-like, it gained mainstream visibility in 2023. A number of high-profile Silicon Valley figures, including investors Marc Andreessen and Garry Tan, explicitly endorsed it by adding "e/acc" to their public social media profiles.
Etymology and central beliefs
Effective accelerationism, a portmanteau of "effective altruism" and "accelerationism", is a fundamentally techno-optimist movement. According to Guillaume Verdon, one of the movement's founders, its aim is for human civilization to "clim[b] the Kardashev gradient", that is, to rise to higher levels on the Kardashev scale by maximizing energy usage.
To achieve this goal, effective accelerationism wants to accelerate technological progress. It is strongly focused on artificial general intelligence (AGI), because it sees AGI as fundamental for climbing the Kardashev scale. The movement therefore advocates for unrestricted development and deployment of artificial intelligence. Regulation of artificial intelligence and government intervention in markets more generally is met with opposition. Many of its proponents have libertarian views and think that AGI will be most aligned if many AGIs compete against each other on the marketplace.
The founders of the movement see it as rooted in Jeremy England's theory on the origin of life, which is focused on entropy and thermodynamics. According to them, the universe aims to increase entropy, and life is a way of increasing it. By spreading life throughout the universe and making life use up ever increasing amounts of energy, the universe's purpose would thus be fulfilled.
History
Intellectual origins
While Nick Land is seen as the intellectual originator of contemporary accelerationism in general, the precise origins of effective accelerationism remain unclear. The earliest known reference to the movement can be traced back to a May 2022 newsletter published by four pseudonymous authors known by their X (formerly Twitter) usernames @BasedBeffJezos, @bayeslord, @zestular and @creatine_cycle.
Effective accelerationism incorporates elements of older Silicon Valley subcultures such as transhumanism and extropianism, which similarly emphasized the value of progress and resisted efforts to restrain the development of technology, as well as the work of the Cybernetic Culture Research Unit.
Disclosure of the identity of BasedBeffJezos
Forbes disclosed in December 2023 that the @BasedBeffJezos persona is maintained by Guillaume Verdon, a Canadian former Google quantum computing engineer and theoretical physicist. The revelation was supported by a voice analysis conducted by the National Center for Media Forensics of the University of Colorado Denver, which further confirmed the match between Jezos and Verdon. The magazine justified its decision to disclose Verdon's identity on the grounds of it being "in the public interest".
On 29 December 2023 Guillaume Verdon was interviewed by Lex Fridman on the Lex Fridman Podcast and introduced as the "founder of [the] e/acc (effective accelerationism) movement".
Relation to other movements
Traditional accelerationism
Traditional accelerationism, as developed by the British philosopher Nick Land, sees the acceleration of technological change as a way to bring about a fundamental transformation of current culture, society, and the political economy. In his earlier writings he saw the acceleration of capitalism as a way to overcome this economic system itself. In contrast, effective accelerationism does not seek to overcome capitalism or to introduce radical societal change but tries to maximize the probability of a technocapital singularity, triggering an intelligence explosion throughout the universe and maximizing energy usage.
Effective altruism
Effective accelerationism also diverges from the principles of effective altruism, which prioritizes using evidence and reasoning to identify the most effective ways to altruistically improve the world. This divergence comes primarily from one of the causes effective altruists focus on – AI existential risk. Effective altruists argue that AI companies should be cautious and strive to develop safe AI systems, as they fear that any misaligned AGI could eventually lead to human extinction. Proponents of Effective Accelerationism generally consider that existential risks from AGI are negligible, and that even if they were not, decentralized free markets would much better mitigate this risk than centralized governmental regulation.
d/acc
Introduced by Vitalik Buterin in November 2023, d/acc is pro-technology like e/acc. But it assumes that maximizing profit does not automatically lead to the best outcome. The "d" in d/acc primarily means "defensive", but can also refer to "decentralization" or "differential". d/acc acknowledges existential risks and seeks a more targeted approach to technological development than e/acc, intentionally prioritizing technologies that are expected to make the world better or safer.
Degrowth
Effective accelerationism also stands in stark contrast with the degrowth movement, sometimes described by it as "decelerationism" or "decels". The degrowth movement advocates for reducing economic activity and consumption to address ecological and social issues. Effective accelerationism on the contrary embraces technological progress, energy consumption and the dynamics of capitalism, rather than advocating for a reduction in economic activity.
Reception
The "Techno-Optimist Manifesto", a 2023 essay by Marc Andreessen, has been described by the Financial Times and the German Süddeutsche Zeitung as espousing the views of effective accelerationism.
David Swan of The Sydney Morning Herald has criticized effective accelerationism due to its opposition to government and industry self-regulation. He argues that "innovations like AI needs thoughtful regulations and guardrails [...] to avoid the myriad mistakes Silicon Valley has already made". During the 2023 Reagan National Defense Forum, U.S. Secretary of Commerce Gina Raimondo cautioned against embracing the "move fast and break things" mentality associated with "effective acceleration". She emphasized the need to exercise caution in dealing with AI, stating "that's too dangerous. You can't break things when you are talking about AI". In a similar vein, Ellen Huet argued on Bloomberg News that some of the ideas of the movement were "deeply unsettling", focusing especially on Guillaume Verdon's "post-humanism" and the view that "natural selection could lead AI to replace us [humans] as the dominant species."
See also
Technological utopianism
Transhumanism
References
External links
Computational neuroscience
Concepts in ethics
Cybernetics
Doomsday scenarios
Effective altruism
Ethics of science and technology
Existential risk from artificial general intelligence
Future problems
Human extinction
Philosophy of artificial intelligence
Singularitarianism
Technology hazards
Effective accelerationism | 0.779524 | 0.996471 | 0.776773 |
Biomechanics | Biomechanics is the study of the structure, function and motion of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics. Biomechanics is a branch of biophysics.
Today computational mechanics goes far beyond pure mechanics, and involves other physical actions: chemistry, heat and mass transfer, electric and magnetic stimuli and many others.
Etymology
The word "biomechanics" (1899) and the related "biomechanical" (1856) come from the Ancient Greek βίος bios "life" and μηχανική, mēchanikē "mechanics", to refer to the study of the mechanical principles of living organisms, particularly their movement and structure.
Subfields
Biofluid mechanics
Biological fluid mechanics, or biofluid mechanics, is the study of both gas and liquid fluid flows in or around biological organisms. An often studied liquid biofluid problem is that of blood flow in the human cardiovascular system. Under certain mathematical circumstances, blood flow can be modeled by the Navier–Stokes equations. In vivo whole blood is assumed to be an incompressible Newtonian fluid. However, this assumption fails when considering forward flow within arterioles. At the microscopic scale, the effects of individual red blood cells become significant, and whole blood can no longer be modeled as a continuum. When the diameter of the blood vessel is just slightly larger than the diameter of the red blood cell the Fåhræus–Lindqvist effect occurs and there is a decrease in wall shear stress. However, as the diameter of the blood vessel decreases further, the red blood cells have to squeeze through the vessel and often can only pass in single file. In this case, the inverse Fåhræus–Lindqvist effect occurs and the wall shear stress increases.
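For the continuum, Newtonian regime described above (larger vessels, before the Fåhræus–Lindqvist regime sets in), wall shear stress in steady Poiseuille flow follows τ_w = 4μQ/(πR³). The sketch below uses invented, order-of-magnitude values for the viscosity and flow rate simply to show how strongly wall shear stress depends on vessel radius; it is not a validated hemodynamic model.

import math

def wall_shear_stress(flow_rate, radius, viscosity):
    # Steady Poiseuille flow of a Newtonian fluid in a rigid circular tube:
    # tau_w = 4 * mu * Q / (pi * R^3)
    return 4.0 * viscosity * flow_rate / (math.pi * radius**3)

mu = 3.5e-3   # Pa*s, assumed apparent viscosity of whole blood
Q = 5.0e-6    # m^3/s, illustrative volumetric flow rate

for radius in (2.0e-3, 1.0e-3, 0.5e-3):                # vessel radii in metres
    print(radius, wall_shear_stress(Q, radius, mu))    # stress grows as the radius shrinks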
An example of a gaseous biofluids problem is that of human respiration. Recently, respiratory systems in insects have been studied for bioinspiration for designing improved microfluidic devices.
Biotribology
Biotribology is the study of friction, wear and lubrication of biological systems, especially human joints such as hips and knees. In general, these processes are studied in the context of contact mechanics and tribology.
Additional aspects of biotribology include analysis of subsurface damage resulting from two surfaces coming in contact during motion, i.e. rubbing against each other, such as in the evaluation of tissue-engineered cartilage.
Comparative biomechanics
Comparative biomechanics is the application of biomechanics to non-human organisms, whether used to gain greater insights into humans (as in physical anthropology) or into the functions, ecology and adaptations of the organisms themselves. Common areas of investigation are animal locomotion and feeding, as these have strong connections to the organism's fitness and impose high mechanical demands. Animal locomotion has many manifestations, including running, jumping and flying. Locomotion requires energy to overcome friction, drag, inertia, and gravity, though which factor predominates varies with environment.
Comparative biomechanics overlaps strongly with many other fields, including ecology, neurobiology, developmental biology, ethology, and paleontology, to the extent of commonly publishing papers in the journals of these other fields. Comparative biomechanics is often applied in medicine (with regards to common model organisms such as mice and rats) as well as in biomimetics, which looks to nature for solutions to engineering problems.
Computational biomechanics
Computational biomechanics is the application of engineering computational tools, such as the finite element method, to study the mechanics of biological systems. Computational models and simulations are used to predict the relationship between parameters that are otherwise challenging to test experimentally, or used to design more relevant experiments, reducing the time and costs of experiments. Mechanical modeling using finite element analysis has been used to interpret the experimental observation of plant cell growth to understand how cells differentiate, for instance. In medicine, over the past decade, the finite element method has become an established alternative to in vivo surgical assessment. One of the main advantages of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomy without being subject to ethical restrictions. This has led FE modeling (and other discretization techniques) to the point of becoming ubiquitous in several fields of biomechanics, while several projects and frameworks, such as BioSpine, SOniCS, SOFA, FEniCS and FEBio, have adopted an open-source philosophy.
Computational biomechanics is an essential ingredient in surgical simulation, which is used for surgical planning, assistance, and training. In this case, numerical (discretization) methods are used to compute, as fast as possible, a system's response to boundary conditions such as forces, heat and mass transfer, and electrical and magnetic stimuli.
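The sketch below is a deliberately minimal illustration of the finite element idea mentioned above, not a model of any particular tissue and not the method used by any of the projects named: a one-dimensional elastic bar, fixed at one end and loaded at the other, is discretized into linear two-node elements, the stiffness system is solved, and the result is compared with the analytic answer. All material and load values are invented.

import numpy as np

def bar_fem(E, A, L, F, n_elements):
    # 1D finite element model of a uniform elastic bar, fixed at x = 0 and
    # loaded by an axial force F at x = L, using linear two-node elements.
    n_nodes = n_elements + 1
    h = L / n_elements
    k_local = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K = np.zeros((n_nodes, n_nodes))
    for e in range(n_elements):            # assemble the global stiffness matrix
        K[e:e + 2, e:e + 2] += k_local
    f = np.zeros(n_nodes)
    f[-1] = F                              # point load at the free end
    u = np.zeros(n_nodes)
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])   # displacement fixed to zero at node 0
    return u

E, A, L, F = 1.0e7, 1.0e-4, 0.1, 5.0       # Pa, m^2, m, N (illustrative values)
u = bar_fem(E, A, L, F, n_elements=8)
print(u[-1], F * L / (E * A))              # FEM tip displacement vs analytic F*L/(E*A)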
Continuum biomechanics
The mechanical analysis of biomaterials and biofluids is usually carried forth with the concepts of continuum mechanics. This assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure. In other words, the mechanical characteristics of these materials rely on physical phenomena occurring in multiple levels, from the molecular all the way up to the tissue and organ levels.
Biomaterials are classified into two groups: hard and soft tissues. Mechanical deformation of hard tissues (like wood, shell and bone) may be analysed with the theory of linear elasticity. On the other hand, soft tissues (like skin, tendon, muscle, and cartilage) usually undergo large deformations, and thus, their analysis relies on the finite strain theory and computer simulations. The interest in continuum biomechanics is spurred by the need for realism in the development of medical simulation.
Neuromechanics
Neuromechanics uses a biomechanical approach to better understand how the brain and nervous system interact to control the body. During motor tasks, motor units activate a set of muscles to perform a specific movement, which can be modified via motor adaptation and learning. In recent years, neuromechanical experiments have been enabled by combining motion capture tools with neural recordings.
Plant biomechanics
The application of biomechanical principles to plants, plant organs and cells has developed into the subfield of plant biomechanics. Application of biomechanics for plants ranges from studying the resilience of crops to environmental stress to development and morphogenesis at cell and tissue scale, overlapping with mechanobiology.
Sports biomechanics
In sports biomechanics, the laws of mechanics are applied to human movement in order to gain a greater understanding of athletic performance and to reduce sport injuries as well. It focuses on the application of the scientific principles of mechanical physics to understand the movements of human bodies and of sports implements such as the cricket bat, hockey stick and javelin. Elements of mechanical engineering (e.g., strain gauges), electrical engineering (e.g., digital filtering), computer science (e.g., numerical methods), gait analysis (e.g., force platforms), and clinical neurophysiology (e.g., surface EMG) are common methods used in sports biomechanics.
Biomechanics in sports can be stated as the body's muscular, joint, and skeletal actions while executing a given task, skill, or technique. Understanding the biomechanics of sports skills has major implications for sports performance, rehabilitation and injury prevention, and sports mastery. As noted by Doctor Michael Yessis, one could say that the best athlete is the one that executes his or her skill the best.
Vascular biomechanics
The main topic of vascular biomechanics is the description of the mechanical behaviour of vascular tissues.
It is well known that cardiovascular disease is the leading cause of death worldwide. The vascular system in the human body is the main component that maintains pressure and allows for blood flow and chemical exchanges. Studying the mechanical properties of these complex tissues improves the possibility of better understanding cardiovascular diseases and drastically improves personalized medicine.
Vascular tissues are inhomogeneous with a strongly non-linear behaviour. Generally this study involves complex geometry with intricate load conditions and material properties. The correct description of these mechanisms is based on the study of physiology and biological interaction. It is therefore necessary to study wall mechanics and hemodynamics together with their interaction.
It should also be noted that the vascular wall is a dynamic structure in continuous evolution. This evolution directly follows the chemical and mechanical environment in which the tissues are immersed, such as wall shear stress or biochemical signaling.
Immunomechanics
The emerging field of immunomechanics focuses on characterising the mechanical properties of immune cells and their functional relevance. The mechanics of immune cells can be characterised using various force spectroscopy approaches such as acoustic force spectroscopy and optical tweezers, and these measurements can be performed at physiological conditions (e.g. temperature). Furthermore, one can study the link between immune cell mechanics and immunometabolism and immune signalling. The term "immunomechanics" is sometimes used interchangeably with immune cell mechanobiology or cell mechanoimmunology.
Other applied subfields of biomechanics include
Allometry
Animal locomotion and Gait analysis
Biotribology
Biofluid mechanics
Cardiovascular biomechanics
Comparative biomechanics
Computational biomechanics
Ergonomy
Forensic Biomechanics
Human factors engineering and occupational biomechanics
Injury biomechanics
Implant (medicine), Orthotics and Prosthesis
Kinaesthetics
Kinesiology (kinetics + physiology)
Musculoskeletal and orthopedic biomechanics
Rehabilitation
Soft body dynamics
Sports biomechanics
History
Antiquity
Aristotle, a student of Plato, can be considered the first bio-mechanic because of his work with animal anatomy. Aristotle wrote the first book on the motion of animals, De Motu Animalium, or On the Movement of Animals. He saw animals' bodies as mechanical systems, and pursued questions such as the physiological difference between imagining the performance of an action and actually performing it. In another work, On the Parts of Animals, he provided an accurate description of how the ureter uses peristalsis to carry urine from the kidneys to the bladder.
With the rise of the Roman Empire, technology became more popular than philosophy and the next bio-mechanic arose. Galen (129 AD-210 AD), physician to Marcus Aurelius, wrote his famous work, On the Function of the Parts (about the human body). This would be the world's standard medical book for the next 1,400 years.
Renaissance
The next major biomechanic would not be around until the 1490s, with the studies of human anatomy and biomechanics by Leonardo da Vinci. He had a great understanding of science and mechanics and studied anatomy in a mechanics context: he analyzed muscle forces as acting along lines connecting origins and insertions, and studied joint function. These investigations can be considered studies in the realm of biomechanics. Da Vinci is also known for mimicking some animal features in his machines. For example, he studied the flight of birds to find means by which humans could fly; and because horses were the principal source of mechanical power in that time, he studied their muscular systems to design machines that would better benefit from the forces applied by this animal.
In 1543, Galen's work, On the Function of the Parts, was challenged by Andreas Vesalius at the age of 29. Vesalius published his own work called On the Structure of the Human Body. In this work, Vesalius corrected many errors made by Galen, which would not be globally accepted for many centuries. In the same year, on his deathbed, Copernicus published his work On the Revolutions of the Heavenly Spheres, and with his death came a new desire to understand and learn about the world around people and how it works. This work not only revolutionized science and physics, but also the development of mechanics and, later, bio-mechanics.
Galileo Galilei, the father of mechanics and a part-time biomechanic, was born 21 years after the death of Copernicus. Over his years of science, Galileo made many biomechanical observations. For example, he discovered that "animals' masses increase disproportionately to their size, and their bones must consequently also disproportionately increase in girth, adapting to loadbearing rather than mere size. The bending strength of a tubular structure such as a bone is increased relative to its weight by making it hollow and increasing its diameter. Marine animals can be larger than terrestrial animals because the water's buoyancy relieves their tissues of weight."
Galileo Galilei was interested in the strength of bones and suggested that bones are hollow because this affords maximum strength with minimum weight. He noted that animals' bone masses increased disproportionately to their size. Consequently, bones must also increase disproportionately in girth rather than mere size. This is because the bending strength of a tubular structure (such as a bone) is much more efficient relative to its weight. Mason suggests that this insight was one of the first grasps of the principles of biological optimization.
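Galileo's observation about hollow bones can be illustrated with a short calculation. The bending stiffness of a beam scales with the second moment of area of its cross-section, so a hollow tube and a solid rod of equal cross-sectional area (and hence equal weight per unit length) can be compared directly. The radii below are arbitrary illustrative values.

import math

def second_moment_solid(r):
    # second moment of area of a solid circular section
    return math.pi * r**4 / 4.0

def second_moment_hollow(r_outer, r_inner):
    # second moment of area of a hollow circular (tubular) section
    return math.pi * (r_outer**4 - r_inner**4) / 4.0

r_solid = 0.010                                   # m, solid rod radius
area = math.pi * r_solid**2                       # cross-sectional area to match
r_outer = 0.015                                   # m, wider hollow tube...
r_inner = math.sqrt(r_outer**2 - area / math.pi)  # ...with the same area (same weight per length)

ratio = second_moment_hollow(r_outer, r_inner) / second_moment_solid(r_solid)
print(ratio)   # ~3.5: the hollow section is several times stiffer in bending for the same weight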
In the 17th century, Descartes suggested a philosophic system whereby all living systems, including the human body (but not the soul), are simply machines ruled by the same mechanical laws, an idea that did much to promote and sustain biomechanical study.
Industrial era
The next major bio-mechanic, Giovanni Alfonso Borelli, embraced Descartes' mechanical philosophy and studied walking, running, jumping, the flight of birds, the swimming of fish, and even the piston action of the heart within a mechanical framework. He could determine the position of the human center of gravity, calculate and measure inspired and expired air volumes, and he showed that inspiration is muscle-driven and expiration is due to tissue elasticity.
Borelli was the first to understand that "the levers of the musculature system magnify motion rather than force, so that muscles must produce much larger forces than those resisting the motion". Influenced by the work of Galileo, whom he personally knew, he had an intuitive understanding of static equilibrium in various joints of the human body well before Newton published the laws of motion. His work is often considered the most important in the history of bio-mechanics because he made so many new discoveries that opened the way for the future generations to continue his work and studies.
It was many years after Borelli before the field of bio-mechanics made any major leaps. After that time, more and more scientists took to learning about the human body and its functions. There are not many notable scientists from the 19th or 20th century in bio-mechanics because the field is far too vast now to attribute one thing to one person. However, the field continues to grow every year and continues to make advances in discovering more about the human body. Because the field became so popular, many institutions and labs have opened over the last century and people continue doing research. With the creation of the American Society of Biomechanics in 1977, the field continues to grow and make many new discoveries.
In the 19th century Étienne-Jules Marey used cinematography to scientifically investigate locomotion. He opened the field of modern 'motion analysis' by being the first to correlate ground reaction forces with movement. In Germany, the brothers Ernst Heinrich Weber and Wilhelm Eduard Weber hypothesized a great deal about human gait, but it was Christian Wilhelm Braune who significantly advanced the science using recent advances in engineering mechanics. During the same period, the engineering mechanics of materials began to flourish in France and Germany under the demands of the Industrial Revolution. This led to the rebirth of bone biomechanics when the railroad engineer Karl Culmann and the anatomist Hermann von Meyer compared the stress patterns in a human femur with those in a similarly shaped crane. Inspired by this finding Julius Wolff proposed the famous Wolff's law of bone remodeling.
Applications
The study of biomechanics ranges from the inner workings of a cell to the movement and development of limbs, to the mechanical properties of soft tissue, and bones. Some simple examples of biomechanics research include the investigation of the forces that act on limbs, the aerodynamics of bird and insect flight, the hydrodynamics of swimming in fish, and locomotion in general across all forms of life, from individual cells to whole organisms. With growing understanding of the physiological behavior of living tissues, researchers are able to advance the field of tissue engineering, as well as develop improved treatments for a wide array of pathologies including cancer.
Biomechanics is also applied to studying human musculoskeletal systems. Such research utilizes force platforms to study human ground reaction forces and infrared videography to capture the trajectories of markers attached to the human body to study human 3D motion. Research also applies electromyography to study muscle activation, investigating muscle responses to external forces and perturbations.
Biomechanics is widely used in the orthopedic industry to design orthopedic implants for human joints, dental parts, external fixations and other medical purposes. Biotribology is a very important part of it: it is the study of the performance and function of biomaterials used for orthopedic implants, and it plays a vital role in improving the design and producing successful biomaterials for medical and clinical purposes. One such example is in tissue-engineered cartilage. The dynamic loading of joints considered as impact is discussed in detail by Emanuel Willert.
It is also tied to the field of engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems. Applied mechanics, most notably mechanical engineering disciplines such as continuum mechanics, mechanism analysis, structural analysis, kinematics and dynamics play prominent roles in the study of biomechanics.
Usually biological systems are much more complex than man-built systems. Numerical methods are hence applied in almost every biomechanical study. Research is done in an iterative process of hypothesis and verification, including several steps of modeling, computer simulation and experimental measurements.
See also
Biomechatronics
Biomedical engineering
Cardiovascular System Dynamics Society
Evolutionary physiology
Forensic biomechanics
International Society of Biomechanics
List of biofluid mechanics research groups
Mechanics of human sexuality
OpenSim (simulation toolkit)
Physical oncology
References
Further reading
External links
Biomechanics and Movement Science Listserver (Biomch-L)
Biomechanics Links
A Genealogy of Biomechanics
Motor control | 0.780243 | 0.995541 | 0.776764 |
Theoretical physics | Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain, and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena.
The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. For example, while developing special relativity, Albert Einstein was concerned with the Lorentz transformation which left Maxwell's equations invariant, but was apparently uninterested in the Michelson–Morley experiment on Earth's drift through a luminiferous aether. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation.
Overview
A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theorem in that while both are based on some form of axioms, judgment of mathematical applicability is not based on agreement with any experimental results. A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms.
A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water; Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles and the quantum mechanical idea that (action and) energy are not continuously variable.
Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance: "phenomenologists" might employ (semi-) empirical formulas and heuristics to agree with experimental results, often without deep physical understanding. "Modelers" (also called "model-builders") often appear much like phenomenologists, but try to model speculative theories that have certain desirable features (rather than on experimental data), or apply the techniques of mathematical modeling to physics problems. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled; e.g., the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics.
Theoretical advances may consist in setting aside old, incorrect paradigms (e.g., aether theory of light propagation, caloric theory of heat, burning consisting of evolving phlogiston, or astronomical bodies revolving around the Earth) or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. Sometimes though, advances may proceed along different paths. For example, an essentially correct theory may need some conceptual or factual revisions; atomic theory, first postulated millennia ago (by several thinkers in Greece and India), and the two-fluid theory of electricity are two cases in point. However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models via the Bohr complementarity principle.
Physical theories become accepted if they are able to make correct predictions and no (or few) incorrect ones. The theory should have, at least as a secondary objective, a certain economy and elegance (compare to mathematical beauty), a notion sometimes called "Occam's razor" after the 14th-century English philosopher William of Occam (or Ockham), in which the simpler of two theories that describe the same matter just as adequately is preferred (but conceptual simplicity may mean mathematical complexity). They are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method.
Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories.
History
Theoretical physics began at least 2,300 years ago, with Pre-Socratic philosophy, and was continued by Plato and Aristotle, whose views held sway for a millennium. During the rise of medieval universities, the only acknowledged intellectual disciplines were the seven liberal arts of the Trivium (grammar, logic, and rhetoric) and the Quadrivium (arithmetic, geometry, music and astronomy). During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon. As the Scientific Revolution gathered pace, the concepts of matter, energy, space, time and causality slowly began to acquire the form we know today, and other sciences spun off from the rubric of natural philosophy. Thus began the modern era of theory with the Copernican paradigm shift in astronomy, soon followed by Johannes Kepler's expressions for planetary orbits, which summarized the meticulous observations of Tycho Brahe; the works of these men (alongside Galileo's) can perhaps be considered to constitute the Scientific Revolution.
The great push toward the modern concept of explanation started with Galileo, one of the few physicists who was both a consummate theoretician and a great experimentalist. The analytic geometry and mechanics of Descartes were incorporated into the calculus and mechanics of Isaac Newton, another theoretician/experimentalist of the highest order, in his Principia Mathematica. The Principia contained a grand synthesis of the work of Copernicus, Galileo and Kepler, as well as Newton's theories of mechanics and gravitation, which held sway as worldviews until the early 20th century. Simultaneously, progress was also made in optics (in particular colour theory and the ancient science of geometrical optics), courtesy of Newton, Descartes and the Dutchmen Snell and Huygens. In the 18th and 19th centuries Joseph-Louis Lagrange, Leonhard Euler and William Rowan Hamilton would extend the theory of classical mechanics considerably. They picked up the interactive intertwining of mathematics and physics begun two millennia earlier by Pythagoras.
Among the great conceptual achievements of the 19th and 20th centuries were the consolidation of the idea of energy (as well as its global conservation) by the inclusion of heat, electricity and magnetism, and then light. The laws of thermodynamics, and most importantly the introduction of the singular concept of entropy began to provide a macroscopic explanation for the properties of matter. Statistical mechanics (followed by statistical physics and Quantum statistical mechanics) emerged as an offshoot of thermodynamics late in the 19th century. Another important event in the 19th century was the discovery of electromagnetic theory, unifying the previously separate phenomena of electricity, magnetism and light.
The pillars of modern physics, and perhaps the most revolutionary theories in the history of physics, have been relativity theory and quantum mechanics. Newtonian mechanics was subsumed under special relativity and Newton's gravity was given a kinematic explanation by general relativity. Quantum mechanics led to an understanding of blackbody radiation (which indeed, was an original motivation for the theory) and of anomalies in the specific heats of solids, and finally to an understanding of the internal structures of atoms and molecules. Quantum mechanics soon gave way to the formulation of quantum field theory (QFT), begun in the late 1920s. In the aftermath of World War 2, further progress brought much renewed interest in QFT, which had stagnated since the early efforts. The same period also saw fresh attacks on the problems of superconductivity and phase transitions, as well as the first applications of QFT in the area of theoretical condensed matter. The 1960s and 70s saw the formulation of the Standard Model of particle physics using QFT and progress in condensed matter physics (theoretical foundations of superconductivity and critical phenomena, among others), in parallel to the applications of relativity to problems in astronomy and cosmology respectively.
All of these achievements depended on the theoretical physics as a moving force both to suggest experiments and to consolidate results — often by ingenious application of existing mathematics, or, as in the case of Descartes and Newton (with Leibniz), by inventing new mathematics. Fourier's studies of heat conduction led to a new branch of mathematics: infinite, orthogonal series.
Modern theoretical physics attempts to unify theories and explain phenomena in further attempts to understand the Universe, from the cosmological to the elementary particle scale. Where experimentation cannot be done, theoretical physics still tries to advance through the use of mathematical models.
Mainstream theories
Mainstream theories (sometimes referred to as central theories) are the body of knowledge of both factual and scientific views and possess the usual scientific qualities of repeatability, consistency with existing well-established science, and experimental support. There do exist mainstream theories that are generally accepted based solely upon their effects explaining a wide variety of data, although the detection, explanation, and possible composition are subjects of debate.
Examples
Big Bang
Chaos theory
Classical mechanics
Classical field theory
Dynamo theory
Field theory
Ginzburg–Landau theory
Kinetic theory of gases
Classical electromagnetism
Perturbation theory (quantum mechanics)
Physical cosmology
Quantum chromodynamics
Quantum complexity theory
Quantum electrodynamics
Quantum field theory
Quantum field theory in curved spacetime
Quantum information theory
Quantum mechanics
Quantum thermodynamics
Relativistic quantum mechanics
Scattering theory
Standard Model
Statistical physics
Theory of relativity
Wave–particle duality
Proposed theories
The proposed theories of physics are usually relatively new theories which deal with the study of physics, including scientific approaches, means for determining the validity of models, and new types of reasoning used to arrive at the theory. However, some proposed theories include theories that have been around for decades and have eluded methods of discovery and testing. Proposed theories can include fringe theories in the process of becoming established (and, sometimes, gaining wider acceptance). Proposed theories usually have not been tested. In addition to the theories like those listed below, there are also different interpretations of quantum mechanics, which may or may not be considered different theories since it is debatable whether they yield different predictions for physical experiments, even in principle. Examples of proposed theories include the AdS/CFT correspondence, Chern–Simons theory, the graviton, the magnetic monopole, string theory, and the theory of everything.
Fringe theories
Fringe theories include any new area of scientific endeavor in the process of becoming established and some proposed theories. It can include speculative sciences. This includes physics fields and physical theories presented in accordance with known evidence, and a body of associated predictions have been made according to that theory.
Some fringe theories go on to become a widely accepted part of physics. Other fringe theories end up being disproven. Some fringe theories are a form of protoscience and others are a form of pseudoscience. The falsification of the original theory sometimes leads to reformulation of the theory.
Examples
Aether (classical element)
Luminiferous aether
Digital physics
Electrogravitics
Stochastic electrodynamics
Tesla's dynamic theory of gravity
Thought experiments vs real experiments
"Thought" experiments are situations created in one's mind, asking a question akin to "suppose you are in this situation, assuming such is true, what would follow?". They are usually created to investigate phenomena that are not readily experienced in every-day situations. Famous examples of such thought experiments are Schrödinger's cat, the EPR thought experiment, simple illustrations of time dilation, and so on. These usually lead to real experiments designed to verify that the conclusion (and therefore the assumptions) of the thought experiments are correct. The EPR thought experiment led to the Bell inequalities, which were then tested to various degrees of rigor, leading to the acceptance of the current formulation of quantum mechanics and probabilism as a working hypothesis.
See also
List of theoretical physicists
Philosophy of physics
Symmetry in quantum mechanics
Timeline of developments in theoretical physics
Double field theory
Notes
References
Further reading
Duhem, Pierre. La théorie physique - Son objet, sa structure, (in French). 2nd edition - 1914. English translation: The physical theory - its purpose, its structure. Republished by Joseph Vrin philosophical bookstore (1981), .
Feynman, et al. The Feynman Lectures on Physics (3 vol.). First edition: Addison–Wesley, (1964, 1966).
Bestselling three-volume textbook covering the span of physics. Reference for both (under)graduate student and professional researcher alike.
Landau et al. Course of Theoretical Physics.
Famous series of books dealing with theoretical concepts in physics covering 10 volumes, translated into many languages and reprinted over many editions. Often known simply as "Landau and Lifschits" or "Landau-Lifschits" in the literature.
Longair, MS. Theoretical Concepts in Physics: An Alternative View of Theoretical Reasoning in Physics. Cambridge University Press; 2nd edition (4 Dec 2003).
Planck, Max (1909). Eight Lectures on theoretical physics. Library of Alexandria.
A set of lectures given in 1909 at Columbia University.
Sommerfeld, Arnold. Vorlesungen über theoretische Physik (Lectures on Theoretical Physics); German, 6 volumes.
A series of lessons from a master educator of theoretical physicists.
External links
MIT Center for Theoretical Physics
How to become a GOOD Theoretical Physicist, a website made by Gerard 't Hooft
Energy–maneuverability theory | Energy–maneuverability theory is a model of aircraft performance. It was developed by Col. John Boyd, a fighter pilot, and Thomas P. Christie, a mathematician with the United States Air Force, and is useful in describing an aircraft's performance as the total of kinetic and potential energies or aircraft specific energy. It relates the thrust, weight, aerodynamic drag, wing area, and other flight characteristics of an aircraft into a quantitative model. This enables the combat capabilities of various aircraft or prospective design trade-offs to be predicted and compared.
Formula
All of these aspects of airplane performance are compressed into a single value by the following formula:

P_s = V (T − D) / W

where P_s is the specific excess power, V is the true airspeed, T is the available thrust, D is the drag, and W is the weight of the aircraft.
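A small Python sketch of the formula above, with made-up numbers for a fighter in level flight (none of these values come from the original studies), shows how the quantities combine into a single specific-excess-power figure:

def specific_excess_power(thrust, drag, weight, airspeed):
    # P_s = V * (T - D) / W; the result has units of speed (e.g. m/s),
    # interpretable as the rate at which the aircraft can gain specific energy.
    return airspeed * (thrust - drag) / weight

T, D, W, V = 76e3, 40e3, 120e3, 250.0     # thrust (N), drag (N), weight (N), airspeed (m/s)
print(specific_excess_power(T, D, W, V))  # 75.0 m/s of specific excess power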
History
John Boyd, a U.S. jet fighter pilot in the Korean War, began developing the theory in the early 1960s. He teamed with mathematician Thomas Christie at Eglin Air Force Base to use the base's high-speed computer to compare the performance envelopes of U.S. and Soviet aircraft from the Korean and Vietnam Wars. They completed a two-volume report on their studies in 1964. Energy Maneuverability came to be accepted within the U.S. Air Force and brought about improvements in the requirements for the F-15 Eagle and later the F-16 Fighting Falcon fighters.
See also
Lagrangian mechanics
Notes
References
Hammond, Grant T. The Mind of War: John Boyd and American Security. Washington, D.C.: Smithsonian Institution Press, 2001.
Coram, Robert. Boyd: The Fighter Pilot Who Changed the Art of War. New York: Back Bay Books, 2002.
Wendl, M.J., G.G. Grose, J.L. Porter, and V.R. Pruitt. Flight/Propulsion Control Integration Aspects of Energy Management. Society of Automotive Engineers, 1974, p. 740480.
Aerospace engineering | 0.787005 | 0.986868 | 0.77667 |
Aristotelian physics | Aristotelian physics is the form of natural philosophy described in the works of the Greek philosopher Aristotle (384–322 BC). In his work Physics, Aristotle intended to establish general principles of change that govern all natural bodies, both living and inanimate, celestial and terrestrialincluding all motion (change with respect to place), quantitative change (change with respect to size or number), qualitative change, and substantial change ("coming to be" [coming into existence, 'generation'] or "passing away" [no longer existing, 'corruption']). To Aristotle, 'physics' was a broad field including subjects which would now be called the philosophy of mind, sensory experience, memory, anatomy and biology. It constitutes the foundation of the thought underlying many of his works.
Key concepts of Aristotelian physics include the structuring of the cosmos into concentric spheres, with the Earth at the centre and celestial spheres around it. The terrestrial sphere was made of four elements, namely earth, air, fire, and water, subject to change and decay. The celestial spheres were made of a fifth element, an unchangeable aether. Objects made of these elements have natural motions: those of earth and water tend to fall; those of air and fire, to rise. The speed of such motion depends on their weights and the density of the medium. Aristotle argued that a vacuum could not exist as speeds would become infinite.
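As a schematic illustration of the last two sentences (this is a modern toy reading, not a formula Aristotle ever wrote), one can model the Aristotelian claim that the speed of natural motion grows with the body's weight and falls with the density of the medium, and see why a void would imply an unbounded speed:

def aristotelian_speed(weight, medium_density, k=1.0):
    # Toy model of the Aristotelian claim: speed proportional to weight,
    # inversely proportional to the density (resistance) of the medium.
    return k * weight / medium_density

for density in (1000.0, 1.2, 0.001):   # roughly water, air, and a near-vacuum (arbitrary units)
    print(density, aristotelian_speed(10.0, density))
# As the medium's density approaches zero the predicted speed grows without bound,
# which is why Aristotle concluded that motion through a void is impossible.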
Aristotle described four causes or explanations of change as seen on earth: the material, formal, efficient, and final causes of things. As regards living things, Aristotle's biology relied on observation of what he considered to be ‘natural kinds’, both those he considered basic and the groups to which he considered these belonged. He did not conduct experiments in the modern sense, but relied on amassing data, observational procedures such as dissection, and making hypotheses about relationships between measurable quantities such as body size and lifespan.
Methods
While consistent with common human experience, Aristotle's principles were not based on controlled, quantitative experiments, so they do not describe our universe in the precise, quantitative way now expected of science. Contemporaries of Aristotle like Aristarchus rejected these principles in favor of heliocentrism, but their ideas were not widely accepted. Aristotle's principles were difficult to disprove merely through casual everyday observation, but later development of the scientific method challenged his views with experiments and careful measurement, using increasingly advanced technology such as the telescope and vacuum pump.
There are clear differences between modern and Aristotelian physics, the main being the use of mathematics, largely absent in Aristotle. Some recent studies, however, have re-evaluated Aristotle's physics, stressing both its empirical validity and its continuity with modern physics.
Concepts
Elements and spheres
Aristotle divided his universe into "terrestrial spheres" which were "corruptible" and where humans lived, and moving but otherwise unchanging celestial spheres.
Aristotle believed that four classical elements make up everything in the terrestrial spheres: earth, air, fire and water. He also held that the heavens are made of a special weightless and incorruptible (i.e. unchangeable) fifth element called "aether". Aether also has the name "quintessence", meaning, literally, "fifth being".
Aristotle considered heavy matter such as iron and other metals to consist primarily of the element earth, with a smaller amount of the other three terrestrial elements. Other, lighter objects, he believed, have less earth, relative to the other three elements in their composition.
The four classical elements were not invented by Aristotle; they were originated by Empedocles. During the Scientific Revolution, the ancient theory of classical elements was found to be incorrect, and was replaced by the empirically tested concept of chemical elements.
Celestial spheres
According to Aristotle, the Sun, Moon, planets and stars are embedded in perfectly concentric "crystal spheres" that rotate eternally at fixed rates. Because the celestial spheres are incapable of any change except rotation, the terrestrial sphere of fire must account for the heat, starlight and occasional meteorites. The lowest, lunar sphere is the only celestial sphere that actually comes in contact with the sublunary orb's changeable, terrestrial matter, dragging the rarefied fire and air along underneath as it rotates. The celestial spheres are composed of the special element aether, eternal and unchanging, the sole capability of which is a uniform circular motion at a given rate (relative to the diurnal motion of the outermost sphere of fixed stars); like Homer's æthere (αἰθήρ), the "pure air" of Mount Olympus, aether was the divine counterpart of the air breathed by mortal beings (ἀήρ, aer).
The concentric, aetherial, cheek-by-jowl "crystal spheres" that carry the Sun, Moon and stars move eternally with unchanging circular motion. Spheres are embedded within spheres to account for the "wandering stars" (i.e. the planets, which, in comparison with the Sun, Moon and stars, appear to move erratically). Mercury, Venus, Mars, Jupiter, and Saturn are the only planets (including minor planets) which were visible before the invention of the telescope, which is why Neptune and Uranus are not included, nor are any asteroids. Later, the belief that all spheres are concentric was forsaken in favor of Ptolemy's deferent and epicycle model. Aristotle submits to the calculations of astronomers regarding the total number of spheres and various accounts give a number in the neighborhood of fifty spheres. An unmoved mover is assumed for each sphere, including a "prime mover" for the sphere of fixed stars. The unmoved movers do not push the spheres (nor could they, being immaterial and dimensionless) but are the final cause of the spheres' motion, i.e. they explain it in a way that's similar to the explanation "the soul is moved by beauty".
Terrestrial change
Unlike the eternal and unchanging celestial aether, each of the four terrestrial elements is capable of changing into either of the two elements it shares a property with: e.g. the cold and wet (water) can transform into the hot and wet (air) or the cold and dry (earth). Any apparent change from cold and wet into the hot and dry (fire) is actually a two-step process, as first one of the properties changes, then the other. These properties are predicated of an actual substance relative to the work it is able to do; that of heating or chilling and of desiccating or moistening. The four elements exist only with regard to this capacity and relative to some potential work. The celestial element is eternal and unchanging, so only the four terrestrial elements account for "coming to be" and "passing away", or, in the terms of Aristotle's On Generation and Corruption (Περὶ γενέσεως καὶ φθορᾶς), "generation" and "corruption".
Natural place
The Aristotelian explanation of gravity is that all bodies move toward their natural place. For the elements earth and water, that place is the center of the (geocentric) universe; the natural place of water is a concentric shell around the Earth because earth is heavier; it sinks in water. The natural place of air is likewise a concentric shell surrounding that of water; bubbles rise in water. Finally, the natural place of fire is higher than that of air but below the innermost celestial sphere (carrying the Moon).
In Book Delta of his Physics (IV.5), Aristotle defines topos (place) in terms of two bodies, one of which contains the other: a "place" is where the inner surface of the former (the containing body) touches the contained body. This definition remained dominant until the beginning of the 17th century, even though it had been questioned and debated by philosophers since antiquity. The most significant early critique was made in terms of geometry by the 11th-century Arab polymath al-Hasan Ibn al-Haytham (Alhazen) in his Discourse on Place.
Natural motion
Terrestrial objects rise or fall, to a greater or lesser extent, according to the ratio of the four elements of which they are composed. For example, earth, the heaviest element, and water fall toward the center of the cosmos; hence the Earth and, for the most part, its oceans have already come to rest there. At the opposite extreme, the lightest elements, air and especially fire, rise up and away from the center.
The elements are not proper substances in Aristotelian theory (or the modern sense of the word). Instead, they are abstractions used to explain the varying natures and behaviors of actual materials in terms of ratios between them.
Motion and change are closely related in Aristotelian physics. Motion, according to Aristotle, involved a change from potentiality to actuality. He gave examples of four types of change, namely change in substance, in quality, in quantity and in place.
Aristotle proposed that the speed at which two identically shaped objects sink or fall is directly proportional to their weights and inversely proportional to the density of the medium through which they move. While describing their terminal velocity, Aristotle must stipulate that there would be no limit at which to compare the speed of atoms falling through a vacuum (they could move indefinitely fast because there would be no particular place for them to come to rest in the void). Now, however, it is understood that at any time prior to achieving terminal velocity in a relatively resistance-free medium like air, two such objects are expected to have nearly identical speeds because both experience a force of gravity proportional to their masses and have thus been accelerating at nearly the same rate. This became especially apparent from the eighteenth century, when partial vacuum experiments began to be made, but some two hundred years earlier Galileo had already demonstrated that objects of different weights reach the ground in similar times.
Unnatural motion
Apart from the natural tendency of terrestrial exhalations to rise and objects to fall, unnatural or forced motion from side to side results from the turbulent collision and sliding of the objects as well as transmutation between the elements (On Generation and Corruption). Aristotle phrased this principle as: "Everything that moves is moved by something else. (Omne quod movetur ab alio movetur.)" When the cause ceases, so does the effect. The cause, according to Aristotle, must be a power (i.e., force) that drives the body as long as the external agent remains in direct contact. Aristotle went on to say that the velocity of the body is directly proportional to the force imparted and inversely proportional to the resistance of the medium in which the motion takes place. This gives, in today's notation, the law v ∝ F/R, where v is the speed, F the imparted power (force) and R the resistance of the medium. This law presented three difficulties that Aristotle was aware of. The first is that if the imparted power is less than the resistance, then in reality it will not move the body, but Aristotle's relation says otherwise. Second, what is the source of the increase in imparted power required to increase the velocity of a freely falling body? Third, what is the imparted power that keeps a projectile in motion after it leaves the agent of projection? Aristotle, in his book Physics, Book 8, Chapter 10, 267a 4, proposed the following solution to the third problem in the case of a shot arrow. The bowstring or hand imparts a certain 'power of being a movent' to the air in contact with it, so that this imparted force is transmitted to the next layer of air, and so on, thus keeping the arrow in motion until the power gradually dissipates.
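The first difficulty can be illustrated with a short numerical sketch. This is illustrative only: the symbols F (imparted force), R (resistance) and the alternative "threshold" rule are modelling assumptions introduced here, not anything stated by Aristotle.

```python
# Sketch of the first difficulty with Aristotle's law of motion:
# v = F / R predicts some motion for any nonzero force, whereas everyday
# experience shows no motion until the force exceeds the resistance.

def aristotle_speed(force, resistance):
    """Speed predicted by v = F / R (Aristotle's relation in modern notation)."""
    return force / resistance

def threshold_speed(force, resistance):
    """Ad-hoc 'threshold' rule: no motion unless the force exceeds the resistance."""
    return (force - resistance) / resistance if force > resistance else 0.0

resistance = 10.0
for force in (2.0, 5.0, 10.0, 20.0, 40.0):
    print(f"F = {force:5.1f}:  v = F/R -> {aristotle_speed(force, resistance):5.2f},"
          f"  with threshold -> {threshold_speed(force, resistance):5.2f}")
```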
Chance
In his Physics Aristotle examines accidents (συμβεβηκός, symbebekòs) that have no cause but chance. "Nor is there any definite cause for an accident, but only chance (τύχη, týche), namely an indefinite (ἀόριστον, aóriston) cause" (Metaphysics V, 1025a25).
It is obvious that there are principles and causes which are generable and destructible apart from the actual processes of generation and destruction; for if this is not true, everything will be of necessity: that is, if there must necessarily be some cause, other than accidental, of that which is generated and destroyed. Will this be, or not? Yes, if this happens; otherwise not (Metaphysics VI, 1027a29).
Continuum and vacuum
Aristotle argues against the indivisibles of Democritus (which differ considerably from the historical and the modern use of the term "atom"). As a place without anything existing at or within it, Aristotle argued against the possibility of a vacuum or void. Because he believed that the speed of an object's motion is proportional to the force being applied (or, in the case of natural motion, the object's weight) and inversely proportional to the density of the medium, he reasoned that objects moving in a void would move indefinitely fast, and thus any and all objects surrounding the void would immediately fill it. The void, therefore, could never form.
The "voids" of modern-day astronomy (such as the Local Void adjacent to our own galaxy) have the opposite effect: ultimately, bodies off-center are ejected from the void due to the gravity of the material outside.
Four causes
According to Aristotle, there are four ways to explain the aitia or causes of change. He writes that "we do not have knowledge of a thing until we have grasped its why, that is to say, its cause."
Aristotle held that there were four kinds of causes.
Material
The material cause of a thing is that of which it is made. For a table, that might be wood; for a statue, that might be bronze or marble.
Formal
The formal cause of a thing is the essential property that makes it the kind of thing it is. In Metaphysics Book Α Aristotle emphasizes that form is closely related to essence and definition. He says for example that the ratio 2:1, and number in general, is the cause of the octave.
Efficient
The efficient cause of a thing is the primary agency by which its matter took its form. For example, the efficient cause of a baby is a parent of the same species and that of a table is a carpenter, who knows the form of the table. In his Physics II, 194b29—32, Aristotle writes: "there is that which is the primary originator of the change and of its cessation, such as the deliberator who is responsible [sc. for the action] and the father of the child, and in general the producer of the thing produced and the changer of the thing changed".
Final
The final cause is that for the sake of which something takes place, its aim or teleological purpose: for a germinating seed, it is the adult plant, for a ball at the top of a ramp, it is coming to rest at the bottom, for an eye, it is seeing, for a knife, it is cutting.
Biology
According to Aristotle, the science of living things proceeds by gathering observations about each natural kind of animal, organizing them into genera and species (the differentiae in History of Animals) and then going on to study the causes (in Parts of Animals and Generation of Animals, his three main biological works).
Organism and mechanism
The four elements make up the uniform materials such as blood, flesh and bone, which are themselves the matter out of which are created the non-uniform organs of the body (e.g. the heart, liver and hands) "which in turn, as parts, are matter for the functioning body as a whole (PA II. 1 646a 13—24)".
See also Organic form.
Psychology
According to Aristotle, perception and thought are similar, though not exactly alike in that perception is concerned only with the external objects that are acting on our sense organs at any given time, whereas we can think about anything we choose. Thought is about universal forms, in so far as they have been successfully understood, based on our memory of having encountered instances of those forms directly.
Medieval commentary
The Aristotelian theory of motion came under criticism and modification during the Middle Ages. Modifications began with John Philoponus in the 6th century, who partly accepted Aristotle's theory that "continuation of motion depends on continued action of a force" but modified it to include his idea that a hurled body also acquires an inclination (or "motive power") for movement away from whatever caused it to move, an inclination that secures its continued motion. This impressed virtue would be temporary and self-expending, meaning that all motion would tend toward the form of Aristotle's natural motion.
In The Book of Healing (1027), the 11th-century Persian polymath Avicenna developed Philoponean theory into the first coherent alternative to Aristotelian theory. Inclinations in the Avicennan theory of motion were not self-consuming but permanent forces whose effects were dissipated only as a result of external agents such as air resistance, making him "the first to conceive such a permanent type of impressed virtue for non-natural motion". Such a self-motion (mayl) is "almost the opposite of the Aristotelian conception of violent motion of the projectile type, and it is rather reminiscent of the principle of inertia, i.e. Newton's first law of motion."
The eldest Banū Mūsā brother, Ja'far Muhammad ibn Mūsā ibn Shākir (800-873), wrote the Astral Motion and The Force of Attraction. The physicist Ibn al-Haytham (965-1039) discussed the theory of attraction between bodies. It seems that he was aware of the magnitude of acceleration due to gravity, and he discovered that the heavenly bodies "were accountable to the laws of physics". During his debate with Avicenna, al-Biruni also criticized the Aristotelian theory of gravity, firstly for denying the existence of levity or gravity in the celestial spheres, and, secondly, for its notion of circular motion being an innate property of the heavenly bodies.
Hibat Allah Abu'l-Barakat al-Baghdaadi (1080–1165) wrote al-Mu'tabar, a critique of Aristotelian physics where he negated Aristotle's idea that a constant force produces uniform motion, as he realized that a force applied continuously produces acceleration, a fundamental law of classical mechanics and an early foreshadowing of Newton's second law of motion. Like Newton, he described acceleration as the rate of change of speed.
In the 14th century, Jean Buridan developed the theory of impetus as an alternative to the Aristotelian theory of motion. The theory of impetus was a precursor to the concepts of inertia and momentum in classical mechanics. Buridan and Albert of Saxony also refer to Abu'l-Barakat in explaining that the acceleration of a falling body is a result of its increasing impetus. In the 16th century, Al-Birjandi discussed the possibility of the Earth's rotation and, in his analysis of what might occur if the Earth were rotating, developed a hypothesis similar to Galileo's notion of "circular inertia", which he described in terms of an observational test.
Life and death of Aristotelian physics
The reign of Aristotelian physics, the earliest known speculative theory of physics, lasted almost two millennia. After the work of many pioneers such as Copernicus, Tycho Brahe, Galileo, Kepler, Descartes and Newton, it became generally accepted that Aristotelian physics was neither correct nor viable. Despite this, it survived as a scholastic pursuit well into the seventeenth century, until universities amended their curricula.
In Europe, Aristotle's theory was first convincingly discredited by Galileo's studies. Using a telescope, Galileo observed that the Moon was not entirely smooth, but had craters and mountains, contradicting the Aristotelian idea of the incorruptibly perfect smooth Moon. Galileo also criticized this notion theoretically; a perfectly smooth Moon would reflect light unevenly like a shiny billiard ball, so that the edges of the moon's disk would have a different brightness than the point where a tangent plane reflects sunlight directly to the eye. A rough moon reflects in all directions equally, leading to a disk of approximately equal brightness which is what is observed. Galileo also observed that Jupiter has moons – i.e. objects revolving around a body other than the Earth – and noted the phases of Venus, which demonstrated that Venus (and, by implication, Mercury) traveled around the Sun, not the Earth.
According to legend, Galileo dropped balls of various densities from the Tower of Pisa and found that lighter and heavier ones fell at almost the same speed. His experiments actually took place using balls rolling down inclined planes, a form of falling sufficiently slow to be measured without advanced instruments.
In a relatively dense medium such as water, a heavier body falls faster than a lighter one. This led Aristotle to speculate that the rate of falling is proportional to the weight and inversely proportional to the density of the medium. From his experience with objects falling in water, he concluded that water is approximately ten times denser than air. By weighing a volume of compressed air, Galileo showed that this overestimates the density of air by a factor of forty. From his experiments with inclined planes, he concluded that if friction is neglected, all bodies fall at the same rate (which is also not strictly true, since not only friction but also the density of the medium relative to the density of the bodies has to be negligible: Aristotle correctly noticed that medium density is a factor but focused on body weight instead of density, while Galileo neglected the medium density, which led him to the correct conclusion for a vacuum).
Galileo also advanced a theoretical argument to support his conclusion. He asked if two bodies of different weights and different rates of fall are tied by a string, does the combined system fall faster because it is now more massive, or does the lighter body in its slower fall hold back the heavier body? The only convincing answer is neither: all the systems fall at the same rate.
Followers of Aristotle were aware that the motion of falling bodies was not uniform, but picked up speed with time. Since time is an abstract quantity, the peripatetics postulated that the speed was proportional to the distance. Galileo established experimentally that the speed is proportional to the time, but he also gave a theoretical argument that the speed could not possibly be proportional to the distance. In modern terms, if the rate of fall is proportional to the distance, the differential expression for the distance y travelled after time t is:
dy/dt = k y
with the condition that y(0) = 0. Galileo demonstrated that this system would stay at y = 0 for all time. If a perturbation set the system into motion somehow, the object would pick up speed exponentially in time, not linearly.
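A short numerical sketch makes the contrast explicit. The proportionality constants k and g, the tiny initial perturbation and the time step are arbitrary illustrative assumptions.

```python
# Compare the two hypotheses about falling bodies:
#   Peripatetic:  dy/dt = k * y   (speed proportional to distance fallen)
#   Galilean:     dy/dt = g * t   (speed proportional to elapsed time)

k, g = 1.0, 9.8
dt, steps = 0.01, 300
y_peripatetic = 1e-6      # tiny perturbation; with exactly 0 it never moves
y_galilean, t = 0.0, 0.0

for _ in range(steps):
    y_peripatetic += k * y_peripatetic * dt   # exponential growth once perturbed
    y_galilean += g * t * dt                  # quadratic growth, y ~ g t^2 / 2
    t += dt

print(f"after t = {t:.2f} s: 'speed ~ distance' gives y = {y_peripatetic:.3e}")
print(f"                   'speed ~ time'     gives y = {y_galilean:.2f} (g t^2/2 = {g*t*t/2:.2f})")
```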
Standing on the surface of the Moon in 1971, David Scott famously repeated Galileo's experiment by dropping a feather and a hammer from each hand at the same time. In the absence of a substantial atmosphere, the two objects fell and hit the Moon's surface at the same time.
The first convincing mathematical theory of gravity – in which two masses are attracted toward each other by a force whose effect decreases according to the inverse square of the distance between them – was Newton's law of universal gravitation. This, in turn, was replaced by the General theory of relativity due to Albert Einstein.
Modern evaluations of Aristotle's physics
Modern scholars differ in their opinions of whether Aristotle's physics were sufficiently based on empirical observations to qualify as science, or else whether they were derived primarily from philosophical speculation and thus fail to satisfy the scientific method.
Carlo Rovelli has argued that Aristotle's physics are an accurate and non-intuitive representation of a particular domain (motion in fluids), and thus are just as scientific as Newton's laws of motion, which also are accurate in some domains while failing in others (i.e. special and general relativity).
See also
Minima naturalia, a hylomorphic concept suggested by Aristotle broadly analogous in Peripatetic and Scholastic physical speculation to the atoms of Epicureanism
Notes
a Here, the term "Earth" does not refer to planet Earth, known by modern science to be composed of a large number of chemical elements. Modern chemical elements are not conceptually similar to Aristotle's elements; the term "air", for instance, does not refer to breathable air.
References
Sources
H. Carteron (1965) "Does Aristotle Have a Mechanics?" in Articles on Aristotle 1. Science eds. Jonathan Barnes, Malcolm Schofield, Richard Sorabji (London: Gerald Duckworth and Company Limited), 161–174.
Further reading
Katalin Martinás, "Aristotelian Thermodynamics" in Thermodynamics: history and philosophy: facts, trends, debates (Veszprém, Hungary 23–28 July 1990), .
Thermal conduction | Thermal conduction is the diffusion of thermal energy (heat) within one material or between materials in contact. The higher temperature object has molecules with more kinetic energy; collisions between molecules distribute this kinetic energy until an object has the same kinetic energy throughout. Thermal conductivity, frequently represented by k, is a property that relates the rate of heat flow per unit area of a material to the temperature gradient across it. Essentially, it is a value that accounts for any property of the material that could change the way it conducts heat. Heat spontaneously flows along a temperature gradient (i.e. from a hotter body to a colder body). For example, heat is conducted from the hotplate of an electric stove to the bottom of a saucepan in contact with it. In the absence of an opposing external driving energy source, within a body or between bodies, temperature differences decay over time, and thermal equilibrium is approached, temperature becoming more uniform.
Every process involving heat transfer takes place by only three methods:
Conduction is heat transfer through stationary matter by physical contact. (The matter is stationary on a macroscopic scale—we know there is thermal motion of the atoms and molecules at any temperature above absolute zero.) Heat transferred between the electric burner of a stove and the bottom of a pan is transferred by conduction.
Convection is the heat transfer by the macroscopic movement of a fluid. This type of transfer takes place in a forced-air furnace and in weather systems, for example.
Heat transfer by radiation occurs when microwaves, infrared radiation, visible light, or another form of electromagnetic radiation is emitted or absorbed. An obvious example is the warming of the Earth by the Sun. A less obvious example is thermal radiation from the human body.
Overview
A region with greater thermal energy (heat) corresponds with greater molecular agitation. Thus when a hot object touches a cooler surface, the highly agitated molecules from the hot object bump into the calm molecules of the cooler surface, transferring the microscopic kinetic energy and causing the colder part or object to heat up. Mathematically, thermal conduction works just like diffusion: as the temperature difference goes up, as the distance ℓ gets shorter, or as the area goes up, thermal conduction increases, according to
P = κ A ΔT / ℓ
where (a numerical example is given after the list):
P, the thermal conduction (power), is the heat per unit time transferred over the distance ℓ between the two temperatures.
κ is the thermal conductivity of the material
A is the cross-sectional area of the object
ΔT is the difference in temperature from one side to the other.
ℓ is the length of the path the heat has to be transferred.
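As a sketch of this relation, the conducted power through a single window pane can be estimated directly from P = κ A ΔT / ℓ. The conductivity of glass and the geometry below are rough, assumed values, not measured data.

```python
# Steady-state conduction through a single glass pane, P = kappa * A * dT / L.
# The conductivity of glass (~1.0 W/(m K)) and the geometry are illustrative assumptions.

kappa = 1.0          # W/(m K), approximate thermal conductivity of window glass
area = 1.5           # m^2, pane area
thickness = 0.004    # m, pane thickness
delta_T = 15.0       # K, temperature difference across the pane

power = kappa * area * delta_T / thickness
print(f"Conducted power: {power:.0f} W")   # about 5600 W for these numbers
```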
Conduction is the main mode of heat transfer for solid materials because the strong inter-molecular forces allow the vibrations of particles to be easily transmitted, in comparison to liquids and gases. Liquids have weaker inter-molecular forces and more space between the particles, which makes the vibrations of particles harder to transmit. Gases have even more space, and therefore infrequent particle collisions. This makes liquids and gases poor conductors of heat.
Thermal contact conductance is the study of heat conduction between solid bodies in contact. A temperature drop is often observed at the interface between the two surfaces in contact. This phenomenon is said to be a result of a thermal contact resistance existing between the contacting surfaces. Interfacial thermal resistance is a measure of an interface's resistance to thermal flow. This thermal resistance differs from contact resistance, as it exists even at atomically perfect interfaces. Understanding the thermal resistance at the interface between two materials is of primary significance in the study of its thermal properties. Interfaces often contribute significantly to the observed properties of the materials.
The inter-molecular transfer of energy could be primarily by elastic impact, as in fluids, or by free-electron diffusion, as in metals, or phonon vibration, as in insulators. In insulators, the heat flux is carried almost entirely by phonon vibrations.
Metals (e.g., copper, platinum, gold, etc.) are usually good conductors of thermal energy. This is due to the way that metals bond chemically: metallic bonds (as opposed to covalent or ionic bonds) have free-moving electrons that transfer thermal energy rapidly through the metal. The electron fluid of a conductive metallic solid conducts most of the heat flux through the solid. Phonon flux is still present but carries less of the energy. Electrons also conduct electric current through conductive solids, and the thermal and electrical conductivities of most metals have about the same ratio. A good electrical conductor, such as copper, also conducts heat well. Thermoelectricity is caused by the interaction of heat flux and electric current. Heat conduction within a solid is directly analogous to diffusion of particles within a fluid, in the situation where there are no fluid currents.
In gases, heat transfer occurs through collisions of gas molecules with one another. In the absence of convection, which relates to a moving fluid or gas phase, thermal conduction through a gas phase is highly dependent on the composition and pressure of this phase, and in particular, the mean free path of gas molecules relative to the size of the gas gap, as given by the Knudsen number Kn.
To quantify the ease with which a particular medium conducts, engineers employ the thermal conductivity, also known as the conductivity constant or conduction coefficient, k. In thermal conductivity, k is defined as "the quantity of heat, Q, transmitted in time (t) through a thickness (L), in a direction normal to a surface of area (A), due to a temperature difference (ΔT) [...]". Thermal conductivity is a material property that is primarily dependent on the medium's phase, temperature, density, and molecular bonding. Thermal effusivity, a quantity derived from conductivity, is a measure of a material's ability to exchange thermal energy with its surroundings.
Steady-state conduction
Steady-state conduction is the form of conduction that happens when the temperature difference(s) driving the conduction are constant, so that (after an equilibration time), the spatial distribution of temperatures (temperature field) in the conducting object does not change any further. Thus, all partial derivatives of temperature with respect to space may either be zero or have nonzero values, but all derivatives of temperature at any point with respect to time are uniformly zero. In steady-state conduction, the amount of heat entering any region of an object is equal to the amount of heat coming out (if this were not so, the temperature would be rising or falling, as thermal energy was being tapped or trapped in a region).
For example, a bar may be cold at one end and hot at the other, but after a state of steady-state conduction is reached, the spatial gradient of temperatures along the bar does not change any further, as time proceeds. Instead, the temperature remains constant at any given cross-section of the rod normal to the direction of heat transfer, and this temperature varies linearly in space in the case where there is no heat generation in the rod.
In steady-state conduction, all the laws of direct current electrical conduction can be applied to "heat currents". In such cases, it is possible to take "thermal resistances" as the analog to electrical resistances. In such cases, temperature plays the role of voltage, and heat transferred per unit time (heat power) is the analog of electric current. Steady-state systems can be modeled by networks of such thermal resistances in series and parallel, in exact analogy to electrical networks of resistors. See purely resistive thermal circuits for an example of such a network.
Transient conduction
During any period in which the temperature changes in time at any place within an object, the mode of thermal energy flow is termed transient conduction. Another term is "non-steady-state" conduction, referring to the time-dependence of temperature fields in an object. Non-steady-state situations appear after an imposed change in temperature at a boundary of an object. They may also occur with temperature changes inside an object, as a result of a new source or sink of heat suddenly introduced within an object, causing temperatures near the source or sink to change in time.
When a new perturbation of temperature of this type happens, temperatures within the system change in time toward a new equilibrium with the new conditions, provided that these do not change. After equilibrium, heat flow into the system once again equals the heat flow out, and temperatures at each point inside the system no longer change. Once this happens, transient conduction is ended, although steady-state conduction may continue if heat flow continues.
If changes in external temperatures or internal heat generation changes are too rapid for the equilibrium of temperatures in space to take place, then the system never reaches a state of unchanging temperature distribution in time, and the system remains in a transient state.
An example of a new source of heat "turning on" within an object, causing transient conduction, is an engine starting in an automobile. In this case, the transient thermal conduction phase for the entire machine is over, and the steady-state phase appears, as soon as the engine reaches steady-state operating temperature. In this state of steady-state equilibrium, temperatures vary greatly from the engine cylinders to other parts of the automobile, but at no point in space within the automobile does temperature increase or decrease. After establishing this state, the transient conduction phase of heat transfer is over.
New external conditions also cause this process: for example, the copper bar in the example steady-state conduction experiences transient conduction as soon as one end is subjected to a different temperature from the other. Over time, the field of temperatures inside the bar reaches a new steady-state, in which a constant temperature gradient along the bar is finally set up, and this gradient then stays constant in time. Typically, such a new steady-state gradient is approached exponentially with time after a new temperature-or-heat source or sink, has been introduced. When a "transient conduction" phase is over, heat flow may continue at high power, so long as temperatures do not change.
An example of transient conduction that does not end with steady-state conduction, but rather no conduction, occurs when a hot copper ball is dropped into oil at a low temperature. Here, the temperature field within the object begins to change as a function of time, as the heat is removed from the metal, and the interest lies in analyzing this spatial change of temperature within the object over time until all gradients disappear entirely (the ball has reached the same temperature as the oil). Mathematically, this condition is also approached exponentially; in theory, it takes infinite time, but in practice, it is over, for all intents and purposes, in a much shorter period. At the end of this process with no heat sink but the internal parts of the ball (which are finite), there is no steady-state heat conduction to reach. Such a state never occurs in this situation, but rather the end of the process is when there is no heat conduction at all.
The analysis of non-steady-state conduction systems is more complex than that of steady-state systems. If the conducting body has a simple shape, then exact analytical mathematical expressions and solutions may be possible (see heat equation for the analytical approach). However, most often, because of complicated shapes with varying thermal conductivities within the shape (i.e., most complex objects, mechanisms or machines in engineering), the application of approximate theories and/or numerical analysis by computer is required. One popular graphical method involves the use of Heisler charts.
Occasionally, transient conduction problems may be considerably simplified if regions of the object being heated or cooled can be identified, for which thermal conductivity is very much greater than that for heat paths leading into the region. In this case, the region with high conductivity can often be treated in the lumped capacitance model, as a "lump" of material with a simple thermal capacitance consisting of its aggregate heat capacity. Such regions warm or cool, but show no significant temperature variation across their extent, during the process (as compared to the rest of the system). This is due to their far higher conductance. During transient conduction, therefore, the temperature across their conductive regions changes uniformly in space, and as a simple exponential in time. An example of such systems is those that follow Newton's law of cooling during transient cooling (or the reverse during heating). The equivalent thermal circuit consists of a simple capacitor in series with a resistor. In such cases, the remainder of the system with a high thermal resistance (comparatively low conductivity) plays the role of the resistor in the circuit.
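A minimal sketch of the lumped-capacitance model described above, for a small copper sphere quenched in a fluid. The property values and the heat transfer coefficient h are assumed, illustrative numbers; the Biot number is checked first to confirm the model applies.

```python
import math

# Lumped-capacitance (Newton's law of cooling) sketch for a small copper sphere.
k = 400.0        # W/(m K), thermal conductivity of copper
rho = 8960.0     # kg/m^3, density of copper
c_p = 385.0      # J/(kg K), specific heat of copper
h = 100.0        # W/(m^2 K), assumed convective heat transfer coefficient
r = 0.005        # m, sphere radius

volume = 4.0 / 3.0 * math.pi * r**3
area = 4.0 * math.pi * r**2
L_c = volume / area                      # characteristic length, r/3 for a sphere
Bi = h * L_c / k                         # Biot number; model valid if Bi < 0.1
tau = rho * volume * c_p / (h * area)    # time constant of the exponential decay

T_fluid, T_0 = 25.0, 300.0               # degrees C
t = 60.0                                 # s
T = T_fluid + (T_0 - T_fluid) * math.exp(-t / tau)
print(f"Bi = {Bi:.4f} (lumped model {'valid' if Bi < 0.1 else 'questionable'})")
print(f"Temperature after {t:.0f} s: {T:.1f} C  (time constant {tau:.0f} s)")
```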
Relativistic conduction
The theory of relativistic heat conduction is a model that is compatible with the theory of special relativity. For most of the last century, it was recognized that the Fourier equation is in contradiction with the theory of relativity because it admits an infinite speed of propagation of heat signals. For example, according to the Fourier equation, a pulse of heat at the origin would be felt at infinity instantaneously. The speed of information propagation is faster than the speed of light in vacuum, which is physically inadmissible within the framework of relativity.
Quantum conduction
Second sound is a quantum mechanical phenomenon in which heat transfer occurs by wave-like motion, rather than by the more usual mechanism of diffusion. Heat takes the place of pressure in normal sound waves. This leads to a very high thermal conductivity. It is known as "second sound" because the wave motion of heat is similar to the propagation of sound in air.
Fourier's law
The law of heat conduction, also known as Fourier's law (compare Fourier's heat equation), states that the rate of heat transfer through a material is proportional to the negative gradient in the temperature and to the area, at right angles to that gradient, through which the heat flows. We can state this law in two equivalent forms: the integral form, in which we look at the amount of energy flowing into or out of a body as a whole, and the differential form, in which we look at the flow rates or fluxes of energy locally.
Newton's law of cooling is a discrete analogue of Fourier's law, while Ohm's law is the electrical analogue of Fourier's law and Fick's laws of diffusion are its chemical analogue.
Differential form
The differential form of Fourier's law of thermal conduction shows that the local heat flux density q is equal to the product of thermal conductivity k and the negative local temperature gradient −∇T. The heat flux density is the amount of energy that flows through a unit area per unit time:
q = −k ∇T
where (including the SI units)
q is the local heat flux density, W/m2,
k is the material's conductivity, W/(m·K),
∇T is the temperature gradient, K/m.
The thermal conductivity k is often treated as a constant, though this is not always true. While the thermal conductivity of a material generally varies with temperature, the variation can be small over a significant range of temperatures for some common materials. In anisotropic materials, the thermal conductivity typically varies with orientation; in this case k is represented by a second-order tensor. In non-uniform materials, k varies with spatial location.
For many simple applications, Fourier's law is used in its one-dimensional form, for example, in the x direction:
q_x = −k dT/dx
In an isotropic medium, Fourier's law leads to the heat equation
∂T/∂t = α ∇²T, where α = k/(ρ c_p) is the thermal diffusivity,
with a fundamental solution famously known as the heat kernel.
Integral form
By integrating the differential form over the material's total surface S, we arrive at the integral form of Fourier's law:
∂Q/∂t = −k ∮S ∇T · dA
where (including the SI units):
∂Q/∂t is the thermal power transferred by conduction (in W), the time derivative of the transferred heat Q (in J),
dA is an oriented surface area element (in m2).
The above differential equation, when integrated for a homogeneous material of 1-D geometry between two endpoints at constant temperature, gives the heat flow rate as
ΔQ/Δt = −k A ΔT/Δx
where
Δt is the time interval during which the amount of heat ΔQ flows through a cross-section of the material,
A is the cross-sectional surface area,
ΔT is the temperature difference between the ends,
Δx is the distance between the ends.
One can define the (macroscopic) thermal resistance of the 1-D homogeneous material:
R = Δx/(k A)
With a simple 1-D steady heat conduction equation which is analogous to Ohm's law for a simple electric resistance:
ΔQ/Δt = ΔT/R
This law forms the basis for the derivation of the heat equation.
Conductance
Writing
U = k/Δx,
where U is the conductance, in W/(m2 K).
Fourier's law can also be stated as:
ΔQ/Δt = U A ΔT.
The reciprocal of conductance is resistance, R, given by:
R = 1/U = Δx/k.
Resistance is additive when several conducting layers lie between the hot and cool regions, because A and Q are the same for all layers. In a multilayer partition, the total conductance is related to the conductance of its layers by:
1/U = 1/U1 + 1/U2 + 1/U3 + ...
or equivalently
R = R1 + R2 + R3 + ...
So, when dealing with a multilayer partition, the following formula is usually used:
ΔQ/Δt = A (T2 − T1) / (Δx1/k1 + Δx2/k2 + Δx3/k3 + ...)
For heat conduction from one fluid to another through a barrier, it is sometimes important to consider the conductance of the thin film of fluid that remains stationary next to the barrier. This thin film of fluid is difficult to quantify because its characteristics depend upon complex conditions of turbulence and viscosity—but when dealing with thin high-conductance barriers it can sometimes be quite significant.
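A sketch of the series-resistance rule for a multilayer partition: the per-unit-area resistances Δx/k of the layers add, and the heat flux follows from the overall temperature difference. Layer materials, thicknesses and conductivities are illustrative assumptions.

```python
# Heat flux through a multilayer wall: per-unit-area resistances R_i = dx_i / k_i
# add in series, and the flux is q = dT / sum(R_i).  Layer data are assumed values.

layers = [
    ("brick",      0.10, 0.70),   # (name, thickness in m, conductivity in W/(m K))
    ("insulation", 0.05, 0.04),
    ("plaster",    0.01, 0.50),
]

delta_T = 20.0   # K across the whole partition

R_total = sum(dx / k for _, dx, k in layers)     # m^2 K / W
q = delta_T / R_total                            # W / m^2
print(f"Total resistance per unit area: {R_total:.3f} m^2 K/W")
print(f"Heat flux: {q:.1f} W/m^2")
```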
Intensive-property representation
The previous conductance equations, written in terms of extensive properties, can be reformulated in terms of intensive properties. Ideally, the formulae for conductance should produce a quantity with dimensions independent of distance, like Ohm's law for electrical resistance, R = V/I, and conductance, G = I/V.
From the electrical formula: R = ρx/A, where ρ is resistivity, x is length, and A is cross-sectional area, we have G = kA/x, where G is conductance, k is conductivity, x is length, and A is cross-sectional area.
For heat,
U = kA/Δx,
where U is the conductance.
Fourier's law can also be stated as:
ΔQ/Δt = U ΔT,
analogous to Ohm's law, I = V/R, or
I = V G.
The reciprocal of conductance is resistance, R, given by:
R = ΔT/(ΔQ/Δt),
analogous to Ohm's law, R = V/I.
The rules for combining resistances and conductances (in series and parallel) are the same for both heat flow and electric current.
Cylindrical shells
Conduction through cylindrical shells (e.g. pipes) can be calculated from the internal radius, r1, the external radius, r2, the length, ℓ, and the temperature difference between the inner and outer wall, T1 − T2.
The surface area of the cylinder is
A = 2π r ℓ
When Fourier's equation is applied:
ΔQ/Δt = −k A dT/dr = −2π k r ℓ dT/dr
and rearranged:
(ΔQ/Δt) ∫ from r1 to r2 of dr/r = −2π k ℓ ∫ from T1 to T2 of dT
then the rate of heat transfer is:
ΔQ/Δt = 2π k ℓ (T1 − T2) / ln(r2/r1)
the thermal resistance is:
R_c = ln(r2/r1) / (2π k ℓ)
and ΔQ/Δt = 2π k ℓ r_m (T1 − T2)/(r2 − r1), where r_m = (r2 − r1)/ln(r2/r1). It is important to note that this is the log-mean radius.
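A brief sketch of the cylindrical-shell formula applied to an insulated pipe. The radii, length and conductivity below are assumed, illustrative values.

```python
import math

# Radial conduction through a pipe wall: Q = 2*pi*k*l*dT / ln(r2/r1).
k = 0.05       # W/(m K), e.g. a typical pipe insulation
r1 = 0.05      # m, inner radius of the insulation layer
r2 = 0.10      # m, outer radius
length = 10.0  # m
dT = 80.0      # K, inner minus outer surface temperature

Q = 2.0 * math.pi * k * length * dT / math.log(r2 / r1)
R_thermal = math.log(r2 / r1) / (2.0 * math.pi * k * length)
print(f"Heat loss: {Q:.1f} W,  thermal resistance: {R_thermal:.3f} K/W")
```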
Spherical
The conduction through a spherical shell with internal radius, r1, and external radius, r2, can be calculated in a similar manner as for a cylindrical shell.
The surface area of the sphere is:
A = 4π r²
Solving in a similar manner as for a cylindrical shell (see above) produces:
ΔQ/Δt = 4π k r1 r2 (T1 − T2) / (r2 − r1)
Transient thermal conduction
Interface heat transfer
The heat transfer at an interface is considered a transient heat flow. To analyze this problem, the Biot number is important to understand how the system behaves. The Biot number is determined by:
Bi = hL/k
The heat transfer coefficient h is introduced in this formula, and is measured in W/(m2·K). If the system has a Biot number of less than 0.1, the material behaves according to Newtonian cooling, i.e. with negligible temperature gradient within the body. If the Biot number is greater than 0.1, the system behaves as a series solution. The temperature profile in terms of time can be derived from the equation
q = −h A (T_solid − T_gas)
which becomes
T_solid(t) = T_gas + (T_initial − T_gas) e^(−h A t / (ρ c_p V))
The heat transfer coefficient, h, is measured in W/(m2·K), and represents the transfer of heat at an interface between two materials. This value is different at every interface and is an important concept in understanding heat flow at an interface.
The series solution can be analyzed with a nomogram. A nomogram has a relative temperature as the coordinate and the Fourier number, which is calculated by
Fo = α t / L²
The Biot number increases as the Fourier number decreases. There are five steps to determine a temperature profile in terms of time (a numerical sketch of the first steps is given after the list):
Calculate the Biot number
Determine which relative depth matters, either x or L.
Convert time to the Fourier number.
Convert to relative temperature with the boundary conditions.
Read the relative temperature from the point on the nomogram where the computed Fourier number meets the curve for the specified Biot number.
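A minimal sketch of the steps that can be carried out without the nomogram itself, namely computing the Biot and Fourier numbers. All material and process values below are assumed for illustration.

```python
# Steps of the procedure that need no chart: compute the Biot and Fourier numbers.
k = 50.0        # W/(m K), thermal conductivity of the solid
rho = 7800.0    # kg/m^3
c_p = 500.0     # J/(kg K)
h = 500.0       # W/(m^2 K), interface heat transfer coefficient
L = 0.02        # m, characteristic length (e.g. plate half-thickness)
t = 120.0       # s, elapsed time

alpha = k / (rho * c_p)        # thermal diffusivity, m^2/s
Bi = h * L / k                 # Biot number
Fo = alpha * t / L**2          # Fourier number (dimensionless time)

regime = "lumped (Newtonian) cooling" if Bi < 0.1 else "series solution / charts"
print(f"alpha = {alpha:.2e} m^2/s, Bi = {Bi:.2f}, Fo = {Fo:.2f}  ->  {regime}")
```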
Applications
Splat cooling
Splat cooling is a method for quenching small droplets of molten materials by rapid contact with a cold surface. The particles undergo a characteristic cooling process, with the heat profile at t = 0 having the initial temperature as the maximum at x = 0 and T = 0 at x = −∞ and x = +∞, and the heat profile at t = ∞ as the boundary conditions. Splat cooling rapidly ends in a steady-state temperature, and is similar in form to the Gaussian diffusion equation. The temperature profile, with respect to the position and time of this type of cooling, varies approximately as a Gaussian of position whose width grows with time:
T(x, t) ∝ exp(−x²/(4 α t))
Splat cooling is a fundamental concept that has been adapted for practical use in the form of thermal spraying. The thermal diffusivity coefficient, represented as α, can be written as α = k/(ρ c_p). This varies according to the material.
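As a rough illustration (property values assumed for a generic metal melt), the thermal diffusivity sets the diffusion length √(αt) over which a splat-cooled droplet equilibrates.

```python
import math

# Thermal diffusivity alpha = k / (rho * c_p) and the diffusion length sqrt(alpha * t).
k = 100.0      # W/(m K)
rho = 7000.0   # kg/m^3
c_p = 800.0    # J/(kg K)

alpha = k / (rho * c_p)
for t in (1e-6, 1e-4, 1e-2):   # seconds
    print(f"t = {t:.0e} s -> diffusion length ~ {math.sqrt(alpha * t) * 1e6:.1f} micrometres")
```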
Metal quenching
Metal quenching is a transient heat transfer process in terms of the time temperature transformation (TTT). It is possible to manipulate the cooling process to adjust the phase of a suitable material. For example, appropriate quenching of steel can convert a desirable proportion of its content of austenite to martensite, creating a very hard and strong product. To achieve this, it is necessary to quench at the "nose" (or eutectic) of the TTT diagram. Since materials differ in their Biot numbers, the time it takes for the material to quench, or the Fourier number, varies in practice. In steel, the quenching temperature range is generally from 600 °C to 200 °C. To control the quenching time and to select suitable quenching media, it is necessary to determine the Fourier number from the desired quenching time, the relative temperature drop, and the relevant Biot number. Usually, the correct figures are read from a standard nomogram. By calculating the heat transfer coefficient from this Biot number, one can find a liquid medium suitable for the application.
Zeroth law of thermodynamics
One statement of the so-called zeroth law of thermodynamics is directly focused on the idea of conduction of heat. Bailyn (1994) writes that "the zeroth law may be stated: All diathermal walls are equivalent".
A diathermal wall is a physical connection between two bodies that allows the passage of heat between them. Bailyn is referring to diathermal walls that exclusively connect two bodies, especially conductive walls.
This statement of the "zeroth law" belongs to an idealized theoretical discourse, and actual physical walls may have peculiarities that do not conform to its generality.
For example, the material of the wall must not undergo a phase transition, such as evaporation or fusion, at the temperature at which it must conduct heat. But when only thermal equilibrium is considered and time is not urgent, so that the conductivity of the material does not matter too much, one suitable heat conductor is as good as another. Conversely, another aspect of the zeroth law is that, subject again to suitable restrictions, a given diathermal wall is indifferent to the nature of the heat bath to which it is connected. For example, the glass bulb of a thermometer acts as a diathermal wall whether exposed to a gas or a liquid, provided that they do not corrode or melt it.
These differences are among the defining characteristics of heat transfer. In a sense, they are symmetries of heat transfer.
Instruments
Thermal conductivity analyzer
Thermal conduction property of any gas under standard conditions of pressure and temperature is a fixed quantity. This property of a known reference gas or known reference gas mixtures can, therefore, be used for certain sensory applications, such as the thermal conductivity analyzer.
The working of this instrument is by principle based on the Wheatstone bridge containing four filaments whose resistances are matched. Whenever a certain gas is passed over such a network of filaments, the thermal conductivity of the gas alters the rate at which the filaments lose heat, changing their temperature and hence their resistance, and thereby changing the net voltage output from the Wheatstone bridge. This voltage output will be correlated with the database to identify the gas sample.
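A hedged sketch of the bridge relation involved: the output voltage of a Wheatstone bridge as one filament's resistance drifts with the sample gas's thermal conductivity. The resistances and supply voltage are made-up illustrative values, not the specification of any real analyzer.

```python
# Wheatstone bridge output as one filament resistance changes.
# V_out = V_in * (R2/(R1+R2) - R4/(R3+R4)); all values are illustrative.

def bridge_output(v_in, r1, r2, r3, r4):
    return v_in * (r2 / (r1 + r2) - r4 / (r3 + r4))

v_in = 5.0       # V, bridge supply
r_ref = 100.0    # ohm, matched filament resistance
for drift in (0.0, 0.5, 1.0, 2.0):   # ohm, resistance change of the sensing filament
    v_out = bridge_output(v_in, r_ref, r_ref + drift, r_ref, r_ref)
    print(f"sensing filament {r_ref + drift:.1f} ohm -> bridge output {v_out * 1000:.1f} mV")
```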
Gas sensor
The principle of thermal conductivity of gases can also be used to measure the concentration of a gas in a binary mixture of gases.
Working: if the same gas is present around all the Wheatstone bridge filaments, then the same temperature is maintained in all the filaments and hence the same resistances are also maintained, resulting in a balanced Wheatstone bridge. However, if a dissimilar gas sample (or gas mixture) is passed over one set of two filaments and the reference gas over the other set of two filaments, then the Wheatstone bridge becomes unbalanced, and the resulting net voltage output of the circuit will be correlated with the database to identify the constituents of the sample gas.
Using this technique many unknown gas samples can be identified by comparing their thermal conductivity with that of a reference gas of known thermal conductivity. The most commonly used reference gas is nitrogen, as the thermal conductivity of most common gases (except hydrogen and helium) is similar to that of nitrogen.
See also
List of thermal conductivities
Electrical conduction
Convection diffusion equation
R-value (insulation)
Heat pipe
Fick's law of diffusion
Relativistic heat conduction
Churchill–Bernstein equation
Fourier number
Biot number
False diffusion
General equation of heat transfer
References
H. S. Carslaw and J. C. Jaeger 'Conduction of heat in solids' Oxford University Press, USA 1959
F. Dehghani, CHNG2801 – 'Conservation and Transport Processes: Course Notes', University of Sydney, Sydney 2007
Jan Taler, Piotr Duda, 'Solving Direct and Inverse Heat Conduction Problems' Springer-Verlag Berlin Heidelberg 2005
Liqiu Wang, Xuesheng Zhou, Xiaohao Wei, 'Heat Conduction: Mathematical Models and Analytical Solutions' Springer 2008
W. Kelly, 'Understanding Heat Conduction' Nova Science Publishers, 2010
Latif M. Jiji, Amir H. Danesh-Yazdi, 'Heat Conduction' Springer, Fourth Edition 2024
John H Lienhard IV and John H Lienhard V, 'A Heat Transfer Textbook', Fifth Edition, Dover Pub., Mineola, NY, 2019
External links
Heat conduction – Thermal-FluidsPedia
Newton's Law of Cooling by Jeff Bryant based on a program by Stephen Wolfram, Wolfram Demonstrations Project.
Enthalpy | Enthalpy is the sum of a thermodynamic system's internal energy and the product of its pressure and volume. It is a state function in thermodynamics used in many measurements in chemical, biological, and physical systems at a constant external pressure, which is conveniently provided by the large ambient atmosphere. The pressure–volume term expresses the work that was done against constant external pressure to establish the system's physical dimensions from an initial volume of zero to some final volume V (as W = pV), i.e. to make room for it by displacing its surroundings.
The pressure-volume term is very small for solids and liquids at common conditions, and fairly small for gases. Therefore, enthalpy is a stand-in for energy in chemical systems; bond, lattice, solvation, and other chemical "energies" are actually enthalpy differences. As a state function, enthalpy depends only on the final configuration of internal energy, pressure, and volume, not on the path taken to achieve it.
In the International System of Units (SI), the unit of measurement for enthalpy is the joule. Other historical conventional units still in use include the calorie and the British thermal unit (BTU).
The total enthalpy of a system cannot be measured directly because the internal energy contains components that are unknown, not easily accessible, or are not of interest for the thermodynamic problem at hand. In practice, a change in enthalpy is the preferred expression for measurements at constant pressure, because it simplifies the description of energy transfer. When transfer of matter into or out of the system is also prevented and no electrical or mechanical (stirring shaft or lift pumping) work is done, at constant pressure the enthalpy change equals the energy exchanged with the environment by heat.
In chemistry, the standard enthalpy of reaction is the enthalpy change when reactants in their standard states (p = 1 bar; usually T = 298 K) change to products in their standard states.
This quantity is the standard heat of reaction at constant pressure and temperature, but it can be measured by calorimetric methods even if the temperature does vary during the measurement, provided that the initial and final pressure and temperature correspond to the standard state. The value does not depend on the path from initial to final state because enthalpy is a state function.
Enthalpies of chemical substances are usually listed for 1 bar (100 kPa) pressure as a standard state. Enthalpies and enthalpy changes for reactions vary as a function of temperature,
but tables generally list the standard heats of formation of substances at 25 °C (298 K). For endothermic (heat-absorbing) processes, the change ΔH is a positive value; for exothermic (heat-releasing) processes it is negative.
The enthalpy of an ideal gas is independent of its pressure or volume, and depends only on its temperature, which correlates to its thermal energy. Real gases at common temperatures and pressures often closely approximate this behavior, which simplifies practical thermodynamic design and analysis.
The word "enthalpy" is derived from the Greek word enthalpein, which means to heat.
Definition
The enthalpy H of a thermodynamic system is defined as the sum of its internal energy and the product of its pressure and volume:
H = U + pV,
where U is the internal energy, p is pressure, and V is the volume of the system; pV is sometimes referred to as the pressure energy.
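As a hedged numerical illustration of the definition: the gas, amount and conditions below are arbitrary choices, and the monatomic ideal-gas internal energy formula is an added assumption, not part of the definition itself.

```python
# Enthalpy of a sample of ideal monatomic gas: H = U + p*V, with U = (3/2) n R T
# and p*V = n R T.  The amount and conditions are illustrative assumptions.
R = 8.314          # J/(mol K)
n = 1.0            # mol
T = 298.15         # K
p = 101_325.0      # Pa

V = n * R * T / p          # m^3, from the ideal-gas law
U = 1.5 * n * R * T        # J, internal (thermal) energy of a monatomic ideal gas
H = U + p * V              # J
print(f"V = {V * 1000:.2f} L, U = {U / 1000:.2f} kJ, pV = {p * V / 1000:.2f} kJ, H = {H / 1000:.2f} kJ")
```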
Enthalpy is an extensive property; it is proportional to the size of the system (for homogeneous systems). As intensive properties, the specific enthalpy, h = H/m, is referenced to a unit of mass m of the system, and the molar enthalpy, H_m = H/n, where n is the number of moles. For inhomogeneous systems the enthalpy is the sum of the enthalpies of the component subsystems:
H = Σ_k H_k,
where
H is the total enthalpy of all the subsystems,
k refers to the various subsystems,
H_k refers to the enthalpy of each subsystem.
A closed system may lie in thermodynamic equilibrium in a static gravitational field, so that its pressure p varies continuously with altitude, while, because of the equilibrium requirement, its temperature T is invariant with altitude. (Correspondingly, the system's gravitational potential energy density also varies with altitude.) Then the enthalpy summation becomes an integral:
H = ∫ (ρh) dV,
where
ρ ("rho") is density (mass per unit volume),
h is the specific enthalpy (enthalpy per unit mass),
(ρh) represents the enthalpy density (enthalpy per unit volume),
dV denotes an infinitesimally small element of volume within the system, for example, the volume of an infinitesimally thin horizontal layer.
The integral therefore represents the sum of the enthalpies of all the elements of the volume.
The enthalpy of a closed homogeneous system is its energy function H(S, p), with its entropy S and its pressure p as natural state variables which provide a differential relation for dH of the simplest form, derived as follows. We start from the first law of thermodynamics for closed systems for an infinitesimal process:
dU = δQ − δW,
where
δQ is a small amount of heat added to the system,
δW is a small amount of work performed by the system.
In a homogeneous system in which only reversible processes or pure heat transfer are considered, the second law of thermodynamics gives δQ = T dS, with T the absolute temperature and dS the infinitesimal change in entropy S of the system. Furthermore, if only pV work is done, δW = p dV. As a result,
dU = T dS − p dV.
Adding d(pV) to both sides of this expression gives
dU + d(pV) = T dS − p dV + d(pV),
or
d(U + pV) = T dS + V dp.
So
dH(S, p) = T dS + V dp,
and the coefficients of the natural variable differentials dS and dp are just the single variables T and V.
Other expressions
The above expression of dH in terms of entropy and pressure may be unfamiliar to some readers. There are also expressions in terms of more directly measurable variables such as temperature and pressure:
dH = C_p dT + V(1 − αT) dp.
Here C_p is the heat capacity at constant pressure and α is the coefficient of (cubic) thermal expansion:
α = (1/V) (∂V/∂T)_p.
With this expression one can, in principle, determine the enthalpy if C_p and V are known as functions of T and p. However the expression is more complicated than dH = T dS + V dp because T is not a natural variable for the enthalpy H.
At constant pressure, dp = 0, so that dH = C_p dT. For an ideal gas, dH reduces to this form even if the process involves a pressure change, because αT = 1 (i.e. α = 1/T for an ideal gas).
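A small sketch of this constant-pressure case for an ideal diatomic gas; the amount, the temperature range and the value C_p = 7R/2 are illustrative assumptions.

```python
# Enthalpy change of an ideal gas heated at constant pressure: dH = n * C_p * dT.
R = 8.314              # J/(mol K)
n = 2.0                # mol
C_p = 3.5 * R          # J/(mol K), ideal diatomic gas without vibration
T1, T2 = 300.0, 400.0  # K

delta_H = n * C_p * (T2 - T1)
print(f"Delta H = {delta_H / 1000:.2f} kJ  (equal to the heat absorbed at constant pressure)")
```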
In a more general form, the first law describes the internal energy with additional terms involving the chemical potential and the number of particles of various types. The differential statement for dH then becomes
dH = T dS + V dp + Σ_i μ_i dN_i,
where μ_i is the chemical potential per particle for a type-i particle, and N_i is the number of such particles. The last term can also be written as μ_i dn_i (with n_i the number of moles of component i added to the system and, in this case, μ_i the molar chemical potential) or as μ_i dm_i (with m_i the mass of component i added to the system and, in this case, μ_i the specific chemical potential).
Characteristic functions and natural state variables
The enthalpy, H(S, p, {N_i}), expresses the thermodynamics of a system in the energy representation. As a function of state, its arguments include both one intensive and several extensive state variables. The state variables S, p, and {N_i} are said to be the natural state variables in this representation. They are suitable for describing processes in which they are determined by factors in the surroundings. For example, when a virtual parcel of atmospheric air moves to a different altitude, the pressure surrounding it changes, and the process is often so rapid that there is too little time for heat transfer. This is the basis of the so-called adiabatic approximation that is used in meteorology.
Conjugate with the enthalpy, with these arguments, the other characteristic function of state of a thermodynamic system is its entropy, as a function, of the same list of variables of state, except that the entropy, , is replaced in the list by the enthalpy, . It expresses the entropy representation. The state variables , , and are said to be the natural state variables in this representation. They are suitable for describing processes in which they are experimentally controlled. For example, and can be controlled by allowing heat transfer, and by varying only the external pressure on the piston that sets the volume of the system.
Physical interpretation
The term U is the energy of the system, and the term pV can be interpreted as the work that would be required to "make room" for the system if the pressure of the environment remained constant. When a system, for example, n moles of a gas of volume V at pressure p and temperature T, is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy U plus pV, where pV is the work done in pushing against the ambient (atmospheric) pressure.
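For a rough sense of scale, a small sketch (values chosen for illustration): the pV term for one mole of an ideal gas displaced against atmospheric pressure at room temperature amounts to only a few kilojoules.
<syntaxhighlight lang="python">
# "Making room" for one mole of ideal gas against atmospheric pressure.
R = 8.314        # J/(mol K)
T = 298.15       # K
n = 1.0          # mol
pV = n * R * T   # ideal gas: pV = nRT, the work of displacing the atmosphere
print(f"pV ≈ {pV / 1e3:.2f} kJ per mole")   # ≈ 2.48 kJ
</syntaxhighlight>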
In physics and statistical mechanics it may be more interesting to study the internal properties of a constant-volume system and therefore the internal energy is used.
In chemistry, experiments are often conducted at constant atmospheric pressure, and the pressure–volume work represents a small, well-defined energy exchange with the atmosphere, so that ΔH is the appropriate expression for the heat of reaction. For a heat engine, the change in its enthalpy after a full cycle is equal to zero, since the final and initial states are equal.
Relationship to heat
In order to discuss the relation between the enthalpy increase and heat supply, we return to the first law for closed systems, with the physics sign convention: dU = δQ − δW, where the heat δQ is supplied by conduction, radiation, or Joule heating. We apply it to the special case with a constant pressure at the surface. In this case the work is given by p dV (where p is the pressure at the surface and dV is the increase of the volume of the system). Cases of long-range electromagnetic interaction require further state variables in their formulation, and are not considered here. In this case the first law reads:
dU = δQ − p dV
Now,
dH = dU + d(pV)
So
dH = δQ − p dV + p dV + V dp = δQ + V dp
If the system is under constant pressure, dp = 0, and consequently the increase in enthalpy of the system is equal to the heat added:
dH = δQ
This is why the now-obsolete term heat content was used for enthalpy in the 19th century.
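A simple illustration of dH = δQ at constant pressure, with assumed values: heating a known mass of water in an open vessel, where the heat supplied equals the enthalpy increase.
<syntaxhighlight lang="python">
# Heating 1 kg of liquid water at constant (atmospheric) pressure.
# cp of water is taken as roughly constant; all values are illustrative.
m = 1.0            # kg
cp_water = 4186.0  # J/(kg K)
dT = 60.0          # K, e.g. from 20 °C to 80 °C

Q = m * cp_water * dT   # heat supplied at constant pressure
dH = Q                  # the enthalpy rises by the same amount
print(f"Q = ΔH ≈ {dH / 1e3:.0f} kJ")
</syntaxhighlight>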
Applications
In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from "nothingness"; the mechanical work required, pV, differs based upon the conditions that obtain during the creation of the thermodynamic system.
Energy must be supplied to remove particles from the surroundings to make space for the creation of the system, assuming that the pressure remains constant; this is the pV term. The supplied energy must also provide the change in internal energy, U, which includes activation energies, ionization energies, mixing energies, vaporization energies, chemical bond energies, and so forth. Together, these constitute the change in the enthalpy U + pV. For systems at constant pressure, with no external work done other than the pV work, the change in enthalpy is the heat received by the system.
For a simple system with a constant number of particles at constant pressure, the difference in enthalpy is the maximum amount of thermal energy derivable from an isobaric thermodynamic process.
Heat of reaction
The total enthalpy of a system cannot be measured directly; the enthalpy change of a system is measured instead. Enthalpy change is defined by the following equation:
ΔH = H_f − H_i
where
ΔH is the enthalpy change,
H_f is the final enthalpy of the system (in a chemical reaction, the enthalpy of the products or the system at equilibrium),
H_i is the initial enthalpy of the system (in a chemical reaction, the enthalpy of the reactants).
For an exothermic reaction at constant pressure, the system's change in enthalpy, ΔH, is negative due to the products of the reaction having a smaller enthalpy than the reactants, and equals the heat released in the reaction if no electrical or shaft work is done. In other words, the overall decrease in enthalpy is achieved by the generation of heat.
Conversely, for a constant-pressure endothermic reaction, ΔH is positive and equal to the heat absorbed in the reaction.
From the definition of enthalpy as H = U + pV, the enthalpy change at constant pressure is ΔH = ΔU + p ΔV. However, for most chemical reactions, the work term p ΔV is much smaller than the internal energy change ΔU, which is approximately equal to ΔH. As an example, for the combustion of carbon monoxide, 2 CO(g) + O2(g) → 2 CO2(g), ΔH = −566.0 kJ and ΔU = −563.5 kJ.
Since the differences are so small, reaction enthalpies are often described as reaction energies and analyzed in terms of bond energies.
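A short sketch of this bookkeeping, taking the tabulated reaction enthalpy as an input assumption and using the ideal-gas relation ΔH = ΔU + Δn_gas RT:
<syntaxhighlight lang="python">
# Relating ΔH and ΔU for a gas-phase reaction via ΔH = ΔU + Δn_gas * R * T.
# The reaction enthalpy below is a commonly tabulated value; treat it as an assumption.
R = 8.314          # J/(mol K)
T = 298.15         # K
dH = -566.0e3      # J, for 2 CO(g) + O2(g) -> 2 CO2(g)
dn_gas = 2 - 3     # change in moles of gas (3 mol of gas become 2 mol)

dU = dH - dn_gas * R * T
print(f"ΔU ≈ {dU / 1e3:.1f} kJ  (pV work ≈ {dn_gas * R * T / 1e3:.2f} kJ)")
</syntaxhighlight>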
Specific enthalpy
The specific enthalpy of a uniform system is defined as h = H/m, where m is the mass of the system. The SI unit for specific enthalpy is joule per kilogram. It can be expressed in other specific quantities by h = u + pv, where u is the specific internal energy, p is the pressure, and v is the specific volume, which is equal to 1/ρ, where ρ is the density.
Enthalpy changes
An enthalpy change describes the change in enthalpy observed in the constituents of a thermodynamic system when undergoing a transformation or chemical reaction. It is the difference between the enthalpy after the process has completed, i.e. the enthalpy of the products assuming that the reaction goes to completion, and the initial enthalpy of the system, namely the reactants. These processes are specified solely by their initial and final states, so that the enthalpy change for the reverse is the negative of that for the forward process.
A common standard enthalpy change is the enthalpy of formation, which has been determined for a large number of substances. Enthalpy changes are routinely measured and compiled in chemical and physical reference works, such as the CRC Handbook of Chemistry and Physics. The following is a selection of enthalpy changes commonly recognized in thermodynamics.
When used in these recognized terms the qualifier change is usually dropped and the property is simply termed enthalpy of process. Since these properties are often used as reference values it is very common to quote them for a standardized set of environmental parameters, or standard conditions, including:
A pressure of one atmosphere (1 atm or 1013.25 hPa) or 1 bar
A temperature of 25 °C or 298.15 K
A concentration of 1.0 M when the element or compound is present in solution
Elements or compounds in their normal physical states, i.e. standard state
For such standardized values the name of the enthalpy is commonly prefixed with the term standard, e.g. standard enthalpy of formation.
Chemical properties
Enthalpy of reaction - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of substance reacts completely.
Enthalpy of formation - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a compound is formed from its elementary antecedents.
Enthalpy of combustion - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a substance burns completely with oxygen.
Enthalpy of hydrogenation - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of an unsaturated compound reacts completely with an excess of hydrogen to form a saturated compound.
Enthalpy of atomization - is defined as the enthalpy change required to separate one mole of a substance into its constituent atoms completely.
Enthalpy of neutralization - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of water is formed when an acid and a base react.
Standard enthalpy of solution - is defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a solute is dissolved completely in an excess of solvent, so that the solution is at infinite dilution.
Standard enthalpy of denaturation (biochemistry) - is defined as the enthalpy change required to denature one mole of compound.
Enthalpy of hydration - is defined as the enthalpy change observed when one mole of gaseous ions is completely dissolved in water, forming one mole of aqueous ions.
Physical properties
Enthalpy of fusion - is defined as the enthalpy change required to completely change the state of one mole of substance from solid to liquid.
Enthalpy of vaporization - is defined as the enthalpy change required to completely change the state of one mole of substance from liquid to gas.
Enthalpy of sublimation - is defined as the enthalpy change required to completely change the state of one mole of substance from solid to gas.
Lattice enthalpy - is defined as the energy required to separate one mole of an ionic compound into separated gaseous ions to an infinite distance apart (meaning no force of attraction).
Enthalpy of mixing - is defined as the enthalpy change upon mixing of two (non-reacting) chemical substances.
Open systems
In thermodynamic open systems, mass (of substances) may flow in and out of the system boundaries. The first law of thermodynamics for open systems states: The increase in the internal energy of a system is equal to the amount of energy added to the system by mass flowing in and by heating, minus the amount lost by mass flowing out and in the form of work done by the system:
dU = δQ + dU_in − dU_out − δW
where dU_in is the average internal energy entering the system, and dU_out is the average internal energy leaving the system.
The region of space enclosed by the boundaries of the open system is usually called a control volume, and it may or may not correspond to physical walls. If we choose the shape of the control volume such that all flow in or out occurs perpendicular to its surface, then the flow of mass into the system performs work as if it were a piston of fluid pushing mass into the system, and the system performs work on the flow of mass out as if it were driving a piston of fluid. There are then two types of work performed: flow work described above, which is performed on the fluid (this is also often called pV work), and mechanical work (shaft work), which may be performed on some mechanical device such as a turbine or pump.
These two types of work are expressed in the equation
δW = d(p_out V_out) − d(p_in V_in) + δW_shaft
Substitution into the equation above for the control volume (cv) yields:
dU_cv = δQ + dU_in + d(p_in V_in) − dU_out − d(p_out V_out) − δW_shaft
The definition of enthalpy, H = U + pV, permits us to use this thermodynamic potential to account for both internal energy and pV work in fluids for open systems:
dU_cv = δQ + dH_in − dH_out − δW_shaft
If we also allow the system boundary to move (e.g. due to moving pistons), we get a rather general form of the first law for open systems.
In terms of time derivatives, using Newton's dot notation for time derivatives, it reads:
dU/dt = Σ_k Q̇_k + Σ_k Ḣ_k − Σ_k p_k (dV_k/dt) − P
with sums over the various places k where heat is supplied, mass flows into the system, and boundaries are moving. The Ḣ_k terms represent enthalpy flows, which can be written as
Ḣ_k = h_k ṁ_k = H_m,k ṅ_k
with ṁ_k the mass flow and ṅ_k the molar flow at position k, respectively, and h_k and H_m,k the corresponding specific and molar enthalpies. The term p_k (dV_k/dt) represents the rate of change of the system volume at position k that results in pV power done by the system. The parameter P represents all other forms of power done by the system, such as shaft power, but it can also be, say, electric power produced by an electrical power plant.
Note that the previous expression holds true only if the kinetic energy flow rate is conserved between system inlet and outlet. Otherwise, it has to be included in the enthalpy balance. During steady-state operation of a device (see turbine, pump, and engine), the average dU/dt may be set equal to zero. This yields a useful expression for the average power generation for these devices in the absence of chemical reactions:
P = Σ_k ⟨Ḣ_k⟩ − Σ_k ⟨p_k dV_k/dt⟩
where the angle brackets denote time averages. The technical importance of the enthalpy is directly related to its presence in the first law for open systems, as formulated above.
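A minimal sketch of this steady-state balance for a single-inlet, single-outlet device such as a turbine, assuming negligible kinetic and potential energy changes; the numbers are placeholders, not data from the text.
<syntaxhighlight lang="python">
# Steady-state shaft power of a single-stream device:
# P = m_dot * (h_in - h_out) + Q_dot  (positive P = power delivered by the system).
def shaft_power(m_dot, h_in, h_out, Q_dot=0.0):
    """Average power in W; m_dot in kg/s, specific enthalpies in J/kg."""
    return m_dot * (h_in - h_out) + Q_dot

# Placeholder values for an adiabatic steam turbine (enthalpies would normally
# come from steam tables):
P = shaft_power(m_dot=10.0, h_in=3.40e6, h_out=2.60e6)
print(f"P ≈ {P / 1e6:.1f} MW")
</syntaxhighlight>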
Diagrams
The enthalpy values of important substances can be obtained using commercial software. Practically all relevant material properties can be obtained either in tabular or in graphical form. There are many types of diagrams, such as h–T diagrams, which give the specific enthalpy as a function of temperature for various pressures, and h–p diagrams, which give h as a function of p for various T. One of the most common diagrams is the temperature–specific entropy diagram (T–s diagram). It gives the melting curve and saturated liquid and vapor values together with isobars and isenthalps. These diagrams are powerful tools in the hands of the thermal engineer.
Some basic applications
The points a through h in the figure play a role in the discussion in this section.
{| class="wikitable" style="text-align:center"
|-
!rowspan=2|Point
! !! !! !!
|- style="background:#EEEEEE;"
| K || bar || ||
|-
| || 300 || 1 || 6.85 || 461
|-
| || 380 || 2 || 6.85 || 530
|-
| || 300 || 200 || 5.16 || 430
|-
| || 270 || 1 || 6.79 || 430
|-
| || 108 || 13 || 3.55 || 100
|-
| || 77.2 || 1 || 3.75 || 100
|-
| || 77.2 || 1 || 2.83 || 28
|-
| || 77.2 || 1 || 5.41 || 230
|}
Points e and g are saturated liquids, and point h is a saturated gas.
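For the worked examples that follow, the state points can be collected in a small mapping; the numerical values are copied from the table above, and the labels a–h follow the reconstruction used there.
<syntaxhighlight lang="python">
# Nitrogen state points from the table above.
# Units: T [K], p [bar], s [kJ/(kg K)], h [kJ/kg].
points = {
    "a": {"T": 300.0, "p": 1.0,   "s": 6.85, "h": 461.0},
    "b": {"T": 380.0, "p": 2.0,   "s": 6.85, "h": 530.0},
    "c": {"T": 300.0, "p": 200.0, "s": 5.16, "h": 430.0},
    "d": {"T": 270.0, "p": 1.0,   "s": 6.79, "h": 430.0},
    "e": {"T": 108.0, "p": 13.0,  "s": 3.55, "h": 100.0},
    "f": {"T": 77.2,  "p": 1.0,   "s": 3.75, "h": 100.0},
    "g": {"T": 77.2,  "p": 1.0,   "s": 2.83, "h": 28.0},
    "h": {"T": 77.2,  "p": 1.0,   "s": 5.41, "h": 230.0},
}

# Example: specific enthalpy of nitrogen at point c (300 K, 200 bar).
print(points["c"]["h"], "kJ/kg")
</syntaxhighlight>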
Throttling
One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansion. It concerns a steady adiabatic flow of a fluid through a flow resistance (valve, porous plug, or any other type of flow resistance) as shown in the figure. This process is very important, since it is at the heart of domestic refrigerators, where it is responsible for the temperature drop between ambient temperature and the interior of the refrigerator. It is also the final stage in many types of liquefiers.
For a steady state flow regime, the enthalpy of the system (dotted rectangle) has to be constant. Hence
0 = ṁ h_1 − ṁ h_2
Since the mass flow ṁ is constant, the specific enthalpies at the two sides of the flow resistance are the same:
h_1 = h_2
that is, the enthalpy per unit mass does not change during the throttling. The consequences of this relation can be demonstrated using the T–s diagram above.
Example 1
Point c is at 200 bar and room temperature (300 K). A Joule–Thomson expansion from 200 bar to 1 bar follows a curve of constant enthalpy of roughly 425 kJ/kg (not shown in the diagram) lying between the 400 and 450 kJ/kg isenthalps and ends in point d, which is at a temperature of about 270 K. Hence the expansion from 200 bar to 1 bar cools nitrogen from 300 K to 270 K. In the valve, there is a lot of friction, and a lot of entropy is produced, but still the final temperature is below the starting value.
Example 2
Point e is chosen so that it is on the saturated liquid line with h = 100 kJ/kg. It corresponds roughly with p = 13 bar and T = 108 K. Throttling from this point to a pressure of 1 bar ends in the two-phase region (point f). This means that a mixture of gas and liquid leaves the throttling valve. Since the enthalpy is an extensive parameter, the enthalpy in f (h_f) is equal to the enthalpy in g (h_g) multiplied by the liquid fraction in f (x_f) plus the enthalpy in h (h_h) multiplied by the gas fraction in f (1 − x_f). So
h_f = x_f h_g + (1 − x_f) h_h
With numbers:
100 = x_f × 28 + (1 − x_f) × 230
so
x_f = 0.64
This means that the mass fraction of the liquid in the liquid–gas mixture that leaves the throttling valve is 64%.
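The same lever-rule arithmetic as a short sketch, with the enthalpy values read from the table above:
<syntaxhighlight lang="python">
# Liquid fraction after throttling from point e (h = 100 kJ/kg) into the
# two-phase region at 1 bar, where h_liquid = 28 kJ/kg (point g) and
# h_gas = 230 kJ/kg (point h).
h_e, h_g, h_h = 100.0, 28.0, 230.0   # kJ/kg, from the table above

# h_e = x * h_g + (1 - x) * h_h  ->  solve for the liquid mass fraction x
x = (h_h - h_e) / (h_h - h_g)
print(f"Liquid mass fraction ≈ {x:.2f}")   # ≈ 0.64
</syntaxhighlight>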
Compressors
A power P is applied, e.g. as electrical power. If the compression is adiabatic, the gas temperature goes up. In the reversible case it would be at constant entropy, which corresponds with a vertical line in the T–s diagram. For example, compressing nitrogen from 1 bar (point a) to 2 bar (point b) would result in a temperature increase from 300 K to 380 K. In order to let the compressed gas exit at ambient temperature T_a, heat exchange, e.g. by cooling water, is necessary. In the ideal case the compression is isothermal. The average heat flow to the surroundings is Q̇. Since the system is in the steady state the first law gives
0 = −Q̇ + ṁ h_1 − ṁ h_2 + P
The minimal power needed for the compression is realized if the compression is reversible. In that case the second law of thermodynamics for open systems gives
0 = −Q̇/T_a + ṁ s_1 − ṁ s_2
Eliminating Q̇ gives for the minimal power
P_min/ṁ = h_2 − h_1 − T_a(s_2 − s_1)
For example, compressing 1 kg of nitrogen from 1 bar to 200 bar costs at least (h_c − h_a) − T_a(s_c − s_a). With the data obtained from the T–s diagram, we find a value of (430 − 461) − 300 × (5.16 − 6.85) = 476 kJ/kg.
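The same evaluation as a short sketch, with the state values read from the table above:
<syntaxhighlight lang="python">
# Minimal specific work to compress nitrogen isothermally from point a (1 bar)
# to point c (200 bar) at ambient temperature T_a = 300 K, using table values.
T_a = 300.0                      # K
h_a, s_a = 461.0, 6.85           # kJ/kg, kJ/(kg K), point a
h_c, s_c = 430.0, 5.16           # kJ/kg, kJ/(kg K), point c

w_min = (h_c - h_a) - T_a * (s_c - s_a)   # kJ/kg
print(f"w_min ≈ {w_min:.0f} kJ/kg")       # ≈ 476 kJ/kg
</syntaxhighlight>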
The relation for the power can be further simplified by writing it as
P_min/ṁ = ∫₁² (dh − T_a ds)
With
dh = T ds + v dp
this results in the final relation
P_min/ṁ = ∫₁² v dp
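As a cross-check, a sketch assuming ideal-gas behaviour (only an approximation at 200 bar): along an isotherm v = R_s T/p, so the integral evaluates to R_s T ln(p2/p1), close to the table-based 476 kJ/kg above.
<syntaxhighlight lang="python">
import math

# Ideal-gas evaluation of the integral of v dp along an isotherm:
# v = R_s * T / p, so the integral of v dp from p1 to p2 is R_s * T * ln(p2/p1).
R_s = 296.8          # J/(kg K), specific gas constant of nitrogen
T_a = 300.0          # K
p1, p2 = 1e5, 200e5  # Pa

w_min = R_s * T_a * math.log(p2 / p1)
print(f"w_min ≈ {w_min / 1e3:.0f} kJ/kg")   # ≈ 472 kJ/kg, close to 476 kJ/kg above
</syntaxhighlight>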
History and etymology
The term enthalpy was coined relatively late in the history of thermodynamics, in the early 20th century. Energy was introduced in a modern sense by Thomas Young in 1802, while entropy was coined by Rudolf Clausius in 1865. Energy uses the root of the Greek word ἔργον (ergon), meaning "work", to express the idea of capacity to perform work. Entropy uses the Greek word τροπή (tropē), meaning "transformation" or "turning". Enthalpy uses the root of the Greek word θάλπος (thalpos), meaning "warmth, heat".
The term expresses the obsolete concept of heat content, as dH refers to the amount of heat gained in a process at constant pressure only, but not in the general case when pressure is variable. J. W. Gibbs used the term "a heat function for constant pressure" for clarity.
Introduction of the concept of "heat content" is associated with Benoît Paul Émile Clapeyron and Rudolf Clausius (Clausius–Clapeyron relation, 1850).
The term enthalpy first appeared in print in 1909. It is attributed to Heike Kamerlingh Onnes, who most likely introduced it orally the year before, at the first meeting of the Institute of Refrigeration in Paris. It gained currency only in the 1920s, notably with the Mollier Steam Tables and Diagrams, published in 1927.
Until the 1920s, the symbol H was used, somewhat inconsistently, for "heat" in general. The definition of H as strictly limited to enthalpy or "heat content at constant pressure" was formally proposed by A. W. Porter in 1922.
Notes
See also
Calorimetry
Calorimeter
Departure function
Hess's law
Isenthalpic process
Laws of thermodynamics
Stagnation enthalpy
Standard enthalpy of formation
Thermodynamic databases for pure substances
References
Bibliography
External links
State functions
Energy (physics)
Physical quantities