Quantization (physics)
Quantization (in British English quantisation) is the systematic transition procedure from a classical understanding of physical phenomena to a newer understanding known as quantum mechanics. It is a procedure for constructing quantum mechanics from classical mechanics. A generalization involving infinite degrees of freedom is field quantization, as in the "quantization of the electromagnetic field", referring to photons as field "quanta" (for instance as light quanta). This procedure is basic to theories of atomic physics, chemistry, particle physics, nuclear physics, condensed matter physics, and quantum optics.
Historical overview
In 1901, when Max Planck was developing the distribution function of statistical mechanics to solve the ultraviolet catastrophe problem, he realized that the properties of blackbody radiation can be explained by the assumption that the amount of energy must come in countable fundamental units, i.e. that energy is not continuous but discrete. That is, a minimum unit of energy exists, and the relationship E = hν holds for the frequency ν. Here, h is called the Planck constant, and it represents the magnitude of the quantum mechanical effect. This marked a fundamental change in the mathematical modelling of physical quantities.
In 1905, Albert Einstein published a paper, "On a heuristic viewpoint concerning the emission and transformation of light", which explained the photoelectric effect in terms of quantized electromagnetic waves. The energy quantum referred to in this paper was later called the "photon". In July 1913, Niels Bohr used quantization to describe the spectrum of a hydrogen atom in his paper "On the constitution of atoms and molecules".
The preceding theories were successful, but they were very phenomenological. The French mathematician Henri Poincaré was the first to give a systematic and rigorous definition of what quantization is, in his 1912 paper "Sur la théorie des quanta".
The term "quantum physics" was first used in Johnston's Planck's Universe in Light of Modern Physics. (1931).
Canonical quantization
Canonical quantization develops quantum mechanics from classical mechanics. One introduces a commutation relation among canonical coordinates. Technically, one converts coordinates to operators, through combinations of creation and annihilation operators. The operators act on quantum states of the theory. The lowest energy state is called the vacuum state.
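For a single degree of freedom this can be sketched explicitly (a common convention, with ħ the reduced Planck constant and m, ω the mass and frequency of a harmonic mode):

[\hat{x}, \hat{p}] = i\hbar, \qquad \hat{x} = \sqrt{\tfrac{\hbar}{2m\omega}}\,(\hat{a} + \hat{a}^{\dagger}), \qquad \hat{p} = i\sqrt{\tfrac{\hbar m\omega}{2}}\,(\hat{a}^{\dagger} - \hat{a}),

with [\hat{a}, \hat{a}^{\dagger}] = 1 and the vacuum state defined by \hat{a}\,|0\rangle = 0.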
Quantization schemes
Even within the setting of canonical quantization, there is difficulty associated to quantizing arbitrary observables on the classical phase space. This is the ordering ambiguity: classically, the position and momentum variables x and p commute, but their quantum mechanical operator counterparts do not. Various quantization schemes have been proposed to resolve this ambiguity, of which the most popular is the Weyl quantization scheme. Nevertheless, the Groenewold–van Hove theorem dictates that no perfect quantization scheme exists. Specifically, if the quantizations of x and p are taken to be the usual position and momentum operators, then no quantization scheme can perfectly reproduce the Poisson bracket relations among the classical observables. See Groenewold's theorem for one version of this result.
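Schematically, an ideal quantization map Q would send Poisson brackets to commutators,

Q(\{f, g\}) = \frac{1}{i\hbar}\,[\,Q(f),\, Q(g)\,],

a requirement which, by the Groenewold–van Hove theorem, cannot be satisfied exactly for all polynomials in x and p once Q(x) and Q(p) are taken to be the usual position and momentum operators.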
Covariant canonical quantization
There is a way to perform a canonical quantization without having to resort to the non covariant approach of foliating spacetime and choosing a Hamiltonian. This method is based upon a classical action, but is different from the functional integral approach.
The method does not apply to all possible actions (for instance, actions with a noncausal structure or actions with gauge "flows"). It starts with the classical algebra of all (smooth) functionals over the configuration space. This algebra is quotiented over by the ideal generated by the Euler–Lagrange equations. Then, this quotient algebra is converted into a Poisson algebra by introducing a Poisson bracket derivable from the action, called the Peierls bracket. This Poisson algebra is then ℏ-deformed in the same way as in canonical quantization.
In quantum field theory, there is also a way to quantize actions with gauge "flows". It involves the Batalin–Vilkovisky formalism, an extension of the BRST formalism.
Deformation quantization
One of the earliest attempts at a natural quantization was Weyl quantization, proposed by Hermann Weyl in 1927. Here, an attempt is made to associate a quantum-mechanical observable (a self-adjoint operator on a Hilbert space) with a real-valued function on classical phase space. The position and momentum in this phase space are mapped to the generators of the Heisenberg group, and the Hilbert space appears as a group representation of the Heisenberg group. In 1946, H. J. Groenewold considered the product of a pair of such observables and asked what the corresponding function would be on the classical phase space. This led him to discover the phase-space star-product of a pair of functions.
More generally, this technique leads to deformation quantization, where the ★-product is taken to be a deformation of the algebra of functions on a symplectic manifold or Poisson manifold. However, as a natural quantization scheme (a functor), Weyl's map is not satisfactory.
For example, the Weyl map of the classical angular-momentum-squared is not just the quantum angular-momentum-squared operator, but it further contains a constant term 3ħ²/2. (This extra term offset is pedagogically significant, since it accounts for the nonvanishing angular momentum of the ground-state Bohr orbit in the hydrogen atom, even though the standard QM ground state of the atom has vanishing angular momentum.)
As a mere representation change, however, Weyl's map is useful and important, as it underlies the alternate equivalent phase space formulation of conventional quantum mechanics.
Geometric quantization
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in.
A more geometric approach to quantization, in which the classical phase space can be a general symplectic manifold, was developed in the 1970s by Bertram Kostant and Jean-Marie Souriau. The method proceeds in two stages. First, one constructs a "prequantum Hilbert space" consisting of square-integrable functions (or, more properly, sections of a line bundle) over the phase space. Here one can construct operators satisfying commutation relations corresponding exactly to the classical Poisson-bracket relations. However, this prequantum Hilbert space is too big to be physically meaningful. One then restricts to functions (or sections) depending on half the variables on the phase space, yielding the quantum Hilbert space.
Path integral quantization
A classical mechanical theory is given by an action with the permissible configurations being the ones which are extremal with respect to functional variations of the action. A quantum-mechanical description of the classical system can also be constructed from the action of the system by means of the path integral formulation.
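Schematically, the transition amplitude between an initial configuration q_i at time t_i and a final configuration q_f at time t_f is written as a sum over all paths weighted by the classical action S[q]:

\langle q_f, t_f \mid q_i, t_i \rangle = \int \mathcal{D}q(t)\; e^{\,iS[q]/\hbar}.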
Other types
Loop quantum gravity (loop quantization)
Uncertainty principle (quantum statistical mechanics approach)
Schwinger's quantum action principle
See also
First quantization
Feynman path integral
Light front quantization
Photon polarization
Quantum Hall effect
Quantum number
Stochastic quantization
Physical phenomena
Quantum field theory
Mathematical quantization
Mathematical physics
Chemical database
A chemical database is a database specifically designed to store chemical information. This information is about chemical and crystal structures, spectra, reactions and syntheses, and thermophysical data.
Types of chemical databases
Bioactivity database
Bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs.
Chemical structures
Chemical structures are traditionally represented using lines indicating chemical bonds between atoms and drawn on paper (2D structural formulae). While these are ideal visual representations for the chemist, they are unsuitable for computational use and especially for search and storage. Small molecules (also called ligands in drug design applications), are usually represented using lists of atoms and their connections. Large molecules such as proteins are however more compactly represented using the sequences of their amino acid building blocks. Radioactive isotopes are also represented, which is an important attribute for some applications. Large chemical databases for structures are expected to handle the storage and searching of information on millions of molecules taking terabytes of physical memory.
Literature database
Chemical literature databases correlate structures or other chemical information to relevant references such as academic papers or patents. This type of database includes STN, Scifinder, and Reaxys. Links to literature are also included in many databases that focus on chemical characterization.
Crystallographic database
Crystallographic databases store X-ray crystal structure data. Common examples include Protein Data Bank and Cambridge Structural Database.
NMR spectra database
NMR spectra databases correlate chemical structure with NMR data. These databases often include other characterization data such as FTIR and mass spectrometry.
Reactions database
Most chemical databases store information on stable molecules, but databases for reactions also store intermediates and short-lived, unstable molecules. Reaction databases contain information about products, reactants (educts), and reaction mechanisms.
A popular example that lists chemical reaction data, among others, is the Beilstein database.
Thermophysical database
Thermophysical data are information about:
phase equilibria, including vapor–liquid equilibrium, solubility of gases in liquids and of liquids in solids (SLE), and heats of mixing, vaporization, and fusion;
caloric data, like heat capacity and heats of formation and combustion;
transport properties, like viscosity and thermal conductivity.
Chemical structure representation
There are two principal techniques for representing chemical structures in digital databases:
As connection tables / adjacency matrices / lists, with additional information on bond (edge) and atom (node) attributes, such as:
MDL Molfile, PDB, CML
As a linear string notation based on depth first or breadth first traversal, such as:
SMILES/SMARTS, SLN, WLN, InChI
These approaches have been refined to allow representation of stereochemical differences and charges as well as special kinds of bonding such as those seen in organo-metallic compounds. The principal advantage of a computer representation is the possibility for increased storage and fast, flexible search.
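As a minimal illustration of the two styles (plain Python, no cheminformatics toolkit required), ethanol can be stored either as an atom list with a bond connection table or as the linear SMILES string "CCO":

# Ethanol (C2H5OH), hydrogens implicit, as a connection table:
# atoms are indexed nodes, bonds are edges carrying a bond-order attribute.
atoms = ["C", "C", "O"]                 # node labels (element symbols)
bonds = [(0, 1, 1), (1, 2, 1)]          # (atom_i, atom_j, bond order)

# The same molecule as a linear string notation (SMILES).
smiles = "CCO"

# A connection table is convenient for graph algorithms, e.g. building
# an adjacency list for substructure or similarity searches.
adjacency = {i: [] for i in range(len(atoms))}
for i, j, order in bonds:
    adjacency[i].append((j, order))
    adjacency[j].append((i, order))

print(atoms, adjacency, smiles)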
Search
Substructure
Chemists can search databases using parts of structures, parts of their IUPAC names, as well as constraints on properties. Chemical databases are different from other general purpose databases in their support for substructure search, a method to retrieve chemicals matching a pattern of atoms and bonds which a user specifies. This kind of search is achieved by looking for subgraph isomorphism (sometimes also called a monomorphism) and is a widely studied application of graph theory.
Query structures may contain bonding patterns such as "single/aromatic" or "any" to provide flexibility. Similarly, the vertices which in an actual compound would be a specific atom may be replaced with an atom list in the query. Cis–trans isomerism at double bonds is catered for by giving a choice of retrieving only the E form, the Z form, or both.
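A minimal sketch of such a substructure query, assuming the open-source RDKit toolkit is installed (the SMARTS pattern c1ccccc1 describes an aromatic six-membered ring):

from rdkit import Chem  # assumes RDKit is available

molecule = Chem.MolFromSmiles("Cc1ccccc1O")   # 2-methylphenol
pattern = Chem.MolFromSmarts("c1ccccc1")      # aromatic benzene-ring query

# Substructure search = subgraph matching of the query onto the molecule graph.
print(molecule.HasSubstructMatch(pattern))    # True
print(molecule.GetSubstructMatches(pattern))  # atom-index tuples for each match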
Conformation
Search by matching 3D conformation of molecules or by specifying spatial constraints is another feature that is particularly of use in drug design. Searches of this kind can be computationally very expensive. Many approximate methods have been proposed, for instance BCUTS, special function representations, moments of inertia, ray-tracing histograms, maximum distance histograms, shape multipoles to name a few.
Examples
Large databases, such as PubChem and ChemSpider, have graphical interfaces for search. The Chemical Abstracts Service provides tools to search the chemical literature and Reaxys supplied by Elsevier covers both chemicals and reaction information, including that originally held in the Beilstein database. PATENTSCOPE makes chemical patents accessible by substructure and Wikipedia's articles describing individual chemicals can also be searched that way.
Suppliers of chemicals as synthesis intermediates or for high-throughput screening routinely provide search interfaces. Currently, the largest database that can be freely searched by the public is the ZINC database, which is claimed to contain over 37 billion commercially available molecules.
Descriptors
All properties of molecules beyond their structure can be split up into either physico-chemical or pharmacological attributes, also called descriptors. On top of that, there exist various artificial and more or less standardized naming systems for molecules that supply more or less ambiguous names and synonyms. The IUPAC name is usually a good choice for representing a molecule's structure in a string that is both human-readable and unique, although it becomes unwieldy for larger molecules. Trivial names, on the other hand, abound with homonyms and synonyms and are therefore a bad choice as a defining database key. While physico-chemical descriptors like molecular weight, (partial) charge, solubility, etc. can mostly be computed directly from the molecule's structure, pharmacological descriptors can be derived only indirectly, using involved multivariate statistics or experimental (screening, bioassay) results. For reasons of computational efficiency, all of these descriptors can be stored along with the molecule's representation, and usually are.
Similarity
There is no single definition of molecular similarity; however, the concept may be defined according to the application and is often described as an inverse of a measure of distance in descriptor space. Two molecules might be considered more similar, for instance, if their difference in molecular weights is lower than when compared with others. A variety of other measures can be combined to produce a multivariate distance measure. Distance measures are often classified into Euclidean and non-Euclidean measures, depending on whether the triangle inequality holds. Maximum Common Subgraph (MCS) based substructure search, used as a similarity or distance measure, is also very common. MCS is also used for screening drug-like compounds by retrieving molecules that share a large common subgraph (substructure) with a query compound.
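As a concrete illustration of one such similarity measure, the Tanimoto (Jaccard) coefficient on binary structural fingerprints can be computed in a few lines of Python; the bit sets below are purely hypothetical stand-ins for real fingerprints:

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) similarity of two fingerprints given as sets of 'on' bits."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Hypothetical fingerprints: each integer stands for one structural feature bit.
molecule_a = {1, 4, 7, 9, 12}
molecule_b = {1, 4, 9, 15}

similarity = tanimoto(molecule_a, molecule_b)
distance = 1.0 - similarity   # a common way to turn a similarity into a distance
print(round(similarity, 3), round(distance, 3))   # 0.5 0.5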
Chemicals in the databases may be clustered into groups of 'similar' molecules based on similarities. Both hierarchical and non-hierarchical clustering approaches can be applied to chemical entities with multiple attributes. These attributes or molecular properties may either be determined empirically or computationally derived descriptors. One of the most popular clustering approaches is the Jarvis-Patrick algorithm.
In pharmacologically oriented chemical repositories, similarity is usually defined in terms of the biological effects of compounds (ADME/tox) that can in turn be semiautomatically inferred from similar combinations of physico-chemical descriptors using QSAR methods.
Registration systems
Database systems for maintaining unique records on chemical compounds are termed registration systems. These are often used for chemical indexing, patent systems and industrial databases.
Registration systems usually enforce uniqueness of the chemical represented in the database through the use of unique representations. By applying rules of precedence for the generation of stringified notations, one can obtain unique/'canonical' string representations such as 'canonical SMILES'. Some registration systems such as the CAS system make use of algorithms to generate unique hash codes to achieve the same objective.
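A short sketch of how canonicalization enforces uniqueness, again assuming RDKit (whose MolToSmiles function returns a canonical SMILES by default): several different input strings for ethanol collapse to a single registry key.

from rdkit import Chem  # assumes RDKit is available

inputs = ["OCC", "C(O)C", "CCO"]   # three ways of writing ethanol
canonical = {Chem.MolToSmiles(Chem.MolFromSmiles(s)) for s in inputs}

# All inputs map to one canonical string, suitable as a uniqueness key.
print(canonical)   # e.g. {'CCO'}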
A key difference between a registration system and a simple chemical database is the ability to accurately represent that which is known, unknown, and partially known. For example, a chemical database might store a molecule with stereochemistry unspecified, whereas a chemical registry system requires the registrar to specify whether the stereo configuration is unknown, a specific (known) mixture, or racemic. Each of these would be considered a different record in a chemical registry system.
Registration systems also preprocess molecules to avoid considering trivial differences such as differences in halogen ions in chemicals.
An example is the Chemical Abstracts Service (CAS) registration system. See also CAS registry number.
List of Chemical Cartridges
Accord
Direct
J Chem
CambridgeSoft
Bingo
Pinpoint
List of Chemical Registration Systems
ChemReg
Register
RegMol
Compound-Registration
Ensemble
Web-based
Tools
The computational representations are usually made transparent to chemists by graphical display of the data. Data entry is also simplified through the use of chemical structure editors. These editors internally convert the graphical data into computational representations.
There are also numerous algorithms for the interconversion of various formats of representation. An open-source utility for conversion is OpenBabel. These search and conversion algorithms are implemented either within the database system itself or, as is now the trend, as external components that fit into standard relational database systems. Both Oracle- and PostgreSQL-based systems make use of cartridge technology that allows user-defined datatypes. These allow the user to make SQL queries with chemical search conditions. For example, a query to search for records having a phenyl ring in their structure, represented as a SMILES string in a SMILESCOL column, could be:
SELECT * FROM CHEMTABLE WHERE SMILESCOL.CONTAINS('c1ccccc1')
Algorithms for the conversion of IUPAC names to structure representations and vice versa are also used for extracting structural information from text. However, there are difficulties due to the existence of multiple dialects of IUPAC nomenclature. Work is ongoing to establish a unique IUPAC standard (see InChI).
External links
Wikipedia Chemical Structure Explorer to search Wikipedia chemistry articles by substructure
Computational chemistry
Cheminformatics
Molecular geometry
Molecular geometry is the three-dimensional arrangement of the atoms that constitute a molecule. It includes the general shape of the molecule as well as bond lengths, bond angles, torsional angles and any other geometrical parameters that determine the position of each atom.
Molecular geometry influences several properties of a substance including its reactivity, polarity, phase of matter, color, magnetism and biological activity. The angles between bonds that an atom forms depend only weakly on the rest of molecule, i.e. they can be understood as approximately local and hence transferable properties.
Determination
The molecular geometry can be determined by various spectroscopic methods and diffraction methods. IR, microwave and Raman spectroscopy can give information about the molecular geometry from the details of the vibrational and rotational absorbance detected by these techniques. X-ray crystallography, neutron diffraction and electron diffraction can give molecular structure for crystalline solids based on the distance between nuclei and concentration of electron density. Gas electron diffraction can be used for small molecules in the gas phase. NMR and FRET methods can be used to determine complementary information including relative distances, dihedral angles, bond angles, and connectivity.
The position of each atom is determined by the nature of the chemical bonds by which it is connected to its neighboring atoms. The molecular geometry can be described by the positions of these atoms in space, evoking bond lengths of two joined atoms, bond angles of three connected atoms, and torsion angles (dihedral angles) of three consecutive bonds.
Influence of thermal excitation
Since the motions of the atoms in a molecule are determined by quantum mechanics, "motion" must be defined in a quantum mechanical way. The overall (external) quantum mechanical motions translation and rotation hardly change the geometry of the molecule. (To some extent rotation influences the geometry via Coriolis forces and centrifugal distortion, but this is negligible for the present discussion.) In addition to translation and rotation, a third type of motion is molecular vibration, which corresponds to internal motions of the atoms such as bond stretching and bond angle variation. The molecular vibrations are harmonic (at least to good approximation), and the atoms oscillate about their equilibrium positions, even at the absolute zero of temperature. At absolute zero all atoms are in their vibrational ground state and show zero point quantum mechanical motion, so that the wavefunction of a single vibrational mode is not a sharp peak, but approximately a Gaussian function (the wavefunction for n = 0 depicted in the article on the quantum harmonic oscillator). At higher temperatures the vibrational modes may be thermally excited (in a classical interpretation one expresses this by stating that "the molecules will vibrate faster"), but they oscillate still around the recognizable geometry of the molecule.
To get a feeling for the probability that the vibration of a molecule may be thermally excited, we inspect the Boltzmann factor β ≡ exp(−ΔE / kT), where ΔE is the excitation energy of the vibrational mode, k the Boltzmann constant and T the absolute temperature. At 298 K (25 °C), typical values for the Boltzmann factor β are:
β = 0.089 for ΔE = 500 cm−1
β = 0.008 for ΔE = 1000 cm−1
β = 0.0007 for ΔE = 1500 cm−1.
(The reciprocal centimeter is an energy unit that is commonly used in infrared spectroscopy; 1 cm−1 corresponds to about 1.24×10−4 eV, or 1.986×10−23 J.) When an excitation energy is 500 cm−1, then about 8.9 percent of the molecules are thermally excited at room temperature. To put this in perspective: the lowest excitation vibrational energy in water is the bending mode (about 1600 cm−1). Thus, at room temperature less than 0.07 percent of all the molecules of a given amount of water will vibrate faster than at absolute zero.
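The quoted factors can be reproduced with a few lines of Python, using the approximate value of the Boltzmann constant in wavenumber units (about 0.695 cm−1 per kelvin):

import math

K_B_CM = 0.695            # Boltzmann constant in cm^-1 per kelvin (approximate)
T = 298.0                 # room temperature in kelvin

for delta_e in (500.0, 1000.0, 1500.0, 1600.0):
    beta = math.exp(-delta_e / (K_B_CM * T))
    print(f"dE = {delta_e:6.0f} cm^-1  ->  Boltzmann factor = {beta:.1e}")
# The 1600 cm^-1 value corresponds to the bending mode of water discussed above.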
As stated above, rotation hardly influences the molecular geometry. But, as a quantum mechanical motion, it is thermally excited at relatively (as compared to vibration) low temperatures. From a classical point of view it can be stated that at higher temperatures more molecules will rotate faster,
which implies that they have higher angular velocity and angular momentum. In quantum mechanical language: more eigenstates of higher angular momentum become thermally populated with rising temperatures. Typical rotational excitation energies are on the order of a few cm−1. The results of many spectroscopic experiments are broadened because they involve an averaging over rotational states. It is often difficult to extract geometries from spectra at high temperatures, because the number of rotational states probed in the experimental averaging increases with increasing temperature. Thus, many spectroscopic observations can only be expected to yield reliable molecular geometries at temperatures close to absolute zero, because at higher temperatures too many higher rotational states are thermally populated.
Bonding
Molecules, by definition, are most often held together with covalent bonds involving single, double, and/or triple bonds, where a "bond" is a shared pair of electrons (the other method of bonding between atoms is called ionic bonding and involves a positive cation and a negative anion).
Molecular geometries can be specified in terms of 'bond lengths', 'bond angles' and 'torsional angles'. The bond length is defined to be the average distance between the nuclei of two atoms bonded together in any given molecule. A bond angle is the angle formed between three atoms across at least two bonds. For four atoms bonded together in a chain, the torsional angle is the angle between the plane formed by the first three atoms and the plane formed by the last three atoms.
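As a minimal illustration, the torsional (dihedral) angle can be computed from Cartesian coordinates with NumPy; the example points below are arbitrary and chosen to give an anti (trans) arrangement:

import numpy as np

def dihedral(p1, p2, p3, p4):
    """Torsion angle (degrees) of the chain p1-p2-p3-p4 from Cartesian coordinates."""
    b1, b2, b3 = p2 - p1, p3 - p2, p4 - p3
    n1 = np.cross(b1, b2)            # normal of the plane through p1, p2, p3
    n2 = np.cross(b2, b3)            # normal of the plane through p2, p3, p4
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

# An idealized anti (trans) arrangement should give roughly 180 degrees.
pts = [np.array(p, float) for p in [(1, 1, 0), (1, 0, 0), (2, 0, 0), (2, -1, 0)]]
print(round(dihedral(*pts), 1))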
There exists a mathematical relationship among the bond angles for one central atom and four peripheral atoms (labeled 1 through 4), expressed by the following determinant. This constraint removes one degree of freedom from the choices of (originally) six free bond angles to leave only five choices of bond angles. (The angles θ11, θ22, θ33, and θ44 are always zero, and the relationship can be modified for a different number of peripheral atoms by expanding or contracting the square matrix.)
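With θij denoting the angle between bonds i and j at the central atom, the constraint can be written as the vanishing of the determinant of direction cosines (four bond unit vectors in three-dimensional space are necessarily linearly dependent):

\begin{vmatrix} \cos\theta_{11} & \cos\theta_{12} & \cos\theta_{13} & \cos\theta_{14} \\ \cos\theta_{21} & \cos\theta_{22} & \cos\theta_{23} & \cos\theta_{24} \\ \cos\theta_{31} & \cos\theta_{32} & \cos\theta_{33} & \cos\theta_{34} \\ \cos\theta_{41} & \cos\theta_{42} & \cos\theta_{43} & \cos\theta_{44} \end{vmatrix} = 0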
Molecular geometry is determined by the quantum mechanical behavior of the electrons. Using the valence bond approximation this can be understood by the type of bonds between the atoms that make up the molecule. When atoms interact to form a chemical bond, the atomic orbitals of each atom are said to combine in a process called orbital hybridisation. The two most common types of bonds are sigma bonds (usually formed by hybrid orbitals) and pi bonds (formed by unhybridized p orbitals for atoms of main group elements). The geometry can also be understood by molecular orbital theory where the electrons are delocalised.
An understanding of the wavelike behavior of electrons in atoms and molecules is the subject of quantum chemistry.
Isomers
Isomers are types of molecules that share a chemical formula but have different geometries, resulting in different properties:
A pure substance is composed of only one type of isomer of a molecule (all have the same geometrical structure).
Structural isomers have the same chemical formula but different physical arrangements, often forming alternate molecular geometries with very different properties. The atoms are not bonded (connected) together in the same order.
Functional isomers are special kinds of structural isomers, where certain groups of atoms exhibit a special kind of behavior, such as an ether or an alcohol.
Stereoisomers may have many similar physicochemical properties (melting point, boiling point) and at the same time very different biochemical activities. This is because they exhibit a handedness that is commonly found in living systems. One manifestation of this chirality or handedness is that they have the ability to rotate polarized light in different directions.
Protein folding concerns the complex geometries and different isomers that proteins can take.
Types of molecular structure
A bond angle is the geometric angle between two adjacent bonds. Some common shapes of simple molecules include:
Linear: In a linear model, atoms are connected in a straight line. The bond angles are set at 180°. For example, carbon dioxide and nitric oxide have a linear molecular shape.
Trigonal planar: Molecules with the trigonal planar shape are somewhat triangular and in one plane (flat). Consequently, the bond angles are set at 120°. For example, boron trifluoride.
Angular: Angular molecules (also called bent or V-shaped) have a non-linear shape. For example, water (H2O), which has an angle of about 105°. A water molecule has two pairs of bonded electrons and two unshared lone pairs.
Tetrahedral: Tetra- signifies four, and -hedral relates to a face of a solid, so "tetrahedral" literally means "having four faces". This shape is found when there are four bonds all on one central atom, with no extra unshared electron pairs. In accordance with the VSEPR (valence-shell electron pair repulsion) theory, the bond angles between the electron bonds are arccos(−1/3) ≈ 109.47°. For example, methane (CH4) is a tetrahedral molecule.
Octahedral: Octa- signifies eight, and -hedral relates to a face of a solid, so "octahedral" means "having eight faces". The bond angle is 90 degrees. For example, sulfur hexafluoride (SF6) is an octahedral molecule.
Trigonal pyramidal: A trigonal pyramidal molecule has a pyramid-like shape with a triangular base. Unlike the linear and trigonal planar shapes but similar to the tetrahedral orientation, pyramidal shapes require three dimensions in order to fully separate the electrons. Here, there are only three pairs of bonded electrons, leaving one unshared lone pair. Lone pair – bond pair repulsions change the bond angle from the tetrahedral angle to a slightly lower value. For example, ammonia (NH3).
VSEPR table
The bond angles in the table below are ideal angles from the simple VSEPR theory (pronounced "Vesper Theory"), followed by the actual angle for the example given in the following column where this differs. For many cases, such as trigonal pyramidal and bent, the actual angle for the example differs from the ideal angle, and examples differ by different amounts. For example, the angle in H2S (92°) differs from the tetrahedral angle by much more than the angle for H2O (104.48°) does.
3D representations
Line or stick – atomic nuclei are not represented, just the bonds as sticks or lines. As in 2D molecular structures of this type, atoms are implied at each vertex.
Electron density plot – shows the electron density determined either crystallographically or using quantum mechanics rather than distinct atoms or bonds.
Ball and stick – atomic nuclei are represented by spheres (balls) and the bonds as sticks.
Spacefilling models or CPK models (also an atomic coloring scheme in representations) – the molecule is represented by overlapping spheres representing the atoms.
Cartoon – a representation used for proteins where loops, beta sheets, and alpha helices are represented diagrammatically and no atoms or bonds are explicitly represented (e.g. the protein backbone is represented as a smooth pipe).
The greater the number of lone pairs contained in a molecule, the smaller the angles between the atoms of that molecule. The VSEPR theory predicts that lone pairs repel each other, thus pushing the different atoms away from them.
See also
Jemmis mno rules
Lewis structure
Molecular design software
Molecular graphics
Molecular mechanics
Molecular modelling
Molecular symmetry
Molecule editor
Polyhedral skeletal electron pair theory
Quantum chemistry
Ribbon diagram
Styx rule (for boranes)
Topology (chemistry)
External links
Molecular Geometry & Polarity Tutorial 3D visualization of molecules to determine polarity.
Molecular Geometry using Crystals 3D structure visualization of molecules using Crystallography.
Philosophy of chemistry
The philosophy of chemistry considers the methodology and underlying assumptions of the science of chemistry. It is explored by philosophers, chemists, and philosopher-chemist teams. For much of its history, philosophy of science has been dominated by the philosophy of physics, but the philosophical questions that arise from chemistry have received increasing attention since the latter part of the 20th century.
Foundations of chemistry
Major philosophical questions arise as soon as one attempts to define chemistry and what it studies. Atoms and molecules are often assumed to be the fundamental units of chemical theory, but traditional descriptions of molecular structure and chemical bonding fail to account for the properties of many substances, including metals and metal complexes, as well as phenomena such as aromaticity.
Additionally, chemists frequently use non-existent chemical entities like resonance structures to explain the structure and reactions of different substances; these explanatory tools use the language and graphical representations of molecules to describe the behavior of chemicals and chemical reactions that in reality do not behave as straightforward molecules.
Some chemists and philosophers of chemistry prefer to think of substances, rather than microstructures, as the fundamental units of study in chemistry. There is not always a one-to-one correspondence between the two methods of classifying substances. For example, many rocks exist as mineral complexes composed of multiple ions that do not occur in fixed proportions or spatial relationships to one another.
A related philosophical problem is whether chemistry is the study of substances or reactions. Atoms, even in a solid, are in perpetual motion and under the right conditions many chemicals react spontaneously to form new products. A variety of environmental variables contribute to a substance's properties, including temperature and pressure, proximity to other molecules and the presence of a magnetic field. As Schummer puts it, "Substance philosophers define a chemical reaction by the change of certain substances, whereas process philosophers define a substance by its characteristic chemical reactions."
Philosophers of chemistry discuss issues of symmetry and chirality in nature. Organic (i.e., carbon-based) molecules are those most often chiral. Amino acids, nucleic acids and sugars, all of which are found exclusively as a single enantiomer in organisms, are the basic chemical units of life. Chemists, biochemists, and biologists alike debate the origins of this homochirality. Philosophers debate facts regarding the origin of this phenomenon, namely whether it emerged contingently, amid a lifeless racemic environment or if other processes were at play. Some speculate that answers can only be found in comparison to extraterrestrial life, if it is ever found. Other philosophers question whether there exists a bias toward assumptions of nature as symmetrical, thereby causing resistance to any evidence to the contrary.
One of the most topical issues is determining to what extent physics, specifically quantum mechanics, explains chemical phenomena. Can chemistry, in fact, be reduced to physics as has been assumed by many, or are there inexplicable gaps? Some authors, for example Roald Hoffmann, have recently suggested that a number of difficulties exist in the reductionist program with concepts like aromaticity, pH, reactivity, and nucleophilicity.
Philosophers of chemistry
Friedrich Wilhelm Joseph Schelling was among the first philosophers to use the term "philosophy of chemistry".
Several philosophers and scientists have focused on the philosophy of chemistry in recent years, notably, the Dutch philosopher Jaap van Brakel, who wrote The Philosophy of Chemistry in 2000, and the Maltese-born philosopher-chemist Eric Scerri, founder and editor of the journal Foundations of Chemistry. Scerri is also the author of "Normative and Descriptive Philosophy of Science and the Role of Chemistry," published in Philosophy of Chemistry in 2004, among other articles, many of which are collected in Collected Papers on the Philosophy of Chemistry. Scerri is especially interested in the philosophical foundations of the periodic table, and how physics and chemistry intersect in relation to it, which he contends is not merely a matter for science, but for philosophy.
Although in other fields of science students of the method are generally not practitioners in the field, in chemistry (particularly in synthetic organic chemistry) intellectual method and philosophical foundations are often explored by investigators with active research programmes. Elias James Corey developed the concept of "retrosynthesis" and published a seminal work, The Logic of Chemical Synthesis, which deconstructs these thought processes and speculates on computer-assisted synthesis. Other chemists such as K. C. Nicolaou (co-author of Classics in Total Synthesis) have followed his lead.
See also
History of chemistry
The central science
Further reading
Review articles
Philosophy of Chemistry article on the Stanford Encyclopedia of Philosophy
Journals
Foundations of Chemistry, an international peer-reviewed journal for History and Philosophy of Chemistry as well as Chemical Education published by Springer.
Hyle: International Journal for Philosophy of Chemistry, an English-language peer-reviewed journal associated with the University of Karlsruhe, Germany.
Books
Philosophy of Chemistry, J. van Brakel, Leuven University Press, 2000.
Philosophy of Chemistry: Synthesis of a New Discipline, Davis Baird, Eric Scerri, Lee McIntyre (eds.), Dordrecht: Springer, 2006.
The Periodic Table: Its Story and Its Significance, E.R. Scerri, Oxford University Press, New York, 2006.
Collected Papers on Philosophy of Chemistry, E.R. Scerri, Imperial College Press, London, 2008.
Of Minds and Molecules: New Philosophical Perspectives on Chemistry, Nalini Bhushan and Stuart Rosenfeld (eds.), Oxford University Press, 2000, Reviewed by Michael Weisberg
Philosophy of Chemistry : Growth of a New Discipline, Eric Scerri, Lee McIntyre (eds.), Heidelberg: Springer, 2015.
External links
Reduction and Emergence in Chemistry, Internet Encyclopedia of Philosophy
International Society for the Philosophy of Chemistry
International Society for the Philosophy of Chemistry Summer symposium 2011
International Society for the Philosophy of Chemistry Summer symposium 2016
Website for Eric Scerri, author and founder-editor of Foundations of Chemistry
Philosophy of science
Chemistry
DelPhi
DelPhi is a scientific application which calculates electrostatic potentials in and around macromolecules and the corresponding electrostatic energies. It incorporates the effects of ionic strength mediated screening by evaluating the Poisson-Boltzmann equation at a finite number of points within a three-dimensional grid box. DelPhi is commonly used in protein science to visualize variations in electrostatics along a protein or other macromolecular surface and to calculate the electrostatic components of various energies.
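In one commonly used dimensionless form (the prefactors depend on the unit convention adopted), the nonlinear Poisson-Boltzmann equation being solved reads

\nabla \cdot \left[ \varepsilon(\mathbf{r})\, \nabla \phi(\mathbf{r}) \right] - \bar{\kappa}^{2}(\mathbf{r})\, \sinh\!\big(\phi(\mathbf{r})\big) = -\,\frac{4\pi e}{k_{B} T}\, \rho^{f}(\mathbf{r}),

where ε(r) is the position-dependent dielectric constant, φ(r) the electrostatic potential in units of kBT/e, κ̄(r) the modified Debye–Hückel screening parameter (zero in ion-inaccessible regions), and ρ^f(r) the fixed charge distribution of the macromolecule.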
Development
One of the main problems in modeling the electrostatic potential of biological macromolecules is that they exist in water at a given ionic strength and that they have an irregular shape. Analytical solutions of the corresponding Poisson-Boltzmann Equation (PBE) are not available for such cases and the distribution of the potential can be found only numerically. DelPhi, developed in Professor Barry Honig's lab in 1986, was the first PBE solver used by many researchers. The widespread popularity of DelPhi is due to its speed, accuracy (calculation of the electrostatic free energy is only slightly dependent on the resolution of the grid) and the ability to handle extremely high grid dimensions.
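The finite-difference idea behind such grid-based solvers can be illustrated with a deliberately simplified sketch: Jacobi relaxation of the ordinary Poisson equation (uniform dielectric, no salt) on a small cubic grid. This is not DelPhi's algorithm or code, only the numerical principle of iterating local updates over a 3D grid.

import numpy as np

n, h = 33, 1.0                       # grid points per side and grid spacing
phi = np.zeros((n, n, n))            # potential, fixed to zero on the boundary
rho = np.zeros((n, n, n))
rho[n // 2, n // 2, n // 2] = 1.0    # a single point charge at the grid centre

# Jacobi iterations for laplacian(phi) = -4*pi*rho (Gaussian units, epsilon = 1).
for _ in range(500):
    phi[1:-1, 1:-1, 1:-1] = (
        phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
        phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
        phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2] +
        4.0 * np.pi * rho[1:-1, 1:-1, 1:-1] * h * h
    ) / 6.0

print(phi[n // 2, n // 2, n // 2])   # potential at the charge location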
Features
Additional features such as assigning different dielectric constants to different regions of space, smooth Gaussian-based dielectric distribution function, modeling geometric objects and charge distributions, and treating systems containing mixed salt solutions also attracted many researchers. In addition to the typical potential map, DelPhi can generate and output the calculated distribution of either the dielectric constant or ion concentration, providing the biomedical community with extra tools for their research.
PDB files are typically used as input for DelPhi calculations. Other required inputs are an atomic radii file and a charge file.
Binary Potential files as output from DelPhi can be viewed in most molecular viewers such as UCSF Chimera, Jmol, and VMD, and can either be mapped onto surfaces or visualized at a fixed cutoff.
Versions
The DelPhi distribution comes as both sequential and parallelized code; it runs on Linux, Mac OS X and Microsoft Windows systems, and the source code is available in the Fortran 95 and C++ programming languages. DelPhi is also implemented as an accessible web server. DelPhi has also been used to build a server that predicts pKa's of biological macromolecules such as proteins, RNAs and DNAs, which can be accessed via the web.
DelPhi v.7 is distributed in four versions:
IRIX version, compiled under IRIX 6.5 Operating System, 32bits, using f77 and cc compilers.
IRIX version, compiled under IRIX 6.5 Operating System, 64bits, using f77 and cc compilers.
LINUX version, compiled under Red Hat 7.1, kernel 2.4.2 Operating System, using GNU gfortran compilers.
PC version, compiled under Windows Operating System, using Microsoft Developer Studio C++ and Fortran compilers.
Their way of working is very similar; however, unexpected differences may appear due to different numerical precision or to the porting of the software to different architectures. For example, the elapsed time in the PC version is not calculated at present.
Each distribution contains one executable (named delphi or delphi.exe), the source codes (with corresponding makefile when needed), and some worked examples.
See also
Anthony Nicholls (physicist)
External links
Barry Honig
DelPhi Development Team
Chemistry software
Physics software
Tinbergen's four questions
Tinbergen's four questions, named after the 20th-century biologist Nikolaas Tinbergen, are complementary categories of explanations for animal behaviour. They are also commonly referred to as levels of analysis. The framework suggests that an integrative understanding of behaviour must include ultimate (evolutionary) explanations, in particular:
behavioural adaptive functions
phylogenetic history; and the proximate explanations
underlying physiological mechanisms
ontogenetic/developmental history.
Four categories of questions and explanations
When asked about the purpose of sight in humans and animals, even elementary-school children can answer that animals have vision to help them find food and avoid danger (function/adaptation). Biologists have three additional explanations: sight is caused by a particular series of evolutionary steps (phylogeny), the mechanics of the eye (mechanism/causation), and even the process of an individual's development (ontogeny). This schema constitutes a basic framework of the overlapping behavioural fields of ethology, behavioural ecology, comparative psychology, sociobiology, evolutionary psychology, and anthropology. Julian Huxley identified the first three questions. Niko Tinbergen gave only the fourth question, as Huxley's questions failed to distinguish between survival value and evolutionary history; Tinbergen's fourth question helped resolve this problem.
Evolutionary (ultimate) explanations
First question: Function (adaptation)
Darwin's theory of evolution by natural selection is the only scientific explanation for why an animal's behaviour is usually well adapted for survival and reproduction in its environment. However, claiming that a particular mechanism is well suited to the present environment is different from claiming that this mechanism was selected for in the past due to its history of being adaptive.
The literature conceptualizes the relationship between function and evolution in two ways. On the one hand, function and evolution are often presented as separate and distinct explanations of behaviour. On the other hand, the common definition of adaptation is a central concept in evolution: a trait that was functional to the reproductive success of the organism and that is thus now present due to being selected for; that is, function and evolution are inseparable. However, a trait can have a current function that is adaptive without being an adaptation in this sense, if for instance the environment has changed. Imagine an environment in which having a small body suddenly conferred benefit on an organism when previously body size had had no effect on survival. A small body's function in the environment would then be adaptive, but it would not become an adaptation until enough generations had passed in which small bodies were advantageous to reproduction for small bodies to be selected for. Given this, it is best to understand that presently functional traits might not all have been produced by natural selection. The term "function" is preferable to "adaptation", because adaptation is often construed as implying that it was selected for due to past function. This corresponds to Aristotle's final cause.
Second question: Phylogeny (evolution)
Evolution captures both the history of an organism via its phylogeny, and the history of natural selection working on function to produce adaptations. There are several reasons why natural selection may fail to achieve optimal design (Mayr 2001:140–143; Buss et al. 1998). One entails random processes such as mutation and environmental events acting on small populations. Another entails the constraints resulting from early evolutionary development. Each organism harbors traits, both anatomical and behavioural, of previous phylogenetic stages, since many traits are retained as species evolve.
Reconstructing the phylogeny of a species often makes it possible to understand the "uniqueness" of recent characteristics: Earlier phylogenetic stages and (pre-) conditions which persist often also determine the form of more modern characteristics. For instance, the vertebrate eye (including the human eye) has a blind spot, whereas octopus eyes do not. In those two lineages, the eye was originally constructed one way or the other. Once the vertebrate eye was constructed, there were no intermediate forms that were both adaptive and would have enabled it to evolve without a blind spot.
It corresponds to Aristotle's formal cause.
Proximate explanations
Third question: Mechanism (causation)
Some prominent classes of proximate causal mechanisms include:
The brain: For example, Broca's area, a small section of the human brain, has a critical role in linguistic capability.
Hormones: Chemicals used to communicate among cells of an individual organism. Testosterone, for instance, stimulates aggressive behaviour in a number of species.
Pheromones: Chemicals used to communicate among members of the same species. Some species (e.g., dogs and some moths) use pheromones to attract mates.
In examining living organisms, biologists are confronted with diverse levels of complexity (e.g. chemical, physiological, psychological, social). They therefore investigate causal and functional relations within and between these levels. A biochemist might examine, for instance, the influence of social and ecological conditions on the release of certain neurotransmitters and hormones, and the effects of such releases on behaviour, e.g. stress during birth has a tocolytic (contraction-suppressing) effect.
However, awareness of neurotransmitters and the structure of neurons is not by itself enough to understand higher levels of neuroanatomic structure or behaviour: "The whole is more than the sum of its parts." All levels must be considered as being equally important: cf. transdisciplinarity, Nicolai Hartmann's "Laws about the Levels of Complexity."
It corresponds to Aristotle's efficient cause.
Fourth question: Ontogeny (development)
Ontogeny is the process of development of an individual organism from the zygote through the embryo to the adult form.
In the latter half of the twentieth century, social scientists debated whether human behaviour was the product of nature (genes) or nurture (environment in the developmental period, including culture).
An example of interaction (as distinct from the sum of the components) involves familiarity from childhood. In a number of species, individuals prefer to associate with familiar individuals but prefer to mate with unfamiliar ones (Alcock 2001:85–89, Incest taboo, Incest). By inference, genes affecting living together interact with the environment differently from genes affecting mating behaviour. A simple example of interaction involves plants: Some plants grow toward the light (phototropism) and some away from gravity (gravitropism).
Many forms of developmental learning have a critical period, for instance, for imprinting among geese and language acquisition among humans. In such cases, genes determine the timing of the environmental impact.
A related concept is labeled "biased learning" (Alcock 2001:101–103) and "prepared learning" (Wilson, 1998:86–87). For instance, after eating food that subsequently made them sick, rats are predisposed to associate that food with smell, not sound (Alcock 2001:101–103). Many primate species learn to fear snakes with little experience (Wilson, 1998:86–87).
See developmental biology and developmental psychology.
It corresponds to Aristotle's material cause.
Causal relationships
The figure shows the causal relationships among the categories of explanations. The left-hand side represents the evolutionary explanations at the species level; the right-hand side represents the proximate explanations at the individual level. In the middle are those processes' end products—genes (i.e., genome) and behaviour, both of which can be analyzed at both levels.
Evolution, which is determined by both function and phylogeny, results in the genes of a population. The genes of an individual interact with its developmental environment, resulting in mechanisms, such as a nervous system. A mechanism (which is also an end-product in its own right) interacts with the individual's immediate environment, resulting in its behaviour.
Here we return to the population level. Over many generations, the success of the species' behaviour in its ancestral environment (or, more technically, the environment of evolutionary adaptedness, EEA) may result in evolution as measured by a change in its genes.
In sum, there are two processes—one at the population level and one at the individual level—which are influenced by environments in three time periods.
Examples
Vision
Four ways of explaining visual perception:
Function: To find food and avoid danger.
Phylogeny: The vertebrate eye initially developed with a blind spot, but the lack of adaptive intermediate forms prevented the loss of the blind spot.
Mechanism: The lens of the eye focuses light on the retina.
Development: Neurons need the stimulation of light to wire the eye to the brain (Moore, 2001:98–99).
Westermarck effect
Four ways of explaining the Westermarck effect, the lack of sexual interest in one's siblings (Wilson, 1998:189–196):
Function: To discourage inbreeding, which decreases the number of viable offspring.
Phylogeny: Found in a number of mammalian species, suggesting initial evolution tens of millions of years ago.
Mechanism: Little is known about the neuromechanism.
Ontogeny: Results from familiarity with another individual early in life, especially in the first 30 months for humans. The effect is manifested in nonrelatives raised together, for instance, in kibbutzim.
Romantic love
Four ways of explaining romantic love have been used to provide a comprehensive biological definition (Bode & Kushnick, 2021):
Function: Mate choice, courtship, sex, pair-bonding.
Phylogeny: Evolved by co-opting mother-infant bonding mechanisms sometime in the recent evolutionary history of humans.
Mechanisms: Social, psychological mate choice, genetic, neurobiological, and endocrinological mechanisms cause romantic love.
Ontogeny: Romantic love can first manifest in childhood, manifests with all its characteristics following puberty, but can manifest across the lifespan.
Sleep
Sleep has been described using Tinbergen's four questions as a framework (Bode & Kuula, 2021):
Function: Energy restoration, metabolic regulation, thermoregulation, boosting immune system, detoxification, brain maturation, circuit reorganization, synaptic optimization, avoiding danger.
Phylogeny: Sleep exists in invertebrates, lower vertebrates, and higher vertebrates. NREM and REM sleep exist in eutheria, marsupialiformes, and also evolved in birds.
Mechanisms: Mechanisms regulate wakefulness, sleep onset, and sleep. Specific mechanisms involve neurotransmitters, genes, neural structures, and the circadian rhythm.
Ontogeny: Sleep manifests differently in babies, infants, children, adolescents, adults, and older adults. Differences include the stages of sleep, sleep duration, and sex differences.
Use of the four-question schema as "periodic table"
Konrad Lorenz, Julian Huxley and Niko Tinbergen were familiar with both conceptual categories (i.e. the central questions of biological research, 1–4, and the levels of inquiry, a–g); the tabulation itself was made by Gerhard Medicus. The tabulated schema is used as the central organizing device in many animal behaviour, ethology, behavioural ecology and evolutionary psychology textbooks (e.g., Alcock, 2001). One advantage of this organizational system, what might be called the "periodic table of life sciences", is that it highlights gaps in knowledge, analogous to the role played by the periodic table of elements in the early years of chemistry.
This "biopsychosocial" framework clarifies and classifies the associations between the various levels of the natural and social sciences, and it helps to integrate the social and natural sciences into a "tree of knowledge" (see also Nicolai Hartmann's "Laws about the Levels of Complexity"). Especially for the social sciences, this model helps to provide an integrative, foundational model for interdisciplinary collaboration, teaching and research (see The Four Central Questions of Biological Research Using Ethology as an Example – PDF).
Sources
Alcock, John (2001) Animal Behaviour: An Evolutionary Approach, Sinauer, 7th edition.
Buss, David M., Martie G. Haselton, Todd K. Shackelford, et al. (1998) "Adaptations, Exaptations, and Spandrels," American Psychologist, 53:533–548. http://www.sscnet.ucla.edu/comm/haselton/webdocs/spandrels.html
Buss, David M. (2004) Evolutionary Psychology: The New Science of the Mind, Pearson Education, 2nd edition.
Cartwright, John (2000) Evolution and Human Behaviour, MIT Press.
Krebs, John R., Davies, N.B. (1993) An Introduction to Behavioural Ecology, Blackwell Publishing.
Lorenz, Konrad (1937) "Biologische Fragestellungen in der Tierpsychologie" (i.e., Biological Questions in Animal Psychology), Zeitschrift für Tierpsychologie, 1: 24–32.
Mayr, Ernst (2001) What Evolution Is, Basic Books.
Medicus, Gerhard (2017) Being Human – Bridging the Gap between the Sciences of Body and Mind. Berlin: VWB.
Nesse, Randolph M (2013) "Tinbergen's Four Questions, Organized," Trends in Ecology and Evolution, 28:681-682.
Moore, David S. (2001) The Dependent Gene: The Fallacy of "Nature vs. Nurture", Henry Holt.
Pinker, Steven (1994) The Language Instinct: How the Mind Creates Language, Harper Perennial.
Tinbergen, Niko (1963) "On Aims and Methods of Ethology," Zeitschrift für Tierpsychologie, 20: 410–433.
Wilson, Edward O. (1998) Consilience: The Unity of Knowledge, Vintage Books.
External links
Diagrams
The Four Areas of Biology pdf
The Four Areas and Levels of Inquiry pdf
Tinbergen's four questions within the "Fundamental Theory of Human Sciences" ppt
Tinbergen's Four Questions, organized pdf
Derivative works
On aims and methods of cognitive ethology (pdf) by Jamieson and Bekoff.
Behavioral ecology
Ethology
Evolutionary psychology
Sociobiology
Pleonasm
Pleonasm is redundancy in linguistic expression, such as in "black darkness," "burning fire," "the man he said," or "vibrating with motion." It is a manifestation of tautology by traditional rhetorical criteria. Pleonasm may also be used for emphasis, or because the phrase has become established in a certain form. Tautology and pleonasm are not consistently differentiated in literature.
Usage
Most often, pleonasm is understood to mean a word or phrase which is useless, clichéd, or repetitive, but a pleonasm can also be simply an unremarkable use of idiom. It can aid in achieving a specific linguistic effect, be it social, poetic or literary. Pleonasm sometimes serves the same function as rhetorical repetition—it can be used to reinforce an idea, contention or question, rendering writing clearer and easier to understand. Pleonasm can serve as a redundancy check; if a word is unknown, misunderstood, misheard, or if the medium of communication is poor—a static-filled radio transmission or sloppy handwriting—pleonastic phrases can help ensure that the meaning is communicated even if some of the words are lost.
Idiomatic expressions
Some pleonastic phrases are part of a language's idiom, like tuna fish, chain mail and safe haven in American English. They are so common that their use is unremarkable for native speakers, although in many cases the redundancy can be dropped with no loss of meaning.
When expressing possibility, English speakers often use potentially pleonastic expressions such as It might be possible or perhaps it's possible, where both terms (verb might or adverb perhaps along with the adjective possible) have the same meaning under certain constructions. Many speakers of English use such expressions for possibility in general, such that most instances of such expressions by those speakers are in fact pleonastic. Others, however, use this expression only to indicate a distinction between ontological possibility and epistemic possibility, as in "Both the ontological possibility of X under current conditions and the ontological impossibility of X under current conditions are epistemically possible" (in logical terms, "I am not aware of any facts inconsistent with the truth of proposition X, but I am likewise not aware of any facts inconsistent with the truth of the negation of X"). The habitual use of the double construction to indicate possibility per se is far less widespread among speakers of most other languages (except in Spanish; see examples); rather, almost all speakers of those languages use one term in a single expression:
French: or .
Portuguese: , lit. "What is it that", a more emphatic way of saying "what is"; usually suffices.
Romanian: or .
Typical Spanish pleonasms
Voy a subir arriba – I am going to go up upstairs, "arriba" not being necessary.
Entrar adentro – to enter inside, "adentro" not being necessary.
Turkish has many pleonastic constructs because certain verbs necessitate objects:
yemek yemek – to eat food.
yazı yazmak – to write writing.
dışarı çıkmak – to exit outside.
içeri girmek – to enter inside.
oyun oynamak – to play a game.
In a satellite-framed language like English, verb phrases containing particles that denote direction of motion are so frequent that even when such a particle is pleonastic, it seems natural to include it (e.g. "enter into").
Professional and scholarly use
Some pleonastic phrases, when used in professional or scholarly writing, may reflect a standardized usage that has evolved or a meaning familiar to specialists but not necessarily to those outside that discipline. Such examples as "null and void", "terms and conditions", "each and every" are legal doublets that are part of legally operative language that is often drafted into legal documents. A classic example of such usage was that by the Lord Chancellor at the time (1864), Lord Westbury, in the English case of Gorely, when he described a phrase in an Act as "redundant and pleonastic". This type of usage may be favored in certain contexts. However, it may also be disfavored when used gratuitously to portray false erudition, obfuscate, or otherwise introduce verbiage, especially in disciplines where imprecision may introduce ambiguities (such as the natural sciences).
Of the aforementioned phrases, "terms and conditions" may not be pleonastic in some legal systems, as they refer not to a set of provisions forming part of a contract, but rather to the specific terms conditioning the effect of the contract or of a contractual provision upon a future event. In these cases, terms and conditions imply respectively the certainty or uncertainty of said event (e.g., in Brazilian law, a testament has as its initial term for coming into force the death of the testator, while a health insurance policy has as its condition the insured suffering one of a set of certain injuries from one of a set of certain causes).
Stylistic preference
In addition, pleonasms can serve purposes external to meaning. For example, a speaker who is too terse is often interpreted as lacking ease or grace, because, in oral and sign language, sentences are spontaneously created without the benefit of editing. The restriction on the ability to plan often creates many redundancies. In written language, removing words that are not strictly necessary sometimes makes writing seem stilted or awkward, especially if the words are cut from an idiomatic expression.
On the other hand, as is the case with any literary or rhetorical effect, excessive use of pleonasm weakens writing and speech; words distract from the content. Writers who want to obfuscate a certain thought may obscure their meaning with excess verbiage. William Strunk Jr. advocated concision in The Elements of Style (1918): "Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell."
Literary uses
Examples from the Baroque, Mannerist, and Victorian periods provide a counterpoint to Strunk's advocacy of concise writing:
"This was the most unkindest cut of all." — William Shakespeare, Julius Caesar (Act 3, Scene 2, 183)
"I will be brief: your noble son is mad:/Mad call I it; for, to define true madness,/What is't but to be nothing else but mad?" — Hamlet (Act 2, Scene 2)
"Let me tell you this, when social workers offer you, free, gratis and for nothing, something to hinder you from swooning, which with them is an obsession, it is useless to recoil ..." — Samuel Beckett, Molloy
Types
There are various kinds of pleonasm, including bilingual tautological expressions, syntactic pleonasm, semantic pleonasm and morphological pleonasm:
Bilingual tautological expressions
A bilingual tautological expression is a phrase that combines words that mean the same thing in two different languages. An example of a bilingual tautological expression is the Yiddish expression mayim akhroynem vaser. It literally means "water last water" and refers to "water for washing the hands after meal, grace water". Its first element, mayim, derives from the Hebrew ['majim] "water". Its second element, vaser, derives from the Middle High German word "water".
According to Ghil'ad Zuckermann, Yiddish abounds with both bilingual tautological compounds and bilingual tautological first names.
The following are examples of bilingual tautological compounds in Yiddish:
fíntster khóyshekh "very dark", literally "dark darkness", traceable back to the Middle High German word "dark" and the Hebrew word חושך ħōshekh "darkness".
khameréyzļ "womanizer", literally "donkey-donkey", traceable back to the Hebrew word חמור [ħă'mōr] "donkey" and the Middle High German word "donkey".
The following are examples of bilingual tautological first names (anthroponyms) in Yiddish:
Dov-Ber, literally "bear-bear", traceable back to the Hebrew word dov "bear" and the Middle High German word "bear".
Tsvi-Hirsh, literally "deer-deer", traceable back to the Hebrew word tsvi "deer" and the Middle High German word "deer".
Ze'ev-Volf, literally "wolf-wolf", traceable back to the Hebrew word ze'ev "wolf" and the Middle High German word "wolf".
Arye-Leyb, literally "lion-lion", traceable back to the Hebrew word arye "lion" and the Middle High German word "lion".
Examples occurring in English-language contexts include:
River Avon, literally "River River", from Welsh.
the Sahara Desert, literally "the The Desert Desert", from Arabic.
the La Brea Tar Pits, literally "the The Tar Tar Pits", from Spanish.
the Los Angeles Angels, literally "the The Angels Angels", from Spanish.
the hoi polloi, literally "the the many", from Greek.
Carmarthen Castle, may actually have "castle" in it three times: In its Welsh form, Castell Caerfyrddin, "Caer" means fort, while "fyrddin" is thought to be derived from the Latin Moridunum ("sea fort") making Carmarthen Castle "fort sea-fort castle".
Mount Maunganui, Lake Rotoroa, and Motutapu Island in New Zealand are "Mount Mount Big", "Lake Lake Long", and "Island Sacred Island" respectively, from Māori.
Syntactic pleonasm
Syntactic pleonasm occurs when the grammar of a language makes certain function words optional. For example, consider the following English sentences:
"I know you're coming."
"I know that you're coming."
In this construction, the conjunction that is optional when joining a sentence to a verb phrase with know. Both sentences are grammatically correct, but the word that is pleonastic in this case. By contrast, when a sentence is in spoken form and the verb involved is one of assertion, the use of that makes clear that the present speaker is making an indirect rather than a direct quotation, such that he is not imputing particular words to the person he describes as having made an assertion; the demonstrative adjective that also does not fit such an example. Also, some writers may use "that" for technical clarity reasons. In some languages, such as French, the word is not optional and should therefore not be considered pleonastic.
The same phenomenon occurs in Spanish with subject pronouns. Since Spanish is a null-subject language, which allows subject pronouns to be deleted when understood, the following sentences mean the same:
""
""
In this case, the pronoun ('I') is grammatically optional; both sentences mean "I love you" (however, they may not have the same tone or intention—this depends on pragmatics rather than grammar). Such differing but syntactically equivalent constructions, in many languages, may also indicate a difference in register.
The process of deleting pronouns is called pro-dropping, and it also happens in many other languages, such as Korean, Japanese, Hungarian, Latin, Italian, Portuguese, Swahili, Slavic languages, and the Lao language.
In contrast, formal English requires an overt subject in each clause. A sentence may not need a subject to have valid meaning, but to satisfy the syntactic requirement for an explicit subject a pleonastic (or dummy) pronoun is used; only the first sentence in the following pair is acceptable English:
"It's raining."
"Is raining."
In this example the pleonastic "it" fills the subject function, but it contributes no meaning to the sentence. The second sentence, which omits the pleonastic it, is marked as ungrammatical, although no meaning is lost by the omission. Elements such as "it" or "there", serving as empty subject markers, are also called (syntactic) expletives, or dummy pronouns. Compare:
"There is rain."
"Today is rain."
The pleonastic ne, expressing uncertainty in formal French, works as follows:
"Je crains qu'il ne pleuve." ('I fear it may rain.')
"Ces idées sont plus difficiles à comprendre que je ne le pensais." ('These ideas are harder to understand than I thought.')
Two more striking examples of French pleonastic construction are aujourd'hui and "Qu'est-ce que c'est ?".
The word aujourd'hui / au jour d'hui is translated as 'today', but originally means "on the day of today" since the now obsolete hui means "today". The expression au jour d'aujourd'hui (translated as "on the day of today") is common in spoken language and demonstrates that the original construction of aujourd'hui is lost. It is considered a pleonasm.
The phrase "Qu'est-ce que c'est ?", meaning 'What's that?' or 'What is it?', literally means "What is it that it is?".
There are examples of the pleonastic, or dummy, negative in English, such as the construction, heard in the New England region of the United States, in which the phrase "So don't I" is intended to have the same positive meaning as "So do I."
When Robert South said, "It is a pleonasm, a figure usual in Scripture, by a multiplicity of expressions to signify one notable thing", he was observing the Biblical Hebrew poetic propensity to repeat thoughts in different words, since written Biblical Hebrew was a comparatively early form of written language and was written using oral patterning, which has many pleonasms. In particular, very many verses of the Psalms are split into two halves, each of which says much the same thing in different words. The complex rules and forms of written language as distinct from spoken language were not as well-developed as they are today when the books making up the Old Testament were written. See also parallelism (rhetoric).
This same pleonastic style remains very common in modern poetry and songwriting (e.g., "Anne, with her father / is out in the boat / riding the water / riding the waves / on the sea", from Peter Gabriel's "Mercy Street").
Types of syntactic pleonasm
Overinflection: Many languages with inflection, as a result of convention, tend to inflect more words in a given phrase than actually needed in order to express a single grammatical property. Take for example the German Die alten Frauen sprechen ("The old women speak"). Even though the use of the plural form Frauen of the noun Frau ("woman") shows the grammatical number of the noun phrase, agreement in the German language still dictates that the definite article die, the attributive adjective alten, and the verb sprechen must all also be in the plural. Not all languages are quite as redundant, however, and will permit inflection for number when there is an obvious numerical marker, as is the case with Hungarian, which does have a plural proper, but would express two flowers as two flower. (The same is the case in Celtic languages, where numerical markers precede singular nouns.) The main contrast between Hungarian and other tongues such as German or even English (to a lesser extent) is that in either of the latter, expressing plurality when already evident is not optional, but mandatory; neglect of these rules results in an ungrammatical sentence. As well as for number, our aforementioned German phrase also overinflects for grammatical case.
Multiple negation: In some languages, repeated negation may be used for emphasis, as in the English sentence, "There ain't nothing wrong with that". While a literal interpretation of this sentence would be "There is not nothing wrong with that", i.e. "There is something wrong with that", the intended meaning is, in fact, the opposite: "There is nothing wrong with that" or "There isn't anything wrong with that." The repeated negation is used pleonastically for emphasis. However, this is not always the case. In the sentence "I don't not like it", the repeated negative may be used to convey ambivalence ("I neither like nor dislike it") or even affirmation ("I do like it"). (Rhetorically, this becomes the device of litotes; it can be difficult to distinguish litotes from pleonastic double negation, a feature which may be used for ironic effect.) Although the use of "double negatives" for emphatic purposes is sometimes discouraged in standard English, it is mandatory in other languages like Spanish or French. For example, the Spanish phrase No es nada ('It is nothing') contains both a negated verb ("no es") and another negative, the word for nothing ("nada").
Multiple affirmations: In English, repeated affirmation can be used to add emphasis to an affirmative statement, just as repeated negation can add emphasis to a negative one. A sentence like I do love you, with a stronger intonation on the do, uses double affirmation. This is because English, by default, automatically expresses its sentences in the affirmative and must then alter the sentence in one way or another to express the opposite. Therefore, the sentence I love you is already affirmative, and adding the extra do only adds emphasis and does not change the meaning of the statement.
Double possession: The double genitive of English, as with a friend of mine, is seemingly pleonastic, and therefore has been stigmatized, but it has a long history of use by careful writers and has been analyzed as either a partitive genitive or an appositive genitive.
Multiple quality gradation: In English, different degrees of comparison (comparatives and superlatives) are created through a morphological change to an adjective (e.g., "prettier", "fastest") or a syntactic construction (e.g., "more complex", "most impressive"). It is thus possible to combine both forms for additional emphasis: "more bigger" or "bestest". This may be considered ungrammatical but is common in informal speech for some English speakers. "The most unkindest cut of all" is from Shakespeare's Julius Caesar. Musical notation has a repeated Italian superlative in fortississimo and pianississimo.
Not all uses of constructions such as "more bigger" are pleonastic, however. Some speakers who use such utterances do so in an attempt, albeit a grammatically unconventional one, to create a non-pleonastic construction: A person who says "X is more bigger than Y" may, in the context of a conversation featuring a previous comparison of some object Z with Y, mean "The degree by which X exceeds Y in size is greater than the degree by which Z exceeds Y in size". This usage amounts to the treatment of "bigger than Y" as a single grammatical unit, namely an adjective itself admitting of degrees, such that "X is more bigger than Y" is equivalent to "X is more bigger-than-Y than Z is."[alternatively, "X is bigger than Y more than Z is."] Another common way to express this is: "X is even bigger than Z."
Semantic pleonasm
Semantic pleonasm is a question more of style and usage than of grammar. Linguists usually call this redundancy to avoid confusion with syntactic pleonasm, a more important phenomenon for theoretical linguistics. It usually takes one of two forms: Overlap or prolixity.
Overlap: One word's semantic component is subsumed by the other:
"Receive a free gift with every purchase."; a gift is usually already free.
"A tuna fish sandwich."
"The plumber fixed our hot water heater." (This pleonasm was famously attacked by American comedian George Carlin, but is not truly redundant; a device that increases the temperature of cold water to room temperature would also be a water heater.)
The Big Friendly Giant (title of a children's book by Roald Dahl); giants are inherently already "big".
Prolixity: A phrase may have words which add nothing, or nothing logical or relevant, to the meaning.
"I'm going down south."(South is not really "down", it is just drawn that way on maps by convention.)
"You can't seem to face up to the facts."
"He entered into the room."
"Every mother's child" (as in 'The Christmas Song' by Nat King Cole', also known as 'Chestnuts roasting...'). (Being a child, or a human at all, generally implies being the child of/to a mother. So the redundancy here is used to broaden the context of the child's curiosity regarding the sleigh of Santa Claus, including the concept of maternity. The full line goes: "And every mother's child is gonna spy, to see if reindeer really know how to fly". One can furthermore argue that the word "mother" is included for the purpose of lyrical flow, adding two syllables, which make the line sound complete, as "every child" would be too short to fit the lyrical/rhyme scheme.)
"What therefore God hath joined together, let no man put asunder."
"He raised up his hands in a gesture of surrender."
"Where are you at?"
"Located" or similar before a preposition: "the store is located on Main St." The preposition contains the idea of locatedness and does not need a servant.
"The house itself" for "the house", and similar: unnecessary re-specifiers.
"Actual fact": fact.
"On a daily basis": daily.
"This particular item": this item.
"Different" or "separate" after numbers: for example:
"Four different species" are merely "four species", as two non-different species are together one same species. (However, in "a discount if you buy ten different items", "different" has meaning, because if the ten items include two packets of frozen peas of the same weight and brand, those ten items are not all different.)
"Nine separate cars": cars are always separate.
"Despite the fact that": although.
An expression like "tuna fish", however, might elicit one of many possible responses, such as:
It will simply be accepted as synonymous with "tuna".
It will be perceived as redundant (and thus perhaps silly, illogical, ignorant, inefficient, dialectal, odd, and/or intentionally humorous).
It will imply a distinction. A reader of "tuna fish" could properly wonder: "Is there a kind of tuna which is not a fish? There is, after all, a dolphin mammal and a dolphin fish." This assumption turns out to be correct, as a "tuna" can also mean a prickly pear. Further, "tuna fish" is sometimes used to refer to the flesh of the animal as opposed to the animal itself (similar to the distinction between beef and cattle). Similarly, while all sound-making horns use air, an "air horn" has a special meaning: one that uses compressed air specifically; while most clocks tell time, a "time clock" specifically means one that keeps track of workers' presence at the workplace.
It will be perceived as a verbal clarification, since the word "tuna" is quite short, and may, for example, be misheard as "tune" followed by an aspiration, or (in dialects that drop the final -r sound) as "tuner".
Careful speakers, and writers, too, are aware of pleonasms, especially with cases such as "tuna fish", which is normally used only in some dialects of American English, and would sound strange in other variants of the language, and even odder in translation into other languages.
Similar situations are:
"Ink pen" instead of merely "pen" in the southern United States, where "pen" and "pin" are pronounced similarly.
"Extra accessories" which must be ordered separately for a new camera, as distinct from the accessories provided with the camera as sold.
Not all constructions that are typically pleonasms are so in all cases, nor are all constructions derived from pleonasms themselves pleonastic:
"Put that glass over there on the table." This could, depending on room layout, mean "Put that glass on the table across the room, not the table right in front of you"; if the room were laid out like that, most English speakers would intuitively understand that the distant, not immediate table was the one being referred to; however, if there were only one table in the room, the phrase would indeed be pleonastic. Also, it could mean, "Put that glass on the spot (on the table) which I am gesturing to"; thus, in this case, it is not pleonastic.
"I'm going way down South." This may imply "I'm going much farther south than you might think if I didn't stress the southerliness of my destination"; but such phrasing is also sometimes—and sometimes jokingly—used pleonastically when simply "south" would do; it depends upon the context, the intent of the speaker/writer, and ultimately even on the expectations of the listener/reader.
Morphemic pleonasm
Morphemes, not just words, can enter the realm of pleonasm: Some word-parts are simply optional in various languages and dialects. A familiar example to American English speakers would be the allegedly optional "-al-", probably most commonly seen in "publically" vs. "publicly"—both spellings are considered correct/acceptable in American English, and both are pronounced the same, in this dialect, rendering the "publically" spelling pleonastic in US English; in other dialects it is "required", while it is quite conceivable that in another generation or so of American English it will be "forbidden". This treatment of words ending in "-ic", "-ac", etc., is quite inconsistent in US English—compare "maniacally" or "forensically" with "stoicly" or "heroicly"; "forensicly" doesn't look "right" in any dialect, but "heroically" looks internally redundant to many Americans. (Likewise, there are thousands of mostly American Google search results for "eroticly", some in reputable publications, but it does not even appear in the 23-volume, 23,000-page, 500,000-definition Oxford English Dictionary (OED), the largest in the world; and even American dictionaries give the correct spelling as "erotically".) In a more modern pair of words, Institute of Electrical and Electronics Engineers dictionaries say that "electric" and "electrical" mean the same thing. However, the usual adverb form is "electrically". (For example, "The glass rod is electrically charged by rubbing it with silk".)
Some (mostly US-based) prescriptive grammar pundits would say that the "-ly" not "-ally" form is "correct" in any case in which there is no "-ical" variant of the basic word, and vice versa; i.e. "maniacally", not "maniacly", is correct because "maniacal" is a word, while "publicly", not "publically", must be correct because "publical" is (arguably) not a real word (it does not appear in the OED). This logic is in doubt, since most if not all "-ical" constructions arguably are "real" words and most have certainly occurred more than once in "reputable" publications and are also immediately understood by any educated reader of English even if they "look funny" to some, or do not appear in popular dictionaries. Additionally, there are numerous examples of words that have very widely accepted extended forms that have skipped one or more intermediary forms, e.g., "disestablishmentarian" in the absence of "disestablishmentary" (which does not appear in the OED). At any rate, while some US editors might consider "-ally" vs. "-ly" to be pleonastic in some cases, the majority of other English speakers would not, and many "-ally" words are not pleonastic to anyone, even in American English.
The most common definitely pleonastic morphological usage in English is "irregardless", which is very widely criticized as being a non-word. The standard usage is "regardless", which is already negative; adding the additional negative ir- is interpreted by some as logically reversing the meaning to "with regard to/for", which is certainly not what the speaker intended to convey. (According to most dictionaries that include it, "irregardless" appears to derive from confusion between "regardless" and "irrespective", which have overlapping meanings.)
Morphemic pleonasm in Modern Standard Chinese
There are several instances in Chinese vocabulary where pleonasms and cognate objects are present. Their presence usually indicates the plural form of the noun or the noun in a formal context.
('book(s)' – in general)
('paper, tissue, pieces of paper' – formal)
In some instances, the pleonastic form of the verb is used with the intention of emphasizing one meaning of the verb, isolating it from its idiomatic and figurative uses. But over time, the pseudo-object, which sometimes repeats the verb, becomes almost inherently coupled with it.
For example, the word ('to sleep') is an intransitive verb, but may express different meaning when coupled with objects of prepositions as in "to sleep with". However, in Mandarin, is usually coupled with a pseudo-character , yet it is not entirely a cognate object, to express the act of resting.
('I want sleep'). Although such usage is not found among native speakers of Mandarin and may sound awkward, this expression is grammatically correct, and it is clear that the verb means 'to sleep/to rest' in this context.
('I want to sleep') and ('I'm going to sleep'). In this context, ('to sleep') is a complete verb and native speakers often express themselves this way. Adding this particle clears any suspicion from using it with any direct object shown in the next example:
('I want to have sex with her') and ('I want to sleep with her'). When the verb takes an animate object in this way, the meaning changes dramatically. The first instance is mainly seen in colloquial speech. Note that the object of the preposition in "to have sex with" corresponds to the direct object of the verb in Mandarin.
One can also work around this verb by using another one which is not used in idiomatic expressions and does not necessitate a pleonasm, because it has only one meaning:
('I want to "dorm"')
Nevertheless, is a verb used in high-register diction, just like English verbs with Latin roots.
There is no relationship between Chinese and English regarding verbs that can take pleonasms and cognate objects. Although the verb to sleep may take a cognate object as in "sleep a restful sleep", this is a pure coincidence, since verbs of this form are more common in Chinese than in English; and when the English verb is used without the cognate object, its diction is natural and its meaning is clear in every register, as in "I want to sleep" and "I want to have a rest".
Subtler redundancies
In some cases, the redundancy in meaning occurs at the syntactic level above the word, such as at the phrase level:
"It's déjà vu all over again."
"I never make predictions, especially about the future."
The redundancy of these two well-known statements is deliberate, for humorous effect. (See Yogi Berra#"Yogi-isms".) But one does hear educated people say "my predictions about the future of politics" for "my predictions about politics", which are equivalent in meaning. While predictions are necessarily about the future (at least in relation to the time the prediction was made), the nature of this future can be subtle (e.g., "I predict that he died a week ago"—the prediction is about future discovery or proof of the date of death, not about the death itself). Generally "the future" is assumed, making most constructions of this sort pleonastic. The latter humorous quote above about not making predictions—by Yogi Berra—is not really a pleonasm, but rather an ironic play on words.
Alternatively, it could be read as playing on an analogy between predicting and guessing.
However, "It's déjà vu all over again" could mean that there was earlier another déjà vu of the same event or idea, which has now arisen for a third time; or that the speaker had very recently experienced a déjà vu of a different idea.
Redundancy, and "useless" or "nonsensical" words (or phrases, or morphemes), can also be inherited by one language from the influence of another and are not pleonasms in the more critical sense but actual changes in grammatical construction considered to be required for "proper" usage in the language or dialect in question. Irish English, for example, is prone to a number of constructions that non-Irish speakers find strange and sometimes directly confusing or silly:
"I'm after putting it on the table."('I [have] put it on the table.') This example further shows that the effect, whether pleonastic or only pseudo-pleonastic, can apply to words and word-parts, and multi-word phrases, given that the fullest rendition would be "I am after putting it on the table".
"Have a look at your man there."('Have a look at that man there.') An example of word substitution, rather than addition, that seems illogical outside the dialect. This common possessive-seeming construction often confuses the non-Irish enough that they do not at first understand what is meant. Even "Have a look at that man there" is arguably further doubly redundant, in that a shorter "Look at that man" version would convey essentially the same meaning.
"She's my wife so she is."('She's my wife.') Duplicate subject and verb, post-complement, used to emphasize a simple factual statement or assertion.
All of these constructions originate from the application of Irish Gaelic grammatical rules to the English dialect spoken, in varying particular forms, throughout the island.
Seemingly "useless" additions and substitutions must be contrasted with similar constructions that are used for stress, humor, or other intentional purposes, such as:
"I abso-fuckin'-lutely agree!"(tmesis, for stress)
"Topless-shmopless—nudity doesn't distract me."(shm-reduplication, for humor)
The latter of these is a result of Yiddish influences on modern English, especially East Coast US English.
Sometimes editors and grammatical stylists will use "pleonasm" to describe simple wordiness. This phenomenon is also called prolixity or logorrhea. Compare:
"The sound of the loud music drowned out the sound of the burglary."
"The loud music drowned out the sound of the burglary."
or even:
"The music drowned out the burglary."
The reader or hearer does not have to be told that loud music has a sound; in a newspaper headline or other abbreviated prose, the reader can even be counted upon to infer that "burglary" is a proxy for "the sound of the burglary" and that the music necessarily must have been loud to drown it out, unless the burglary was relatively quiet. (This is not a trivial issue, as it may affect the legal culpability of the person who played the music; the word "loud" may imply that the music should have been played quietly, if at all.) Many are critical of the excessively abbreviated constructions of "headline-itis" or "newsspeak", so "loud [music]" and "sound of the [burglary]" in the above example should probably not be regarded as pleonastic or otherwise genuinely redundant, but simply as informative and clarifying.
Prolixity is also used to obfuscate, confuse, or euphemize and is not necessarily redundant or pleonastic in such constructions, though it often is. "Post-traumatic stress disorder" (shell shock) and "pre-owned vehicle" (used car) are both tumid euphemisms but are not redundant. Redundant forms, however, are especially common in business, political, and academic language that is intended to sound impressive (or to be vague so as to make it hard to determine what is actually being promised, or otherwise misleading). For example: "This quarter, we are presently focusing with determination on an all-new, innovative integrated methodology and framework for rapid expansion of customer-oriented external programs designed and developed to bring the company's consumer-first paradigm into the marketplace as quickly as possible."
In contrast to redundancy, an oxymoron results when two seemingly contradictory words are adjoined.
Foreign words
Redundancies sometimes take the form of foreign words whose meaning is repeated in the context:
"We went to the El Restaurante restaurant."
"The La Brea tar pits are fascinating."
"Roast beef served with au jus sauce."
"Please R.S.V.P."
"The Schwarzwald Forest is deep and dark."
"The Drakensberg Mountains are in South Africa."
"We will vacation in Timor-Leste."
LibreOffice office suite.
The hoi polloi.
I'd like to have a chai tea.
"That delicious Queso cheese."
"Some salsa sauce on the side?."
These sentences use phrases which mean, respectively, "the restaurant restaurant", "the tar tar", "with juice sauce" and so on. However, many times these redundancies are necessary—especially when the foreign words make up a proper noun as opposed to a common one. For example, "We went to Il Ristorante" is acceptable provided the audience can infer that it is a restaurant. (If they understand Italian and English it might, if spoken, be misinterpreted as a generic reference and not a proper noun, leading the hearer to ask "Which ristorante do you mean?"—such confusions are common in richly bilingual areas like Montreal or the American Southwest when mixing phrases from two languages.) But avoiding the redundancy of the Spanish phrase in the second example would only leave an awkward alternative: "La Brea pits are fascinating".
Most find it best to not even drop articles when using proper nouns made from foreign languages:
"The movie is playing at the El Capitan theater."
However, there are some exceptions to this, for example:
"Jude Bellingham plays for Real Madrid in La Liga." ("La Liga" literally means "The League" in Spanish)
This is also similar to the treatment of definite and indefinite articles in titles of books, films, etc. where the article can—some would say must—be present where it would otherwise be "forbidden":
"Stephen King's The Shining is scary."(Normally, the article would be left off following a possessive.)
"I'm having an An American Werewolf in London movie night at my place."(Seemingly doubled article, which would be taken for a stutter or typographical error in other contexts.)
Some cross-linguistic redundancies, especially in placenames, occur because a word in one language became the title of a place in another (e.g., the Sahara Desert—"Sahara" is an English approximation of the word for "deserts" in Arabic). "The Los Angeles Angels" professional baseball team is literally "the The Angels Angels". A supposed extreme example is Torpenhow Hill in Cumbria, where some of the elements in the name likely mean "hill". See the List of tautological place names for many more examples.
The word tsetse means "fly" in the Tswana language, a Bantu language spoken in Botswana and South Africa. This word is the root of the English name for a biting fly found in Africa, the tsetse fly.
Acronyms and initialisms
Acronyms and initialisms can also form the basis for redundancies; this is known humorously as RAS syndrome (for Redundant Acronym Syndrome syndrome). In all the examples that follow, the word after the acronym repeats a word represented in the acronym. The full redundant phrase is stated in the parentheses that follow each example:
"I forgot my PIN number for the ATM machine." (Personal Identification Number number; Automated Teller Machine machine)
"I upgraded the RAM memory of my computer." (Random Access Memory memory)
"She is infected with the HIV virus." (Human Immunodeficiency Virus virus)
"I have installed a CMS system on my server." (Content Management System system)
"The SI system of units is the modern form of the metric system." (International System system)
(See RAS syndrome for many more examples.) The expansion of an acronym like PIN or HIV may be well known to English speakers, but the acronyms themselves have come to be treated as words, so little thought is given to what their expansion is (and "PIN" is also pronounced the same as the word "pin"; disambiguation is probably the source of "PIN number"; "SIN number" for "Social Insurance Number number" is a similar common phrase in Canada.) But redundant acronyms are more common with technical (e.g., computer) terms where well-informed speakers recognize the redundancy and consider it silly or ignorant, but mainstream users might not, since they may not be aware or certain of the full expansion of an acronym like "RAM".
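The pattern behind RAS syndrome is mechanical enough to demonstrate in code: an acronym is redundant when the word that follows it repeats the last word of its expansion. The toy Python sketch below is purely illustrative and not a real tool; the expansion table, function name, and sample sentence are invented for demonstration.

```python
# Toy sketch: flagging RAS-syndrome phrases such as "PIN number" by checking
# whether the word following an acronym repeats the last word of that
# acronym's expansion.  The expansion table and sample text are illustrative.

expansions = {
    "PIN": "personal identification number",
    "ATM": "automated teller machine",
    "HIV": "human immunodeficiency virus",
    "RAM": "random access memory",
}

def find_redundant_acronyms(text):
    # Strip simple punctuation, then walk over adjacent word pairs.
    words = text.replace(".", " ").replace(",", " ").split()
    hits = []
    for acronym, following in zip(words, words[1:]):
        expansion = expansions.get(acronym)
        if expansion and expansion.split()[-1] == following.lower():
            hits.append(f"{acronym} {following} -> {expansion} {following.lower()}")
    return hits

sample = "I forgot my PIN number for the ATM machine."
for hit in find_redundant_acronyms(sample):
    print(hit)
```

Running the sketch on the sample sentence flags both "PIN number" and "ATM machine", mirroring the examples listed above.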
Typographical
Some redundancies are simply typographical. For instance, when a short function word like "the" occurs at the end of a line, it is very common to accidentally repeat it at the beginning of the following line, and a large number of readers would not even notice it.
Apparent redundancies that actually are not redundant
Carefully constructed expressions, especially in poetry and political language, but also some general usages in everyday speech, may appear to be redundant but are not. This is most common with cognate objects (a verb's object that is cognate with the verb):
"She slept a deep sleep."
Or, a classic example from Latin:
mutatis mutandis = "with change made to what needs to be changed" (an ablative absolute construction)
The words need not be etymologically related, but simply conceptually, to be considered an example of cognate object:
"We wept tears of joy."
Such constructions are not actually redundant (unlike "She slept a sleep" or "We wept tears") because the object's modifiers provide additional information. A rarer, more constructed form is polyptoton, the stylistic repetition of the same word or words derived from the same root:
"...[T]he only thing we have to fear is fear itself." — Franklin D. Roosevelt, "First Inaugural Address", March 1933.
"With eager feeding[,] food doth choke the feeder." — William Shakespeare, Richard II, II, i, 37.
As with cognate objects, these constructions are not redundant because the repeated words or derivatives cannot be removed without removing meaning or even destroying the sentence, though in most cases they could be replaced with non-related synonyms at the cost of style (e.g., compare "The only thing we have to fear is terror".)
Semantic pleonasm and context
In many cases of semantic pleonasm, the status of a word as pleonastic depends on context. The relevant context can be as local as a neighboring word, or as global as the extent of a speaker's knowledge. In fact, many examples of redundant expressions are not inherently redundant, but can be redundant if used one way, and are not redundant if used another way. The "up" in "climb up" is not always redundant, as in the example "He climbed up and then fell down the mountain." Many other examples of pleonasm are redundant only if the speaker's knowledge is taken into account. For example, most English speakers would agree that "tuna fish" is redundant because tuna is a kind of fish. However, given the knowledge that "tuna" can also refer to a kind of edible prickly pear, the "fish" in "tuna fish" can be seen as non-pleonastic, but rather a disambiguator between the fish and the prickly pear.
Conversely, to English speakers who do not know Spanish, there is nothing redundant about "the La Brea tar pits" because the name "La Brea" is opaque: the speaker does not know that it is Spanish for "the tar" and thus "the La Brea Tar Pits" translates to "the the tar tar pits". Similarly, even though scuba stands for "self-contained underwater breathing apparatus", a phrase like "the scuba gear" would probably not be considered pleonastic because "scuba" has been reanalyzed into English as a simple word, and not an acronym suggesting the pleonastic word sequence "apparatus gear". (Most do not even know that it is an acronym and do not spell it SCUBA or S.C.U.B.A. Similar examples are radar and laser.)
See also
Notes
References
Citations
Bibliography
External links
Figures of speech
Linguistics
Rhetoric
Semantics
Syntax
Spontaneous process
In thermodynamics, a spontaneous process is a process which occurs without any external input to the system. A more technical definition is the time-evolution of a system in which it releases free energy and it moves to a lower, more thermodynamically stable energy state (closer to thermodynamic equilibrium). The sign convention for free energy change follows the general convention for thermodynamic measurements, in which a release of free energy from the system corresponds to a negative change in the free energy of the system and a positive change in the free energy of the surroundings.
Depending on the nature of the process, the free energy is determined differently. For example, the Gibbs free energy change is used when considering processes that occur under constant pressure and temperature conditions, whereas the Helmholtz free energy change is used when considering processes that occur under constant volume and temperature conditions. The value and even the sign of both free energy changes can depend upon the temperature and pressure or volume.
Because spontaneous processes are characterized by a decrease in the system's free energy, they do not need to be driven by an outside source of energy.
For cases involving an isolated system where no energy is exchanged with the surroundings, spontaneous processes are characterized by an increase in entropy.
A spontaneous reaction is a chemical reaction which is a spontaneous process under the conditions of interest.
Overview
In general, the spontaneity of a process only determines whether or not a process can occur and makes no indication as to whether or not the process will occur. In other words, spontaneity is a necessary, but not sufficient, condition for a process to actually occur. Furthermore, spontaneity makes no implication as to the speed at which the spontaneous process may occur - just because a process is spontaneous does not mean it will happen quickly (or at all).
As an example, the conversion of a diamond into graphite is a spontaneous process at room temperature and pressure. Despite being spontaneous, this process is not observed to occur, since the energy required to break the strong carbon–carbon bonds is larger than the release in free energy. Another way to put this is that even though the conversion of diamond into graphite is thermodynamically feasible and spontaneous even at room temperature, the high activation energy of the reaction makes it kinetically hindered, so it proceeds immeasurably slowly.
Using free energy to determine spontaneity
For a process that occurs at constant temperature and pressure, spontaneity can be determined using the change in Gibbs free energy, which is given by:
ΔG = ΔH − TΔS,
where T is the absolute temperature and the sign of ΔG depends on the signs of the changes in enthalpy (ΔH) and entropy (ΔS). If these two signs are the same (both positive or both negative), then the sign of ΔG will change from positive to negative (or vice versa) at the temperature T = ΔH/ΔS.
In cases where ΔG is:
negative, the process is spontaneous and may proceed in the forward direction as written.
positive, the process is non-spontaneous as written, but it may proceed spontaneously in the reverse direction.
zero, the process is at equilibrium, with no net change taking place over time.
This set of rules can be used to determine four distinct cases by examining the signs of the ΔS and ΔH.
When ΔS > 0 and ΔH < 0, the process is always spontaneous as written.
When ΔS < 0 and ΔH > 0, the process is never spontaneous, but the reverse process is always spontaneous.
When ΔS > 0 and ΔH > 0, the process will be spontaneous at high temperatures and non-spontaneous at low temperatures.
When ΔS < 0 and ΔH < 0, the process will be spontaneous at low temperatures and non-spontaneous at high temperatures.
For the latter two cases, the temperature at which the spontaneity changes will be determined by the relative magnitudes of ΔS and ΔH.
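To make the four sign cases above concrete, here is a minimal, illustrative Python sketch (not part of the original article) that evaluates ΔG = ΔH − TΔS for assumed values of ΔH and ΔS and classifies the result. The function names, the tolerance, and the numerical values are hypothetical choices for demonstration only.

```python
# Illustrative sketch: classifying the spontaneity of a constant-T,
# constant-P process from ΔH and ΔS, using ΔG = ΔH − T·ΔS.

def gibbs_free_energy_change(delta_h, delta_s, temperature):
    """Return ΔG in J/mol for ΔH (J/mol), ΔS (J/(mol·K)), T (K)."""
    return delta_h - temperature * delta_s

def classify(delta_g, tol=1e-9):
    if delta_g < -tol:
        return "spontaneous as written"
    if delta_g > tol:
        return "non-spontaneous as written (reverse is spontaneous)"
    return "at equilibrium"

# Example case with ΔH > 0 and ΔS > 0: spontaneous only at high temperature.
dH, dS = 40_000.0, 120.0          # J/mol, J/(mol·K) — assumed values
crossover_T = dH / dS             # temperature where ΔG changes sign (~333 K)
for T in (298.0, crossover_T, 400.0):
    dG = gibbs_free_energy_change(dH, dS, T)
    print(f"T = {T:6.1f} K: ΔG = {dG/1000:+7.2f} kJ/mol -> {classify(dG)}")
```

With these assumed numbers, the process is non-spontaneous at 298 K, at equilibrium near 333 K, and spontaneous at 400 K, matching the ΔS > 0, ΔH > 0 case above.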
Using entropy to determine spontaneity
When using the entropy change of a process to assess spontaneity, it is important to carefully consider the definition of the system and surroundings. The second law of thermodynamics states that a process involving an isolated system will be spontaneous if the entropy of the system increases over time. For open or closed systems, however, the statement must be modified to say that the total entropy of the combined system and surroundings must increase, or,
ΔS_total = ΔS_system + ΔS_surroundings > 0.
This criterion can then be used to explain how it is possible for the entropy of an open or closed system to decrease during a spontaneous process. A decrease in system entropy can only occur spontaneously if the entropy change of the surroundings is both positive in sign and has a larger magnitude than the entropy change of the system:
ΔS_surroundings > 0
and
|ΔS_surroundings| > |ΔS_system|.
In many processes, the increase in entropy of the surroundings is accomplished via heat transfer from the system to the surroundings (i.e. an exothermic process).
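As a worked illustration of this criterion, the following hedged Python sketch tallies the entropy changes when a quantity of heat q leaves a system held at one temperature and enters surroundings held at a lower temperature. It assumes both behave as constant-temperature reservoirs, so that ΔS = ±q/T applies; the function name and all numbers are illustrative, not taken from the article.

```python
# Minimal sketch: entropy bookkeeping for a transfer of heat q out of a
# system at constant T_system into surroundings at constant T_surroundings,
# treating each as a reservoir so ΔS = ±q/T applies.

def entropy_changes(q, t_system, t_surroundings):
    """q > 0 is heat leaving the system (exothermic); temperatures in K."""
    ds_system = -q / t_system
    ds_surroundings = +q / t_surroundings
    return ds_system, ds_surroundings, ds_system + ds_surroundings

q = 1000.0   # J of heat released by the system (assumed value)
ds_sys, ds_surr, ds_total = entropy_changes(q, t_system=350.0, t_surroundings=300.0)
print(f"ΔS_system       = {ds_sys:+.3f} J/K")
print(f"ΔS_surroundings = {ds_surr:+.3f} J/K")
print(f"ΔS_total        = {ds_total:+.3f} J/K  (spontaneous if > 0)")
```

Here the system's entropy decreases, but because the surroundings are colder, their entropy gain is larger in magnitude and the total entropy change is positive, consistent with the inequalities above.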
See also
Endergonic reaction – reactions which are not spontaneous at standard temperature, pressure, and concentrations.
Diffusion – a spontaneous phenomenon that minimizes Gibbs free energy.
References
Thermodynamics
Chemical thermodynamics
Chemical processes
Thermodynamic process
Classical thermodynamics considers three main kinds of thermodynamic processes: (1) changes in a system, (2) cycles in a system, and (3) flow processes.
(1) A thermodynamic process is a process in which the thermodynamic state of a system is changed. A change in a system is defined by a passage from an initial to a final state of thermodynamic equilibrium. In classical thermodynamics, the actual course of the process is not the primary concern, and often is ignored. A state of thermodynamic equilibrium endures unchangingly unless it is interrupted by a thermodynamic operation that initiates a thermodynamic process. The equilibrium states are each respectively fully specified by a suitable set of thermodynamic state variables that depend only on the current state of the system, not on the path taken by the processes that produce the state. In general, during the actual course of a thermodynamic process, the system may pass through physical states which are not describable as thermodynamic states, because they are far from internal thermodynamic equilibrium. Non-equilibrium thermodynamics, however, considers processes in which the states of the system are close to thermodynamic equilibrium, and aims to describe the continuous passage along the path, at definite rates of progress.
As a useful theoretical but not actually physically realizable limiting case, a process may be imagined to take place practically infinitely slowly or smoothly enough to allow it to be described by a continuous path of equilibrium thermodynamic states, when it is called a "quasi-static" process. This is a theoretical exercise in differential geometry, as opposed to a description of an actually possible physical process; in this idealized case, the calculation may be exact.
A really possible or actual thermodynamic process, considered closely, involves friction. This contrasts with theoretically idealized, imagined, or limiting, but not actually possible, quasi-static processes which may occur with a theoretical slowness that avoids friction. It also contrasts with idealized frictionless processes in the surroundings, which may be thought of as including 'purely mechanical systems'; this difference comes close to defining a thermodynamic process.
(2) A cyclic process carries the system through a cycle of stages, starting and being completed in some particular state. The descriptions of the staged states of the system are not the primary concern. The primary concern is the sums of matter and energy inputs and outputs to the cycle. Cyclic processes were important conceptual devices in the early days of thermodynamical investigation, while the concept of the thermodynamic state variable was being developed.
(3) Defined by flows through a system, a flow process is a steady state of flows into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. Flow processes are of interest in engineering.
Kinds of process
Cyclic process
Defined by a cycle of transfers into and out of a system, a cyclic process is described by the quantities transferred in the several stages of the cycle. The descriptions of the staged states of the system may be of little or even no interest. A cycle is a sequence of a small number of thermodynamic processes that, repeated indefinitely often, returns the system to its original state. For this, the staged states themselves are not necessarily described, because it is the transfers that are of interest. It is reasoned that if the cycle can be repeated indefinitely often, then it can be assumed that the states are recurrently unchanged. The condition of the system during the several staged processes may be of even less interest than is the precise nature of the recurrent states. If, however, the several staged processes are idealized and quasi-static, then the cycle is described by a path through a continuous progression of equilibrium states.
Flow process
Defined by flows through a system, a flow process is a steady state of flow into and out of a vessel with definite wall properties. The internal state of the vessel contents is not the primary concern. The quantities of primary concern describe the states of the inflow and the outflow materials, and, on the side, the transfers of heat, work, and kinetic and potential energies for the vessel. The states of the inflow and outflow materials consist of their internal states, and of their kinetic and potential energies as whole bodies. Very often, the quantities that describe the internal states of the input and output materials are estimated on the assumption that they are bodies in their own states of internal thermodynamic equilibrium. Because rapid reactions are permitted, the thermodynamic treatment may be approximate, not exact.
A cycle of quasi-static processes
A quasi-static thermodynamic process can be visualized by graphically plotting the path of idealized changes to the system's state variables. In the example, a cycle consisting of four quasi-static processes is shown. Each process has a well-defined start and end point in the pressure-volume state space. In this particular example, processes 1 and 3 are isothermal, whereas processes 2 and 4 are isochoric. The PV diagram is a particularly useful visualization of a quasi-static process, because the area under the curve of a process is the amount of work done by the system during that process. Thus work is considered to be a process variable, as its exact value depends on the particular path taken between the start and end points of the process. Similarly, heat may be transferred during a process, and it too is a process variable.
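Since the area under a quasi-static path in the P–V plane equals the work done by the system, the point can be illustrated numerically. The sketch below (not from the source article) integrates P dV along an isothermal ideal-gas expansion with the trapezoidal rule and compares the result with the closed form W = nRT·ln(V2/V1); the gas amount, temperature, and volumes are arbitrary example values.

```python
# Hedged sketch: work done by the system equals the area under the
# quasi-static path in the P-V plane.  The path here is an isothermal
# ideal-gas expansion, P = nRT/V, with illustrative numbers.

import math

n, R, T = 1.0, 8.314, 300.0           # mol, J/(mol·K), K
V1, V2, steps = 0.010, 0.020, 10_000  # m^3

def pressure(v):
    return n * R * T / v              # ideal-gas isotherm

dv = (V2 - V1) / steps
work_numeric = sum(                   # trapezoidal estimate of ∫ P dV
    0.5 * (pressure(V1 + i * dv) + pressure(V1 + (i + 1) * dv)) * dv
    for i in range(steps)
)
work_exact = n * R * T * math.log(V2 / V1)
print(f"numerical area under curve: {work_numeric:.2f} J")
print(f"analytic nRT·ln(V2/V1):     {work_exact:.2f} J")
```

The two numbers agree to within the discretization error, which is the process-variable point in miniature: the work is a property of the path, computed by integrating along it, not a property of the end states alone.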
Conjugate variable processes
It is often useful to group processes into pairs, in which each variable held constant is one member of a conjugate pair.
Pressure – volume
The pressure–volume conjugate pair is concerned with the transfer of mechanical energy as the result of work.
An isobaric process occurs at constant pressure. An example would be to have a movable piston in a cylinder, so that the pressure inside the cylinder is always at atmospheric pressure, although it is separated from the atmosphere. In other words, the system is dynamically connected, by a movable boundary, to a constant-pressure reservoir.
An isochoric process is one in which the volume is held constant, with the result that the mechanical PV work done by the system will be zero. On the other hand, work can be done isochorically on the system, for example by a shaft that drives a rotary paddle located inside the system. It follows that, for the simple system of one deformation variable, any heat energy transferred to the system externally will be absorbed as internal energy. An isochoric process is also known as an isometric process or an isovolumetric process. An example would be to place a closed tin can of material into a fire. To a first approximation, the can will not expand, and the only change will be that the contents gain internal energy, evidenced by increase in temperature and pressure. Mathematically, δQ = dU. The system is dynamically insulated, by a rigid boundary, from the environment.
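A short, hedged example may help contrast the two members of this conjugate pair. The Python sketch below applies the first law to an assumed monatomic ideal gas (Cv = 3/2·R) heated through the same temperature interval once at constant pressure and once at constant volume; all quantities and names are illustrative, not taken from the article.

```python
# Illustrative sketch of the two P-V conjugate processes for one mole of a
# monatomic ideal gas: isobaric heating does P·ΔV of work, while isochoric
# heating does no P-V work and stores all heat as internal energy.

R = 8.314          # J/(mol·K)
n = 1.0            # mol (assumed)
Cv = 1.5 * R       # monatomic ideal gas (assumption)
Cp = Cv + R

# Isobaric: heat the gas from 300 K to 400 K at P = 1e5 Pa.
T1, T2, P = 300.0, 400.0, 1.0e5
work_isobaric = P * (n * R * T2 / P - n * R * T1 / P)   # = n·R·ΔT
heat_isobaric = n * Cp * (T2 - T1)

# Isochoric: same temperature change at fixed volume.
work_isochoric = 0.0
heat_isochoric = n * Cv * (T2 - T1)

print(f"isobaric : W = {work_isobaric:7.1f} J, Q = {heat_isobaric:7.1f} J")
print(f"isochoric: W = {work_isochoric:7.1f} J, Q = {heat_isochoric:7.1f} J")
```

In both cases the internal energy rises by n·Cv·ΔT; the isobaric case needs more heat because part of it leaves again as expansion work.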
Temperature – entropy
The temperature-entropy conjugate pair is concerned with the transfer of energy, especially for a closed system.
An isothermal process occurs at a constant temperature. An example would be a closed system immersed in and thermally connected with a large constant-temperature bath. Energy gained by the system, through work done on it, is lost to the bath, so that its temperature remains constant.
An adiabatic process is a process in which there is no matter or heat transfer, because a thermally insulating wall separates the system from its surroundings. For the process to be natural, either (a) work must be done on the system at a finite rate, so that the internal energy of the system increases; the entropy of the system increases even though it is thermally insulated; or (b) the system must do work on the surroundings, which then suffer increase of entropy, as well as gaining energy from the system.
An isentropic process is customarily defined as an idealized quasi-static reversible adiabatic process, of transfer of energy as work. Otherwise, for a constant-entropy process, if work is done irreversibly, heat transfer is necessary, so that the process is not adiabatic, and an accurate artificial control mechanism is necessary; such is therefore not an ordinary natural thermodynamic process.
Chemical potential - particle number
The processes just above have assumed that the boundaries are also impermeable to particles. Otherwise, we may assume boundaries that are rigid, but are permeable to one or more types of particle. Similar considerations then hold for the chemical potential–particle number conjugate pair, which is concerned with the transfer of energy via this transfer of particles.
In a constant chemical potential process the system is particle-transfer connected, by a particle-permeable boundary, to a constant-μ reservoir.
The conjugate here is a constant particle number process. These are the processes outlined just above. There is no energy added or subtracted from the system by particle transfer. The system is particle-transfer-insulated from its environment by a boundary that is impermeable to particles, but permissive of transfers of energy as work or heat. These processes are the ones by which thermodynamic work and heat are defined, and for them, the system is said to be closed.
Thermodynamic potentials
Any of the thermodynamic potentials may be held constant during a process. For example:
An isenthalpic process introduces no change in enthalpy in the system.
Polytropic processes
A polytropic process is a thermodynamic process that obeys the relation:
P V^n = C
where P is the pressure, V is volume, n is any real number (the "polytropic index"), and C is a constant. This equation can be used to accurately characterize processes of certain systems, notably the compression or expansion of a gas, but in some cases, liquids and solids.
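As a hedged illustration (not from the source), here is a minimal Python sketch of a polytropic path: given an initial state (p1, v1) and a polytropic index n, it evaluates the pressure along P V^n = C and the boundary work, with the n = 1 case handled separately. The state values and function names are invented for the example.

```python
import math

def polytropic_pressure(p1, v1, v, n):
    """Pressure at volume v on the polytropic path P * V**n = C fixed by the state (p1, v1)."""
    c = p1 * v1**n
    return c / v**n

def polytropic_work(p1, v1, v2, n):
    """Boundary work W = integral of P dV along the polytropic path from v1 to v2."""
    if abs(n - 1.0) < 1e-12:
        return p1 * v1 * math.log(v2 / v1)      # limiting case n = 1
    p2 = polytropic_pressure(p1, v1, v2, n)
    return (p2 * v2 - p1 * v1) / (1.0 - n)

# Hypothetical expansion of a gas from 100 kPa, 0.01 m^3 to 0.02 m^3
print(polytropic_work(100e3, 0.01, 0.02, n=1.4))   # about 605 J
print(polytropic_work(100e3, 0.01, 0.02, n=1.0))   # about 693 J
```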
Processes classified by the second law of thermodynamics
According to Planck, one may think of three main classes of thermodynamic process: natural, fictively reversible, and impossible or unnatural.
Natural process
Only natural processes occur in nature. For thermodynamics, a natural process is a transfer between systems that increases the sum of their entropies, and is irreversible. Natural processes may occur spontaneously upon the removal of a constraint, or upon some other thermodynamic operation, or may be triggered in a metastable or unstable system, as for example in the condensation of a supersaturated vapour. Planck emphasised the occurrence of friction as an important characteristic of natural thermodynamic processes that involve transfer of matter or energy between system and surroundings.
Fictively reversible process
To describe the geometry of graphical surfaces that illustrate equilibrium relations between thermodynamic functions of state, one can fictively think of so-called "reversible processes". They are convenient theoretical objects that trace paths across graphical surfaces. They are called "processes" but do not describe naturally occurring processes, which are always irreversible. Because the points on the paths are points of thermodynamic equilibrium, it is customary to think of the "processes" described by the paths as fictively "reversible". Reversible processes are always quasistatic processes, but the converse is not always true.
Unnatural process
Unnatural processes are logically conceivable but do not occur in nature. They would decrease the sum of the entropies if they occurred.
Quasistatic process
A quasistatic process is an idealized or fictive model of a thermodynamic "process" considered in theoretical studies. It does not occur in physical reality. It may be imagined as happening infinitely slowly so that the system passes through a continuum of states that are infinitesimally close to equilibrium.
See also
Flow process
Heat
Phase transition
Work (thermodynamics)
References
Further reading
Physics for Scientists and Engineers - with Modern Physics (6th Edition), P. A. Tipler, G. Mosca, Freeman, 2008
Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991 (Verlagsgesellschaft; VHC Inc.)
McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994
Physics with Modern Applications, L.H. Greenberg, Holt-Saunders International W.B. Saunders and Co, 1978
Essential Principles of Physics, P.M. Whelan, M.J. Hodgeson, 2nd Edition, 1978, John Murray
Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009
Chemical Thermodynamics, D.J.G. Ives, University Chemistry, Macdonald Technical and Scientific, 1971
Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974
Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008
Equilibrium chemistry
Thermodynamic cycles
Thermodynamic systems
Thermodynamics
Transport phenomena
In engineering, physics, and chemistry, the study of transport phenomena concerns the exchange of mass, energy, charge, momentum and angular momentum between observed and studied systems. While it draws from fields as diverse as continuum mechanics and thermodynamics, it places a heavy emphasis on the commonalities between the topics covered. Mass, momentum, and heat transport all share a very similar mathematical framework, and the parallels between them are exploited in the study of transport phenomena to draw deep mathematical connections that often provide very useful tools in the analysis of one field that are directly derived from the others.
The fundamental analysis in all three subfields of mass, heat, and momentum transfer is often grounded in the simple principle that the total sum of the quantities being studied must be conserved by the system and its environment. Thus, the different phenomena that lead to transport are each considered individually, with the knowledge that the sum of their contributions must equal zero. This principle is useful for calculating many relevant quantities. For example, in fluid mechanics, a common use of transport analysis is to determine the velocity profile of a fluid flowing through a rigid volume.
Transport phenomena are ubiquitous throughout the engineering disciplines. Some of the most common examples of transport analysis in engineering are seen in the fields of process, chemical, biological, and mechanical engineering, but the subject is a fundamental component of the curriculum in all disciplines involved in any way with fluid mechanics, heat transfer, and mass transfer. It is now considered to be a part of the engineering discipline as much as thermodynamics, mechanics, and electromagnetism.
Transport phenomena encompass all agents of physical change in the universe. Moreover, they are considered to be fundamental building blocks of the development of the universe, and to be responsible for the success of all life on Earth. However, the scope here is limited to the relationship of transport phenomena to artificial engineered systems.
Overview
In physics, transport phenomena are all irreversible processes of statistical nature stemming from the random continuous motion of molecules, mostly observed in fluids. Every aspect of transport phenomena is grounded in two primary concepts: the conservation laws, and the constitutive equations. The conservation laws, which in the context of transport phenomena are formulated as continuity equations, describe how the quantity being studied must be conserved. The constitutive equations describe how the quantity in question responds to various stimuli via transport. Prominent examples include Fourier's law of heat conduction and the Navier–Stokes equations, which describe, respectively, the response of heat flux to temperature gradients and the relationship between fluid flux and the forces applied to the fluid. These equations also demonstrate the deep connection between transport phenomena and thermodynamics, a connection that explains why transport phenomena are irreversible. Almost all of these physical phenomena ultimately involve systems seeking their lowest energy state in keeping with the principle of minimum energy. As they approach this state, they tend to achieve true thermodynamic equilibrium, at which point there are no longer any driving forces in the system and transport ceases. The various aspects of such equilibrium are directly connected to a specific transport: heat transfer is the system's attempt to achieve thermal equilibrium with its environment, just as mass and momentum transport move the system towards chemical and mechanical equilibrium.
Examples of transport processes include heat conduction (energy transfer), fluid flow (momentum transfer), molecular diffusion (mass transfer), radiation and electric charge transfer in semiconductors.
Transport phenomena have wide application. For example, in solid state physics, the motion and interaction of electrons, holes and phonons are studied under "transport phenomena". Another example is in biomedical engineering, where some transport phenomena of interest are thermoregulation, perfusion, and microfluidics. In chemical engineering, transport phenomena are studied in reactor design, analysis of molecular or diffusive transport mechanisms, and metallurgy.
The transport of mass, energy, and momentum can be affected by the presence of external sources:
An odor dissipates more slowly (and may intensify) when the source of the odor remains present.
The rate of cooling of a solid that is conducting heat depends on whether a heat source is applied.
The gravitational force acting on a rain drop counteracts the resistance or drag imparted by the surrounding air.
Commonalities among phenomena
An important principle in the study of transport phenomena is analogy between phenomena.
Diffusion
There are some notable similarities in equations for momentum, energy, and mass transfer which can all be transported by diffusion, as illustrated by the following examples:
Mass: the spreading and dissipation of odors in air is an example of mass diffusion.
Energy: the conduction of heat in a solid material is an example of heat diffusion.
Momentum: the drag experienced by a rain drop as it falls in the atmosphere is an example of momentum diffusion (the rain drop loses momentum to the surrounding air through viscous stresses and decelerates).
The molecular transfer equations of Newton's law for fluid momentum, Fourier's law for heat, and Fick's law for mass are very similar. One can convert from one transport coefficient to another in order to compare all three different transport phenomena.
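To make the parallel concrete, the sketch below (all property values and gradients are illustrative, not from the source) evaluates each flux with the same gradient-law form, flux = -(diffusivity) × (gradient of the transported quantity), for momentum, heat, and mass.

```python
def diffusive_flux(diffusivity, gradient):
    """Generic gradient-transport law: flux = -diffusivity * d(quantity)/dz."""
    return -diffusivity * gradient

# Rough water-like properties, for illustration only
rho, cp = 1000.0, 4180.0    # density kg/m^3, specific heat J/(kg K)
nu    = 1.0e-6              # momentum diffusivity mu/rho, m^2/s
alpha = 1.4e-7              # thermal diffusivity k/(rho cp), m^2/s
D_AB  = 1.0e-9              # mass diffusivity of a dilute solute, m^2/s

tau_zx = diffusive_flux(nu,    rho * 0.5 / 0.01)         # d(rho v_x)/dz: 0.5 m/s change over 1 cm -> Pa
q_z    = diffusive_flux(alpha, rho * cp * 10.0 / 0.01)   # d(rho cp T)/dz: 10 K over 1 cm -> W/m^2
j_Az   = diffusive_flux(D_AB,  50.0 / 0.01)              # dc_A/dz: 50 mol/m^3 over 1 cm -> mol/(m^2 s)

print(tau_zx, q_z, j_Az)
```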
A great deal of effort has been devoted in the literature to developing analogies among these three transport processes for turbulent transfer so as to allow prediction of one from any of the others. The Reynolds analogy assumes that the turbulent diffusivities are all equal and that the molecular diffusivities of momentum (μ/ρ) and mass (D_AB) are negligible compared to the turbulent diffusivities. When liquids are present and/or drag is present, the analogy is not valid. Other analogies, such as von Karman's and Prandtl's, usually result in poor relations.
The most successful and most widely used analogy is the Chilton and Colburn J-factor analogy. This analogy is based on experimental data for gases and liquids in both the laminar and turbulent regimes. Although it is based on experimental data, it can be shown to satisfy the exact solution derived from laminar flow over a flat plate. All of this information is used to predict transfer of mass.
Onsager reciprocal relations
In fluid systems described in terms of temperature, matter density, and pressure, it is known that temperature differences lead to heat flows from the warmer to the colder parts of the system; similarly, pressure differences will lead to matter flow from high-pressure to low-pressure regions. What is remarkable is the observation that, when both pressure and temperature vary, temperature differences at constant pressure can cause matter flow (as in convection) and pressure differences at constant temperature can cause heat flow. The heat flow per unit of pressure difference and the density (matter) flow per unit of temperature difference are equal; this equality of the cross coefficients is the "reciprocal relation".
This equality was shown to be necessary by Lars Onsager using statistical mechanics as a consequence of the time reversibility of microscopic dynamics. The theory developed by Onsager is much more general than this example and capable of treating more than two thermodynamic forces at once.
Momentum transfer
In momentum transfer, the fluid is treated as a continuous distribution of matter. The study of momentum transfer, or fluid mechanics can be divided into two branches: fluid statics (fluids at rest), and fluid dynamics (fluids in motion).
When a fluid is flowing in the x-direction parallel to a solid surface, the fluid has x-directed momentum, and its concentration is υxρ. By random diffusion of molecules there is an exchange of molecules in the z-direction. Hence the x-directed momentum has been transferred in the z-direction from the faster- to the slower-moving layer.
The equation for momentum transfer is Newton's law of viscosity, written as follows:
τ_zx = -ν d(ρv_x)/dz
where τ_zx is the flux of x-directed momentum in the z-direction, ν is μ/ρ, the momentum diffusivity, z is the distance of transport or diffusion, ρ is the density, and μ is the dynamic viscosity. Newton's law of viscosity is the simplest relationship between the flux of momentum and the velocity gradient. It may be useful to note that this is an unconventional use of the symbol τ_zx; the indices are reversed as compared with standard usage in solid mechanics, and the sign is reversed.
Mass transfer
When a system contains two or more components whose concentrations vary from point to point, there is a natural tendency for mass to be transferred, minimizing any concentration difference within the system. Mass transfer in a system is governed by Fick's first law: 'Diffusion flux from higher concentration to lower concentration is proportional to the gradient of the concentration of the substance and the diffusivity of the substance in the medium.' Mass transfer can take place due to different driving forces. Some of them are:
Mass can be transferred by the action of a pressure gradient (pressure diffusion)
Forced diffusion occurs because of the action of some external force
Diffusion can be caused by temperature gradients (thermal diffusion)
Diffusion can be caused by differences in chemical potential
This can be compared to Fick's law of diffusion, for a species A in a binary mixture consisting of A and B:
J_A = -D_AB dC_A/dz
where D_AB is the diffusivity constant of A in B and C_A is the concentration of species A.
Heat transfer
Many important engineered systems involve heat transfer. Some examples are the heating and cooling of process streams, phase changes, distillation, etc. The basic principle is Fourier's law, which is expressed as follows for a static system:
q = -k dT/dx
The net flux of heat through a system equals the conductivity, k, times the rate of change of temperature with respect to position.
For convective transport involving turbulent flow, complex geometries, or difficult boundary conditions, the heat transfer may be represented by a heat transfer coefficient:
Q = h A ΔT
where A is the surface area, ΔT is the temperature driving force, Q is the heat flow per unit time, and h is the heat transfer coefficient.
Within heat transfer, two principal types of convection can occur:
Forced convection can occur in both laminar and turbulent flow. In the situation of laminar flow in circular tubes, several dimensionless numbers are used such as Nusselt number, Reynolds number, and Prandtl number. The commonly used equation correlates the Nusselt number as a function of the other two, Nu = f(Re, Pr); a worked sketch is given at the end of this subsection.
Natural or free convection is a function of Grashof and Prandtl numbers. The complexities of free convection heat transfer make it necessary to mainly use empirical relations from experimental data.
Heat transfer is analyzed in packed beds, nuclear reactors and heat exchangers.
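As a sketch of how such correlations are used in practice, the snippet below assumes the Dittus–Boelter correlation for turbulent pipe flow (the text above does not single out a specific correlation, and all numbers are illustrative): it obtains a Nusselt number, converts it to a heat transfer coefficient, and then evaluates Q = h A ΔT.

```python
def dittus_boelter_nu(re, pr, heating=True):
    """Nu = 0.023 Re^0.8 Pr^n, with n = 0.4 for heating and 0.3 for cooling (turbulent pipe flow)."""
    n = 0.4 if heating else 0.3
    return 0.023 * re**0.8 * pr**n

# Illustrative water flow in a 25 mm tube
k, d = 0.6, 0.025                 # thermal conductivity W/(m K), tube diameter m
re, pr = 50_000, 5.0
nu = dittus_boelter_nu(re, pr)
h = nu * k / d                    # heat transfer coefficient, W/(m^2 K)

area, dT = 0.5, 20.0              # heat transfer area m^2, driving temperature difference K
Q = h * area * dT                 # heat flow in W, as in Q = h A (delta T)
print(round(h), round(Q))
```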
Heat and mass transfer analogy
The heat and mass analogy allows solutions for mass transfer problems to be obtained from known solutions to heat transfer problems. It arises from similar non-dimensional governing equations for heat and mass transfer.
Derivation
The non-dimensional energy equation for fluid flow in a boundary layer can simplify to the following, when heating from viscous dissipation and heat generation can be neglected:
u* ∂T*/∂x* + v* ∂T*/∂y* = (1 / (Re Pr)) ∂²T*/∂y*²
where u* and v* are the velocities in the x and y directions respectively, normalized by the free stream velocity, x* and y* are the x and y coordinates non-dimensionalized by a relevant length scale, Re is the Reynolds number, Pr is the Prandtl number, and T* is the non-dimensional temperature, which is defined by the local, minimum, and maximum temperatures:
T* = (T - T_min) / (T_max - T_min)
The non-dimensional species transport equation for fluid flow in a boundary layer can be given as the following, assuming no bulk species generation:
u* ∂C*/∂x* + v* ∂C*/∂y* = (1 / (Re Sc)) ∂²C*/∂y*²
where C* is the non-dimensional concentration, and Sc is the Schmidt number.
Transport of heat is driven by temperature differences, while transport of species is due to concentration differences. The two cases differ in how the diffusion of the transported quantity compares with the diffusion of momentum. For heat, the comparison is between momentum diffusivity (kinematic viscosity) and thermal diffusivity, given by the Prandtl number. For mass transfer, the comparison is between momentum diffusivity and mass diffusivity, given by the Schmidt number.
In some cases direct analytic solutions can be found from these equations for the Nusselt and Sherwood numbers. In cases where experimental results are used, one can assume these equations underlie the observed transport.
At an interface, the boundary conditions for both equations are also similar. For heat transfer at an interface, the no-slip condition allows us to equate conduction with convection, thus equating Fourier's law and Newton's law of cooling:
q" = -k_f (∂T/∂y)|y=0 = h (T_s - T_∞)
where q" is the heat flux, k_f is the thermal conductivity, h is the heat transfer coefficient, and the subscripts s and ∞ denote the surface and bulk values respectively.
For mass transfer at an interface, we can equate Fick's law with Newton's law for convection, yielding:
m"_a = -D_ab (∂C_a/∂y)|y=0 = h_m (C_a,s - C_a,∞)
where m"_a is the mass flux [kg/(s·m²)], D_ab is the diffusivity of species a in fluid b, and h_m is the mass transfer coefficient. As we can see, q" and m"_a are analogous, k_f and D_ab are analogous, while h and h_m are analogous.
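The interface balance above can be sketched numerically: the coefficients h and h_m are estimated from first-order finite-difference approximations of the wall gradients of temperature and concentration. The profile values below are invented for illustration, and the helper function is not from the source.

```python
def wall_coefficient(diffusive_property, value_at_wall, value_near_wall, dy, value_bulk):
    """Coefficient from equating diffusion at the wall with convection:
    h   = -k_f  * (dT/dy)|_0   / (T_s   - T_inf)
    h_m = -D_ab * (dC_a/dy)|_0 / (C_a,s - C_a,inf)
    using a first-order finite-difference wall gradient."""
    gradient = (value_near_wall - value_at_wall) / dy
    return -diffusive_property * gradient / (value_at_wall - value_bulk)

# Illustrative numbers for warm air over a moist surface
k_f, D_ab = 0.026, 2.5e-5                                  # W/(m K), m^2/s
h   = wall_coefficient(k_f,  330.0, 329.0, 1e-3, 300.0)    # temperatures in K
h_m = wall_coefficient(D_ab, 1.2,   1.15,  1e-3, 0.4)      # vapour concentrations in kg/m^3
print(h, h_m)   # W/(m^2 K) and m/s
```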
Implementing the Analogy
Heat-Mass Analogy:
Because the Nu and Sh equations are derived from these analogous governing equations, one can directly swap the Nu and Sh and the Pr and Sc numbers to convert these equations between mass and heat.
In many situations, such as flow over a flat plate, the Nu and Sh numbers are functions of the Pr and Sc numbers raised to some exponent n. Therefore, one can directly calculate these numbers from one another using:
Nu / Sh = (Pr / Sc)^n
where n = 1/3 can be used in most cases, a value that comes from the analytical solution for the Nusselt number for laminar flow over a flat plate. For best accuracy, n should be adjusted where correlations have a different exponent.
We can take this further by substituting into this equation the definitions of the heat transfer coefficient, mass transfer coefficient, and Lewis number, yielding:
h / h_m = ρ c_p Le^(1 - n)
For fully developed turbulent flow, with n=1/3, this becomes the Chilton–Colburn J-factor analogy. Said analogy also relates viscous forces and heat transfer, like the Reynolds analogy.
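A minimal sketch of these swaps, assuming a power-law correlation with exponent n (n = 1/3 unless a particular correlation dictates otherwise); the property values are illustrative figures for humid air, not data from the source.

```python
def sherwood_from_nusselt(nu, pr, sc, n=1/3):
    """Heat-mass analogy: Nu / Sh = (Pr / Sc)^n  =>  Sh = Nu * (Sc / Pr)^n."""
    return nu * (sc / pr)**n

def mass_coeff_from_heat_coeff(h, rho, cp, lewis, n=1/3):
    """h / h_m = rho * cp * Le^(1 - n)  =>  h_m = h / (rho * cp * Le^(1 - n))."""
    return h / (rho * cp * lewis**(1 - n))

nu, pr, sc = 100.0, 0.71, 0.60          # illustrative dimensionless groups
sh = sherwood_from_nusselt(nu, pr, sc)

rho, cp = 1.2, 1005.0                   # air: kg/m^3, J/(kg K)
lewis = 0.85                            # approximate Le for water vapour in air
h_m = mass_coeff_from_heat_coeff(25.0, rho, cp, lewis)    # from h = 25 W/(m^2 K)
print(sh, h_m)                          # Sherwood number, mass transfer coefficient in m/s
```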
Limitations
The analogy between heat transfer and mass transfer is strictly limited to binary diffusion in dilute (ideal) solutions for which the mass transfer rates are low enough that mass transfer has no effect on the velocity field. The concentration of the diffusing species must be low enough that the chemical potential gradient is accurately represented by the concentration gradient (thus, the analogy has limited application to concentrated liquid solutions). When the rate of mass transfer is high or the concentration of the diffusing species is not low, corrections to the low-rate heat transfer coefficient can sometimes help. Further, in multicomponent mixtures, the transport of one species is affected by the chemical potential gradients of other species.
The heat and mass analogy may also break down in cases where the governing equations differ substantially. For instance, situations with substantial contributions from generation terms in the flow, such as bulk heat generation or bulk chemical reactions, may cause solutions to diverge.
Applications of the Heat-Mass Analogy
The analogy is useful both for using heat and mass transport to predict one another, and for understanding systems which experience simultaneous heat and mass transfer. For example, predicting heat transfer coefficients around turbine blades is challenging and is often done by measuring the evaporation of a volatile compound and using the analogy. Many systems also experience simultaneous mass and heat transfer; particularly common examples occur in processes with phase change, as the enthalpy of phase change often substantially influences heat transfer. Such examples include evaporation at a water surface, transport of vapor in the air gap above a membrane distillation desalination membrane, and HVAC dehumidification equipment that combines heat transfer and selective membranes.
Applications
Pollution
The study of transport processes is relevant for understanding the release and distribution of pollutants into the environment. In particular, accurate modeling can inform mitigation strategies. Examples include the control of surface water pollution from urban runoff, and policies intended to reduce the copper content of vehicle brake pads in the U.S.
See also
Constitutive equation
Continuity equation
Wave propagation
Pulse
Action potential
Bioheat transfer
References
External links
Transport Phenomena Archive in the Teaching Archives of the Materials Digital Library Pathway
Chemical engineering
Point mutation
A point mutation is a genetic mutation where a single nucleotide base is changed, inserted or deleted from a DNA or RNA sequence of an organism's genome. Point mutations have a variety of effects on the downstream protein product—consequences that are moderately predictable based upon the specifics of the mutation. These consequences can range from no effect (e.g. synonymous mutations) to deleterious effects (e.g. frameshift mutations), with regard to protein production, composition, and function.
Causes
Point mutations usually take place during DNA replication. DNA replication occurs when one double-stranded DNA molecule creates two single strands of DNA, each of which is a template for the creation of the complementary strand. A single point mutation changes only one base in the DNA sequence, yet changing one purine or pyrimidine may change the amino acid that the nucleotides code for.
Point mutations may arise from spontaneous mutations that occur during DNA replication. The rate of mutation may be increased by mutagens. Mutagens can be physical, such as radiation from UV rays, X-rays or extreme heat, or chemical (molecules that misplace base pairs or disrupt the helical shape of DNA). Mutagens associated with cancers are often studied to learn about cancer and its prevention.
There are multiple ways for point mutations to occur. First, ultraviolet (UV) light and higher-frequency light have ionizing capability, which in turn can affect DNA. Second, reactive oxygen molecules with free radicals, which are a byproduct of cellular metabolism, can be very harmful to DNA; these reactants can lead to both single-stranded and double-stranded DNA breaks. Third, bonds in DNA eventually degrade, which threatens the integrity of the DNA. Finally, there can also be replication errors that lead to substitution, insertion, or deletion mutations.
Categorization
Transition/transversion categorization
In 1959 Ernst Freese coined the terms "transitions" and "transversions" to categorize different types of point mutations. Transitions are the replacement of a purine base with another purine or of a pyrimidine with another pyrimidine. Transversions are the replacement of a purine with a pyrimidine or vice versa. There is a systematic difference in mutation rates for transitions (alpha) and transversions (beta): transition mutations are about ten times more common than transversions.
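As an illustration of this categorization (a small helper of my own, not from the source), a substitution can be labelled programmatically by checking whether both bases belong to the same chemical class:

```python
PURINES = {"A", "G"}
PYRIMIDINES = {"C", "T"}

def classify_substitution(ref: str, alt: str) -> str:
    """Label a single-base substitution as a transition or a transversion."""
    ref, alt = ref.upper(), alt.upper()
    if ref == alt or {ref, alt} - (PURINES | PYRIMIDINES):
        raise ValueError("expected two different bases from A, C, G, T")
    same_class = ({ref, alt} <= PURINES) or ({ref, alt} <= PYRIMIDINES)
    return "transition" if same_class else "transversion"

print(classify_substitution("A", "G"))   # transition (purine -> purine)
print(classify_substitution("G", "T"))   # transversion (purine -> pyrimidine)
```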
Functional categorization
Nonsense mutations include stop-gain and start-loss. Stop-gain is a mutation that results in a premature termination codon (a stop was gained), which signals the end of translation. This interruption causes the protein to be abnormally shortened. The number of amino acids lost mediates the impact on the protein's functionality and whether it will function whatsoever. Stop-loss is a mutation in the original termination codon (a stop was lost), resulting in abnormal extension of a protein's carboxyl terminus. Start-gain creates an AUG start codon upstream of the original start site. If the new AUG is near the original start site, in-frame within the processed transcript and downstream to a ribosomal binding site, it can be used to initiate translation. The likely effect is additional amino acids added to the amino terminus of the original protein. Frame-shift mutations are also possible in start-gain mutations, but typically do not affect translation of the original protein. Start-loss is a point mutation in a transcript's AUG start codon, resulting in the reduction or elimination of protein production.
Missense mutations code for a different amino acid. A missense mutation changes a codon so that a different protein is created, a non-synonymous change. Conservative mutations result in an amino acid change. However, the properties of the amino acid remain the same (e.g., hydrophobic, hydrophilic, etc.). At times, a change to one amino acid in the protein is not detrimental to the organism as a whole. Most proteins can withstand one or two point mutations before their function changes. Non-conservative mutations result in an amino acid change that has different properties than the wild type. The protein may lose its function, which can result in a disease in the organism. For example, sickle-cell disease is caused by a single point mutation (a missense mutation) in the beta-hemoglobin gene that converts a GAG codon into GUG, which encodes the amino acid valine rather than glutamic acid. The protein may also exhibit a "gain of function" or become activated, such is the case with the mutation changing a valine to glutamic acid in the BRAF gene; this leads to an activation of the RAF protein which causes unlimited proliferative signalling in cancer cells. These are both examples of a non-conservative (missense) mutation.
Silent mutations code for the same amino acid (a "synonymous substitution"). A silent mutation does not affect the functioning of the protein. A single nucleotide can change, but the new codon specifies the same amino acid, resulting in an unmutated protein. This type of change is called synonymous change since the old and new codon code for the same amino acid. This is possible because 64 codons specify only 20 amino acids. Different codons can lead to differential protein expression levels, however.
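A hedged sketch of how these functional categories can be told apart computationally: translate the original and mutated codons and compare the results. Only a handful of codon-table entries are included here for brevity; a real implementation would use the full standard genetic code.

```python
# Partial standard genetic code (mRNA codons), just enough for the examples below
CODON_TABLE = {
    "GAG": "Glu", "GAA": "Glu", "GUG": "Val",
    "GGU": "Gly", "GGC": "Gly",
    "UAU": "Tyr", "UAA": "STOP", "UAG": "STOP",
}

def classify_codon_change(old_codon: str, new_codon: str) -> str:
    """Classify a codon substitution as silent, missense, or nonsense."""
    old_aa = CODON_TABLE[old_codon]
    new_aa = CODON_TABLE[new_codon]
    if new_aa == "STOP" and old_aa != "STOP":
        return "nonsense (stop-gain)"
    if new_aa == old_aa:
        return "silent (synonymous)"
    return "missense (non-synonymous)"

print(classify_codon_change("GAG", "GUG"))   # missense: the sickle-cell Glu -> Val change
print(classify_codon_change("GGU", "GGC"))   # silent: both codons encode glycine
print(classify_codon_change("UAU", "UAA"))   # nonsense: Tyr -> premature stop
```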
Single base pair insertions and deletions
Sometimes the term point mutation is used to describe insertions or deletions of a single base pair (which has more of an adverse effect on the synthesized protein due to the nucleotides' still being read in triplets, but in different frames: a mutation called a frameshift mutation).
General consequences
Point mutations that occur in non-coding sequences are most often without consequences, although there are exceptions. If the mutated base pair is in the promoter sequence of a gene, then the expression of the gene may change. Also, if the mutation occurs in the splicing site of an intron, then this may interfere with correct splicing of the transcribed pre-mRNA.
By altering just one amino acid, the entire peptide may change, thereby changing the entire protein. The new protein is called a protein variant. If the original protein functions in cellular reproduction then this single point mutation can change the entire process of cellular reproduction for this organism.
Point germline mutations can lead to beneficial as well as harmful traits or diseases. This leads to adaptations based on the environment where the organism lives. An advantageous mutation can create an advantage for that organism and lead to the trait's being passed down from generation to generation, improving and benefiting the entire population. The scientific theory of evolution is greatly dependent on point mutations in cells. The theory explains the diversity and history of living organisms on Earth. In relation to point mutations, it states that beneficial mutations allow the organism to thrive and reproduce, thereby passing its positively affected mutated genes on to the next generation. On the other hand, harmful mutations cause the organism to die or be less likely to reproduce in a phenomenon known as natural selection.
There are different short-term and long-term effects that can arise from mutations. Smaller ones would be a halting of the cell cycle at numerous points. This means that a codon coding for the amino acid glycine may be changed to a stop codon, causing the proteins that should have been produced to be deformed and unable to complete their intended tasks. Because the mutations can affect the DNA and thus the chromatin, it can prohibit mitosis from occurring due to the lack of a complete chromosome. Problems can also arise during the processes of transcription and replication of DNA. These all prohibit the cell from reproduction and thus lead to the death of the cell. Long-term effects can be a permanent changing of a chromosome, which can lead to a mutation. These mutations can be either beneficial or detrimental. Cancer is an example of how they can be detrimental.
Other effects of point mutations, or single nucleotide polymorphisms in DNA, depend on the location of the mutation within the gene. For example, if the mutation occurs in the region of the gene responsible for coding, the amino acid sequence of the encoded protein may be altered, causing a change in the function, protein localization, stability of the protein or protein complex. Many methods have been proposed to predict the effects of missense mutations on proteins. Machine learning algorithms train their models to distinguish known disease-associated mutations from neutral ones, whereas other methods do not explicitly train their models; almost all methods, however, exploit evolutionary conservation, assuming that changes at conserved positions tend to be more deleterious. While the majority of methods provide a binary classification of the effects of mutations into damaging and benign, a new level of annotation is needed to offer an explanation of why and how these mutations damage proteins.
Moreover, if the mutation occurs in the region of the gene where transcriptional machinery binds to the protein, the mutation can affect the binding of the transcription factors because the short nucleotide sequences recognized by the transcription factors will be altered. Mutations in this region can affect rate of efficiency of gene transcription, which in turn can alter levels of mRNA and, thus, protein levels in general.
Point mutations can have several effects on the behavior and reproduction of a protein depending on where the mutation occurs in the amino acid sequence of the protein. If the mutation occurs in the region of the gene that is responsible for coding for the protein, the amino acid may be altered. This slight change in the sequence of amino acids can cause a change in the function, activation of the protein meaning how it binds with a given enzyme, where the protein will be located within the cell, or the amount of free energy stored within the protein.
If the mutation occurs in the region of the gene where transcriptional machinery binds to the protein, the mutation can affect the way in which transcription factors bind to the protein. The mechanisms of transcription bind to a protein through recognition of short nucleotide sequences. A mutation in this region may alter these sequences and, thus, change the way the transcription factors bind to the protein. Mutations in this region can affect the efficiency of gene transcription, which controls both the levels of mRNA and overall protein levels.
Specific diseases caused by point mutations
Cancer
Point mutations in multiple tumor suppressor proteins cause cancer. For instance, point mutations in Adenomatous Polyposis Coli promote tumorigenesis. A novel assay, Fast parallel proteolysis (FASTpp), might help swift screening of specific stability defects in individual cancer patients.
Neurofibromatosis
Neurofibromatosis is caused by point mutations in the Neurofibromin 1 or Neurofibromin 2 gene.
Sickle-cell anemia
Sickle-cell anemia is caused by a point mutation in the β-globin chain of hemoglobin, causing the hydrophilic amino acid glutamic acid to be replaced with the hydrophobic amino acid valine at the sixth position.
The β-globin gene is found on the short arm of chromosome 11. The association of two wild-type α-globin subunits with two mutant β-globin subunits forms hemoglobin S (HbS). Under low-oxygen conditions (being at high altitude, for example), the absence of a polar amino acid at position six of the β-globin chain promotes the non-covalent polymerisation (aggregation) of hemoglobin, which distorts red blood cells into a sickle shape and decreases their elasticity.
Hemoglobin is a protein found in red blood cells, and is responsible for the transportation of oxygen through the body. There are two subunits that make up the hemoglobin protein: beta-globins and alpha-globins.
Beta-hemoglobin is created from the genetic information on the HBB, or "hemoglobin, beta" gene found on chromosome 11p15.5. A single point mutation in this polypeptide chain, which is 147 amino acids long, results in the disease known as Sickle Cell Anemia.
Sickle-cell anemia is an autosomal recessive disorder that affects 1 in 500 African Americans, and is one of the most common blood disorders in the United States. The single replacement of the sixth amino acid in the beta-globin, glutamic acid, with valine results in deformed red blood cells. These sickle-shaped cells cannot carry nearly as much oxygen as normal red blood cells and they get caught more easily in the capillaries, cutting off blood supply to vital organs. The single nucleotide change in the beta-globin means that even the smallest of exertions on the part of the carrier results in severe pain and even heart attack.
Tay–Sachs disease
The cause of Tay–Sachs disease is a genetic defect that is passed from parent to child. This genetic defect is located in the HEXA gene, which is found on chromosome 15.
The HEXA gene makes part of an enzyme called beta-hexosaminidase A, which plays a critical role in the nervous system. This enzyme helps break down a fatty substance called GM2 ganglioside in nerve cells.
Mutations in the HEXA gene disrupt the activity of beta-hexosaminidase A, preventing the breakdown of the fatty substances. As a result, the fatty substances accumulate to deadly levels in the brain and spinal cord. The buildup of GM2 ganglioside causes progressive damage to the nerve cells. This is the cause of the signs and symptoms of Tay-Sachs disease.
Repeat-induced point mutation
In molecular biology, repeat-induced point mutation or RIP is a process by which DNA accumulates G:C to A:T transition mutations. Genomic evidence indicates that RIP occurs or has occurred in a variety of fungi while experimental evidence indicates that RIP is active in Neurospora crassa, Podospora anserina, Magnaporthe grisea, Leptosphaeria maculans, Gibberella zeae and Nectria haematococca. In Neurospora crassa, sequences mutated by RIP are often methylated de novo.
RIP occurs during the sexual stage in haploid nuclei after fertilization but prior to meiotic DNA replication. In Neurospora crassa, repeat sequences of at least 400 base pairs in length are vulnerable to RIP. Repeats with as low as 80% nucleotide identity may also be subject to RIP. Though the exact mechanisms of repeat recognition and mutagenesis are poorly understood, RIP results in repeated sequences undergoing multiple transition mutations.
The RIP mutations do not seem to be limited to repeated sequences. Indeed, for example, in the phytopathogenic fungus L. maculans, RIP mutations are found in single copy regions, adjacent to the repeated elements. These regions are either non-coding regions or genes encoding small secreted proteins including avirulence genes.
The degree of RIP within these single copy regions was proportional to their proximity to repetitive elements.
Rep and Kistler have speculated that the presence of highly repetitive regions containing transposons, may promote mutation of resident effector genes. So the presence of effector genes within such regions is suggested to promote their adaptation and diversification when exposed to strong selection pressure.
As RIP mutation is traditionally observed to be restricted to repetitive regions and not single copy regions, Fudal et al. suggested that leakage of RIP mutation might occur within a relatively short distance of a RIP-affected repeat. Indeed, this has been reported in N. crassa whereby leakage of RIP was detected in single copy sequences at least 930 bp from the boundary of neighbouring duplicated sequences.
Elucidating the mechanism by which repeated sequences are detected and targeted by RIP may help explain how the flanking sequences are also affected.
Mechanism
RIP causes G:C to A:T transition mutations within repeats; however, the mechanism that detects the repeated sequences is unknown. RID is the only known protein essential for RIP. It is a DNA methyltransferase-like protein that, when mutated or knocked out, results in loss of RIP. Deletion of the rid homolog in Aspergillus nidulans, dmtA, results in loss of fertility, while deletion of the rid homolog in Ascobolus immersens, masc1, results in fertility defects and loss of methylation induced premeiotically (MIP).
Consequences
RIP is believed to have evolved as a defense mechanism against transposable elements, which resemble parasites by invading and multiplying within the genome.
RIP creates multiple missense and nonsense mutations in the coding sequence. This hypermutation of G-C to A-T in repetitive sequences eliminates functional gene products of the sequence (if there were any to begin with). In addition, many of the C-bearing nucleotides become methylated, thus decreasing transcription.
Use in molecular biology
Because RIP is so efficient at detecting and mutating repeats, fungal biologists often use it as a tool for mutagenesis. A second copy of a single-copy gene is first transformed into the genome. The fungus must then mate and go through its sexual cycle to activate the RIP machinery. Many different mutations within the duplicated gene are obtained from even a single fertilization event so that inactivated alleles, usually due to nonsense mutations, as well as alleles containing missense mutations can be obtained.
History
The cellular reproduction process of meiosis was discovered by Oscar Hertwig in 1876. Mitosis was discovered several years later in 1882 by Walther Flemming.
Hertwig studied sea urchins, and noticed that each egg contained one nucleus prior to fertilization and two nuclei after. This discovery proved that one spermatozoon could fertilize an egg, and therefore proved the process of meiosis. Hermann Fol continued Hertwig's research by testing the effects of injecting several spermatozoa into an egg, and found that the process did not work with more than one spermatozoon.
Flemming began his research of cell division starting in 1868. The study of cells was an increasingly popular topic in this time period. By 1873, Schneider had already begun to describe the steps of cell division. Flemming furthered this description in 1874 and 1875 as he explained the steps in more detail. He also argued with Schneider's findings that the nucleus separated into rod-like structures by suggesting that the nucleus actually separated into threads that in turn separated. Flemming concluded that cells replicate through cell division, to be more specific mitosis.
Matthew Meselson and Franklin Stahl are credited with the discovery of DNA replication. Watson and Crick acknowledged that the structure of DNA did indicate that there is some form of replicating process. However, there was not a lot of research done on this aspect of DNA until after Watson and Crick. People considered all possible methods of determining the replication process of DNA, but none were successful until Meselson and Stahl. Meselson and Stahl introduced a heavy isotope into some DNA and traced its distribution. Through this experiment, Meselson and Stahl were able to prove that DNA reproduces semi-conservatively.
See also
Missense mRNA
PAM matrix
References
External links
Modification of genetic information
Mutation
Molecular biology
Chemotaxonomy
Merriam-Webster defines chemotaxonomy as the method of biological classification based on similarities and dissimilarities in the structure of certain compounds among the organisms being classified. Advocates argue that, as proteins are more closely controlled by genes and less subjected to natural selection than are anatomical features, they are more reliable indicators of genetic relationships. The compounds studied most are proteins, amino acids, nucleic acids, peptides, etc.
Physiology is the study of the working of organs in a living being. Since the working of the organs involves the body's chemicals, these compounds are called biochemical evidence. The study of morphological change has shown that there are changes in the structure of animals which result in evolution. When changes take place in the structure of a living organism, they will naturally be accompanied by changes in the physiological or biochemical processes.
John Griffith Vaughan and Victor Plouvier were among the pioneers of chemotaxonomy.
Biochemical products
The body of any animal in the animal kingdom is made up of a number of chemicals. Of these, only a few biochemical products have been taken into consideration to derive evidence for evolution.
Protoplasm: Every living cell, from a bacterium to an elephant, from grasses to the blue whale, has protoplasm. Though the complexity and constituents of the protoplasm increases from lower to higher living organism, the basic compound is always the protoplasm. Evolutionary significance: From this evidence, it is clear that all living things have a common origin point or a common ancestor, which in turn had protoplasm. Its complexity increased due to changes in the mode of life and habitat.
Nucleic acids: DNA and RNA are the two types of nucleic acids present in all living organisms. They are present in the chromosomes. The structure of these acids has been found to be similar in all animals. DNA always has two chains forming a double helix, and each chain is made up of nucleotides. Each nucleotide has a pentose sugar, a phosphate group, and nitrogenous bases like adenine, guanine, cytosine, and thymine. RNA contains uracil instead of thymine. It has been proved in the laboratory that a single strand of DNA of one species can match with the other strand from another species. If the alleles of the strands of any two species are close, then it can be concluded that these two species are more closely related.
Digestive enzymes are chemical compounds that help in digestion. Proteins are always digested by a particular type of enzymes like pepsin, trypsin, etc., in all animals from a single celled amoeba to a human being. The complexity in the composition of these enzymes increases from lower to higher organisms but are fundamentally the same. Likewise, carbohydrates are always digested by amylase, and fats by lipase.
End products of digestion: Irrespective of the type of animal, the end products of protein, carbohydrates and fats are amino acids, simple sugars, and fatty acids respectively. It can thus be comfortably concluded that the similarity of the end products is due to common ancestry.
Hormones are secretions from ductless glands called the endocrine glands like the thyroid, pituitary, adrenal, etc. Their chemical nature is the same in all animals. For example, thyroxine is secreted from the thyroid gland, irrespective of what the animal is. It is used to control metabolism in all animals. If a human being is deficient in thyroxine, it is not mandatory that this hormone should be supplemented from another human being. It can be extracted from any mammal and injected into humans for normal metabolism to take place. Likewise, insulin is secreted from the pancreas. If the thyroid gland from a tadpole is removed and replaced with a bovine thyroid gland, normal metabolism will take place and the tadpole will metamorphose into a frog. As there is a fundamental relationship among these animals, such exchange of hormones or glands is possible.
Nitrogenous excretory products: Mainly three types of nitrogenous waste are excreted by living organisms: ammonia is characteristic of aquatic life forms, urea is formed by land and water dwellers, and uric acid is excreted by terrestrial life forms. A frog, in its tadpole stage, excretes ammonia just like a fish. When it turns into an adult frog and moves to land, it excretes urea instead of ammonia. Thus an aquatic ancestry of a land animal is established. A chick, up to its fifth day of development, excretes ammonia; from its 5th to 9th day, urea; and thereafter, uric acid. Based on these findings, Baldwin sought a biochemical recapitulation in the development of vertebrates with reference to nitrogenous excretory products.
Phosphagens are energy reservoirs of animals. They are present in the muscles. They supply energy for the synthesis of ATP. Generally, there are two types of phosphagens in animals, phosphoarginine (PA) in invertebrates and phosphocreatine (PC) in vertebrates. Among the echinoderms and prochordates, some have PA and others PC. Only a few have both PA and PC. Biochemically, these two groups are related. This is the most basic proof that the first chordate animals should have been derived only from echinoderm-like ancestors.
Body fluid of animals: When the body fluids of both aquatic and terrestrial animals are analyzed, it shows that they resemble sea water in their ionic composition. There is ample evidence that primitive members of most of the phyla lived in the sea in Paleozoic times. It is clear that the first life appeared only in the sea and then evolved onto land. A further point of interest is that the body fluids of most animals contain less magnesium and more potassium than the water of the present-day ocean. In the past, the ocean contained less magnesium and more potassium. Animals' bodies accumulated more of these minerals due to the structure of DNA, and this characteristic remains so today. When the first life forms appeared in the sea, they acquired the composition of the contemporary sea water, and retained it even after their evolution onto land, as it was a favorable trait.
Opsins: In the vertebrates, vision is controlled by two very distinct types of opsins, porphyropsin and rhodopsin. They are present in the rods of the retina. Fresh water fishes have porphyropsin; marine ones and land vertebrates have rhodopsin. In amphibians, a tadpole living in fresh water has porphyropsin, and the adult frog, which lives on land most of the time, has rhodopsin. In catadromous fish, which migrate from fresh water to the sea, the porphyropsin is replaced by rhodopsin. In an anadromous fish, which migrates from the sea to freshwater, the rhodopsin is replaced by porphyropsin. These examples show the freshwater origin of vertebrates. They then deviated into two lines, one leading to marine life and the other to terrestrial life.
Serological evidence: In recent years, experiments made in the composition of blood offer good evidence for evolution. It has been found that blood can be transmitted only between animals that are closely related. The degree of relationship between these animals is determined by what is known as the serological evidence. There are various methods of doing so; the method employed by George Nuttall is called the precipitation method. In this method, anti-serum of the involved animals has to be prepared. For human study, human blood is collected and allowed to clot. Then, the serum is separated from the erythrocytes. A rabbit is then injected with a small amount of serum at regular intervals, which is allowed to incubate for a few days. This forms antibodies in the rabbit's body. The rabbit's blood is then drawn and clotted. The serum separated from the red blood cells is called the anti-human serum.
When such a serum is treated with the blood of monkeys or apes, a clear white precipitate is formed. When the serum is treated with the blood of any other animal like dogs, cats, or cows, no precipitate appears. It can thus be concluded that humans are more closely related to monkeys and apes. By the same method, it has been determined that lizards are closely related to snakes, horses to donkeys, dogs to cats, etc. The systematic position of Limulus was controversial for a long time, but precipitin tests have shown that its serum is more closely related to that of arachnids than to that of crustaceans.
The field of biochemistry has greatly developed since Darwin's time, and this serological study is one of the most recent pieces of evidence of evolution. A number of biochemical products like nucleic acids, enzymes, hormones and phosphagens clearly show the relationship of all life forms. The composition of body fluid has shown that the first life originated in the oceans. The presence of nitrogenous waste products reveal the aquatic ancestry of vertebrates, and the nature of visual pigments points out the fresh water ancestry of land vertebrates. Serological tests indicate relationships within these animal phyla.
Paleontology
When only fragments of fossils, or some biomarkers, remain in a rock or oil deposit, the class of organisms that produced them can often be determined using Fourier transform infrared spectroscopy.
References
External links
http://www.merriam-webster.com/dictionary/chemotaxonomy
Phylogenetics
Nomothetic and idiographic
Nomothetic and idiographic are terms used by Neo-Kantian philosopher Wilhelm Windelband to describe two distinct approaches to knowledge, each one corresponding to a different intellectual tendency, and each one corresponding to a different branch of academia. To say that Windelband supported that last dichotomy is a misunderstanding of his own thought. For him, any branch of science and any discipline can be handled by both methods, as they offer two integrating points of view.
Nomothetic is based on what Kant described as a tendency to generalize, and is typical for the natural sciences. It describes the effort to derive laws that explain types or categories of objective phenomena, in general.
Idiographic is based on what Kant described as a tendency to specify, and is typical for the humanities. It describes the effort to understand the meaning of contingent, unique, and often cultural or subjective phenomena.
Use in the social sciences
The problem of whether to use nomothetic or idiographic approaches is most sharply felt in the social sciences, whose subject are unique individuals (idiographic perspective), but who have certain general properties or behave according to general rules (nomothetic perspective).
Often, nomothetic approaches are quantitative, and idiographic approaches are qualitative, although the "Personal Questionnaire" developed by Monte B. Shapiro and its further developments (e.g. Discan scale and PSYCHLOPS) are both quantitative and idiographic. Another very influential quantitative but idiographic tool is the Repertory grid when used with elicited constructs and perhaps elicited elements. Personal cognition (D.A. Booth) is idiographic, qualitative and quantitative, using the individual's own narrative of action within situation to scale the ongoing biosocial cognitive processes in units of discrimination from norm (with M.T. Conner 1986, R.P.J. Freeman 1993 and O. Sharpe 2005). Methods of "rigorous idiography" allow probabilistic evaluation of information transfer even with fully idiographic data.
In psychology, idiographic describes the study of the individual, who is seen as a unique agent with a unique life history, with properties setting them apart from other individuals (see idiographic image). A common method to study these unique characteristics is an (auto)biography, i.e. a narrative that recounts the unique sequence of events that made the person who they are. Nomothetic describes the study of classes or cohorts of individuals. Here the subject is seen as an exemplar of a population and their corresponding personality traits and behaviours. It is widely held that the terms idiographic and nomothetic were introduced to American psychology by Gordon Allport in 1937, but Hugo Münsterberg used them in his 1898 presidential address at the American Psychological Association meeting. This address was published in Psychological Review in 1899.
Theodore Millon stated that when spotting and diagnosing personality disorders, first clinicians start with the nomothetic perspective and look for various general scientific laws; then when they believe they have identified a disorder, they switch their view to the idiographic perspective to focus on the specific individual and his or her unique traits.
In sociology, the nomothetic model tries to find independent variables that account for the variations in a given phenomenon (e.g. What is the relationship between timing/frequency of childbirth and education?). Nomothetic explanations are probabilistic and usually incomplete. The idiographic model focuses on a complete, in-depth understanding of a single case (e.g. Why do I not have any pets?).
In anthropology, idiographic describes the study of a group, seen as an entity, with specific properties that set it apart from other groups. Nomothetic refers to the use of generalization rather than specific properties in the same context.
See also
Nomological
References
Further reading
Cone, J. D. (1986). "Idiographic, nomothetic, and related perspectives in behavioral assessment." In: R. O. Nelson & S. C. Hayes (eds.): Conceptual foundations of behavioral assessment (pp. 111–128). New York: Guilford.
Thomae, H. (1999). "The nomothetic-idiographic issue: Some roots and recent trends." International Journal of Group Tensions, 28(1), 187–215.
Concepts in epistemology
Hemodynamics
Hemodynamics or haemodynamics are the dynamics of blood flow. The circulatory system is controlled by homeostatic mechanisms of autoregulation, just as hydraulic circuits are controlled by control systems. The hemodynamic response continuously monitors and adjusts to conditions in the body and its environment. Hemodynamics explains the physical laws that govern the flow of blood in the blood vessels.
Blood flow ensures the transportation of nutrients, hormones, metabolic waste products, oxygen, and carbon dioxide throughout the body to maintain cell-level metabolism, the regulation of the pH, osmotic pressure and temperature of the whole body, and the protection from microbial and mechanical harm.
Blood is a non-Newtonian fluid, and is most efficiently studied using rheology rather than hydrodynamics. Because blood vessels are not rigid tubes, classic hydrodynamics and fluid mechanics based on the use of classical viscometers are not capable of explaining haemodynamics.
The study of the blood flow is called hemodynamics, and the study of the properties of the blood flow is called hemorheology.
Blood
Blood is a complex liquid. Blood is composed of plasma and formed elements. The plasma contains 91.5% water, 7% proteins and 1.5% other solutes. The formed elements are platelets, white blood cells, and red blood cells. The presence of these formed elements and their interaction with plasma molecules are the main reasons why blood differs so much from ideal Newtonian fluids.
Viscosity of plasma
Normal blood plasma behaves like a Newtonian fluid at physiological rates of shear. A typical value for the viscosity of normal human plasma at 37 °C is 1.4 mN·s/m² (1.4 mPa·s). The viscosity of normal plasma varies with temperature in the same way as does that of its solvent, water; a 3 °C rise in temperature within the physiological range (36.5 °C to 39.5 °C) reduces plasma viscosity by about 10%.
Osmotic pressure of plasma
The osmotic pressure of a solution is determined by the number of particles present and by the temperature. For example, a 1 molar solution of a substance contains about 6.022 × 10²³ molecules per liter of that substance, and at 0 °C it has an osmotic pressure of approximately 2.27 MPa (22.4 atm). The osmotic pressure of the plasma affects the mechanics of the circulation in several ways. An alteration of the osmotic pressure difference across the membrane of a blood cell causes a shift of water and a change of cell volume. The changes in shape and flexibility affect the mechanical properties of whole blood. A change in plasma osmotic pressure alters the hematocrit, that is, the volume concentration of red cells in the whole blood, by redistributing water between the intravascular and extravascular spaces. This in turn affects the mechanics of the whole blood.
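The figures above can be checked with the van 't Hoff relation π = cRT, a standard dilute-solution approximation (the check itself is not part of the source):

```python
R = 8.314        # gas constant, J/(mol K)
c = 1000.0       # 1 molar expressed as mol/m^3
T = 273.15       # 0 degrees Celsius in kelvin

pi = c * R * T   # osmotic pressure in Pa
print(pi / 1e6, "MPa")      # about 2.27 MPa
print(pi / 101325, "atm")   # about 22.4 atm
```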
Red blood cells
The red blood cell is highly flexible and biconcave in shape. Its membrane has a Young's modulus in the region of 10⁶ Pa. Deformation in red blood cells is induced by shear stress. When a suspension is sheared, the red blood cells deform and spin because of the velocity gradient, with the rate of deformation and spin depending on the shear rate and the concentration.
This can influence the mechanics of the circulation and may complicate the measurement of blood viscosity. In the steady-state settling of a rigid spherical particle immersed in a viscous fluid, where inertia is assumed to be negligible, the downward gravitational force on the particle is balanced by the viscous drag force. From this force balance the speed of fall is given by Stokes' law:
Us = 2a²(ρp − ρf)g / (9μ)
where a is the particle radius, ρp and ρf are respectively the particle and fluid densities, μ is the fluid viscosity, and g is the gravitational acceleration. From the above equation we can see that the sedimentation velocity of the particle depends on the square of the radius. If the particle is released from rest in the fluid, its sedimentation velocity Us increases until it attains the steady value called the terminal velocity (U), as given above.
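As a concrete illustration of the force balance above, the following minimal Python sketch evaluates the Stokes settling speed for a small rigid sphere; the particle radius, densities, and viscosity used are rough, assumed values chosen only to show the calculation, not measured data.

```python
# Minimal sketch: terminal (sedimentation) speed of a small rigid sphere from
# Stokes' law, Us = 2 a^2 (rho_p - rho_f) g / (9 mu). Valid only at low Reynolds number.

def stokes_terminal_velocity(radius_m, rho_particle, rho_fluid, viscosity_pa_s, g=9.81):
    """Return the Stokes settling speed in m/s."""
    return 2.0 * radius_m**2 * (rho_particle - rho_fluid) * g / (9.0 * viscosity_pa_s)

# Assumed, order-of-magnitude values for a red-cell-sized particle in plasma.
u = stokes_terminal_velocity(
    radius_m=4e-6,          # ~4 micrometre effective radius (assumption)
    rho_particle=1100.0,    # kg/m^3, approximate red cell density (assumption)
    rho_fluid=1025.0,       # kg/m^3, approximate plasma density (assumption)
    viscosity_pa_s=1.4e-3,  # plasma viscosity quoted earlier, 1.4 mN·s/m^2
)
print(f"Stokes settling speed: {u:.2e} m/s")
```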
Hemodilution
Hemodilution is the dilution of the concentration of red blood cells and plasma constituents by partially substituting the blood with colloids or crystalloids. It is a strategy to avoid exposure of patients to the potential hazards of homologous blood transfusions.
Hemodilution can be normovolemic, which implies the dilution of normal blood constituents by the use of expanders. During acute normovolemic hemodilution (ANH), blood subsequently lost during surgery contains proportionally fewer red blood cells per milliliter, thus minimizing the intraoperative loss of whole blood. Therefore, blood lost by the patient during surgery is not fully lost, because this volume is purified and returned to the patient.
On the other hand, hypervolemic hemodilution (HVH) uses acute preoperative volume expansion without any blood removal. In choosing a fluid, however, it must be assured that when mixed, the remaining blood behaves in the microcirculation as in the original blood fluid, retaining all its properties of viscosity.
In presenting what volume of ANH should be applied, one study suggests a mathematical model of ANH which calculates the maximum possible RCM savings using ANH, given the patient's weight, Hi, and Hm.
To maintain normovolemia, the withdrawal of autologous blood must be simultaneously replaced by a suitable hemodilute. Ideally, this is achieved by isovolemic exchange transfusion of a plasma substitute with a colloid osmotic pressure (OP). A colloid is a fluid containing particles that are large enough to exert an oncotic pressure across the micro-vascular membrane.
When debating the use of colloid or crystalloid, it is imperative to think about all the components of the Starling equation, Jv = Kf [(Pc − Pi) − σ(πc − πi)], which balances the hydrostatic and oncotic pressure gradients across the capillary wall.
To identify the minimum safe hematocrit desirable for a given patient, the following equation is useful:
BLs = EBV × ln(Hi / Hm)
where EBV is the estimated blood volume; 70 mL/kg was used in this model and Hi (initial hematocrit) is the patient's initial hematocrit.
From the equation above, it is clear that the volume of blood removed during ANH in bringing the hematocrit down to Hm is the same as the BLs.
How much blood is to be removed is usually based on the weight, not the volume. The number of units that need to be removed to hemodilute to the minimum safe hematocrit (ANHu) can be found by
ANHu = BLs / 450
This is based on the assumption that each unit removed by hemodilution has a volume of 450 mL (the actual volume of a unit will vary somewhat since completion of collection is dependent on weight and not volume).
The model assumes that the hematocrit after hemodilution is equal to Hm prior to surgery; therefore, the re-transfusion of blood obtained by hemodilution must begin when SBL begins.
The RCM available for retransfusion after ANH (RCMm) can be calculated from the patient's Hi and the final hematocrit after hemodilution (Hm):
RCMm = EBV × (Hi − Hm)
The maximum SBL that is possible when ANH is used without falling below Hm (BLH) is found by assuming that all the blood removed during ANH is returned to the patient at a rate sufficient to maintain the hematocrit at the minimum safe level:
BLH = RCMm / Hm
If ANH is used, there will not be any need for blood transfusion as long as the SBL does not exceed the BLH. We can conclude from the foregoing that the SBL should therefore not exceed the BLH.
The difference between the BLH and the BLs therefore is the incremental surgical blood loss (BLi) possible when using ANH.
When expressed in terms of the RCM:
RCMi = BLi × Hm
Where RCMi is the red cell mass that would have to be administered using homologous blood to maintain the Hm if ANH is not used and blood loss equals BLH.
The model assumes that ANH is applied to a 70 kg patient with an estimated blood volume of 70 mL/kg (4900 mL). A range of Hi and Hm was evaluated to understand the conditions under which hemodilution is necessary to benefit the patient.
Result
The results of the model calculations are presented in a table given in the appendix for a range of Hi from 0.30 to 0.50, with ANH performed down to minimum hematocrits from 0.30 to 0.15. Given an Hi of 0.40 and an assumed Hm of 0.25, the RCM count is still high and ANH is not necessary as long as BLs does not exceed 2303 mL, since the hematocrit will not fall below Hm, although five units of blood must be removed during hemodilution. Under these conditions, to achieve the maximum benefit from the technique, if ANH is used no homologous blood will be required to maintain the Hm provided blood loss does not exceed 2940 mL. In such a case ANH can save a maximum of 1.1 packed red blood cell unit equivalents; if blood loss exceeds this limit, homologous blood transfusion is necessary to maintain Hm even when ANH is used.
This model can be used to identify when ANH may be used for a given patient and the degree of ANH necessary to maximize that benefit.
For example, if Hi is 0.30 or less, it is not possible to save a red cell mass equivalent to two units of homologous PRBC even if the patient is hemodiluted to an Hm of 0.15. That is because, from the RCM equation above, the patient's RCM falls short of the required amount.
If Hi is 0.40, one must remove at least 7.5 units of blood during ANH, resulting in an Hm of 0.20, to save the equivalent of two units. Clearly, the greater the Hi and the greater the number of units removed during hemodilution, the more effective ANH is for preventing homologous blood transfusion. The model here is designed to allow doctors to determine where ANH may be beneficial for a patient based on their knowledge of the Hi, the potential for SBL, and an estimate of the Hm. Though the model used a 70 kg patient, the results can be applied to any patient. To apply these results to any body weight, the values BLs, BLH and ANHu or PRBC given in the table need to be multiplied by a factor T, equal to the patient's weight in kg divided by 70.
Basically, the model considered above is designed to predict the maximum RCM that ANH can save.
In summary, the efficacy of ANH has been described mathematically by means of measurements of surgical blood loss and blood volume. This form of analysis permits accurate estimation of the potential efficiency of the technique and shows the application of measurement in the medical field.
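The bookkeeping described in this section can be collected into a short, hedged Python sketch. The relations used here (BLs = EBV·ln(Hi/Hm), red cell mass withdrawn = EBV·(Hi − Hm), BLH equal to that mass divided by Hm, and ANHu = BLs/450) reproduce the worked example above (Hi = 0.40, Hm = 0.25, 70 kg patient), but they are a reconstruction consistent with the text rather than the exact equations of the published model.

```python
import math

def anh_model(weight_kg, hi, hm, ebv_ml_per_kg=70.0, unit_volume_ml=450.0):
    """Reconstructed ANH bookkeeping; all formulas are assumptions consistent with the text."""
    ebv = ebv_ml_per_kg * weight_kg      # estimated blood volume, mL
    bls = ebv * math.log(hi / hm)        # max blood loss without ANH before transfusion, mL
    rcm_withdrawn = ebv * (hi - hm)      # red cell mass removed during ANH, mL (RCMm)
    blh = rcm_withdrawn / hm             # max blood loss with ANH before transfusion, mL
    anhu = bls / unit_volume_ml          # number of units withdrawn during ANH
    bli = blh - bls                      # incremental tolerable blood loss due to ANH, mL
    rcmi = bli * hm                      # red cell mass saved by ANH, mL
    return {"EBV": ebv, "BLs": bls, "BLH": blh, "ANHu": anhu, "BLi": bli, "RCMi": rcmi}

print(anh_model(weight_kg=70, hi=0.40, hm=0.25))
# BLs ≈ 2303 mL, BLH ≈ 2940 mL, ANHu ≈ 5 units, matching the worked example above.
```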
Blood flow
Cardiac output
The heart is the driver of the circulatory system, pumping blood through rhythmic contraction and relaxation. The rate of blood flow out of the heart (often expressed in L/min) is known as the cardiac output (CO).
Blood being pumped out of the heart first enters the aorta, the largest artery of the body. It then proceeds to divide into smaller and smaller arteries, then into arterioles, and eventually capillaries, where oxygen transfer occurs. The capillaries connect to venules, and the blood then travels back through the network of veins to the venae cavae into the right heart. The micro-circulation — the arterioles, capillaries, and venules — constitutes most of the area of the vascular system and is the site of the transfer of O2, glucose, and enzyme substrates into the cells. The venous system returns the de-oxygenated blood to the right heart where it is pumped into the lungs to become oxygenated and CO2 and other gaseous wastes exchanged and expelled during breathing. Blood then returns to the left side of the heart where it begins the process again.
In a normal circulatory system, the volume of blood returning to the heart each minute is approximately equal to the volume that is pumped out each minute (the cardiac output). Because of this, the velocity of blood flow across each level of the circulatory system is primarily determined by the total cross-sectional area of that level.
Cardiac output is determined by two methods. One is to use the Fick equation:
CO = VO2 / (CaO2 − CvO2)
where VO2 is the rate of oxygen consumption and CaO2 and CvO2 are the oxygen contents of arterial and mixed venous blood. The other is the thermodilution method, in which a cold liquid is injected into the proximal port of a Swan–Ganz catheter and the resulting temperature change is sensed at the distal port.
Cardiac output is mathematically expressed by the following equation (a numerical sketch is given below):
CO = SV × HR
where
CO = cardiac output (mL/min; divided by 1,000 to give L/min)
SV = stroke volume (ml)
HR = heart rate (bpm)
The normal human cardiac output is 5–6 L/min at rest. Not all of the blood that enters the left ventricle exits the heart. The end-diastolic volume (EDV) minus the stroke volume makes up the end-systolic volume (ESV).
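A minimal sketch of the CO = SV × HR relation, using assumed resting values for illustration (the stroke volume, heart rate, and end-diastolic volume below are not taken from the text):

```python
def cardiac_output_l_per_min(stroke_volume_ml, heart_rate_bpm):
    """SV (mL) times HR (beats/min) gives mL/min; divide by 1000 for L/min."""
    return stroke_volume_ml * heart_rate_bpm / 1000.0

sv_ml = 70.0     # stroke volume, assumed
hr_bpm = 72.0    # heart rate, assumed
edv_ml = 120.0   # end-diastolic volume, assumed

co = cardiac_output_l_per_min(sv_ml, hr_bpm)
esv = edv_ml - sv_ml   # end-systolic volume = EDV - SV
print(f"CO = {co:.1f} L/min, ESV = {esv:.0f} mL")   # ~5.0 L/min, within the 5-6 L/min resting range
```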
Anatomical features
The circulatory systems of species subjected to orthostatic blood pressure (such as arboreal snakes) have evolved physiological and morphological features to overcome the circulatory disturbance. For instance, in arboreal snakes the heart is closer to the head than it is in aquatic snakes. This facilitates blood perfusion to the brain.
Turbulence
Blood flow is also affected by the smoothness of the vessels, resulting in either turbulent (chaotic) or laminar (smooth) flow. Smoothness is reduced by the buildup of fatty deposits on the arterial walls.
The Reynolds number (denoted NR or Re) is a relationship that helps determine the behavior of a fluid in a tube, in this case blood in the vessel.
The equation for this dimensionless relationship is written as (a numerical sketch is given below):
Re = ρvL / μ
where:
ρ: density of the blood
v: mean velocity of the blood
L: characteristic dimension of the vessel, in this case diameter
μ: viscosity of blood
The Reynolds number is directly proportional to the mean velocity of the blood and to the diameter of the tube. A Reynolds number of less than 2300 indicates laminar flow, which is characterized by smooth, constant fluid motion, whereas a value of over 4000 indicates turbulent flow. Because of their small radius and low flow velocity compared with other vessels, the Reynolds number in the capillaries is very low, resulting in laminar rather than turbulent flow.
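The following minimal sketch evaluates the Reynolds number and applies the thresholds quoted above; the blood density, viscosity, and the aortic and capillary dimensions are rough, assumed figures used only for illustration.

```python
def reynolds_number(density, velocity, diameter, viscosity):
    """Re = rho * v * L / mu, with L taken as the vessel diameter."""
    return density * velocity * diameter / viscosity

def flow_regime(re):
    if re < 2300:
        return "laminar"
    if re > 4000:
        return "turbulent"
    return "transitional"

rho = 1060.0   # blood density, kg/m^3 (assumed)
mu = 3.5e-3    # whole-blood viscosity, Pa·s (assumed)

for name, v, d in [("aorta", 0.4, 0.025), ("capillary", 5e-4, 8e-6)]:
    re = reynolds_number(rho, v, d, mu)
    print(f"{name}: Re = {re:.3g} ({flow_regime(re)})")
```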
Velocity
Blood flow velocity is often expressed in cm/s. This value is inversely related to the total cross-sectional area of the blood vessel and also differs across the cross-section, because in normal conditions the blood flow has laminar characteristics. For this reason, the blood flow velocity is fastest in the middle of the vessel and slowest at the vessel wall. In most cases, the mean velocity is used. There are many ways to measure blood flow velocity, such as video capillary microscopy with frame-to-frame analysis, or laser Doppler anemometry.
Blood velocities in arteries are higher during systole than during diastole. One parameter to quantify this difference is the pulsatility index (PI), which is equal to the difference between the peak systolic velocity and the minimum diastolic velocity divided by the mean velocity during the cardiac cycle. This value decreases with distance from the heart.
Blood vessels
Vascular resistance
Resistance is also related to vessel radius, vessel length, and blood viscosity.
In a first approximation, based on an idealized Newtonian fluid, the relationship is given by the Hagen–Poiseuille equation (a numerical sketch follows the variable list below):
ΔP = 8μlQ / (πr⁴)
where:
∆P: pressure drop/gradient
μ: viscosity
l: length of tube. In the case of vessels with infinitely long lengths, l is replaced with diameter of the vessel.
Q: flow rate of the blood in the vessel
r: radius of the vessel
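A minimal numerical sketch of the Hagen–Poiseuille relation above; the vessel length, radius, flow rate, and blood viscosity are illustrative assumptions, and real vessels are neither rigid nor filled with a Newtonian fluid.

```python
import math

def poiseuille_pressure_drop(mu, length, flow_rate, radius):
    """Pressure drop (Pa) for laminar, Newtonian flow in a rigid tube: 8*mu*l*Q/(pi*r^4)."""
    return 8.0 * mu * length * flow_rate / (math.pi * radius**4)

mu = 3.5e-3         # Pa·s, assumed whole-blood viscosity
length = 0.01       # m, 1 cm arteriolar segment (assumption)
radius = 30e-6      # m, ~30 micrometre arteriolar radius, as quoted later in the text
flow_rate = 1e-11   # m^3/s, assumed flow through a single arteriole

dp = poiseuille_pressure_drop(mu, length, flow_rate, radius)
print(f"Pressure drop: {dp:.0f} Pa ({dp / 133.322:.1f} mmHg)")
# Note the r^4 dependence: halving the radius raises the pressure drop sixteen-fold.
```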
In a second approach, more realistic of the vascular resistance and coming from experimental observations on blood flows, according to Thurston, there is a plasma release-cell layering at the walls surrounding a plugged flow. It is a fluid layer at a distance δ from the wall in which the viscosity η is a function of δ, written as η(δ); these surrounding layers do not meet at the vessel centre in real blood flow. Instead, there is a plugged flow that is hyperviscous because it holds a high concentration of RBCs. Thurston incorporated this layer into the flow resistance to describe blood flow by means of a viscosity η(δ) and a thickness δ of the wall layer.
The blood resistance law appears as R adapted to the blood flow profile:
where
R = resistance to blood flow
c = constant coefficient of flow
L = length of the vessel
η(δ) = viscosity of blood in the wall plasma release-cell layering
r = radius of the blood vessel
δ = distance in the plasma release-cell layer
Blood resistance varies with blood viscosity, with the size of the plugged flow (or, equivalently, of the sheath flow, since the two are complementary across the vessel section), and with the size of the vessels.
Assuming steady, laminar flow in the vessel, the blood vessel's behavior is similar to that of a pipe. For instance, if p1 and p2 are the pressures at the ends of the tube, the pressure drop/gradient is:
Δp = p1 − p2
The larger arteries, including all large enough to see without magnification, are conduits with low vascular resistance (assuming no advanced atherosclerotic changes) with high flow rates that generate only small drops in pressure. The smaller arteries and arterioles have higher resistance, and confer the main blood pressure drop across major arteries to capillaries in the circulatory system.
In the arterioles blood pressure is lower than in the major arteries. This is due to bifurcations, which cause a drop in pressure. The more bifurcations, the higher the total cross-sectional area, and therefore the pressure across the surface drops. This is why the arterioles have the highest pressure drop. The pressure drop across the arterioles is the product of flow rate and resistance: ΔP = Q × resistance. The high resistance observed in the arterioles, which figures largely in the ΔP, is a result of their small radius of about 30 μm. The smaller the radius of a tube, the larger the resistance to fluid flow.
Immediately following the arterioles are the capillaries. Following the logic observed in the arterioles, we expect the blood pressure to be lower in the capillaries compared to the arterioles. Since pressure is a function of force per unit area (P = F/A), the larger the surface area, the lower the pressure when an external force acts on it. Though the radii of the individual capillaries are very small, the network of capillaries has the largest total surface area in the human vascular network (reported as 485 mm²). The larger the total cross-sectional area, the lower the mean velocity as well as the pressure.
Substances called vasoconstrictors can reduce the size of blood vessels, thereby increasing blood pressure. Vasodilators (such as nitroglycerin) increase the size of blood vessels, thereby decreasing arterial pressure.
If the blood viscosity increases (gets thicker), the result is an increase in arterial pressure. Certain medical conditions can change the viscosity of the blood. For instance, anemia (low red blood cell concentration) reduces viscosity, whereas increased red blood cell concentration increases viscosity. It had been thought that aspirin and related "blood thinner" drugs decreased the viscosity of blood, but instead studies found that they act by reducing the tendency of the blood to clot.
To determine the systemic vascular resistance (SVR), the general formula for calculating resistance, R = ΔP/Q (pressure difference divided by flow), is used.
This translates for SVR into (a numerical sketch is given below):
SVR = (MAP − CVP) / CO
Where
SVR = systemic vascular resistance (mmHg/L/min)
MAP = mean arterial pressure (mmHg)
CVP = central venous pressure (mmHg)
CO = cardiac output (L/min)
The result is in Wood units (mmHg·min/L); to convert it to dyn·s·cm⁻⁵, the answer is multiplied by 80.
Normal systemic vascular resistance is between 900 and 1440 dyn·s·cm⁻⁵.
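A minimal sketch of the SVR calculation and unit conversion; the MAP, CVP, and cardiac output values below are typical resting figures assumed for illustration.

```python
def svr_wood_units(map_mmhg, cvp_mmhg, co_l_per_min):
    """Systemic vascular resistance in Wood units (mmHg·min/L)."""
    return (map_mmhg - cvp_mmhg) / co_l_per_min

map_mmhg, cvp_mmhg, co = 93.0, 5.0, 5.0   # assumed resting values

svr_wood = svr_wood_units(map_mmhg, cvp_mmhg, co)
svr_dyn = svr_wood * 80.0                 # convert Wood units to dyn·s·cm^-5
print(f"SVR = {svr_wood:.1f} Wood units = {svr_dyn:.0f} dyn·s·cm^-5")
# ≈ 1408 dyn·s·cm^-5, within the normal 900-1440 range quoted above.
```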
Wall tension
Regardless of site, blood pressure is related to the wall tension of the vessel according to the Young–Laplace equation (assuming that the thickness of the vessel wall is very small as compared to the diameter of the lumen):
σ = P·r / t
where
P is the blood pressure
t is the wall thickness
r is the inside radius of the cylinder.
σ is the cylinder stress or "hoop stress".
For the thin-walled assumption to be valid the vessel must have a wall thickness of no more than about one-tenth (often cited as one twentieth) of its radius.
The cylinder stress, in turn, is the average force exerted circumferentially (perpendicular both to the axis and to the radius of the object) in the cylinder wall, and can be described as (a numerical sketch follows the variable list below):
σ = F / (t·l)
where:
F is the force exerted circumferentially on an area of the cylinder wall that has the following two lengths as sides:
t is the radial thickness of the cylinder
l is the axial length of the cylinder
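A minimal sketch of the thin-walled hoop-stress relation above, applied to an idealized large artery; the pressure, radius, and wall thickness are assumed values chosen so that the thin-wall condition (t no more than about r/10) holds.

```python
MMHG_TO_PA = 133.322

def hoop_stress(pressure_pa, inner_radius_m, wall_thickness_m):
    """Circumferential (hoop) stress sigma = P*r/t for a thin-walled cylinder."""
    return pressure_pa * inner_radius_m / wall_thickness_m

pressure = 100.0 * MMHG_TO_PA   # ~100 mmHg mean arterial pressure (assumption)
radius = 0.012                  # m, idealized large-artery inner radius (assumption)
thickness = 0.001               # m, wall thickness (assumption; t/r ≈ 0.08)

sigma = hoop_stress(pressure, radius, thickness)
print(f"Hoop stress ≈ {sigma / 1000:.0f} kPa")
```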
Stress
When force is applied to a material it starts to deform or move. Because the force needed to deform a material (e.g. to make a fluid flow) increases with the size of the surface of the material, the magnitude of this force F is proportional to the area A of the portion of the surface. Therefore, the quantity F/A, the force per unit area, is called the stress. The shear stress at the wall that is associated with blood flow through an artery depends on the artery size and geometry and can range between 0.5 and 4 Pa.
Under normal conditions, shear stress maintains its magnitude and direction within an acceptable range, which helps avoid atherogenesis, thrombosis, smooth muscle proliferation and endothelial apoptosis. In some cases, such as during blood hammer, shear stress reaches larger values, and its direction may also be reversed by retrograde flow, depending on the hemodynamic conditions. Such situations can promote atherosclerosis.
Capacitance
Veins are described as the "capacitance vessels" of the body because over 70% of the blood volume resides in the venous system. Veins are more compliant than arteries and expand to accommodate changing volume.
Blood pressure
The blood pressure in the circulation is principally due to the pumping action of the heart. The pumping action of the heart generates pulsatile blood flow, which is conducted into the arteries, across the micro-circulation and eventually, back via the venous system to the heart. During each heartbeat, systemic arterial blood pressure varies between a maximum (systolic) and a minimum (diastolic) pressure. In physiology, these are often simplified into one value, the mean arterial pressure (MAP), which is calculated as follows (a numerical example follows the variable list below):
MAP ≈ DP + (1/3) × PP
where:
MAP = Mean Arterial Pressure
DP = Diastolic blood pressure
PP = pulse pressure, which is systolic pressure minus diastolic pressure
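A minimal sketch of the MAP estimate above, using the example blood pressure of 120/80 mmHg:

```python
def mean_arterial_pressure(systolic_mmhg, diastolic_mmhg):
    """MAP ≈ DP + PP/3, where PP is the pulse pressure."""
    pulse_pressure = systolic_mmhg - diastolic_mmhg
    return diastolic_mmhg + pulse_pressure / 3.0

print(f"MAP for 120/80 mmHg ≈ {mean_arterial_pressure(120, 80):.1f} mmHg")  # ≈ 93.3 mmHg
```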
Differences in mean blood pressure are responsible for blood flow from one location to another in the circulation. The rate of mean blood flow depends on both blood pressure and the resistance to flow presented by the blood vessels. Mean blood pressure decreases as the circulating blood moves away from the heart through arteries and capillaries due to viscous losses of energy. Mean blood pressure drops over the whole circulation, although most of the fall occurs along the small arteries and arterioles. Gravity affects blood pressure via hydrostatic forces (e.g., during standing), and valves in veins, breathing, and pumping from contraction of skeletal muscles also influence blood pressure in veins.
The relationship between pressure, flow, and resistance is expressed in the following equation:
pressure drop = flow × resistance (ΔP = Q × R)
When applied to the circulatory system, we get:
MAP − RAP = CO × SVR
where
CO = cardiac output (in L/min)
MAP = mean arterial pressure (in mmHg), the average pressure of blood as it leaves the heart
RAP = right atrial pressure (in mmHg), the average pressure of blood as it returns to the heart
SVR = systemic vascular resistance (in mmHg * min/L)
A simplified form of this equation assumes that right atrial pressure is approximately 0:
MAP ≈ CO × SVR
The ideal blood pressure in the brachial artery, where standard blood pressure cuffs measure pressure, is <120/80 mmHg. Other major arteries have similar levels of blood pressure recordings, indicating very low disparities among major arteries. In the innominate artery, the average reading is 110/70 mmHg, the right subclavian artery averages 120/80 mmHg, and the abdominal aorta is 110/70 mmHg. The relatively uniform pressure in the arteries indicates that these blood vessels act as a pressure reservoir for the fluids that are transported within them.
Pressure drops gradually as blood flows from the major arteries through the arterioles and the capillaries, until blood is pushed back up into the heart via the venules and the veins through the vena cava, with the help of the muscles. At any given pressure drop, the flow rate is determined by the resistance to the blood flow. In the arteries, in the absence of disease, there is very little or no resistance to blood flow. Vessel diameter is the principal determinant of resistance. Compared to other, smaller vessels in the body, the artery has a much bigger diameter (4 mm), and therefore the resistance is low.
The arm–leg (blood pressure) gradient is the difference between the blood pressure measured in the arms and that measured in the legs. It is normally less than 10 mm Hg, but may be increased in e.g. coarctation of the aorta.
Clinical significance
Pressure monitoring
Hemodynamic monitoring is the observation of hemodynamic parameters over time, such as blood pressure and heart rate. Blood pressure can be monitored either invasively through an inserted blood pressure transducer assembly (providing continuous monitoring), or noninvasively by repeatedly measuring the blood pressure with an inflatable blood pressure cuff.
Hypertension is diagnosed by the presence of arterial blood pressures of 140/90 or greater for two clinical visits.
Pulmonary Artery Wedge Pressure can show if there is congestive heart failure, mitral and aortic valve disorders, hypervolemia, shunts, or cardiac tamponade.
Remote, indirect monitoring of blood flow by laser Doppler
Noninvasive hemodynamic monitoring of eye fundus vessels can be performed by laser Doppler holography with near-infrared light. The eye offers a unique opportunity for the non-invasive exploration of cardiovascular diseases. Laser Doppler imaging by digital holography can measure blood flow in the retina and choroid, whose Doppler responses exhibit a pulse-shaped profile with time. This technique enables non-invasive functional microangiography by high-contrast measurement of Doppler responses from endoluminal blood flow profiles in vessels in the posterior segment of the eye. Differences in blood pressure drive the flow of blood throughout the circulation. The rate of mean blood flow depends on both blood pressure and the hemodynamic resistance to flow presented by the blood vessels.
Glossary
ANH – Acute normovolemic hemodilution
ANHu – Number of units removed during ANH
BLH – Maximum blood loss possible when ANH is used before homologous blood transfusion is needed
BLi – Incremental blood loss possible with ANH (BLH – BLs)
BLs – Maximum blood loss without ANH before homologous blood transfusion is required
EBV – Estimated blood volume (70 mL/kg)
Hct – Haematocrit, always expressed here as a fraction
Hi – Initial haematocrit
Hm – Minimum safe haematocrit
PRBC – Packed red blood cell equivalent saved by ANH
RCM – Red cell mass
RCMH – Cell mass available for transfusion after ANH
RCMi – Red cell mass saved by ANH
SBL – Surgical blood loss
Etymology and pronunciation
The word hemodynamics uses combining forms of hemo- (which comes from the ancient Greek haima, meaning blood) and dynamics, thus "the dynamics of blood". The vowel of the hemo- syllable is variously written according to the ae/e variation.
See also
Blood hammer
Blood pressure
Cardiac output
Cardiovascular System Dynamics Society
Electrical cardiometry
Esophageal Doppler
Hemodynamics of the aorta
Impedance cardiography
Photoplethysmogram
Laser Doppler imaging
Windkessel effect
Functional near-infrared spectroscopy
Notes and references
Bibliography
Berne RM, Levy MN. Cardiovascular physiology. 7th Ed Mosby 1997
Rowell LB. Human Cardiovascular Control. Oxford University press 1993
Braunwald E (Editor). Heart Disease: A Textbook of Cardiovascular Medicine. 5th Ed. W.B.Saunders 1997
Siderman S, Beyar R, Kleber AG. Cardiac Electrophysiology, Circulation and Transport. Kluwer Academic Publishers 1991
American Heart Association
Otto CM, Stoddard M, Waggoner A, Zoghbi WA. Recommendations for Quantification of Doppler Echocardiography: A Report from the Doppler Quantification Task Force of the Nomenclature and Standards Committee of the American Society of Echocardiography. J Am Soc Echocardiogr 2002;15:167-184
Peterson LH, The Dynamics of Pulsatile Blood Flow, Circ. Res. 1954;2;127-139
Hemodynamic Monitoring, Bigatello LM, George E., Minerva Anestesiol, 2002 Apr;68(4):219-25
Claude Franceschi, L'investigation vasculaire par ultrasonographie Doppler, Masson, 1979, ISBN 2-225-63679-6
Claude Franceschi, Paolo Zamboni, Principles of Venous Hemodynamics, Nova Science Publishers, 2009, ISBN 978-1-60692-485-3
Claude Franceschi Venous Insufficiency of the pelvis and lower extremities-Hemodynamic Rationale
WR Milnor: Hemodynamics, Williams & Wilkins, 1982
B Bo Sramek: Systemic Hemodynamics and Hemodynamic Management, 4th Edition, ISBN 1-59196-046-0
External links
Learn hemodynamics
Fluid mechanics
Computational fluid dynamics
Cardiovascular physiology
Exercise physiology
Blood
Mathematics in medicine
Fluid dynamics
Natural science
Natural science is one of the branches of science concerned with the description, understanding and prediction of natural phenomena, based on empirical evidence from observation and experimentation. Mechanisms such as peer review and reproducibility of findings are used to try to ensure the validity of scientific advances.
Natural science can be divided into two main branches: life science and physical science. Life science is alternatively known as biology, and physical science is subdivided into branches: physics, chemistry, earth science, and astronomy. These branches of natural science may be further divided into more specialized branches (also known as fields). As empirical sciences, natural sciences use tools from the formal sciences, such as mathematics and logic, converting information about nature into measurements that can be explained as clear statements of the "laws of nature".
Modern natural science succeeded more classical approaches to natural philosophy. Galileo, Kepler, Descartes, Bacon, and Newton debated the benefits of using approaches which were more mathematical and more experimental in a methodical way. Still, philosophical perspectives, conjectures, and presuppositions, often overlooked, remain necessary in natural science. Systematic data collection, including discovery science, succeeded natural history, which emerged in the 16th century by describing and classifying plants, animals, minerals, and so on. Today, "natural history" suggests observational descriptions aimed at popular audiences.
Criteria
Philosophers of science have suggested several criteria, including Karl Popper's controversial falsifiability criterion, to help them differentiate scientific endeavors from non-scientific ones. Validity, accuracy, and quality control, such as peer review and reproducibility of findings, are amongst the most respected criteria in today's global scientific community.
In natural science, impossibility assertions come to be widely accepted as overwhelmingly probable rather than considered proven to the point of being unchallengeable. The basis for this strong acceptance is a combination of extensive evidence of something not occurring, combined with an underlying theory, very successful in making predictions, whose assumptions lead logically to the conclusion that something is impossible. While an impossibility assertion in natural science can never be proved, it could be refuted by the observation of a single counterexample. Such a counterexample would require that the assumptions underlying the theory that implied the impossibility be re-examined.
Branches of natural science
Biology
This field encompasses a diverse set of disciplines that examine phenomena related to living organisms. The scale of study can range from sub-component biophysics up to complex ecologies. Biology is concerned with the characteristics, classification and behaviors of organisms, as well as how species were formed and their interactions with each other and the environment.
The biological fields of botany, zoology, and medicine date back to early periods of civilization, while microbiology was introduced in the 17th century with the invention of the microscope. However, it was not until the 19th century that biology became a unified science. Once scientists discovered commonalities between all living things, it was decided they were best studied as a whole.
Some key developments in biology were the discovery of genetics, evolution through natural selection, the germ theory of disease, and the application of the techniques of chemistry and physics at the level of the cell or organic molecule.
Modern biology is divided into subdisciplines by the type of organism and by the scale being studied. Molecular biology is the study of the fundamental chemistry of life, while cellular biology is the examination of the cell; the basic building block of all life. At a higher level, anatomy and physiology look at the internal structures, and their functions, of an organism, while ecology looks at how various organisms interrelate.
Earth science
Earth science (also known as geoscience) is an all-embracing term for the sciences related to the planet Earth, including geology, geography, geophysics, geochemistry, climatology, glaciology, hydrology, meteorology, and oceanography.
Although mining and precious stones have been human interests throughout the history of civilization, the development of the related sciences of economic geology and mineralogy did not occur until the 18th century. The study of the earth, particularly paleontology, blossomed in the 19th century. The growth of other disciplines, such as geophysics, in the 20th century led to the development of the theory of plate tectonics in the 1960s, which has had a similar effect on the Earth sciences as the theory of evolution had on biology. Earth sciences today are closely linked to petroleum and mineral resources, climate research, and to environmental assessment and remediation.
Atmospheric sciences
Although sometimes considered in conjunction with the earth sciences, due to the independent development of its concepts, techniques, and practices and also the fact of it having a wide range of sub-disciplines under its wing, atmospheric science is also considered a separate branch of natural science. This field studies the characteristics of different layers of the atmosphere from ground level to the edge of space. The timescale of study also varies from days to centuries. Sometimes, the field also includes the study of climatic patterns on planets other than Earth.
Oceanography
The serious study of oceans began in the early- to mid-20th century. As a field of natural science, it is relatively young, but stand-alone programs offer specializations in the subject. Though some controversies remain as to the categorization of the field under earth sciences, interdisciplinary sciences, or as a separate field in its own right, most modern workers in the field agree that it has matured to a state that it has its own paradigms and practices.
Planetary science
Planetary science, or planetology, is the scientific study of planets, which include terrestrial planets like the Earth and other types of planets, such as gas giants and ice giants. Planetary science also concerns other celestial bodies, such as dwarf planets, moons, asteroids, and comets. This largely includes the Solar System, but recently has started to expand to exoplanets, particularly terrestrial exoplanets. It explores various objects, spanning from micrometeoroids to gas giants, to establish their composition, movements, genesis, interrelation, and past. Planetary science is an interdisciplinary domain, having originated from astronomy and Earth science, and currently encompassing a multitude of areas, such as planetary geology, cosmochemistry, atmospheric science, physics, oceanography, hydrology, theoretical planetology, glaciology, and exoplanetology. Related fields encompass space physics, which delves into the impact of the Sun on the bodies in the Solar System, and astrobiology.
Planetary science comprises interconnected observational and theoretical branches. Observational research entails a combination of space exploration, primarily through robotic spacecraft missions utilizing remote sensing, and comparative experimental work conducted in Earth-based laboratories. The theoretical aspect involves extensive mathematical modelling and computer simulation.
Typically, planetary scientists are situated within astronomy and physics or Earth sciences departments in universities or research centers. However, there are also dedicated planetary science institutes worldwide. Generally, individuals pursuing a career in planetary science undergo graduate-level studies in one of the Earth sciences, astronomy, astrophysics, geophysics, or physics. They then focus their research within the discipline of planetary science. Major conferences are held annually, and numerous peer reviewed journals cater to the diverse research interests in planetary science. Some planetary scientists are employed by private research centers and frequently engage in collaborative research initiatives.
Chemistry
Constituting the scientific study of matter at the atomic and molecular scale, chemistry deals primarily with collections of atoms, such as gases, molecules, crystals, and metals. The composition, statistical properties, transformations, and reactions of these materials are studied. Chemistry also involves understanding the properties and interactions of individual atoms and molecules for use in larger-scale applications.
Most chemical processes can be studied directly in a laboratory, using a series of (often well-tested) techniques for manipulating materials, as well as an understanding of the underlying processes. Chemistry is often called "the central science" because of its role in connecting the other natural sciences.
Early experiments in chemistry had their roots in the system of alchemy, a set of beliefs combining mysticism with physical experiments. The science of chemistry began to develop with the work of Robert Boyle, known for his pioneering studies of gases, and Antoine Lavoisier, who developed the theory of the conservation of mass.
The discovery of the chemical elements and atomic theory began to systematize this science, and researchers developed a fundamental understanding of states of matter, ions, chemical bonds and chemical reactions. The success of this science led to a complementary chemical industry that now plays a significant role in the world economy.
Physics
Physics embodies the study of the fundamental constituents of the universe, the forces and interactions they exert on one another, and the results produced by these interactions. Physics is generally regarded as foundational because all other natural sciences use and obey the field's principles and laws. Physics relies heavily on mathematics as the logical framework for formulating and quantifying principles.
The study of the principles of the universe has a long history and largely derives from direct observation and experimentation. The formulation of theories about the governing laws of the universe has been central to the study of physics from very early on, with philosophy gradually yielding to systematic, quantitative experimental testing and observation as the source of verification. Key historical developments in physics include Isaac Newton's theory of universal gravitation and classical mechanics, an understanding of electricity and its relation to magnetism, Einstein's theories of special and general relativity, the development of thermodynamics, and the quantum mechanical model of atomic and subatomic physics.
The field of physics is vast and can include such diverse studies as quantum mechanics and theoretical physics, applied physics and optics. Modern physics is becoming increasingly specialized, where researchers tend to focus on a particular area rather than being "universalists" like Isaac Newton, Albert Einstein, and Lev Landau, who worked in multiple areas.
Astronomy
Astronomy is a natural science that studies celestial objects and phenomena. Objects of interest include planets, moons, stars, nebulae, galaxies, and comets. Astronomy is the study of everything in the universe beyond Earth's atmosphere, including objects we can see with our naked eyes. It is one of the oldest sciences.
Astronomers of early civilizations performed methodical observations of the night sky, and astronomical artifacts have been found from much earlier periods. There are two types of astronomy: observational astronomy and theoretical astronomy. Observational astronomy is focused on acquiring and analyzing data, mainly using basic principles of physics. In contrast, Theoretical astronomy is oriented towards developing computer or analytical models to describe astronomical objects and phenomena.
This discipline is the science of celestial objects and phenomena that originate outside the Earth's atmosphere. It is concerned with the evolution, physics, chemistry, meteorology, geology, and motion of celestial objects, as well as the formation and development of the universe.
Astronomy includes examining, studying, and modeling stars, planets, and comets. Most of the information used by astronomers is gathered by remote observation. However, some laboratory reproduction of celestial phenomena has been performed (such as the molecular chemistry of the interstellar medium). There is considerable overlap with physics and in some areas of earth science. There are also interdisciplinary fields such as astrophysics, planetary sciences, and cosmology, along with allied disciplines such as space physics and astrochemistry.
While the study of celestial features and phenomena can be traced back to antiquity, the scientific methodology of this field began to develop in the middle of the 17th century. A key factor was Galileo's introduction of the telescope to examine the night sky in more detail.
The mathematical treatment of astronomy began with Newton's development of celestial mechanics and the laws of gravitation. However, it was triggered by earlier work of astronomers such as Kepler. By the 19th century, astronomy had developed into formal science, with the introduction of instruments such as the spectroscope and photography, along with much-improved telescopes and the creation of professional observatories.
Interdisciplinary studies
The distinctions between the natural science disciplines are not always sharp, and they share many cross-discipline fields. Physics plays a significant role in the other natural sciences, as represented by astrophysics, geophysics, chemical physics and biophysics. Likewise chemistry is represented by such fields as biochemistry, physical chemistry, geochemistry and astrochemistry.
A particular example of a scientific discipline that draws upon multiple natural sciences is environmental science. This field studies the interactions of physical, chemical, geological, and biological components of the environment, with particular regard to the effect of human activities and the impact on biodiversity and sustainability. This science also draws upon expertise from other fields, such as economics, law, and social sciences.
A comparable discipline is oceanography, as it draws upon a similar breadth of scientific disciplines. Oceanography is sub-categorized into more specialized cross-disciplines, such as physical oceanography and marine biology. As the marine ecosystem is vast and diverse, marine biology is further divided into many subfields, including specializations in particular species.
There is also a subset of cross-disciplinary fields with strong currents that run counter to specialization by the nature of the problems they address. Put another way: In some fields of integrative application, specialists in more than one field are a key part of most scientific discourse. Such integrative fields, for example, include nanoscience, astrobiology, and complex system informatics.
Materials science
Materials science is a relatively new, interdisciplinary field that deals with the study of matter and its properties and the discovery and design of new materials. Originally developed through the field of metallurgy, the study of the properties of materials and solids has now expanded into all materials. The field covers the chemistry, physics, and engineering applications of materials, including metals, ceramics, artificial polymers, and many others. The field's core deals with relating the structure of materials with their properties.
Materials science is at the forefront of research in science and engineering. It is an essential part of forensic engineering (the investigation of materials, products, structures, or components that fail or do not operate or function as intended, causing personal injury or damage to property) and failure analysis, the latter being the key to understanding, for example, the cause of various aviation accidents. Many of the most pressing scientific problems that are faced today are due to the limitations of the materials that are available, and, as a result, breakthroughs in this field are likely to have a significant impact on the future of technology.
The basis of materials science involves studying the structure of materials and relating them to their properties. Understanding this structure-property correlation, material scientists can then go on to study the relative performance of a material in a particular application. The major determinants of the structure of a material and, thus, of its properties are its constituent chemical elements and how it has been processed into its final form. These characteristics, taken together and related through the laws of thermodynamics and kinetics, govern a material's microstructure and thus its properties.
History
Some scholars trace the origins of natural science as far back as pre-literate human societies, where understanding the natural world was necessary for survival. People observed and built up knowledge about the behavior of animals and the usefulness of plants as food and medicine, which was passed down from generation to generation. These primitive understandings gave way to more formalized inquiry around 3500 to 3000 BC in the Mesopotamian and Ancient Egyptian cultures, which produced the first known written evidence of natural philosophy, the precursor of natural science. While the writings show an interest in astronomy, mathematics, and other aspects of the physical world, the ultimate aim of inquiry about nature's workings was, in all cases, religious or mythological, not scientific.
A tradition of scientific inquiry also emerged in Ancient China, where Taoist alchemists and philosophers experimented with elixirs to extend life and cure ailments. They focused on the yin and yang, or contrasting elements in nature; the yin was associated with femininity and coldness, while yang was associated with masculinity and warmth. The five phases – fire, earth, metal, wood, and water – described a cycle of transformations in nature. The water turned into wood, which turned into the fire when it burned. The ashes left by fire were earth. Using these principles, Chinese philosophers and doctors explored human anatomy, characterizing organs as predominantly yin or yang, and understood the relationship between the pulse, the heart, and the flow of blood in the body centuries before it became accepted in the West.
Little evidence survives of how Ancient Indian cultures around the Indus River understood nature, but some of their perspectives may be reflected in the Vedas, a set of sacred Hindu texts. They reveal a conception of the universe as ever-expanding and constantly being recycled and reformed. Surgeons in the Ayurvedic tradition saw health and illness as a combination of three humors: wind, bile and phlegm. A healthy life resulted from a balance among these humors. In Ayurvedic thought, the body consisted of five elements: earth, water, fire, wind, and space. Ayurvedic surgeons performed complex surgeries and developed a detailed understanding of human anatomy.
Pre-Socratic philosophers in Ancient Greek culture brought natural philosophy a step closer to direct inquiry about cause and effect in nature between 600 and 400 BC. However, an element of magic and mythology remained. Natural phenomena such as earthquakes and eclipses were explained increasingly in the context of nature itself instead of being attributed to angry gods. Thales of Miletus, an early philosopher who lived from 625 to 546 BC, explained earthquakes by theorizing that the world floated on water and that water was the fundamental element in nature. In the 5th century BC, Leucippus was an early exponent of atomism, the idea that the world is made up of fundamental indivisible particles. Pythagoras applied Greek innovations in mathematics to astronomy and suggested that the earth was spherical.
Aristotelian natural philosophy (400 BC–1100 AD)
Later Socratic and Platonic thought focused on ethics, morals, and art and did not attempt an investigation of the physical world; Plato criticized pre-Socratic thinkers as materialists and anti-religionists. Aristotle, however, a student of Plato who lived from 384 to 322 BC, paid closer attention to the natural world in his philosophy. In his History of Animals, he described the inner workings of 110 species, including the stingray, catfish and bee. He investigated chick embryos by breaking open eggs and observing them at various stages of development. Aristotle's works were influential through the 16th century, and he is considered to be the father of biology for his pioneering work in that science. He also presented philosophies about physics, nature, and astronomy using inductive reasoning in his works Physics and Meteorology.
While Aristotle considered natural philosophy more seriously than his predecessors, he approached it as a theoretical branch of science. Still, inspired by his work, Ancient Roman philosophers of the early 1st century AD, including Lucretius, Seneca and Pliny the Elder, wrote treatises that dealt with the rules of the natural world in varying degrees of depth. Many Ancient Roman Neoplatonists of the 3rd to the 6th centuries also adapted Aristotle's teachings on the physical world to a philosophy that emphasized spiritualism. Early medieval philosophers including Macrobius, Calcidius and Martianus Capella also examined the physical world, largely from a cosmological and cosmographical perspective, putting forth theories on the arrangement of celestial bodies and the heavens, which were posited as being composed of aether.
Aristotle's works on natural philosophy continued to be translated and studied amid the rise of the Byzantine Empire and Abbasid Caliphate.
In the Byzantine Empire, John Philoponus, an Alexandrian Aristotelian commentator and Christian theologian, was the first to question Aristotle's physics teaching. Unlike Aristotle, who based his physics on verbal argument, Philoponus instead relied on observation and argued for observation rather than resorting to a verbal argument. He introduced the theory of impetus. John Philoponus' criticism of Aristotelian principles of physics served as inspiration for Galileo Galilei during the Scientific Revolution.
A revival in mathematics and science took place during the time of the Abbasid Caliphate from the 9th century onward, when Muslim scholars expanded upon Greek and Indian natural philosophy. The words alcohol, algebra and zenith all have Arabic roots.
Medieval natural philosophy (1100–1600)
Aristotle's works and other Greek natural philosophy did not reach the West until about the middle of the 12th century, when works were translated from Greek and Arabic into Latin. The development of European civilization later in the Middle Ages brought with it further advances in natural philosophy. European inventions such as the horseshoe, horse collar and crop rotation allowed for rapid population growth, eventually giving way to urbanization and the foundation of schools connected to monasteries and cathedrals in modern-day France and England. Aided by the schools, an approach to Christian theology developed that sought to answer questions about nature and other subjects using logic. This approach, however, was seen by some detractors as heresy. By the 12th century, Western European scholars and philosophers came into contact with a body of knowledge of which they had previously been ignorant: a large corpus of works in Greek and Arabic that were preserved by Islamic scholars. Through translation into Latin, Western Europe was introduced to Aristotle and his natural philosophy. These works were taught at new universities in Paris and Oxford by the early 13th century, although the practice was frowned upon by the Catholic church. A 1210 decree from the Synod of Paris ordered that "no lectures are to be held in Paris either publicly or privately using Aristotle's books on natural philosophy or the commentaries, and we forbid all this under pain of ex-communication."
In the late Middle Ages, Spanish philosopher Dominicus Gundissalinus translated a treatise by the earlier Persian scholar Al-Farabi called On the Sciences into Latin, calling the study of the mechanics of nature Scientia naturalis, or natural science. Gundissalinus also proposed his classification of the natural sciences in his 1150 work On the Division of Philosophy. This was the first detailed classification of the sciences based on Greek and Arab philosophy to reach Western Europe. Gundissalinus defined natural science as "the science considering only things unabstracted and with motion," as opposed to mathematics and sciences that rely on mathematics. Following Al-Farabi, he separated the sciences into eight parts, including: physics, cosmology, meteorology, minerals science, and plant and animal science.
Later, philosophers made their own classifications of the natural sciences. Robert Kilwardby wrote On the Order of the Sciences in the 13th century that classed medicine as a mechanical science, along with agriculture, hunting, and theater, while defining natural science as the science that deals with bodies in motion. Roger Bacon, an English friar and philosopher, wrote that natural science dealt with "a principle of motion and rest, as in the parts of the elements of fire, air, earth, and water, and in all inanimate things made from them." These sciences also covered plants, animals and celestial bodies. Later in the 13th century, a Catholic priest and theologian Thomas Aquinas defined natural science as dealing with "mobile beings" and "things which depend on a matter not only for their existence but also for their definition." There was broad agreement among scholars in medieval times that natural science was about bodies in motion. However, there was division about including fields such as medicine, music, and perspective. Philosophers pondered questions including the existence of a vacuum, whether motion could produce heat, the colors of rainbows, the motion of the earth, whether elemental chemicals exist, and where in the atmosphere rain is formed.
In the centuries up through the end of the Middle Ages, natural science was often mingled with philosophies about magic and the occult. Natural philosophy appeared in various forms, from treatises to encyclopedias to commentaries on Aristotle. The interaction between natural philosophy and Christianity was complex during this period; some early theologians, including Tatian and Eusebius, considered natural philosophy an outcropping of pagan Greek science and were suspicious of it. Although some later Christian philosophers, including Aquinas, came to see natural science as a means of interpreting scripture, this suspicion persisted until the 12th and 13th centuries. The Condemnation of 1277, which forbade setting philosophy on a level equal with theology and the debate of religious constructs in a scientific context, showed the persistence with which Catholic leaders resisted the development of natural philosophy even from a theological perspective. Aquinas and Albertus Magnus, another Catholic theologian of the era, sought to distance theology from science in their works. "I don't see what one's interpretation of Aristotle has to do with the teaching of the faith," he wrote in 1271.
Newton and the scientific revolution (1600–1800)
By the 16th and 17th centuries, natural philosophy evolved beyond commentary on Aristotle as more early Greek philosophy was uncovered and translated. The invention of the printing press in the 15th century, the invention of the microscope and telescope, and the Protestant Reformation fundamentally altered the social context in which scientific inquiry evolved in the West. Christopher Columbus's discovery of a new world changed perceptions about the physical makeup of the world, while observations by Copernicus, Tycho Brahe and Galileo brought a more accurate picture of the solar system as heliocentric and proved many of Aristotle's theories about the heavenly bodies false. Several 17th-century philosophers, including Thomas Hobbes, John Locke and Francis Bacon, made a break from the past by rejecting Aristotle and his medieval followers outright, calling their approach to natural philosophy superficial.
The titles of Galileo's work Two New Sciences and Johannes Kepler's New Astronomy underscored the atmosphere of change that took hold in the 17th century as Aristotle was dismissed in favor of novel methods of inquiry into the natural world. Bacon was instrumental in popularizing this change; he argued that people should use the arts and sciences to gain dominion over nature. To achieve this, he wrote that "human life [must] be endowed with discoveries and powers." He defined natural philosophy as "the knowledge of Causes and secret motions of things; and enlarging the bounds of Human Empire, to the effecting of all things possible." Bacon proposed that scientific inquiry be supported by the state and fed by the collaborative research of scientists, a vision that was unprecedented in its scope, ambition, and forms at the time. Natural philosophers came to view nature increasingly as a mechanism that could be taken apart and understood, much like a complex clock. Natural philosophers including Isaac Newton, Evangelista Torricelli and Francesco Redi conducted experiments focusing on the flow of water, measuring atmospheric pressure using a barometer and disproving spontaneous generation. Scientific societies and scientific journals emerged and were spread widely through the printing press, touching off the scientific revolution. Newton in 1687 published his The Mathematical Principles of Natural Philosophy, or Principia Mathematica, which set the groundwork for physical laws that remained current until the 19th century.
Some modern scholars, including Andrew Cunningham, Perry Williams, and Floris Cohen, argue that natural philosophy is not properly called science and that genuine scientific inquiry began only with the scientific revolution. According to Cohen, "the emancipation of science from an overarching entity called 'natural philosophy' is one defining characteristic of the Scientific Revolution." Other historians of science, including Edward Grant, contend that the scientific revolution that blossomed in the 17th, 18th, and 19th centuries occurred when principles learned in the exact sciences of optics, mechanics, and astronomy began to be applied to questions raised by natural philosophy. Grant argues that Newton attempted to expose the mathematical basis of nature – the immutable rules it obeyed – and, in doing so, joined natural philosophy and mathematics for the first time, producing an early work of modern physics.
The scientific revolution, which began to take hold in the 17th century, represented a sharp break from Aristotelian modes of inquiry. One of its principal advances was the use of the scientific method to investigate nature. Data was collected, and repeatable measurements were made in experiments. Scientists then formed hypotheses to explain the results of these experiments. The hypothesis was then tested using the principle of falsifiability to prove or disprove its accuracy. The natural sciences continued to be called natural philosophy, but the adoption of the scientific method took science beyond the realm of philosophical conjecture and introduced a more structured way of examining nature.
Newton, an English mathematician and physicist, was a seminal figure in the scientific revolution. Drawing on advances made in astronomy by Copernicus, Brahe, and Kepler, Newton derived the universal law of gravitation and laws of motion. These laws applied both on earth and in outer space, uniting two spheres of the physical world previously thought to function independently, according to separate physical rules. Newton, for example, showed that the tides were caused by the gravitational pull of the moon. Another of Newton's advances was to make mathematics a powerful explanatory tool for natural phenomena. While natural philosophers had long used mathematics as a means of measurement and analysis, its principles were not used as a means of understanding cause and effect in nature until Newton.
In the 18th century and 19th century, scientists including Charles-Augustin de Coulomb, Alessandro Volta, and Michael Faraday built upon Newtonian mechanics by exploring electromagnetism, or the interplay of forces with positive and negative charges on electrically charged particles. Faraday proposed that forces in nature operated in "fields" that filled space. The idea of fields contrasted with the Newtonian construct of gravitation as simply "action at a distance", or the attraction of objects with nothing in the space between them to intervene. James Clerk Maxwell in the 19th century unified these discoveries in a coherent theory of electrodynamics. Using mathematical equations and experimentation, Maxwell discovered that space was filled with charged particles that could act upon each other and were a medium for transmitting charged waves.
Significant advances in chemistry also took place during the scientific revolution. Antoine Lavoisier, a French chemist, refuted the phlogiston theory, which posited that things burned by releasing "phlogiston" into the air. Joseph Priestley had discovered oxygen in the 18th century, but Lavoisier discovered that combustion was the result of oxidation. He also constructed a table of 33 elements and invented modern chemical nomenclature. Formal biological science remained in its infancy in the 18th century, when the focus lay upon the classification and categorization of natural life. This growth in natural history was led by Carl Linnaeus, whose 1735 taxonomy of the natural world is still in use. Linnaeus, in the 1750s, introduced scientific names for all his species.
19th-century developments (1800–1900)
By the 19th century, the study of science had come into the purview of professionals and institutions. In so doing, it gradually acquired the more modern name of natural science. The term scientist was coined by William Whewell in an 1834 review of Mary Somerville's On the Connexion of the Sciences. But the word did not enter general use until nearly the end of the same century.
Modern natural science (1900–present)
According to a famous 1923 textbook, Thermodynamics and the Free Energy of Chemical Substances, by the American chemist Gilbert N. Lewis and the American physical chemist Merle Randall, the natural sciences contain three great branches:
Aside from the logical and mathematical sciences, there are three great branches of natural science which stand apart by reason of the variety of far reaching deductions drawn from a small number of primary postulates — they are mechanics, electrodynamics, and thermodynamics.
Today, natural sciences are more commonly divided into life sciences, such as botany and zoology, and physical sciences, which include physics, chemistry, astronomy, and Earth sciences.
See also
Empiricism
Branches of science
List of academic disciplines and sub-disciplines
Natural Sciences (Cambridge), for the Tripos at the University of Cambridge
Natural history
References
Bibliography
Further reading
Defining Natural Sciences Ledoux, S. F., 2002: Defining Natural Sciences, Behaviorology Today, 5(1), 34–36.
The History of Recent Science and Technology
Natural Sciences Contains updated information on research in the Natural Sciences including biology, geography and the applied life and earth sciences.
Reviews of Books About Natural Science This site contains over 50 previously published reviews of books about natural science, plus selected essays on timely topics in natural science.
Scientific Grant Awards Database Contains details of over 2,000,000 scientific research projects conducted over the past 25 years.
E!Science Up-to-date science news aggregator from major sources including universities.
Branches of science
Reaction–diffusion system
Reaction–diffusion systems are mathematical models that correspond to several physical phenomena. The most common is the change in space and time of the concentration of one or more chemical substances: local chemical reactions in which the substances are transformed into each other, and diffusion which causes the substances to spread out over a surface in space.
Reaction–diffusion systems are naturally applied in chemistry. However, the system can also describe dynamical processes of non-chemical nature. Examples are found in biology, geology, physics (neutron diffusion theory) and ecology. Mathematically, reaction–diffusion systems take the form of semi-linear parabolic partial differential equations. They can be represented in the general form

\partial_t q = D \, \nabla^2 q + R(q),
where q(x, t) represents the unknown vector function, D is a diagonal matrix of diffusion coefficients, and R accounts for all local reactions. The solutions of reaction–diffusion equations display a wide range of behaviours, including the formation of travelling waves and wave-like phenomena as well as other self-organized patterns like stripes, hexagons or more intricate structures like dissipative solitons. Such patterns have been dubbed "Turing patterns". Each function for which a reaction–diffusion differential equation holds represents in fact a concentration variable.
One-component reaction–diffusion equations
The simplest reaction–diffusion equation is in one spatial dimension in plane geometry,

\partial_t u = D \, \partial_x^2 u + R(u),
and is also referred to as the Kolmogorov–Petrovsky–Piskunov equation. If the reaction term vanishes, then the equation represents a pure diffusion process; the corresponding equation is Fick's second law. The choice R(u) = u(1 - u) yields Fisher's equation, which was originally used to describe the spreading of biological populations; the Newell–Whitehead–Segel equation with R(u) = u(1 - u^2) describes Rayleigh–Bénard convection; the more general Zeldovich–Frank-Kamenetskii equation with R(u) = u(1 - u)\,e^{-\beta(1 - u)} and Zeldovich number \beta arises in combustion theory; and its particular degenerate case with R(u) = u^2 - u^3 is sometimes referred to as the Zeldovich equation as well.
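The named special cases can be explored numerically. The following sketch integrates the Fisher–KPP equation with an explicit finite-difference scheme; the domain size, grid, time step and initial condition are illustrative assumptions rather than part of the original description, and the scheme is a minimal demonstration, not a production solver.

```python
import numpy as np

# Minimal explicit finite-difference sketch of the Fisher-KPP equation
#   du/dt = D d2u/dx2 + u(1 - u)
# All numerical parameters below are illustrative assumptions.
D = 1.0
L, N = 100.0, 401                 # domain length and number of grid points
dx = L / (N - 1)
dt = 0.2 * dx**2 / D              # explicit scheme requires dt <= dx^2 / (2 D)

x = np.linspace(0.0, L, N)
u = np.where(x < 10.0, 1.0, 0.0)  # step-like initial population

def reaction(u):
    """Fisher-KPP reaction term R(u) = u(1 - u)."""
    return u * (1.0 - u)

for _ in range(2000):
    lap = np.empty_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2   # interior Laplacian
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2                   # zero-flux boundary
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2                # zero-flux boundary
    u = u + dt * (D * lap + reaction(u))

# A front connecting u = 1 and u = 0 travels to the right; for Fisher's equation
# its asymptotic speed is 2*sqrt(D) in these units.
print("approximate front position:", x[np.argmin(np.abs(u - 0.5))])
```

The same template extends to the other reaction terms listed above simply by replacing the reaction function.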
The dynamics of one-component systems is subject to certain restrictions, as the evolution equation can also be written in the variational form

\partial_t u = -\frac{\delta \mathfrak{L}}{\delta u}
and therefore describes a permanent decrease of the "free energy" \mathfrak{L} given by the functional

\mathfrak{L} = \int_{-\infty}^{\infty} \left[ \tfrac{D}{2} \left( \partial_x u \right)^2 - V(u) \right] \mathrm{d}x
with a potential V(u) such that

R(u) = \frac{\mathrm{d} V(u)}{\mathrm{d} u}.
In systems with more than one stationary homogeneous solution, a typical solution is given by travelling fronts connecting the homogeneous states. These solutions move with constant speed without changing their shape and are of the form u(x, t) = \hat{u}(\xi) with \xi = x - ct, where c is the speed of the travelling wave. Note that while travelling waves are generically stable structures, all non-monotonous stationary solutions (e.g. localized domains composed of a front-antifront pair) are unstable. For c = 0, there is a simple proof for this statement: if u_0(x) is a stationary solution and u = u_0(x) + \tilde{u}(x, t) is an infinitesimally perturbed solution, linear stability analysis yields the equation

\partial_t \tilde{u} = D \, \partial_x^2 \tilde{u} + R'(u_0) \, \tilde{u}.
With the ansatz \tilde{u} = \psi(x) \exp(-\lambda t) we arrive at the eigenvalue problem

\hat{H} \psi = \lambda \psi, \qquad \hat{H} = -D \, \partial_x^2 - R'(u_0),
of Schrödinger type, where negative eigenvalues result in the instability of the solution. Due to translational invariance, \psi = \partial_x u_0(x) is a neutral eigenfunction with the eigenvalue \lambda = 0, and all other eigenfunctions can be sorted according to an increasing number of nodes, with the magnitude of the corresponding real eigenvalue increasing monotonically with the number of zeros. The eigenfunction \psi = \partial_x u_0(x) should have at least one zero, and for a non-monotonic stationary solution the corresponding eigenvalue \lambda = 0 cannot be the lowest one, thereby implying instability.
To determine the velocity of a moving front, one may go to a moving coordinate system and look at stationary solutions:

D \, \partial_\xi^2 \hat{u}(\xi) + c \, \partial_\xi \hat{u}(\xi) + R(\hat{u}(\xi)) = 0.
This equation has a nice mechanical analogue as the motion of a mass D with position \hat{u} in the course of the "time" \xi under the force R with the damping coefficient c, which allows for a rather illustrative approach to the construction of different types of solutions and the determination of c.
When going from one to more space dimensions, a number of statements from one-dimensional systems can still be applied. Planar or curved wave fronts are typical structures, and a new effect arises as the local velocity of a curved front becomes dependent on the local radius of curvature (this can be seen by going to polar coordinates). This phenomenon leads to the so-called curvature-driven instability.
Two-component reaction–diffusion equations
Two-component systems allow for a much larger range of possible phenomena than their one-component counterparts. An important idea that was first proposed by Alan Turing is that a state that is stable in the local system can become unstable in the presence of diffusion.
A linear stability analysis however shows that, when linearizing the general two-component system

\partial_t u = D_u \, \partial_x^2 u + F(u, v), \qquad \partial_t v = D_v \, \partial_x^2 v + G(u, v),
a plane wave perturbation

\tilde{q}_k(x, t) = \begin{pmatrix} \tilde{u}_k(t) \\ \tilde{v}_k(t) \end{pmatrix} e^{i k x}
of the stationary homogeneous solution will satisfy

\frac{\mathrm{d}}{\mathrm{d}t} \begin{pmatrix} \tilde{u}_k \\ \tilde{v}_k \end{pmatrix} = \left[ \begin{pmatrix} -k^2 D_u & 0 \\ 0 & -k^2 D_v \end{pmatrix} + R' \right] \begin{pmatrix} \tilde{u}_k \\ \tilde{v}_k \end{pmatrix},

where R' denotes the Jacobian of the reaction terms evaluated at the homogeneous solution.
Turing's idea can only be realized in four equivalence classes of systems characterized by the signs of the Jacobian R' of the reaction function. In particular, if a finite wave vector k is supposed to be the most unstable one, the Jacobian must have the signs

\begin{pmatrix} + & - \\ + & - \end{pmatrix}, \quad \begin{pmatrix} + & + \\ - & - \end{pmatrix}, \quad \begin{pmatrix} - & + \\ - & + \end{pmatrix}, \quad \begin{pmatrix} - & - \\ + & + \end{pmatrix}.
This class of systems is named activator-inhibitor system after its first representative: close to the ground state, one component stimulates the production of both components while the other one inhibits their growth. Its most prominent representative is the FitzHugh–Nagumo equation

\partial_t u = d_u^2 \, \nabla^2 u + f(u) - \sigma v, \qquad \tau \, \partial_t v = d_v^2 \, \nabla^2 v + u - v
with f(u) = \lambda u - u^3 - \kappa, which describes how an action potential travels through a nerve. Here, d_u, d_v, \tau, \sigma, and \lambda are positive constants.
When an activator-inhibitor system undergoes a change of parameters, one may pass from conditions under which a homogeneous ground state is stable to conditions under which it is linearly unstable. The corresponding bifurcation may be either a Hopf bifurcation to a globally oscillating homogeneous state with a dominant wave number or a Turing bifurcation to a globally patterned state with a dominant finite wave number. The latter in two spatial dimensions typically leads to stripe or hexagonal patterns.
For the Fitzhugh–Nagumo example, the neutral stability curves marking the boundary of the linearly stable region for the Turing and Hopf bifurcation are given by
If the bifurcation is subcritical, often localized structures (dissipative solitons) can be observed in the hysteretic region where the pattern coexists with the ground state. Other frequently encountered structures comprise pulse trains (also known as periodic travelling waves), spiral waves and target patterns. These three solution types are also generic features of two- (or more-) component reaction–diffusion equations in which the local dynamics have a stable limit cycle.
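A two-component system can be simulated with the same explicit approach used for the one-component sketch above. The example below uses the Gray–Scott model (see the external links), a two-component reaction–diffusion system known for spot and stripe patterns; the feed rate, kill rate, diffusion coefficients and time step follow a commonly used pixel-scale parameterization and are assumptions for illustration only.

```python
import numpy as np

# Gray-Scott two-component reaction-diffusion model on a periodic 2-D grid.
#   du/dt = Du lap(u) - u v^2 + F (1 - u)
#   dv/dt = Dv lap(v) + u v^2 - (F + k) v
# Parameter values are illustrative assumptions (a commonly used pixel-scale set).
n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50   # a small square perturbation
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25   # seeds the pattern-forming instability

Du, Dv, F, k, dt = 0.16, 0.08, 0.060, 0.062, 1.0

def laplacian(Z):
    """Five-point Laplacian with periodic boundaries (grid spacing 1)."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

for _ in range(10000):
    uvv = U * V * V                              # local reaction term u v^2
    U += dt * (Du * laplacian(U) - uvv + F * (1.0 - U))
    V += dt * (Dv * laplacian(V) + uvv - (F + k) * V)

# U and V now hold a self-organized pattern; plot U (e.g. with matplotlib) to see it.
print("u range:", U.min(), U.max())
```

With these assumed values the homogeneous state is destabilized by the small perturbation and the system self-organizes into localized and periodic structures of the kind discussed above.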
Three- and more-component reaction–diffusion equations
For a variety of systems, reaction–diffusion equations with more than two components have been proposed, e.g. the Belousov–Zhabotinsky reaction, for blood clotting, fission waves or planar gas discharge systems.
It is known that systems with more components allow for a variety of phenomena not possible in systems with one or two components (e.g. stable running pulses in more than one spatial dimension without global feedback). An introduction and systematic overview of the possible phenomena in dependence on the properties of the underlying system is given in the literature.
Applications and universality
In recent times, reaction–diffusion systems have attracted much interest as a prototype model for pattern formation. The above-mentioned patterns (fronts, spirals, targets, hexagons, stripes and dissipative solitons) can be found in various types of reaction–diffusion systems in spite of large discrepancies e.g. in the local reaction terms. It has also been argued that reaction–diffusion processes are an essential basis for processes connected to morphogenesis in biology and may even be related to animal coats and skin pigmentation. Other applications of reaction–diffusion equations include ecological invasions, spread of epidemics, tumour growth, dynamics of fission waves, wound healing and visual hallucinations. Another reason for the interest in reaction–diffusion systems is that although they are nonlinear partial differential equations, there are often possibilities for an analytical treatment.
Experiments
Well-controllable experiments in chemical reaction–diffusion systems have up to now been realized in three ways. First, gel reactors or filled capillary tubes may be used. Second, temperature pulses on catalytic surfaces have been investigated. Third, the propagation of running nerve pulses is modelled using reaction–diffusion systems.
Aside from these generic examples, it has turned out that under appropriate circumstances electric transport systems like plasmas or semiconductors can be described in a reaction–diffusion approach. For these systems various experiments on pattern formation have been carried out.
Numerical treatments
A reaction–diffusion system can be solved by using methods of numerical mathematics, and several numerical treatments exist in the research literature. Numerical solution methods have also been proposed for complex geometries. At the highest degree of detail, reaction–diffusion systems are described with particle-based simulation tools like SRSim or ReaDDy, which employ, for example, reversible interacting-particle reaction dynamics.
See also
Autowave
Diffusion-controlled reaction
Chemical kinetics
Phase space method
Autocatalytic reactions and order creation
Pattern formation
Patterns in nature
Periodic travelling wave
Stochastic geometry
MClone
The Chemical Basis of Morphogenesis
Turing pattern
Multi-state modeling of biomolecules
Examples
Fisher's equation
Zeldovich–Frank-Kamenetskii equation
FitzHugh–Nagumo model
Wrinkle paint
References
External links
Reaction–Diffusion by the Gray–Scott Model: Pearson's parameterization a visual map of the parameter space of Gray–Scott reaction diffusion.
A thesis on reaction–diffusion patterns with an overview of the field
RD Tool: an interactive web application for reaction-diffusion simulation
Mathematical modeling
Parabolic partial differential equations
Reaction mechanisms
Computational mathematics
Computational mathematics is the study of the interaction between mathematics and calculations done by a computer.
A large part of computational mathematics consists of using mathematics to enable and improve computer computation in areas of science and engineering where mathematics is useful. This involves in particular algorithm design, computational complexity, numerical methods and computer algebra.
Computational mathematics refers also to the use of computers for mathematics itself. This includes mathematical experimentation for establishing conjectures (particularly in number theory), the use of computers for proving theorems (for example the four color theorem), and the design and use of proof assistants.
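As a small illustration of computer-assisted mathematical experimentation in number theory (the conjecture and the range checked are chosen purely for illustration), the following sketch searches for a counterexample to Goldbach's conjecture among small even numbers; finding none is evidence for the conjecture in that range, not a proof.

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; adequate for the small numbers used here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_witness(n: int):
    """Return a pair of primes summing to the even number n, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

# Experimental check of Goldbach's conjecture for even numbers up to 10,000.
for n in range(4, 10_001, 2):
    assert goldbach_witness(n) is not None, f"possible counterexample at {n}"
print("every even number from 4 to 10000 is a sum of two primes")
```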
Areas of computational mathematics
Computational mathematics emerged as a distinct part of applied mathematics by the early 1950s. Currently, computational mathematics can refer to or include:
Computational sciences, also known as scientific computation or computational engineering
Systems sciences, which directly require the mathematical models from systems engineering
Solving mathematical problems by computer simulation as opposed to traditional engineering methods.
Numerical methods used in scientific computation, for example numerical linear algebra and numerical solution of partial differential equations
Stochastic methods, such as Monte Carlo methods and other representations of uncertainty in scientific computation
The mathematics of scientific computation, in particular numerical analysis, the theory of numerical methods
Computational complexity
Computer algebra and computer algebra systems
Computer-assisted research in various areas of mathematics, such as logic (automated theorem proving), discrete mathematics, combinatorics, number theory, and computational algebraic topology
Cryptography and computer security, which involve, in particular, research on primality testing, factorization, elliptic curves, and mathematics of blockchain
Computational linguistics, the use of mathematical and computer techniques in natural languages
Computational algebraic geometry
Computational group theory
Computational geometry
Computational number theory
Computational topology
Computational statistics
Algorithmic information theory
Algorithmic game theory
Mathematical economics, the use of mathematics in economics, finance and, to some extent, accounting.
Experimental mathematics
See also
References
Further reading
External links
Foundations of Computational Mathematics, a non-profit organization
International Journal of Computer Discovered Mathematics
Applied mathematics
Computational science
Facilitated diffusion
Facilitated diffusion (also known as facilitated transport or passive-mediated transport) is the process of spontaneous passive transport (as opposed to active transport) of molecules or ions across a biological membrane via specific transmembrane integral proteins. Being passive, facilitated transport does not directly require chemical energy from ATP hydrolysis in the transport step itself; rather, molecules and ions move down their concentration gradient according to the principles of diffusion.
Facilitated diffusion differs from simple diffusion in several ways:
The transport relies on molecular binding between the cargo and the membrane-embedded channel or carrier protein.
The rate of facilitated diffusion is saturable with respect to the concentration difference between the two phases, unlike free diffusion, which is linear in the concentration difference (the two behaviours are contrasted in the sketch after this list).
The temperature dependence of facilitated transport is substantially different due to the presence of an activated binding event, as compared to free diffusion where the dependence on temperature is mild.
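The contrast between linear free diffusion and saturable facilitated transport mentioned in the list above can be made concrete with a short sketch. The permeability, maximal flux and half-saturation constant below are arbitrary illustrative assumptions, and the Michaelis–Menten-like form is only a common simplification of carrier kinetics.

```python
# Free diffusion versus carrier-mediated (facilitated) diffusion.
# All parameter values are arbitrary illustrative assumptions.

def free_diffusion_flux(dC, permeability=1.0):
    """Fick-type flux: proportional to the concentration difference dC."""
    return permeability * dC

def facilitated_flux(dC, J_max=10.0, K_m=5.0):
    """Saturable carrier flux, Michaelis-Menten-like in the concentration difference."""
    return J_max * dC / (K_m + dC)

for dC in (0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"dC = {dC:7.1f}   free = {free_diffusion_flux(dC):8.1f}   "
          f"facilitated = {facilitated_flux(dC):6.2f}")

# The free-diffusion flux grows without bound with dC, while the carrier-mediated
# flux levels off near J_max as the finite number of carriers saturates.
```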
Polar molecules and large ions dissolved in water cannot diffuse freely across the plasma membrane due to the hydrophobic nature of the fatty acid tails of the phospholipids that comprise the lipid bilayer. Only small, non-polar molecules, such as oxygen and carbon dioxide, can diffuse easily across the membrane. Hence, small polar molecules are transported by proteins in the form of transmembrane channels. These channels are gated, meaning that they open and close, and thus regulate the flow of ions or small polar molecules across membranes, sometimes against the osmotic gradient. Larger molecules are transported by transmembrane carrier proteins, such as permeases, that change their conformation as the molecules are carried across (e.g. glucose or amino acids).
Non-polar molecules, such as retinol or lipids, are poorly soluble in water. They are transported through aqueous compartments of cells or through extracellular space by water-soluble carriers (e.g. retinol binding protein). The metabolites are not altered because no energy is required for facilitated diffusion. Only permease changes its shape in order to transport metabolites. The form of transport through a cell membrane in which a metabolite is modified is called group translocation transportation.
Glucose, sodium ions, and chloride ions are just a few examples of molecules and ions that must efficiently cross the plasma membrane but to which the lipid bilayer of the membrane is virtually impermeable. Their transport must therefore be "facilitated" by proteins that span the membrane and provide an alternative route or bypass mechanism. Some examples of proteins that mediate this process are glucose transporters, organic cation transport proteins, urea transporter, monocarboxylate transporter 8 and monocarboxylate transporter 10.
In vivo model of facilitated diffusion
Many physical and biochemical processes are regulated by diffusion. Facilitated diffusion is one form of diffusion and it is important in several metabolic processes. Facilitated diffusion is the main mechanism behind the binding of Transcription Factors (TFs) to designated target sites on the DNA molecule. The in vitro model, which is a very well known method of facilitated diffusion, that takes place outside of a living cell, explains the 3-dimensional pattern of diffusion in the cytosol and the 1-dimensional diffusion along the DNA contour. After carrying out extensive research on processes occurring out of the cell, this mechanism was generally accepted but there was a need to verify that this mechanism could take place in vivo or inside of living cells. Bauer & Metzler (2013) therefore carried out an experiment using a bacterial genome in which they investigated the average time for TF – DNA binding to occur. After analyzing the process for the time it takes for TF's to diffuse across the contour and cytoplasm of the bacteria's DNA, it was concluded that in vitro and in vivo are similar in that the association and dissociation rates of TF's to and from the DNA are similar in both. Also, on the DNA contour, the motion is slower and target sites are easy to localize while in the cytoplasm, the motion is faster but the TF's are not sensitive to their targets and so binding is restricted.
Intracellular facilitated diffusion
Single-molecule imaging is an imaging technique which provides an ideal resolution necessary for the study of the Transcription factor binding mechanism in living cells. In prokaryotic bacteria cells such as E. coli, facilitated diffusion is required in order for regulatory proteins to locate and bind to target sites on DNA base pairs. There are two main steps involved: the protein binds to a non-specific site on the DNA and then it diffuses along the DNA chain until it locates a target site, a process referred to as sliding. According to Brackley et al. (2013), during the process of protein sliding, the protein searches the entire length of the DNA chain using 3-D and 1-D diffusion patterns. During 3-D diffusion, the high incidence of Crowder proteins creates an osmotic pressure which brings searcher proteins (e.g. Lac Repressor) closer to the DNA to increase their attraction and enable them to bind, as well as a steric effect which excludes the Crowder proteins from this region (the Lac operator region). Blocker proteins participate in 1-D diffusion only, i.e. they bind to and diffuse along the DNA contour and not in the cytosol.
Facilitated diffusion of proteins on Chromatin
The in vivo model mentioned above clearly explains 3-D and 1-D diffusion along the DNA strand and the binding of proteins to target sites on the chain. Just like prokaryotic cells, in eukaryotes, facilitated diffusion occurs in the nucleoplasm on chromatin filaments, accounted for by the switching dynamics of a protein when it is either bound to a chromatin thread or when freely diffusing in the nucleoplasm. In addition, given that the chromatin molecule is fragmented, its fractal properties need to be considered. After calculating the search time for a target protein, alternating between the 3-D and 1-D diffusion phases on the chromatin fractal structure, it was deduced that facilitated diffusion in eukaryotes precipitates the searching process and minimizes the searching time by increasing the DNA-protein affinity.
For oxygen
The oxygen affinity with hemoglobin on red blood cell surfaces enhances this bonding ability. In a system of facilitated diffusion of oxygen, there is a tight relationship between the ligand which is oxygen and the carrier which is either hemoglobin or myoglobin. This mechanism of facilitated diffusion of oxygen by hemoglobin or myoglobin was discovered and initiated by Wittenberg and Scholander. They carried out experiments to test for the steady-state of diffusion of oxygen at various pressures. Oxygen-facilitated diffusion occurs in a homogeneous environment where oxygen pressure can be relatively controlled.
For oxygen diffusion to occur, there must be a full saturation pressure (more) on one side of the membrane and full reduced pressure (less) on the other side of the membrane i.e. one side of the membrane must be of higher concentration. During facilitated diffusion, hemoglobin increases the rate of constant diffusion of oxygen and facilitated diffusion occurs when oxyhemoglobin molecule is randomly displaced.
For carbon monoxide
Facilitated diffusion of carbon monoxide is similar to that of oxygen. Carbon monoxide also combines with hemoglobin and myoglobin, but carbon monoxide has a dissociation velocity that is 100 times less than that of oxygen. Its affinity for myoglobin is 40 times higher and 250 times higher for hemoglobin, compared to oxygen.
For glucose
Since glucose is a large molecule, its diffusion across a membrane is difficult. Hence, it diffuses across membranes through facilitated diffusion, down the concentration gradient. The carrier protein at the membrane binds to the glucose and alters its shape such that it can easily be transported. Movement of glucose into the cell could be rapid or slow depending on the number of membrane-spanning proteins. It is transported against the concentration gradient by a sodium-dependent glucose symporter, which provides a driving force to other glucose molecules in the cells. Facilitated diffusion helps in the release of accumulated glucose into the extracellular space adjacent to the blood capillary.
See also
Transmembrane channels
Major facilitator superfamily
References
External links
Facilitated Diffusion - Description and Animation
Facilitated Diffusion- Definition and Supplement
Diffusion
Transport proteins
Biological half-life
Biological half-life (elimination half-life, pharmacological half-life) is the time taken for the concentration of a biological substance (such as a medication) to decrease from its maximum concentration (Cmax) to half of Cmax in the blood plasma. It is denoted by the abbreviation t_{1/2}.
This is used to measure the removal of things such as metabolites, drugs, and signalling molecules from the body. Typically, the biological half-life refers to the body's natural detoxification (cleansing) through liver metabolism and through the excretion of the measured substance through the kidneys and intestines. This concept is used when the rate of removal is roughly exponential.
In a medical context, half-life explicitly describes the time it takes for the blood plasma concentration of a substance to fall to half of its steady-state value when circulating in the full blood of an organism (the plasma half-life). This measurement is useful in medicine, pharmacology and pharmacokinetics because it helps determine how much of a drug needs to be taken and how frequently it needs to be taken if a certain average amount is needed constantly. By contrast, the stability of a substance in plasma is described as plasma stability. This is essential to ensure accurate analysis of drugs in plasma and for drug discovery.
The relationship between the biological and plasma half-lives of a substance can be complex depending on the substance in question, due to factors including accumulation in tissues, protein binding, active metabolites, and receptor interactions.
Examples
Water
The biological half-life of water in a human is about 7 to 14 days. It can be altered by behavior. Drinking large amounts of alcohol will reduce the biological half-life of water in the body. This has been used to decontaminate patients who are internally contaminated with tritiated water. The basis of this decontamination method is to increase the rate at which the water in the body is replaced with new water.
Alcohol
The removal of ethanol (drinking alcohol) through oxidation by alcohol dehydrogenase in the liver from the human body is limited. Hence the removal of a large concentration of alcohol from blood may follow zero-order kinetics. Also the rate-limiting steps for one substance may be in common with other substances. For instance, the blood alcohol concentration can be used to modify the biochemistry of methanol and ethylene glycol. In this way the oxidation of methanol to the toxic formaldehyde and formic acid in the human body can be prevented by giving an appropriate amount of ethanol to a person who has ingested methanol. Methanol is very toxic and causes blindness and death. A person who has ingested ethylene glycol can be treated in the same way. Half-life is also relative to the metabolic rate of the individual in question.
Common prescription medications
Metals
The biological half-life of caesium in humans is between one and four months. This can be shortened by feeding the person prussian blue. The prussian blue in the digestive system acts as a solid ion exchanger which absorbs the caesium while releasing potassium ions.
For some substances, it is important to think of the human or animal body as being made up of several parts, each with their own affinity for the substance, and each part with a different biological half-life (physiologically-based pharmacokinetic modelling). Attempts to remove a substance from the whole organism may have the effect of increasing the burden present in one part of the organism. For instance, if a person who is contaminated with lead is given EDTA in a chelation therapy, then while the rate at which lead is lost from the body will be increased, the lead within the body tends to relocate into the brain where it can do the most harm.
Polonium in the body has a biological half-life of about 30 to 50 days.
Caesium in the body has a biological half-life of about one to four months.
Mercury (as methylmercury) in the body has a half-life of about 65 days.
Lead in the blood has a half-life of 28–36 days.
Lead in bone has a biological half-life of about ten years.
Cadmium in bone has a biological half-life of about 30 years.
Plutonium in bone has a biological half-life of about 100 years.
Plutonium in the liver has a biological half-life of about 40 years.
Peripheral half-life
Some substances may have different half-lives in different parts of the body. For example, oxytocin has a half-life of typically about three minutes in the blood when given intravenously. Peripherally administered (e.g. intravenous) peptides like oxytocin cross the blood-brain-barrier very poorly, although very small amounts (< 1%) do appear to enter the central nervous system in humans when given via this route. In contrast to peripheral administration, when administered intranasally via a nasal spray, oxytocin reliably crosses the blood–brain barrier and exhibits psychoactive effects in humans. In addition, unlike the case of peripheral administration, intranasal oxytocin has a central duration of at least 2.25 hours and as long as 4 hours. In likely relation to this fact, endogenous oxytocin concentrations in the brain have been found to be as much as 1000-fold higher than peripheral levels.
Rate equations
First-order elimination
Half-times apply to processes where the elimination rate is exponential. If C(t) is the concentration of a substance at time t, its time dependence is given by

C(t) = C_0 e^{-kt},
where k is the reaction rate constant. Such a decay rate arises from a first-order reaction where the rate of elimination is proportional to the amount of the substance:

\frac{\mathrm{d}C}{\mathrm{d}t} = -kC.
The half-life for this process is

t_{1/2} = \frac{\ln 2}{k}.
Alternatively, half-life is given by

t_{1/2} = \frac{\ln 2}{\lambda_z},
where λz is the slope of the terminal phase of the time–concentration curve for the substance on a semilogarithmic scale.
Half-life is determined by clearance (CL) and volume of distribution (VD) and the relationship is described by the following equation:

t_{1/2} = \frac{\ln 2 \cdot V_\mathrm{D}}{CL}
In clinical practice, this means that it takes 4 to 5 times the half-life for a drug's serum concentration to reach steady state after regular dosing is started, stopped, or the dose changed. So, for example, digoxin has a half-life (or t_{1/2}) of 24–36 h; this means that a change in the dose will take the best part of a week to take full effect. For this reason, drugs with a long half-life (e.g., amiodarone, elimination t_{1/2} of about 58 days) are usually started with a loading dose to achieve their desired clinical effect more quickly.
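A short sketch of the relationships in this section, computing the half-life from clearance and volume of distribution and the approach to steady state under repeated dosing. The parameter values describe a hypothetical drug and are illustrative assumptions only.

```python
import math

# Hypothetical (assumed) pharmacokinetic parameters.
CL = 7.0      # clearance, L/h
Vd = 250.0    # volume of distribution, L
tau = 24.0    # dosing interval, h

t_half = math.log(2) * Vd / CL    # t1/2 = ln(2) * Vd / CL
k = math.log(2) / t_half          # first-order elimination rate constant, 1/h

print(f"half-life = {t_half:.1f} h")

# Fraction of the eventual steady-state level reached after n doses,
# for a one-compartment model with first-order elimination: 1 - exp(-n k tau).
for n_doses in (1, 2, 4, 7, 10):
    fraction = 1.0 - math.exp(-n_doses * k * tau)
    print(f"after {n_doses:2d} doses: {100 * fraction:5.1f}% of steady state")

# Consistent with the text, about 4 to 5 half-lives of regular dosing are needed
# to come within a few percent of steady state.
```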
Biphasic half-life
Many drugs follow a biphasic elimination curve — first a steep slope then a shallow slope:
STEEP (initial) part of curve —> initial distribution of the drug in the body.
SHALLOW part of curve —> ultimate excretion of drug, which is dependent on the release of the drug from tissue compartments into the blood.
The longer half-life is called the terminal half-life and the half-life of the largest component is called the dominant half-life. For a more detailed description see Pharmacokinetics § Multi-compartmental models.
See also
Half-life, pertaining to the general mathematical concept in physics or pharmacology.
Effective half-life
References
Pharmacokinetics
Mathematics in medicine
Temporal exponentials
Content analysis
Content analysis is the study of documents and communication artifacts, which might be texts of various formats, pictures, audio or video. Social scientists use content analysis to examine patterns in communication in a replicable and systematic manner. One of the key advantages of using content analysis to analyse social phenomena is its non-invasive nature, in contrast to simulating social experiences or collecting survey answers.
Practices and philosophies of content analysis vary between academic disciplines. They all involve systematic reading or observation of texts or artifacts which are assigned labels (sometimes called codes) to indicate the presence of interesting, meaningful pieces of content. By systematically labeling the content of a set of texts, researchers can analyse patterns of content quantitatively using statistical methods, or use qualitative methods to analyse meanings of content within texts.
Computers are increasingly used in content analysis to automate the labeling (or coding) of documents. Simple computational techniques can provide descriptive data such as word frequencies and document lengths. Machine learning classifiers can greatly increase the number of texts that can be labeled, but the scientific utility of doing so is a matter of debate. Further, numerous computer-aided text analysis (CATA) computer programs are available that analyze text for predetermined linguistic, semantic, and psychological characteristics.
Goals
Content analysis is best understood as a broad family of techniques. Effective researchers choose techniques that best help them answer their substantive questions. That said, according to Klaus Krippendorff, six questions must be addressed in every content analysis:
Which data are analyzed?
How are the data defined?
From what population are data drawn?
What is the relevant context?
What are the boundaries of the analysis?
What is to be measured?
The simplest and most objective form of content analysis considers unambiguous characteristics of the text such as word frequencies, the page area taken by a newspaper column, or the duration of a radio or television program. Analysis of simple word frequencies is limited because the meaning of a word depends on surrounding text. Key Word In Context (KWIC) routines address this by placing words in their textual context. This helps resolve ambiguities such as those introduced by synonyms and homonyms.
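The two simplest operations described above, word-frequency counting and a Key Word In Context listing, can be sketched in a few lines; the sample text, tokenization rule and context window below are illustrative choices, not prescriptions.

```python
import re
from collections import Counter

text = ("Content analysis studies texts. Texts are coded, and coded texts "
        "can be counted. Counting words is the simplest form of analysis.")

tokens = re.findall(r"[a-z]+", text.lower())

# Word frequencies: the most unambiguous, manifest characteristic of the text.
print(Counter(tokens).most_common(5))

def kwic(tokens, keyword, window=3):
    """Key Word In Context: each occurrence of keyword with its neighbouring words."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left:>25} [{keyword}] {right}")
    return lines

for line in kwic(tokens, "texts"):
    print(line)
```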
A further step in analysis is the distinction between dictionary-based (quantitative) approaches and qualitative approaches. Dictionary-based approaches set up a list of categories derived from the frequency list of words and control the distribution of words and their respective categories over the texts. While methods in quantitative content analysis in this way transform observations of found categories into quantitative statistical data, the qualitative content analysis focuses more on the intentionality and its implications. There are strong parallels between qualitative content analysis and thematic analysis.
Qualitative and quantitative content analysis
Quantitative content analysis highlights frequency counts and statistical analysis of these coded frequencies. Additionally, quantitative content analysis begins with a framed hypothesis with coding decided on before the analysis begins. These coding categories are strictly relevant to the researcher's hypothesis. Quantitative analysis also takes a deductive approach. Examples of content-analytical variables and constructs can be found, for example, in the open-access database DOCA. This database compiles, systematizes, and evaluates relevant content-analytical variables of communication and political science research areas and topics.
Siegfried Kracauer provides a critique of quantitative analysis, asserting that it oversimplifies complex communications in order to be more reliable. On the other hand, qualitative analysis deals with the intricacies of latent interpretations, whereas quantitative has a focus on manifest meanings. He also acknowledges an "overlap" of qualitative and quantitative content analysis. Patterns are looked at more closely in qualitative analysis, and based on the latent meanings that the researcher may find, the course of the research could be changed. It is inductive and begins with open research questions, as opposed to a hypothesis.
Codebooks
The data collection instrument used in content analysis is the codebook or coding scheme. In qualitative content analysis the codebook is constructed and improved during coding, while in quantitative content analysis the codebook needs to be developed and pretested for reliability and validity before coding. The codebook includes detailed instructions for human coders plus clear definitions of the respective concepts or variables to be coded plus the assigned values.
According to current standards of good scientific practice, each content analysis study should provide their codebook in the appendix or as supplementary material so that reproducibility of the study is ensured. On the Open Science Framework (OSF) server of the Center for Open Science a lot of codebooks of content analysis studies are freely available via search for "codebook".
Furthermore, the Database of Variables for Content Analysis (DOCA) provides an open access archive of pretested variables and established codebooks for content analyses. Measures from the archive can be adopted in future studies to ensure the use of high-quality and comparable instruments. DOCA covers, among others, measures for the content analysis of fictional media and entertainment (e.g., measures for sexualization in video games), of user-generated media content (e.g., measures for online hate speech), and of news media and journalism (e.g., measures for stock photo use in press reporting on child sexual abuse, and measures of personalization in election campaign coverage).
Computational tools
With the rise of common computing facilities like PCs, computer-based methods of analysis are growing in popularity. Answers to open ended questions, newspaper articles, political party manifestos, medical records or systematic observations in experiments can all be subject to systematic analysis of textual data.
With the contents of communication available in the form of machine-readable texts, the input is analyzed for frequencies and coded into categories for building up inferences.
Computer-assisted analysis can help with large, electronic data sets by cutting out time and eliminating the need for multiple human coders to establish inter-coder reliability. However, human coders can still be employed for content analysis, as they are often more able to pick out nuanced and latent meanings in text. A study found that human coders were able to evaluate a broader range and make inferences based on latent meanings.
Reliability and Validity
Robert Weber notes: "To make valid inferences from the text, it is important that the classification procedure be reliable in the sense of being consistent: Different people should code the same text in the same way". Validity, inter-coder reliability and intra-coder reliability have been the subject of intense methodological research efforts over many years.
Neuendorf suggests that when human coders are used in content analysis at least two independent coders should be used. Reliability of human coding is often measured using a statistical measure of inter-coder reliability or "the amount of agreement or correspondence among two or more coders". Lacy and Riffe identify the measurement of inter-coder reliability as a strength of quantitative content analysis, arguing that, if content analysts do not measure inter-coder reliability, their data are no more reliable than the subjective impressions of a single reader.
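Inter-coder agreement statistics such as those mentioned here can be computed directly. The sketch below implements Cohen's kappa for two coders assigning nominal categories to the same units; the example codings are invented for illustration, and in practice a dedicated statistics package or a chance-corrected coefficient such as Krippendorff's alpha may be preferred.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders on nominal data."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1.0 - expected)

# Invented example: two coders label ten news items as positive, negative or neutral.
a = ["pos", "neg", "neu", "pos", "pos", "neg", "neu", "neu", "pos", "neg"]
b = ["pos", "neg", "neu", "pos", "neg", "neg", "neu", "pos", "pos", "neg"]
print(f"Cohen's kappa = {cohens_kappa(a, b):.2f}")
# About 0.70 here; 1.0 would indicate perfect agreement beyond chance.
```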
According to today's reporting standards, quantitative content analyses should be published with complete codebooks and for all variables or measures in the codebook the appropriate inter-coder or inter-rater reliability coefficients should be reported based on empirical pre-tests. Furthermore, the validity of all variables or measures in the codebook must be ensured. This can be achieved through the use of established measures that have proven their validity in earlier studies. Also, the content validity of the measures can be checked by experts from the field who scrutinize and then approve or correct coding instructions, definitions and examples in the codebook.
Kinds of text
There are five types of texts in content analysis:
written text, such as books and papers
oral text, such as speech and theatrical performance
iconic text, such as drawings, paintings, and icons
audio-visual text, such as TV programs, movies, and videos
hypertexts, which are texts found on the Internet
History
Content analysis is research using the categorization and classification of speech, written text, interviews, images, or other forms of communication. In its beginnings, using the first newspapers at the end of the 19th century, analysis was done manually by measuring the number of columns given a subject. The approach can also be traced back to a university student studying patterns in Shakespeare's literature in 1893.
Over the years, content analysis has been applied to a variety of scopes. Hermeneutics and philology have long used content analysis to interpret sacred and profane texts and, in many cases, to attribute texts' authorship and authenticity.
In recent times, particularly with the advent of mass communication, content analysis has known an increasing use to deeply analyze and understand media content and media logic.
The political scientist Harold Lasswell formulated the core questions of content analysis in its early-mid 20th-century mainstream version: "Who says what, to whom, why, to what extent and with what effect?". The strong emphasis for a quantitative approach started up by Lasswell was finally carried out by another "father" of content analysis, Bernard Berelson, who proposed a definition of content analysis which, from this point of view, is emblematic: "a research technique for the objective, systematic and quantitative description of the manifest content of communication".
Quantitative content analysis has enjoyed a renewed popularity in recent years thanks to technological advances and fruitful application in mass communication and personal communication research. Content analysis of textual big data produced by new media, particularly social media and mobile devices, has become popular. These approaches take a simplified view of language that ignores the complexity of semiosis, the process by which meaning is formed out of language. Quantitative content analysts have been criticized for limiting the scope of content analysis to simple counting, and for applying the measurement methodologies of the natural sciences without reflecting critically on their appropriateness to social science. Conversely, qualitative content analysts have been criticized for being insufficiently systematic and too impressionistic. Krippendorff argues that quantitative and qualitative approaches to content analysis tend to overlap, and that there can be no generalisable conclusion as to which approach is superior.
Content analysis can also be described as studying traces, which are documents from past times, and artifacts, which are non-linguistic documents. Texts are understood to be produced by communication processes in a broad sense of that phrase—often gaining meaning through abduction.
Latent and manifest content
Manifest content is readily understandable at its face value. Its meaning is direct. Latent content is not as overt, and requires interpretation to uncover the meaning or implication.
Uses
Holsti groups fifteen uses of content analysis into three basic categories:
make inferences about the antecedents of a communication
describe and make inferences about characteristics of a communication
make inferences about the effects of a communication.
He also places these uses into the context of the basic communication paradigm.
The following table shows fifteen uses of content analysis in terms of their general purpose, element of the communication paradigm to which they apply, and the general question they are intended to answer.
As a counterpoint, there are limits to the scope of use for the procedures that characterize content analysis. In particular, if access to the goal of analysis can be obtained by direct means without material interference, then direct measurement techniques yield better data. Thus, while content analysis attempts to quantifiably describe communications whose features are primarily categorical——limited usually to a nominal or ordinal scale——via selected conceptual units (the unitization) which are assigned values (the categorization) for enumeration while monitoring intercoder reliability, if instead the target quantity manifestly is already directly measurable——typically on an interval or ratio scale——especially a continuous physical quantity, then such targets usually are not listed among those needing the "subjective" selections and formulations of content analysis. For example (from mixed research and clinical application), as medical images communicate diagnostic features to physicians, neuroimaging's stroke (infarct) volume scale called ASPECTS is unitized as 10 qualitatively delineated (unequal) brain regions in the middle cerebral artery territory, which it categorizes as being at least partly versus not at all infarcted in order to enumerate the latter, with published series often assessing intercoder reliability by Cohen's kappa. The foregoing italicized operations impose the uncredited form of content analysis onto an estimation of infarct extent, which instead is easily enough and more accurately measured as a volume directly on the images. ("Accuracy ... is the highest form of reliability.") The concomitant clinical assessment, however, by the National Institutes of Health Stroke Scale (NIHSS) or the modified Rankin Scale (mRS), retains the necessary form of content analysis. Recognizing potential limits of content analysis across the contents of language and images alike, Klaus Krippendorff affirms that "comprehen[sion] ... may ... not conform at all to the process of classification and/or counting by which most content analyses proceed," suggesting that content analysis might materially distort a message.
The development of the initial coding scheme
The process of developing the initial coding scheme or approach to coding is contingent on the particular content analysis approach selected. In a directed content analysis, scholars draft a preliminary coding scheme from pre-existing theory or assumptions, while with the conventional content analysis approach the initial coding scheme is developed from the data.
The conventional process of coding
With either approach above, researchers are advised to immerse themselves in the data to obtain an overall picture. Furthermore, identifying a consistent and clear unit of coding is vital; researchers' choices range from a single word to several paragraphs, and from texts to iconic symbols. The last step is to construct relationships between codes by sorting them into specific categories or themes.
See also
Donald Wayne Foster
Hermeneutics
Text mining
The Polish Peasant in Europe and America
Transition words
Video content analysis
Grounded theory
References
Further reading
Quantitative research
Qualitative research
Hermeneutics
Rasch model
The Rasch model, named after Georg Rasch, is a psychometric model for analyzing categorical data, such as answers to questions on a reading assessment or questionnaire responses, as a function of the trade-off between the respondent's abilities, attitudes, or personality traits, and the item difficulty. For example, it may be used to estimate a student's reading ability or the extremity of a person's attitude to capital punishment from responses on a questionnaire. In addition to psychometrics and educational research, the Rasch model and its extensions are used in other areas, including the health profession, agriculture, and market research.
The mathematical theory underlying Rasch models is a special case of item response theory. However, there are important differences in the interpretation of the model parameters and its philosophical implications that separate proponents of the Rasch model from the item response modeling tradition. A central aspect of this divide relates to the role of specific objectivity, a defining property of the Rasch model according to Georg Rasch, as a requirement for successful measurement.
Overview
The Rasch model for measurement
In the Rasch model, the probability of a specified response (e.g. right/wrong answer) is modeled as a function of person and item parameters. Specifically, in the original Rasch model, the probability of a correct response is modeled as a logistic function of the difference between the person and item parameter. The mathematical form of the model is provided later in this article. In most contexts, the parameters of the model characterize the proficiency of the respondents and the difficulty of the items as locations on a continuous latent variable. For example, in educational tests, item parameters represent the difficulty of items while person parameters represent the ability or attainment level of people who are assessed. The higher a person's ability relative to the difficulty of an item, the higher the probability of a correct response on that item. When a person's location on the latent trait is equal to the difficulty of the item, there is by definition a 0.5 probability of a correct response in the Rasch model.
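For reference, the dichotomous Rasch model takes the standard logistic form \Pr(X_{ni} = 1) = \frac{e^{\beta_n - \delta_i}}{1 + e^{\beta_n - \delta_i}}, where \beta_n is the location (ability) of person n and \delta_i the location (difficulty) of item i on the latent continuum. The short sketch below simply evaluates this probability at a few illustrative values.

```python
import math

def rasch_probability(beta: float, delta: float) -> float:
    """Probability of a correct response under the dichotomous Rasch model,
    for person ability beta and item difficulty delta (both in logits)."""
    return math.exp(beta - delta) / (1.0 + math.exp(beta - delta))

# Illustrative values: one item of difficulty 0.0 logits, three person abilities.
for beta in (-1.0, 0.0, 2.0):
    print(f"ability {beta:+.1f} logits: P(correct) = {rasch_probability(beta, 0.0):.2f}")

# When ability equals difficulty (beta = delta) the probability is exactly 0.5,
# matching the interpretation of item locations given later in the article.
```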
A Rasch model is a model in one sense in that it represents the structure which data should exhibit in order to obtain measurements from the data; i.e. it provides a criterion for successful measurement. Beyond data, Rasch's equations model relationships we expect to obtain in the real world. For instance, education is intended to prepare children for the entire range of challenges they will face in life, and not just those that appear in textbooks or on tests. By requiring measures to remain the same (invariant) across different tests measuring the same thing, Rasch models make it possible to test the hypothesis that the particular challenges posed in a curriculum and on a test coherently represent the infinite population of all possible challenges in that domain. A Rasch model is therefore a model in the sense of an ideal or standard that provides a heuristic fiction serving as a useful organizing principle even when it is never actually observed in practice.
The perspective or paradigm underpinning the Rasch model is distinct from the perspective underpinning statistical modelling. Models are most often used with the intention of describing a set of data. Parameters are modified and accepted or rejected based on how well they fit the data. In contrast, when the Rasch model is employed, the objective is to obtain data which fit the model. The rationale for this perspective is that the Rasch model embodies requirements which must be met in order to obtain measurement, in the sense that measurement is generally understood in the physical sciences.
A useful analogy for understanding this rationale is to consider objects measured on a weighing scale. Suppose the weight of an object A is measured as being substantially greater than the weight of an object B on one occasion, then immediately afterward the weight of object B is measured as being substantially greater than the weight of object A. A property we require of measurements is that the resulting comparison between objects should be the same, or invariant, irrespective of other factors. This key requirement is embodied within the formal structure of the Rasch model. Consequently, the Rasch model is not altered to suit data. Instead, the method of assessment should be changed so that this requirement is met, in the same way that a weighing scale should be rectified if it gives different comparisons between objects upon separate measurements of the objects.
Data analysed using the model are usually responses to conventional items on tests, such as educational tests with right/wrong answers. However, the model is a general one, and can be applied wherever discrete data are obtained with the intention of measuring a quantitative attribute or trait.
Scaling
When all test-takers have an opportunity to attempt all items on a single test, each total score on the test maps to a unique estimate of ability and the greater the total, the greater the ability estimate. Total scores do not have a linear relationship with ability estimates. Rather, the relationship is non-linear as shown in Figure 1. The total score is shown on the vertical axis, while the corresponding person location estimate is shown on the horizontal axis. For the particular test on which the test characteristic curve (TCC) shown in Figure 1 is based, the relationship is approximately linear throughout the range of total scores from about 13 to 31. The shape of the TCC is generally somewhat sigmoid as in this example. However, the precise relationship between total scores and person location estimates depends on the distribution of items on the test. The TCC is steeper in ranges on the continuum in which there are more items, such as in the range on either side of 0 in Figures 1 and 2.
In applying the Rasch model, item locations are often scaled first, based on methods such as those described below. This part of the process of scaling is often referred to as item calibration. In educational tests, the smaller the proportion of correct responses, the higher the difficulty of an item and hence the higher the item's scale location. Once item locations are scaled, the person locations are measured on the scale. As a result, person and item locations are estimated on a single scale as shown in Figure 2.
Interpreting scale locations
For dichotomous data such as right/wrong answers, by definition, the location of an item on a scale corresponds with the person location at which there is a 0.5 probability of a correct response to the question. In general, the probability of a person responding correctly to a question with difficulty lower than that person's location is greater than 0.5, while the probability of responding correctly to a question with difficulty greater than the person's location is less than 0.5. The Item Characteristic Curve (ICC) or Item Response Function (IRF) shows the probability of a correct response as a function of the ability of persons. A single ICC is shown and explained in more detail in relation to Figure 4 in this article (see also the item response function). The leftmost ICCs in Figure 3 are the easiest items, the rightmost ICCs in the same figure are the most difficult items.
When responses of a person are sorted according to item difficulty, from lowest to highest, the most likely pattern is a Guttman pattern or vector; i.e. {1,1,...,1,0,0,0,...,0}. However, while this pattern is the most probable given the structure of the Rasch model, the model requires only probabilistic Guttman response patterns; that is, patterns which tend toward the Guttman pattern. It is unusual for responses to conform strictly to the pattern because there are many possible patterns. It is unnecessary for responses to conform strictly to the pattern in order for data to fit the Rasch model.
Each ability estimate has an associated standard error of measurement, which quantifies the degree of uncertainty associated with the ability estimate. Item estimates also have standard errors. Generally, the standard errors of item estimates are considerably smaller than the standard errors of person estimates because there are usually more response data for an item than for a person. That is, the number of people attempting a given item is usually greater than the number of items attempted by a given person. Standard errors of person estimates are smaller where the slope of the ICC is steeper, which is generally through the middle range of scores on a test. Thus, there is greater precision in this range since the steeper the slope, the greater the distinction between any two points on the line.
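A minimal sketch of this point, assuming a hypothetical set of item difficulties: the standard error of a person estimate can be approximated as the reciprocal square root of the test information, where each item contributes p(1 − p) under the Rasch model, so precision is greatest where the ICCs are steep.

```python
import math

def rasch_p(beta, delta):
    return 1.0 / (1.0 + math.exp(-(beta - delta)))

def person_sem(beta, deltas):
    """Approximate standard error of a person estimate: 1/sqrt(test information),
    where the Rasch information contributed by each item is p(1 - p)."""
    info = sum(rasch_p(beta, d) * (1.0 - rasch_p(beta, d)) for d in deltas)
    return 1.0 / math.sqrt(info)

deltas = [-2.0, -1.0, 0.0, 1.0, 2.0] * 4   # hypothetical 20-item test
for beta in (-3.0, 0.0, 3.0):
    # Smaller standard errors are obtained near the middle of the item range.
    print(beta, round(person_sem(beta, deltas), 2))
```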
Statistical and graphical tests are used to evaluate the correspondence of data with the model. Certain tests are global, while others focus on specific items or people. Certain tests of fit provide information about which items can be used to increase the reliability of a test by omitting or correcting problems with poor items. In Rasch measurement the person separation index is used instead of reliability indices. However, the person separation index is analogous to a reliability index. The separation index summarises the genuine (error-free) spread of person measures as a ratio to the spread that includes measurement error. As mentioned earlier, the level of measurement error is not uniform across the range of a test, but is generally larger for more extreme scores (low and high).
Features of the Rasch model
The class of models is named after Georg Rasch, a Danish mathematician and statistician who advanced the epistemological case for the models based on their congruence with a core requirement of measurement in physics; namely the requirement of invariant comparison. This is the defining feature of the class of models, as is elaborated upon in the following section. The Rasch model for dichotomous data has a close conceptual relationship to the law of comparative judgment (LCJ), a model formulated and used extensively by L. L. Thurstone, and therefore also to the Thurstone scale.
Prior to introducing the measurement model he is best known for, Rasch had applied the Poisson distribution to reading data as a measurement model, hypothesizing that in the relevant empirical context, the number of errors made by a given individual was governed by the ratio of the text difficulty to the person's reading ability. Rasch referred to this model as the multiplicative Poisson model. Rasch's model for dichotomous data – i.e. where responses are classifiable into two categories – is his most widely known and used model, and is the main focus here. This model has the form of a simple logistic function.
The brief outline above highlights certain distinctive and interrelated features of Rasch's perspective on social measurement, which are as follows:
He was concerned principally with the measurement of individuals, rather than with distributions among populations.
He was concerned with establishing a basis for meeting a priori requirements for measurement deduced from physics and, consequently, did not invoke any assumptions about the distribution of levels of a trait in a population.
Rasch's approach explicitly recognizes that it is a scientific hypothesis that a given trait is both quantitative and measurable, as operationalized in a particular experimental context.
Thus, congruent with the perspective articulated by Thomas Kuhn in his 1961 paper The function of measurement in modern physical science, measurement was regarded both as being founded in theory, and as being instrumental to detecting quantitative anomalies incongruent with hypotheses related to a broader theoretical framework. This perspective is in contrast to that generally prevailing in the social sciences, in which data such as test scores are directly treated as measurements without requiring a theoretical foundation for measurement. Although this contrast exists, Rasch's perspective is actually complementary to the use of statistical analysis or modelling that requires interval-level measurements, because the purpose of applying a Rasch model is to obtain such measurements. Applications of Rasch models are described in a wide variety of sources.
Invariant comparison and sufficiency
The Rasch model for dichotomous data is often regarded as an item response theory (IRT) model with one item parameter. However, rather than being a particular IRT model, proponents of the model regard it as a model that possesses a property which distinguishes it from other IRT models. Specifically, the defining property of Rasch models is their formal or mathematical embodiment of the principle of invariant comparison. Rasch summarised the principle of invariant comparison as follows:
The comparison between two stimuli should be independent of which particular individuals were instrumental for the comparison; and it should also be independent of which other stimuli within the considered class were or might also have been compared.
Symmetrically, a comparison between two individuals should be independent of which particular stimuli within the class considered were instrumental for the comparison; and it should also be independent of which other individuals were also compared, on the same or some other occasion.
Rasch models embody this principle because their formal structure permits algebraic separation of the person and item parameters, in the sense that the person parameter can be eliminated during the process of statistical estimation of item parameters. This result is achieved through the use of conditional maximum likelihood estimation, in which the response space is partitioned according to person total scores. The consequence is that the raw score for an item or person is the sufficient statistic for the item or person parameter. That is to say, the person total score contains all information available within the specified context about the individual, and the item total score contains all information with respect to the item, with regard to the relevant latent trait. The Rasch model requires a specific structure in the response data, namely a probabilistic Guttman structure.
In somewhat more familiar terms, Rasch models provide a basis and justification for obtaining person locations on a continuum from total scores on assessments. Although it is not uncommon to treat total scores directly as measurements, they are actually counts of discrete observations rather than measurements. Each observation represents the observable outcome of a comparison between a person and item. Such outcomes are directly analogous to the observation of the tipping of a beam balance in one direction or another. This observation would indicate that one or other object has a greater mass, but counts of such observations cannot be treated directly as measurements.
Rasch pointed out that the principle of invariant comparison is characteristic of measurement in physics using, by way of example, a two-way experimental frame of reference in which each instrument exerts a mechanical force upon solid bodies to produce acceleration. Rasch stated of this context: "Generally: If for any two objects we find a certain ratio of their accelerations produced by one instrument, then the same ratio will be found for any other of the instruments". It is readily shown that Newton's second law entails that such ratios are inversely proportional to the ratios of the masses of the bodies.
The mathematical form of the Rasch model for dichotomous data
Let Xni be a dichotomous random variable where, for example, Xni = 1 denotes a correct response and Xni = 0 an incorrect response to a given assessment item. In the Rasch model for dichotomous data, the probability of the outcome Xni = 1 is given by:

Pr{Xni = 1} = exp(βn − δi) / (1 + exp(βn − δi))
where βn is the ability of person n and δi is the difficulty of item i. Thus, in the case of a dichotomous attainment item, Pr{Xni = 1} is the probability of success upon interaction between the relevant person and assessment item. It is readily shown that the log odds, or logit, of correct response by a person to an item, based on the model, is equal to βn − δi. Given two examinees with different ability parameters β1 and β2 and an arbitrary item with difficulty δi, compute the difference in logits for these two examinees by (β1 − δi) − (β2 − δi). This difference becomes β1 − β2, which does not involve the item parameter. Conversely, it can be shown that the log odds of a correct response by the same person to one item, conditional on a correct response to one of two items, is equal to the difference between the item locations. For example,
log( Pr{Xn1 = 1, Xn2 = 0 | rn = 1} / Pr{Xn1 = 0, Xn2 = 1 | rn = 1} ) = δ2 − δ1

where rn is the total score of person n over the two items, which implies a correct response to one or other of the items. Hence, the conditional log odds does not involve the person parameter βn, which can therefore be eliminated by conditioning on the total score rn = 1. That is, by partitioning the responses according to raw scores and calculating the log odds of a correct response, an estimate of δ2 − δ1 is obtained without involvement of βn. More generally, a number of item parameters can be estimated iteratively through application of a process such as Conditional Maximum Likelihood estimation (see Rasch model estimation). While more involved, the same fundamental principle applies in such estimations.
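The following sketch illustrates this elimination of the person parameter numerically, using two hypothetical item difficulties: the conditional log odds equals δ2 − δ1 regardless of the person's ability.

```python
import math

def p_correct(beta, delta):
    """Dichotomous Rasch probability of a correct response."""
    return math.exp(beta - delta) / (1.0 + math.exp(beta - delta))

def conditional_log_odds(beta, d1, d2):
    """Log odds of (item 1 right, item 2 wrong) versus (item 1 wrong, item 2 right),
    conditional on a total score of 1 over the two items."""
    p1, p2 = p_correct(beta, d1), p_correct(beta, d2)
    a = p1 * (1.0 - p2)      # correct on item 1 only
    b = (1.0 - p1) * p2      # correct on item 2 only
    return math.log(a / b)

d1, d2 = -0.5, 1.5           # hypothetical item difficulties
for beta in (-2.0, 0.0, 2.0):
    # Always prints d2 - d1 = 2.0, whatever the person parameter beta is.
    print(round(conditional_log_odds(beta, d1, d2), 6))
```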
The ICC of the Rasch model for dichotomous data is shown in Figure 4. The grey line maps the probability of the discrete outcome (that is, correctly answering the question) for persons with different locations on the latent continuum (that is, their level of abilities). The location of an item is, by definition, that location at which the probability that is equal to 0.5. In figure 4, the black circles represent the actual or observed proportions of persons within Class Intervals for which the outcome was observed. For example, in the case of an assessment item used in the context of educational psychology, these could represent the proportions of persons who answered the item correctly. Persons are ordered by the estimates of their locations on the latent continuum and classified into Class Intervals on this basis in order to graphically inspect the accordance of observations with the model. There is a close conformity of the data with the model. In addition to graphical inspection of data, a range of statistical tests of fit are used to evaluate whether departures of observations from the model can be attributed to random effects alone, as required, or whether there are systematic departures from the model.
Polytomous extensions of the Rasch model
There are multiple polytomous extensions to the Rasch model, which generalize the dichotomous model so that it can be applied in contexts in which successive integer scores represent categories of increasing level or magnitude of a latent trait, such as increasing ability, motor function, endorsement of a statement, and so forth. These polytomous extensions are, for example, applicable to the use of Likert scales, grading in educational assessment, and scoring of performances by judges.
Other considerations
A criticism of the Rasch model is that it is overly restrictive or prescriptive because an assumption of the model is that all items have equal discrimination, whereas in practice item discriminations vary, and thus no data set will ever show perfect data-model fit. A frequent misunderstanding is that the Rasch model forbids items from having different discriminations. Rather, equal discrimination is a requirement of invariant measurement, so differing item discriminations are not forbidden; they indicate that the quality of measurement falls short of a theoretical ideal. Just as in physical measurement, real-world datasets will never perfectly match theoretical models, so the relevant question is whether a particular data set provides sufficient quality of measurement for the purpose at hand, not whether it perfectly matches an unattainable standard of perfection.
A criticism specific to the use of the Rasch model with response data from multiple choice items is that there is no provision in the model for guessing because the left asymptote always approaches a zero probability in the Rasch model. This implies that a person of low ability will always get an item wrong. However, low-ability individuals completing a multiple-choice exam have a substantially higher probability of choosing the correct answer by chance alone (for a k-option item, the likelihood is around 1/k).
The three-parameter logistic model relaxes both of these assumptions, and the two-parameter logistic model allows varying slopes. However, uniform discrimination and a zero left asymptote are necessary properties of the model in order to sustain sufficiency of the simple, unweighted raw score. In practice, the non-zero lower asymptote found in multiple-choice datasets is less of a threat to measurement than commonly assumed and typically does not result in substantive errors in measurement when well-developed test items are used sensibly.
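A small sketch of the contrast described above, using the standard three-parameter logistic form with a hypothetical lower asymptote of 0.25 for a four-option item; the parameter values are illustrative only.

```python
import math

def rasch(beta, delta):
    """Dichotomous Rasch model: zero lower asymptote, uniform discrimination."""
    return 1.0 / (1.0 + math.exp(-(beta - delta)))

def three_pl(theta, a, b, c):
    """Three-parameter logistic model: c is the lower (guessing) asymptote,
    a the discrimination and b the item difficulty."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical 4-option multiple-choice item: c of about 1/4.
for ability in (-4.0, -2.0, 0.0, 2.0):
    print(ability,
          round(rasch(ability, 0.0), 3),            # approaches 0 at low ability
          round(three_pl(ability, 1.0, 0.0, 0.25), 3))  # approaches 0.25 instead
```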
Verhelst & Glas (1995) derive Conditional Maximum Likelihood (CML) equations for a model they refer to as the One Parameter Logistic Model (OPLM). In algebraic form it appears to be identical with the 2PL model, but OPLM contains preset discrimination indexes rather than 2PL's estimated discrimination parameters. As noted by these authors, though, the problem one faces in estimation with estimated discrimination parameters is that the discriminations are unknown, meaning that the weighted raw score "is not a mere statistic, and hence it is impossible to use CML as an estimation method". That is, sufficiency of the weighted "score" in the 2PL cannot be used according to the way in which a sufficient statistic is defined. If the weights are imputed instead of being estimated, as in OPLM, conditional estimation is possible and some of the properties of the Rasch model are retained. In OPLM, the values of the discrimination index are restricted to between 1 and 15. A limitation of this approach is that in practice, values of discrimination indexes must be preset as a starting point. This means some type of estimation of discrimination is involved when the purpose is to avoid doing so.
The Rasch model for dichotomous data inherently entails a single discrimination parameter which, as noted by Rasch, constitutes an arbitrary choice of the unit in terms of which magnitudes of the latent trait are expressed or estimated. However, the Rasch model requires that the discrimination is uniform across interactions between persons and items within a specified frame of reference (i.e. the assessment context given conditions for assessment).
Application of the model provides diagnostic information regarding how well the criterion is met, and can also provide information about how well items or questions on assessments work to measure the ability or trait. For instance, given the proportions of persons who engage in various behaviours, the Rasch model can be used to estimate the relative difficulty of those behaviours and hence the relation between attitudes and behaviours. Prominent advocates of Rasch models include Benjamin Drake Wright, David Andrich and Erling Andersen.
See also
Mokken scale
Guttman scale
References
Further reading
Andrich, D. (1978a). A rating formulation for ordered response categories. Psychometrika, 43, 357–74.
Andrich, D. (1988). Rasch models for measurement. Beverly Hills: Sage Publications.
Baker, F. (2001). The Basics of Item Response Theory. ERIC Clearinghouse on Assessment and Evaluation, University of Maryland, College Park, MD. Available free with software included from IRT at Edres.org
Fischer, G.H. & Molenaar, I.W. (1995). Rasch models: foundations, recent developments and applications. New York: Springer-Verlag.
Goldstein, H. & Blinkhorn, S. (1977). Monitoring Educational Standards: an inappropriate model. Bulletin of the British Psychological Society, 30, 309–311.
Goldstein, H. & Blinkhorn, S. (1982). The Rasch Model Still Does Not Fit. BERJ, 82, 167–170.
Hambleton, R.K. & Jones, R.W. (1993). Comparison of classical test theory and item response theory. Educational Measurement: Issues and Practice, 12(3), 38–47. Available in the ITEMS Series from the National Council on Measurement in Education.
Harris, D. (1989). Comparison of 1-, 2-, and 3-parameter IRT models. Educational Measurement: Issues and Practice, 8, 35–41. Available in the ITEMS Series from the National Council on Measurement in Education.
von Davier, M., & Carstensen, C. H. (2007). Multivariate and Mixture Distribution Rasch Models: Extensions and Applications. New York: Springer.
von Davier, M. (2016). Rasch Model. In Wim J. van der Linden (ed.): Handbook of Item Response Theory (Boca Raton: CRC Press), Routledge Handbooks.
Wright, B.D., & Stone, M.H. (1979). Best Test Design. Chicago, IL: MESA Press.
Wu, M. & Adams, R. (2007). Applying the Rasch model to psycho-social measurement: A practical approach. Melbourne, Australia: Educational Measurement Solutions. Available free from Educational Measurement Solutions
External links
Institute for Objective Measurement Online Rasch Resources
Pearson Psychometrics Laboratory, with information about Rasch models
Journal of Applied Measurement
Journal of Outcome Measurement (all issues available for free downloading)
Berkeley Evaluation & Assessment Research Center (ConstructMap software)
Directory of Rasch Software – freeware and paid
IRT Modeling Lab at U. Illinois Urbana Champ.
National Council on Measurement in Education (NCME)
Rasch Measurement Transactions
The Standards for Educational and Psychological Testing
The Trouble with Rasch
Acid dissociation constant | In chemistry, an acid dissociation constant (also known as acidity constant, or acid-ionization constant; denoted ) is a quantitative measure of the strength of an acid in solution. It is the equilibrium constant for a chemical reaction
HA <=> A^- + H^+
known as dissociation in the context of acid–base reactions. The chemical species HA is an acid that dissociates into , called the conjugate base of the acid, and a hydrogen ion, . The system is said to be in equilibrium when the concentrations of its components do not change over time, because both forward and backward reactions are occurring at the same rate.
The dissociation constant is defined by

Ka = [A-][H+] / [HA]

or by its logarithmic form

pKa = −log10 Ka = log10([HA] / ([A-][H+]))
where quantities in square brackets represent the molar concentrations of the species at equilibrium. For example, for a hypothetical weak acid having Ka = 10⁻⁵, the value of log Ka is the exponent (−5), giving pKa = 5. For acetic acid, Ka = 1.8 × 10⁻⁵, so pKa is about 4.7. A higher Ka corresponds to a stronger acid (an acid that is more dissociated at equilibrium). The form pKa is often used because it provides a convenient logarithmic scale, where a lower pKa corresponds to a stronger acid.
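A minimal sketch of the conversion between Ka and pKa; the acetic acid value is the one quoted above, and the second call uses the hypothetical Ka = 10⁻⁵ acid as an example.

```python
import math

def pKa_from_Ka(Ka):
    """pKa is the negative base-10 logarithm of Ka."""
    return -math.log10(Ka)

def Ka_from_pKa(pKa):
    """Inverse conversion: Ka = 10^(-pKa)."""
    return 10.0 ** (-pKa)

print(round(pKa_from_Ka(1.8e-5), 2))   # acetic acid: about 4.74
print(Ka_from_pKa(5.0))                # hypothetical weak acid: 1e-05
```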
Theoretical background
The acid dissociation constant for an acid is a direct consequence of the underlying thermodynamics of the dissociation reaction; the pKa value is directly proportional to the standard Gibbs free energy change for the reaction. The value of the pKa changes with temperature and can be understood qualitatively based on Le Châtelier's principle: when the reaction is endothermic, Ka increases and pKa decreases with increasing temperature; the opposite is true for exothermic reactions.
The value of pKa also depends on molecular structure of the acid in many ways. For example, Pauling proposed two rules: one for successive pKa of polyprotic acids (see Polyprotic acids below), and one to estimate the pKa of oxyacids based on the number of =O and −OH groups (see Factors that affect pKa values below). Other structural factors that influence the magnitude of the acid dissociation constant include inductive effects, mesomeric effects, and hydrogen bonding. Hammett type equations have frequently been applied to the estimation of pKa.
The quantitative behaviour of acids and bases in solution can be understood only if their pKa values are known. In particular, the pH of a solution can be predicted when the analytical concentration and pKa values of all acids and bases are known; conversely, it is possible to calculate the equilibrium concentration of the acids and bases in solution when the pH is known. These calculations find application in many different areas of chemistry, biology, medicine, and geology. For example, many compounds used for medication are weak acids or bases, and a knowledge of the pKa values, together with the octanol-water partition coefficient, can be used for estimating the extent to which the compound enters the blood stream. Acid dissociation constants are also essential in aquatic chemistry and chemical oceanography, where the acidity of water plays a fundamental role. In living organisms, acid–base homeostasis and enzyme kinetics are dependent on the pKa values of the many acids and bases present in the cell and in the body. In chemistry, a knowledge of pKa values is necessary for the preparation of buffer solutions and is also a prerequisite for a quantitative understanding of the interaction between acids or bases and metal ions to form complexes. Experimentally, pKa values can be determined by potentiometric (pH) titration, but for values of pKa less than about 2 or more than about 11, spectrophotometric or NMR measurements may be required due to practical difficulties with pH measurements.
Definitions
According to Arrhenius's original molecular definition, an acid is a substance that dissociates in aqueous solution, releasing the hydrogen ion (a proton):
HA <=> A- + H+
The equilibrium constant for this dissociation reaction is known as a dissociation constant. The liberated proton combines with a water molecule to give a hydronium (or oxonium) ion (naked protons do not exist in solution), and so Arrhenius later proposed that the dissociation should be written as an acid–base reaction:
HA + H2O <=> A- + H3O+
Brønsted and Lowry generalised this further to a proton exchange reaction:

acid + base <=> conjugate base + conjugate acid

The acid loses a proton, leaving a conjugate base; the proton is transferred to the base, creating a conjugate acid. For aqueous solutions of an acid HA, the base is water; the conjugate base is A- and the conjugate acid is the hydronium ion, H3O+. The Brønsted–Lowry definition applies to other solvents, such as dimethyl sulfoxide: the solvent S acts as a base, accepting a proton and forming the conjugate acid SH+.
HA + S <=> A- + SH+
In solution chemistry, it is common to use H+ as an abbreviation for the solvated hydrogen ion, regardless of the solvent. In aqueous solution H+ denotes a solvated hydronium ion rather than a proton.
The designation of an acid or base as "conjugate" depends on the context. The conjugate acid BH+ of a base B dissociates according to
BH+ + OH- <=> B + H2O
which is the reverse of the equilibrium

H2O + B <=> OH- + BH+

The hydroxide ion OH-, a well known base, is here acting as the conjugate base of the acid water. Acids and bases are thus regarded simply as donors and acceptors of protons respectively.
A broader definition of acid dissociation includes hydrolysis, in which protons are produced by the splitting of water molecules. For example, boric acid, B(OH)3, produces H3O+ as if it were a proton donor, but it has been confirmed by Raman spectroscopy that this is due to the hydrolysis equilibrium:
B(OH)3 + 2 H2O <=> B(OH)4- + H3O+
Similarly, metal ion hydrolysis causes ions such as [Al(H2O)6]^3+ to behave as weak acids:
[Al(H2O)6]^3+ + H2O <=> [Al(H2O)5(OH)]^2+ + H3O+
According to Lewis's original definition, an acid is a substance that accepts an electron pair to form a coordinate covalent bond.
Equilibrium constant
An acid dissociation constant is a particular example of an equilibrium constant. The dissociation of a monoprotic acid, HA, in dilute solution can be written as
HA <=> A- + H+
The thermodynamic equilibrium constant can be defined by

K° = a(A-) a(H+) / a(HA)

where a(X) represents the activity, at equilibrium, of the chemical species X. K° is dimensionless since activity is dimensionless. Activities of the products of dissociation are placed in the numerator, activities of the reactants are placed in the denominator. See activity coefficient for a derivation of this expression.
Since activity is the product of concentration and activity coefficient (γ) the definition could also be written as

K° = ([A-][H+] / [HA]) × (γ(A-) γ(H+) / γ(HA)) = ([A-][H+] / [HA]) × Γ

where [HA] represents the concentration of HA and Γ is a quotient of activity coefficients.
To avoid the complications involved in using activities, dissociation constants are determined, where possible, in a medium of high ionic strength, that is, under conditions in which Γ can be assumed to be always constant. For example, the medium might be a solution of 0.1 molar (M) sodium nitrate or 3 M potassium perchlorate. With this assumption,

Ka = [A-][H+] / [HA] = K° / Γ
is obtained. Note, however, that all published dissociation constant values refer to the specific ionic medium used in their determination and that different values are obtained with different conditions, as shown for acetic acid in the illustration above. When published constants refer to an ionic strength other than the one required for a particular application, they may be adjusted by means of specific ion theory (SIT) and other theories.
Cumulative and stepwise constants
A cumulative equilibrium constant, denoted by β, is related to the product of stepwise constants, denoted by K. For a dibasic acid, H2A, the relationship between stepwise and overall constants is as follows:

H2A <=> A^2- + 2H+

β = [A^2-][H+]^2 / [H2A] = K1 K2, so that log β = log K1 + log K2
Note that in the context of metal-ligand complex formation, the equilibrium constants for the formation of metal complexes are usually defined as association constants. In that case, the equilibrium constants for ligand protonation are also defined as association constants. The numbering of association constants is the reverse of the numbering of dissociation constants; in this example, the first association constant (protonation of A^2- to give HA-) corresponds to the second dissociation constant, so that log K1(association) = pK2(dissociation), and likewise log K2(association) = pK1(dissociation).
Association and dissociation constants
When discussing the properties of acids it is usual to specify equilibrium constants as acid dissociation constants, denoted by Ka, with numerical values given the symbol pKa.
On the other hand, association constants are used for bases.
However, general purpose computer programs that are used to derive equilibrium constant values from experimental data use association constants for both acids and bases. Because stability constants for a metal-ligand complex are always specified as association constants, ligand protonation must also be specified as an association reaction. The definitions show that the value of an acid dissociation constant is the reciprocal of the value of the corresponding association constant:

K(dissociation) = 1 / K(association), so that log K(association) = pK(dissociation)
Notes
For a given acid or base in water, pKa + pKb = pKw, where Kw is the self-ionization constant of water.
The association constant for the formation of a supramolecular complex may be denoted as Ka; in such cases "a" stands for "association", not "acid".
For polyprotic acids, the numbering of stepwise association constants is the reverse of the numbering of the dissociation constants. For example, for phosphoric acid (details in the polyprotic acids section below), log K1(association) = pKa3, log K2(association) = pKa2 and log K3(association) = pKa1.
Temperature dependence
All equilibrium constants vary with temperature according to the van 't Hoff equation

d(ln K) / dT = ΔH° / (R T^2)

R is the gas constant and T is the absolute temperature. Thus, for exothermic reactions, the standard enthalpy change, ΔH°, is negative and K decreases with temperature. For endothermic reactions, ΔH° is positive and K increases with temperature.
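A short sketch of the integrated van 't Hoff equation under the usual assumption that the standard enthalpy change is constant over the temperature interval; the numerical values are purely illustrative, not data for any particular acid.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def K_at_T2(K1, T1, T2, dH):
    """Integrated van 't Hoff equation, assuming the standard enthalpy change dH
    (in J/mol) is constant over the interval: ln(K2/K1) = -(dH/R)(1/T2 - 1/T1)."""
    return K1 * math.exp(-(dH / R) * (1.0 / T2 - 1.0 / T1))

# Hypothetical endothermic dissociation: K increases with temperature.
print(K_at_T2(K1=1.0e-5, T1=298.15, T2=310.15, dH=+20e3))   # roughly 1.4e-5
```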
The standard enthalpy change for a reaction is itself a function of temperature, according to Kirchhoff's law of thermochemistry:

(∂ΔH° / ∂T)p = ΔCp°

where ΔCp° is the heat capacity change at constant pressure. In practice ΔH° may be taken to be constant over a small temperature range.
Dimensionality
In the equation

Ka = [A-][H+] / [HA]

Ka appears to have dimensions of concentration. However, since ΔG° = −RT ln K, the equilibrium constant, K, cannot have a physical dimension. This apparent paradox can be resolved in various ways.
Assume that the quotient of activity coefficients has a numerical value of 1, so that Ka has the same numerical value as the thermodynamic equilibrium constant K°.
Express each concentration value as the ratio c/c0, where c0 is the concentration in a [hypothetical] standard state, with a numerical value of 1, by definition.
Express the concentrations on the mole fraction scale. Since mole fraction has no dimension, the quotient of concentrations will, by definition, be a pure number.
The procedures, (1) and (2), give identical numerical values for an equilibrium constant. Furthermore, since a concentration c is simply proportional to mole fraction x and density ρ:

c = x ρ / M

and since the molar mass M is a constant in dilute solutions, an equilibrium constant value determined using (3) will be simply proportional to the values obtained with (1) and (2).
It is common practice in biochemistry to quote a value with a dimension as, for example, "Ka = 30 mM" in order to indicate the scale, millimolar (mM) or micromolar (μM) of the concentration values used for its calculation.
Strong acids and bases
An acid is classified as "strong" when the concentration of its undissociated species is too low to be measured. Any aqueous acid with a pKa value of less than 0 is almost completely deprotonated and is considered a strong acid. All such acids transfer their protons to water and form the solvent cation species (H3O+ in aqueous solution) so that they all have essentially the same acidity, a phenomenon known as solvent leveling. They are said to be fully dissociated in aqueous solution because the amount of undissociated acid, in equilibrium with the dissociation products, is below the detection limit. Likewise, any aqueous base with an association constant pKb less than about 0, corresponding to pKa greater than about 14, is leveled to OH− and is considered a strong base.
Nitric acid, with a pK value of around −1.7, behaves as a strong acid in aqueous solutions with a pH greater than 1. At lower pH values it behaves as a weak acid.
pKa values for strong acids have been estimated by theoretical means. For example, the pKa value of aqueous HCl has been estimated as −9.3.
Monoprotic acids
After rearranging the expression defining Ka, and putting pH = −log10[H+], one obtains

pH = pKa + log10([A-] / [HA])
This is the Henderson–Hasselbalch equation, from which the following conclusions can be drawn.
At half-neutralization the ratio [A-] / [HA] = 1; since log(1) = 0, the pH at half-neutralization is numerically equal to pKa. Conversely, when pH = pKa, the concentration of HA is equal to the concentration of A−.
The buffer region extends over the approximate range pKa ± 2. Buffering is weak outside the range pKa ± 1. At pH ≤ pKa − 2 the substance is said to be fully protonated and at pH ≥ pKa + 2 it is fully dissociated (deprotonated).
If the pH is known, the ratio [A-] / [HA] may be calculated. This ratio is independent of the analytical concentration of the acid.
In water, measurable pKa values range from about −2 for a strong acid to about 12 for a very weak acid (or strong base).
A buffer solution of a desired pH can be prepared as a mixture of a weak acid and its conjugate base. In practice, the mixture can be created by dissolving the acid in water, and adding the requisite amount of strong acid or base. When the pKa and analytical concentration of the acid are known, the extent of dissociation and pH of a solution of a monoprotic acid can be easily calculated using an ICE table.
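A minimal sketch of such a calculation for a monoprotic weak acid, solving the ICE-table quadratic exactly while neglecting water self-ionization; the concentration and pKa used are illustrative (roughly those of dilute acetic acid).

```python
import math

def weak_acid_pH(C, pKa):
    """pH of a monoprotic weak acid of analytical concentration C (mol/L).
    The ICE table gives x^2 / (C - x) = Ka, i.e. x^2 + Ka*x - Ka*C = 0,
    solved exactly for x = [H+]; water self-ionization is neglected."""
    Ka = 10.0 ** (-pKa)
    x = (-Ka + math.sqrt(Ka * Ka + 4.0 * Ka * C)) / 2.0
    return -math.log10(x)

print(round(weak_acid_pH(0.1, 4.76), 2))   # 0.1 M acetic acid: pH of about 2.9
```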
Polyprotic acids
A polyprotic acid is a compound which may lose more than one proton. Stepwise dissociation constants are each defined for the loss of a single proton. The constant for dissociation of the first proton may be denoted as Ka1 and the constants for dissociation of successive protons as Ka2, etc. Phosphoric acid, H3PO4, is an example of a polyprotic acid as it can lose three protons.
Equilibrium and pK definition and value (25 °C):

H3PO4 <=> H2PO4- + H+          pKa1 = −log10([H2PO4-][H+] / [H3PO4]) ≈ 2.14
H2PO4- <=> HPO4^2- + H+        pKa2 = −log10([HPO4^2-][H+] / [H2PO4-]) ≈ 7.20
HPO4^2- <=> PO4^3- + H+        pKa3 = −log10([PO4^3-][H+] / [HPO4^2-]) ≈ 12.37
When the difference between successive pK values is about four or more, as in this example, each species may be considered as an acid in its own right; in fact salts of H2PO4- may be crystallised from solution by adjustment of pH to about 5.5 and salts of HPO4^2- may be crystallised from solution by adjustment of pH to about 10. The species distribution diagram shows that the concentrations of the two ions are maximum at pH 5.5 and 10.
When the difference between successive pK values is less than about four there is overlap between the pH range of existence of the species in equilibrium. The smaller the difference, the more the overlap. The case of citric acid is shown at the right; solutions of citric acid are buffered over the whole range of pH 2.5 to 7.5.
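The species distributions described above can be sketched numerically from the stepwise constants alone; the snippet below computes the fraction of each species of a polyprotic acid at a given pH, using the approximate phosphoric acid pKa values quoted earlier. It is a minimal sketch, not a full speciation program.

```python
def species_fractions(pH, pKas):
    """Fractions of HnA, H(n-1)A-, ..., A(n-) for a polyprotic acid at a given pH.
    The species that has lost j protons is proportional to (K1*...*Kj) / [H+]^j."""
    h = 10.0 ** (-pH)
    Kas = [10.0 ** (-pk) for pk in pKas]
    terms = []
    for j in range(len(Kas) + 1):
        prod = 1.0
        for K in Kas[:j]:
            prod *= K
        terms.append(prod / h ** j)
    total = sum(terms)
    return [t / total for t in terms]

# Phosphoric acid, approximate pKa values 2.14, 7.20, 12.37 (illustrative)
for pH in (1, 5, 10, 13):
    print(pH, [round(f, 3) for f in species_fractions(pH, [2.14, 7.20, 12.37])])
```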
According to Pauling's first rule, successive pK values of a given acid increase (pKa2 > pKa1). For oxyacids with more than one ionizable hydrogen on the same atom, the pKa values often increase by about 5 units for each proton removed, as in the example of phosphoric acid above.
It can be seen in the table above that the second proton is removed from a negatively charged species. Since the proton carries a positive charge extra work is needed to remove it, which is why pKa2 is greater than pKa1. pKa3 is greater than pKa2 because there is further charge separation. When an exception to Pauling's rule is found, it indicates that a major change in structure is also occurring. In the case of [VO2(H2O)4]+ (aq), the vanadium is octahedral, 6-coordinate, whereas vanadic acid, H3VO4, is tetrahedral, 4-coordinate. This means that four "particles" are released with the first dissociation, but only two "particles" are released with the other dissociations, resulting in a much greater entropy contribution to the standard Gibbs free energy change for the first reaction than for the others.
The successive equilibria are:

[VO2(H2O)4]+ <=> H3VO4 + H+ + 2H2O
H3VO4 <=> H2VO4- + H+
H2VO4- <=> HVO4^2- + H+
HVO4^2- <=> VO4^3- + H+
Isoelectric point
For substances in solution, the isoelectric point (pI) is defined as the pH at which the sum, weighted by charge value, of concentrations of positively charged species is equal to the weighted sum of concentrations of negatively charged species. In the case that there is one species of each type, the isoelectric point can be obtained directly from the pK values. Take the example of glycine, defined as AH. There are two dissociation equilibria to consider.
AH2+ <=> AH + H+          [AH][H+] = K1 [AH2+]
AH <=> A- + H+            [A-][H+] = K2 [AH]
Substitute the expression for [AH] from the second equation into the first equation:
[A-][H+]^2 = K1 K2 [AH2+]
At the isoelectric point the concentration of the positively charged species, AH2+, is equal to the concentration of the negatively charged species, A-, so [H+]^2 = K1 K2.
Therefore, taking cologarithms, the pH is given by

pI = (pK1 + pK2) / 2
pI values for amino acids are listed at proteinogenic amino acid. When more than two charged species are in equilibrium with each other a full speciation calculation may be needed.
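A minimal sketch of the two-pK case; the glycine pK values used are commonly quoted approximate figures and serve only as an illustration.

```python
def isoelectric_point(pK1, pK2):
    """pI for a molecule with one positively and one negatively charged species,
    such as a simple amino acid: pI = (pK1 + pK2) / 2."""
    return 0.5 * (pK1 + pK2)

# Approximate values for glycine: pK1 ~ 2.34 (carboxyl), pK2 ~ 9.60 (amino)
print(round(isoelectric_point(2.34, 9.60), 2))   # about 5.97
```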
Bases and basicity
The equilibrium constant Kb for a base is usually defined as the association constant for protonation of the base, B, to form the conjugate acid, HB+.
B + H2O <=> HB+ + OH-
Using similar reasoning to that used before,

Kb = [HB+][OH-] / [B]
Kb is related to Ka for the conjugate acid. In water, the concentration of the hydroxide ion, [OH-], is related to the concentration of the hydrogen ion by Kw = [H+][OH-], therefore

[OH-] = Kw / [H+]

Substitution of the expression for [OH-] into the expression for Kb gives

Kb = [HB+] Kw / ([B][H+]) = Kw / Ka

When Ka, Kb and Kw are determined under the same conditions of temperature and ionic strength, it follows, taking cologarithms, that pKb = pKw − pKa. In aqueous solutions at 25 °C, pKw is 13.9965, so

pKb ≈ 14 − pKa
with sufficient accuracy for most practical purposes. In effect there is no need to define pKb separately from pKa, but it is done here as often only pKb values can be found in the older literature.
For a hydrolysed metal ion, Kb can also be defined as a stepwise dissociation constant.
This is the reciprocal of an association constant for formation of the complex.
Basicity expressed as dissociation constant of conjugate acid
Because the relationship pKb = pKw − pKa holds only in aqueous solutions (though analogous relationships apply for other amphoteric solvents), subdisciplines of chemistry like organic chemistry that usually deal with nonaqueous solutions generally do not use pKb as a measure of basicity. Instead, the pKa of the conjugate acid, denoted by pKaH, is quoted when basicity needs to be quantified. For base B and its conjugate acid BH+ in equilibrium, this is defined as

KaH = [B][H+] / [BH+],  pKaH = −log10 KaH
A higher value for pKaH corresponds to a stronger base. For example, the values pKaH ≈ 10.75 and pKaH ≈ 5.25 indicate that Et3N (triethylamine) is a stronger base than C5H5N (pyridine).
Amphoteric substances
An amphoteric substance is one that can act as an acid or as a base, depending on pH. Water (below) is amphoteric. Another example of an amphoteric molecule is the bicarbonate ion that is the conjugate base of the carbonic acid molecule H2CO3 in the equilibrium
H2CO3 + H2O <=> HCO3- + H3O+
but also the conjugate acid of the carbonate ion in (the reverse of) the equilibrium
HCO3- + OH- <=> CO3^2- + H2O
Carbonic acid equilibria are important for acid–base homeostasis in the human body.
An amino acid is also amphoteric with the added complication that the neutral molecule is subject to an internal acid–base equilibrium in which the basic amino group attracts and binds the proton from the acidic carboxyl group, forming a zwitterion.
NH2CHRCO2H <=> NH3+CHRCO2-
At pH less than about 5 both the carboxylate group and the amino group are protonated. As pH increases the acid dissociates according to
NH3+CHRCO2H <=> NH3+CHRCO2- + H+
At high pH a second dissociation may take place.
NH3+CHRCO2- <=> NH2CHRCO2- + H+
Thus the amino acid molecule is amphoteric because it may either be protonated or deprotonated.
Water self-ionization
The water molecule may either gain or lose a proton. It is said to be amphiprotic. The ionization equilibrium can be written
H2O <=> OH- + H+
where H+ in aqueous solution denotes a solvated proton. Often this is written as the hydronium ion H3O+, but this formula is not exact because in fact there is solvation by more than one water molecule and species such as H5O2+, H7O3+ and H9O4+ are also present.
The equilibrium constant is given by

K = [H+][OH-] / [H2O]

With solutions in which the solute concentrations are not very high, the concentration [H2O] can be assumed to be constant, regardless of solute(s); this expression may then be replaced by

Kw = [H+][OH-]

The self-ionization constant of water, Kw, is thus just a special case of an acid dissociation constant. A logarithmic form analogous to pKa may also be defined:

pKw = −log10 Kw
These data can be modelled by a parabola with
From this equation, pKw = 14 at 24.87 °C. At that temperature both hydrogen and hydroxide ions have a concentration of 10−7 M.
Acidity in nonaqueous solutions
A solvent will be more likely to promote ionization of a dissolved acidic molecule in the following circumstances:
It is a protic solvent, capable of forming hydrogen bonds.
It has a high donor number, making it a strong Lewis base.
It has a high dielectric constant (relative permittivity), making it a good solvent for ionic species.
pKa values of organic compounds are often obtained using the aprotic solvents dimethyl sulfoxide (DMSO) and acetonitrile (ACN).
DMSO is widely used as an alternative to water because it has a lower dielectric constant than water, and is less polar and so dissolves non-polar, hydrophobic substances more easily. It has a measurable pKa range of about 1 to 30. Acetonitrile is less basic than DMSO, and so, in general, acids are weaker and bases are stronger in this solvent. Some pKa values at 25 °C for acetonitrile (ACN) and dimethyl sulfoxide (DMSO) are shown in the following tables. Values for water are included for comparison.
Ionization of acids is less in an acidic solvent than in water. For example, hydrogen chloride is a weak acid when dissolved in acetic acid. This is because acetic acid is a much weaker base than water.
HCl + CH3CO2H <=> Cl- + CH3C(OH)2+
Compare this reaction with what happens when acetic acid is dissolved in the more acidic solvent pure sulfuric acid:
H2SO4 + CH3CO2H <=> HSO4- + CH3C(OH)2+
The unlikely geminal diol species, CH3C(OH)2+, is stable in these environments. For aqueous solutions the pH scale is the most convenient acidity function. Other acidity functions have been proposed for non-aqueous media, the most notable being the Hammett acidity function, H0, for superacid media and its modified version H− for superbasic media.
In aprotic solvents, oligomers, such as the well-known acetic acid dimer, may be formed by hydrogen bonding. An acid may also form hydrogen bonds to its conjugate base. This process, known as homoconjugation, has the effect of enhancing the acidity of acids, lowering their effective pKa values, by stabilizing the conjugate base. Homoconjugation enhances the proton-donating power of toluenesulfonic acid in acetonitrile solution by a factor of nearly 800.
In aqueous solutions, homoconjugation does not occur, because water forms stronger hydrogen bonds to the conjugate base than does the acid.
Mixed solvents
When a compound has limited solubility in water it is common practice (in the pharmaceutical industry, for example) to determine pKa values in a solvent mixture such as water/dioxane or water/methanol, in which the compound is more soluble. In the example shown at the right, the pKa value rises steeply with increasing percentage of dioxane as the dielectric constant of the mixture decreases.
A pKa value obtained in a mixed solvent cannot be used directly for aqueous solutions. The reason for this is that when the solvent is in its standard state its activity is defined as one. For example, the standard state of a water:dioxane mixture with a 9:1 mixing ratio is precisely that solvent mixture, with no added solutes. To obtain the pKa value for use with aqueous solutions it has to be extrapolated to zero co-solvent concentration from values obtained from various co-solvent mixtures.
These facts are obscured by the omission of the solvent from the expression that is normally used to define pKa, but pKa values obtained in a given mixed solvent can be compared to each other, giving relative acid strengths. The same is true of pKa values obtained in a particular non-aqueous solvent such as DMSO.
A universal, solvent-independent, scale for acid dissociation constants has not been developed, since there is no known way to compare the standard states of two different solvents.
Factors that affect pKa values
Pauling's second rule is that the value of the first pKa for acids of the formula XOm(OH)n depends primarily on the number of oxo groups m, and is approximately independent of the number of hydroxy groups n, and also of the central atom X. Approximate values of pKa are 8 for m = 0, 2 for m = 1, −3 for m = 2 and < −10 for m = 3. Alternatively, various numerical formulas have been proposed including pKa = 8 − 5m (known as Bell's rule), pKa = 7 − 5m, or pKa = 9 − 7m. The dependence on m correlates with the oxidation state of the central atom, X: the higher the oxidation state the stronger the oxyacid.
For example, pKa for HClO is 7.2, for HClO2 is 2.0, for HClO3 is −1 and HClO4 is a strong acid. The increased acidity on adding an oxo group is due to stabilization of the conjugate base by delocalization of its negative charge over an additional oxygen atom. This rule can help assign molecular structure: for example, phosphorous acid, having molecular formula H3PO3, has a pKa near 2, which suggested that the structure is HPO(OH)2, as later confirmed by NMR spectroscopy, and not P(OH)3, which would be expected to have a pKa near 8.
Inductive effects and mesomeric effects affect the pKa values. A simple example is provided by the effect of replacing the hydrogen atoms in acetic acid by the more electronegative chlorine atom. The electron-withdrawing effect of the substituent makes ionisation easier, so successive pKa values decrease in the series 4.7, 2.8, 1.4, and 0.7 when 0, 1, 2, or 3 chlorine atoms are present. The Hammett equation provides a general expression for the effect of substituents.
log(Ka) = log(K) + ρσ.
Ka is the dissociation constant of a substituted compound, K is the dissociation constant when the substituent is hydrogen, ρ is a property of the unsubstituted compound and σ has a particular value for each substituent. A plot of log(Ka) against σ is a straight line with intercept log(K) and slope ρ. This is an example of a linear free energy relationship as log(Ka) is proportional to the standard free energy change. Hammett originally formulated the relationship with data from benzoic acid with different substituents in the ortho- and para- positions: some numerical values are in Hammett equation. This and other studies allowed substituents to be ordered according to their electron-withdrawing or electron-releasing power, and to distinguish between inductive and mesomeric effects.
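A small sketch of a Hammett-type estimate follows; ρ = 1.00 for the ionization of benzoic acids in water (the defining reaction) and the σ values shown are approximate literature figures, so the results are only indicative.

```python
def hammett_pKa(pKa_parent, rho, sigma):
    """Hammett estimate: log Ka(substituted) = log Ka(parent) + rho*sigma,
    which is equivalent to pKa(substituted) = pKa(parent) - rho*sigma."""
    return pKa_parent - rho * sigma

# Illustrative values: benzoic acid pKa ~ 4.20, rho = 1.00 (defining reaction),
# sigma_para ~ +0.78 for NO2 and ~ -0.27 for OMe (approximate literature values).
print(round(hammett_pKa(4.20, 1.00, 0.78), 2))   # p-nitrobenzoic acid, about 3.4
print(round(hammett_pKa(4.20, 1.00, -0.27), 2))  # p-methoxybenzoic acid, about 4.5
```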
Alcohols do not normally behave as acids in water, but the presence of a double bond adjacent to the OH group can substantially decrease the pKa by the mechanism of keto–enol tautomerism. Ascorbic acid is an example of this effect. The diketone 2,4-pentanedione (acetylacetone) is also a weak acid because of the keto–enol equilibrium. In aromatic compounds, such as phenol, which have an OH substituent, conjugation with the aromatic ring as a whole greatly increases the stability of the deprotonated form.
Structural effects can also be important. The difference between fumaric acid and maleic acid is a classic example. Fumaric acid is (E)-1,4-but-2-enedioic acid, a trans isomer, whereas maleic acid is the corresponding cis isomer, i.e. (Z)-1,4-but-2-enedioic acid (see cis-trans isomerism). Fumaric acid has pKa values of approximately 3.0 and 4.5. By contrast, maleic acid has pKa values of approximately 1.5 and 6.5. The reason for this large difference is that when one proton is removed from the cis isomer (maleic acid) a strong intramolecular hydrogen bond is formed with the nearby remaining carboxyl group. This favors the loss of the first proton, forming the hydrogen maleate ion, and it opposes the removal of the second proton from that species. In the trans isomer, the two carboxyl groups are always far apart, so hydrogen bonding is not observed.
Proton sponge, 1,8-bis(dimethylamino)naphthalene, has a pKa value of 12.1. It is one of the strongest amine bases known. The high basicity is attributed to the relief of strain upon protonation and strong internal hydrogen bonding.
Effects of the solvent and solvation should be mentioned also in this section. It turns out that these influences are more subtle than that of a dielectric medium mentioned above. For example, the expected (by electronic effects of methyl substituents) and observed gas-phase order of basicity of methylamines, Me3N > Me2NH > MeNH2 > NH3, is changed by water to Me2NH > MeNH2 > Me3N > NH3. Neutral methylamine molecules are hydrogen-bonded to water molecules mainly through one acceptor interaction, N···H–OH, and only occasionally through one more donor bond, N–H···OH2. Hence, methylamines are stabilized to about the same extent by hydration, regardless of the number of methyl groups. In stark contrast, corresponding methylammonium cations always utilize all the available protons for donor N–H···OH2 bonding. Relative stabilization of methylammonium ions thus decreases with the number of methyl groups, explaining the order of water basicity of methylamines.
Thermodynamics
An equilibrium constant is related to the standard Gibbs energy change for the reaction, so for an acid dissociation constant
ΔG° = −RT ln Ka = (RT ln 10) pKa ≈ 2.303 RT pKa

R is the gas constant and T is the absolute temperature. At 25 °C, ΔG° in kJ·mol−1 ≈ 5.708 pKa (1 kJ·mol−1 = 1000 joules per mole). Free energy is made up of an enthalpy term and an entropy term:

ΔG° = ΔH° − TΔS°
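A one-line calculation of the relationship quoted above; the constants are the usual R and T at 25 °C, and the acetic acid pKa is used only as an illustration.

```python
import math

R = 8.314      # gas constant, J mol^-1 K^-1
T = 298.15     # 25 °C in kelvin

def delta_G_standard(pKa):
    """Standard Gibbs energy of dissociation in kJ/mol: dG = RT ln(10) * pKa."""
    return R * T * math.log(10) * pKa / 1000.0

print(round(delta_G_standard(1.0), 3))    # about 5.708 kJ/mol per pKa unit
print(round(delta_G_standard(4.76), 1))   # acetic acid: about 27 kJ/mol
```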
The standard enthalpy change can be determined by calorimetry or by using the van 't Hoff equation, though the calorimetric method is preferable. When both the standard enthalpy change and acid dissociation constant have been determined, the standard entropy change is easily calculated from the equation above. In the following table, the entropy terms are calculated from the experimental values of pKa and ΔH. The data were critically selected and refer to 25 °C and zero ionic strength, in water.
The first point to note is that, when pKa is positive, the standard free energy change for the dissociation reaction is also positive. Second, some reactions are exothermic and some are endothermic, but, when ΔH° is negative, the entropy term −TΔS° dominates and determines that ΔG° is positive. Last, the entropy contribution is always unfavourable in these reactions. Ions in aqueous solution tend to orient the surrounding water molecules, which orders the solution and decreases the entropy. The contribution of an ion to the entropy is the partial molar entropy, which is often negative, especially for small or highly charged ions. The ionization of a neutral acid involves formation of two ions, so that the entropy decreases. On the second ionization of the same acid, there are now three ions and the anion has a charge, so the entropy again decreases.
Note that the standard free energy change for the reaction is for the changes from the reactants in their standard states to the products in their standard states. The free energy change at equilibrium is zero since the chemical potentials of reactants and products are equal at equilibrium.
Experimental determination
The experimental determination of pKa values is commonly performed by means of titrations, in a medium of high ionic strength and at constant temperature. A typical procedure would be as follows. A solution of the compound in the medium is acidified with a strong acid to the point where the compound is fully protonated. The solution is then titrated with a strong base until all the protons have been removed. At each point in the titration pH is measured using a glass electrode and a pH meter. The equilibrium constants are found by fitting calculated pH values to the observed values, using the method of least squares.
The total volume of added strong base should be small compared to the initial volume of titrand solution in order to keep the ionic strength nearly constant. This will ensure that pKa remains invariant during the titration.
A calculated titration curve for oxalic acid is shown at the right. Oxalic acid has pKa values of 1.27 and 4.27. Therefore, the buffer regions will be centered at about pH 1.3 and pH 4.3. The buffer regions carry the information necessary to get the pKa values as the concentrations of acid and conjugate base change along a buffer region.
Between the two buffer regions there is an end-point, or equivalence point, at about pH 3. This end-point is not sharp and is typical of a diprotic acid whose buffer regions overlap by a small amount: pKa2 − pKa1 is about three in this example. (If the difference in pK values were about two or less, the end-point would not be noticeable.) The second end-point begins at about pH 6.3 and is sharp. This indicates that all the protons have been removed. When this is so, the solution is not buffered and the pH rises steeply on addition of a small amount of strong base. However, the pH does not continue to rise indefinitely. A new buffer region begins at about pH 11 (pKw − 3), which is where self-ionization of water becomes important.
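A sketch of how such a curve can be calculated: for each pH on a grid, the charge balance is solved for the volume of strong base that produces that pH. The concentrations and volumes below are hypothetical, with pK values roughly matching the oxalic acid example; this is a minimal model, not the least-squares fitting procedure itself.

```python
def diprotic_titration(Ca, Va, Cb, pK1, pK2, pKw=14.0):
    """Calculated titration curve of a diprotic acid H2A (analytical concentration
    Ca, initial volume Va) with a strong base of concentration Cb.  For each pH on
    a grid the charge balance [Na+] + [H+] = [OH-] + [HA-] + 2[A2-] is solved for
    the base volume Vb; only physically meaningful (Vb >= 0) points are returned."""
    K1, K2, Kw = 10.0 ** -pK1, 10.0 ** -pK2, 10.0 ** -pKw
    points = []
    for i in range(47):                      # pH grid from 1.0 to 12.5
        pH = 1.0 + 0.25 * i
        h = 10.0 ** -pH
        oh = Kw / h
        d = h * h + K1 * h + K1 * K2
        alpha1, alpha2 = K1 * h / d, K1 * K2 / d     # fractions of HA- and A2-
        Vb = Va * (Ca * (alpha1 + 2.0 * alpha2) - (h - oh)) / (Cb + h - oh)
        if Vb >= 0.0:
            points.append((round(Vb, 3), round(pH, 2)))
    return points

# Illustrative run, roughly matching the oxalic acid example (pKa 1.27 and 4.27):
for Vb, pH in diprotic_titration(Ca=0.1, Va=10.0, Cb=0.1, pK1=1.27, pK2=4.27)[::4]:
    print(Vb, pH)
```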
It is very difficult to measure pH values of less than two in aqueous solution with a glass electrode, because the Nernst equation breaks down at such low pH values. To determine pK values of less than about 2 or more than about 11 spectrophotometric or NMR measurements may be used instead of, or combined with, pH measurements.
When the glass electrode cannot be employed, as with non-aqueous solutions, spectrophotometric methods are frequently used. These may involve absorbance or fluorescence measurements. In both cases the measured quantity is assumed to be proportional to the sum of contributions from each photo-active species; with absorbance measurements the Beer–Lambert law is assumed to apply.
Isothermal titration calorimetry (ITC) may be used to determine both a pK value and the corresponding standard enthalpy for acid dissociation. Software to perform the calculations is supplied by the instrument manufacturers for simple systems.
Aqueous solutions with normal water cannot be used for 1H NMR measurements; instead, heavy water, D2O, must be used. 13C NMR data, however, can be used with normal water, and 1H NMR spectra can be used with non-aqueous media. The quantities measured with NMR are time-averaged chemical shifts, as proton exchange is fast on the NMR time-scale. Other chemical shifts, such as those of 31P, can be measured.
Micro-constants
For some polyprotic acids, dissociation (or association) occurs at more than one nonequivalent site, and the observed macroscopic equilibrium constant, or macro-constant, is a combination of micro-constants involving distinct species. When one reactant forms two products in parallel, the macro-constant is the sum of the two micro-constants, K = KX + KY. This is true for example for the deprotonation of the amino acid cysteine, which exists in solution as a neutral zwitterion. The two micro-constants represent deprotonation either at sulphur or at nitrogen, and the macro-constant here is the sum of the two: Ka = Ka(S) + Ka(N).
Similarly, a base such as spermine has more than one site where protonation can occur. For example, mono-protonation can occur at a terminal group or at internal groups. The Kb values for dissociation of spermine protonated at one or other of the sites are examples of micro-constants. They cannot be determined directly by means of pH, absorbance, fluorescence or NMR measurements; a measured Kb value is the sum of the K values for the micro-reactions.
Nevertheless, the site of protonation is very important for biological function, so mathematical methods have been developed for the determination of micro-constants.
When two reactants form a single product in parallel, the macro-constant satisfies 1/K = 1/KX + 1/KY. For example, the abovementioned equilibrium for spermine may be considered in terms of the Ka values of two tautomeric conjugate acids, with macro-constant Ka; in this case 1/Ka = 1/Ka,X + 1/Ka,Y. This is equivalent to the preceding expression since Ka is proportional to 1/Kb.
When a reactant undergoes two reactions in series, the macro-constant for the combined reaction is the product of the micro-constants for the two steps. For example, the abovementioned cysteine zwitterion can lose two protons, one from sulphur and one from nitrogen, and the overall macro-constant for losing two protons is the product of the two dissociation constants, K = K1 K2. This can also be written in terms of logarithmic constants as pK = pK1 + pK2.
Applications and significance
A knowledge of pKa values is important for the quantitative treatment of systems involving acid–base equilibria in solution. Many applications exist in biochemistry; for example, the pKa values of proteins and amino acid side chains are of major importance for the activity of enzymes and the stability of proteins. Protein pKa values cannot always be measured directly, but may be calculated using theoretical methods. Buffer solutions are used extensively to provide solutions at or near the physiological pH for the study of biochemical reactions; the design of these solutions depends on a knowledge of the pKa values of their components. Important buffer solutions include MOPS, which provides a solution with pH 7.2, and tricine, which is used in gel electrophoresis. Buffering is an essential part of acid base physiology including acid–base homeostasis, and is key to understanding disorders such as acid–base disorder. The isoelectric point of a given molecule is a function of its pK values, so different molecules have different isoelectric points. This permits a technique called isoelectric focusing, which is used for separation of proteins by 2-D gel polyacrylamide gel electrophoresis.
Buffer solutions also play a key role in analytical chemistry. They are used whenever there is a need to fix the pH of a solution at a particular value. Compared with an aqueous solution, the pH of a buffer solution is relatively insensitive to the addition of a small amount of strong acid or strong base. The buffer capacity of a simple buffer solution is largest when pH = pKa. In acid–base extraction, the efficiency of extraction of a compound into an organic phase, such as an ether, can be optimised by adjusting the pH of the aqueous phase using an appropriate buffer. At the optimum pH, the concentration of the electrically neutral species is maximised; such a species is more soluble in organic solvents having a low dielectric constant than it is in water. This technique is used for the purification of weak acids and bases.
A pH indicator is a weak acid or weak base that changes colour in the transition pH range, which is approximately pKa ± 1. The design of a universal indicator requires a mixture of indicators whose adjacent pKa values differ by about two, so that their transition pH ranges just overlap.
In pharmacology, ionization of a compound alters its physical behaviour and macro properties such as solubility and lipophilicity (log P). For example, ionization of a compound will increase its solubility in water but decrease its lipophilicity. This is exploited in drug development to increase the concentration of a compound in the blood by adjusting the pKa of an ionizable group.
Knowledge of pKa values is important for the understanding of coordination complexes, which are formed by the interaction of a metal ion, Mm+, acting as a Lewis acid, with a ligand, L, acting as a Lewis base. However, the ligand may also undergo protonation reactions, so the formation of a complex in aqueous solution could be represented symbolically by a reaction of the type M + LH ⇌ ML + H (charges omitted for simplicity).
To determine the equilibrium constant for this reaction, in which the ligand loses a proton, the pKa of the protonated ligand must be known. In practice, the ligand may be polyprotic; for example EDTA4− can accept four protons; in that case, all pKa values must be known. In addition, the metal ion is subject to hydrolysis, that is, it behaves as a weak acid, so the pK values for the hydrolysis reactions must also be known.
Assessing the hazard associated with an acid or base may require a knowledge of pKa values. For example, hydrogen cyanide is a very toxic gas, because the cyanide ion inhibits the iron-containing enzyme cytochrome c oxidase. Hydrogen cyanide is a weak acid in aqueous solution with a pKa of about 9. In strongly alkaline solutions, above pH 11, say, it follows that sodium cyanide is "fully dissociated" so the hazard due to the hydrogen cyanide gas is much reduced. An acidic solution, on the other hand, is very hazardous because all the cyanide is in its acid form. Ingestion of cyanide by mouth is potentially fatal, independently of pH, because of the reaction with cytochrome c oxidase.
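The cyanide example can be checked with the standard speciation expression for a monoprotic acid: the fraction present as the neutral (volatile) acid is 1/(1 + 10^(pH − pKa)). A quick sketch, taking pKa ≈ 9 from the text:

```python
def fraction_neutral_acid(pH, pKa):
    """Fraction of the total present as the undissociated acid HA at a given pH."""
    return 1.0 / (1.0 + 10**(pH - pKa))

for pH in (5, 7, 9, 11):
    print(pH, round(fraction_neutral_acid(pH, 9.0), 4))
# At pH 11 only about 1 % remains as HCN; at pH 5 essentially all of it does.
```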
In environmental science acid–base equilibria are important for lakes and rivers; for example, humic acids are important components of natural waters. Another example occurs in chemical oceanography: in order to quantify the solubility of iron(III) in seawater at various salinities, the pKa values for the formation of the iron(III) hydrolysis products , and were determined, along with the solubility product of iron hydroxide.
Values for common substances
There are multiple techniques to determine the pKa of a chemical, leading to some discrepancies between different sources. Well measured values are typically within 0.1 units of each other. Data presented here were taken at 25 °C in water. More values can be found in the Thermodynamics section, above. A table of pKa of carbon acids, measured in DMSO, can be found on the page on carbanions.
See also
Acidosis
Acids in wine: tartaric, malic and citric are the principal acids in wine.
Alkalosis
Arterial blood gas
Chemical equilibrium
Conductivity (electrolytic)
Grotthuss mechanism: how protons are transferred between hydronium ions and water molecules, accounting for the exceptionally high ionic mobility of the proton (animation).
Hammett acidity function: a measure of acidity that is used for very concentrated solutions of strong acids, including superacids.
Ion transport number
Ocean acidification: dissolution of atmospheric carbon dioxide affects seawater pH. The reaction depends on total inorganic carbon and on solubility equilibria with solid carbonates such as limestone and dolomite.
Law of dilution
pCO2
pH
Predominance diagram: relates to equilibria involving polyoxyanions. pKa values are needed to construct these diagrams.
Proton affinity: a measure of basicity in the gas phase.
Stability constants of complexes: formation of a complex can often be seen as a competition between proton and metal ion for a ligand, which is the product of dissociation of an acid.
Notes
References
Further reading
Chapter 4: Solvent Effects on the Position of Homogeneous Chemical Equilibria.
External links
Acidity–Basicity Data in Nonaqueous Solvents Extensive bibliography of pKa values in DMSO, acetonitrile, THF, heptane, 1,2-dichloroethane, and in the gas phase
Curtipot All-in-one freeware for pH and acid–base equilibrium calculations and for simulation and analysis of potentiometric titration curves with spreadsheets
SPARC Physical/Chemical property calculator Includes a database with aqueous, non-aqueous, and gaseous phase pKa values that can be searched using SMILES or CAS registry numbers
Aqueous-Equilibrium Constants pKa values for various acid and bases. Includes a table of some solubility products
Free guide to pKa and log p interpretation and measurement Explanations of the relevance of these properties to pharmacology
Free online prediction tool (Marvin) pKa, log p, log d etc. From ChemAxon
Chemicalize.org:List of predicted structure based properties
pKa Chart by David A. Evans
Equilibrium chemistry
Acids
Bases (chemistry)
Analytical chemistry
Physical chemistry | 0.778961 | 0.997186 | 0.776769 |
Chemical composition | A chemical composition specifies the identity, arrangement, and ratio of the chemical elements making up a compound by way of chemical and atomic bonds.
Chemical formulas can be used to describe the relative amounts of elements present in a compound. For example, the chemical formula for water is H2O: this means that each molecule of water is constituted by 2 atoms of hydrogen (H) and 1 atom of oxygen (O). The chemical composition of water may be interpreted as a 2:1 ratio of hydrogen atoms to oxygen atoms. Different types of chemical formulas are used to convey composition information, such as an empirical or molecular formula.
Nomenclature can be used to express not only the elements present in a compound but their arrangement within the molecules of the compound. In this way, compounds will have unique names which can describe their elemental composition.
Composite mixture
The chemical composition of a mixture can be defined as the distribution of the individual substances that constitute the mixture, called "components". In other words, it is equivalent to quantifying the concentration of each component. Because there are different ways to define the concentration of a component, there are also different ways to define the composition of a mixture. It may be expressed as mole fraction, volume fraction, mass fraction, molality, molarity, normality or mixing ratio.
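Because the different composition measures are interconvertible when molar masses are known, a mixture specified in one measure can be restated in another. A minimal sketch converting mass fractions to mole fractions (the molar masses are rounded values used only for illustration):

```python
def mole_fractions_from_mass_fractions(mass_fractions, molar_masses):
    """Convert mass fractions {component: w_i} to mole fractions, given molar masses in g/mol."""
    moles = {c: w / molar_masses[c] for c, w in mass_fractions.items()}
    total = sum(moles.values())
    return {c: n / total for c, n in moles.items()}

# Illustrative: a water-ethanol mixture that is 60 % water and 40 % ethanol by mass
print(mole_fractions_from_mass_fractions(
    {"water": 0.60, "ethanol": 0.40},
    {"water": 18.02, "ethanol": 46.07}))
# -> roughly 0.79 water and 0.21 ethanol on a mole basis
```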
The chemical composition of a mixture can be represented graphically in plots such as ternary and quaternary plots.
References
Chemical properties
Analytical chemistry | 0.785175 | 0.989288 | 0.776765 |
Biotransformation | Biotransformation is the biochemical modification of one chemical compound or a mixture of chemical compounds. Biotransformations can be conducted with whole cells, their lysates, or purified enzymes. Increasingly, biotransformations are effected with purified enzymes. Major industries and life-saving technologies depend on biotransformations.
Advantages and disadvantages
Compared to the conventional production of chemicals, biotransformations are often attractive because their selectivities can be high, limiting the formation of undesirable by-products. Because they generally operate at mild temperatures and pressures in aqueous solution, many biotransformations are "green". The catalysts, i.e. the enzymes, are amenable to improvement by genetic manipulation.
Biotransformations are usually constrained by substrate scope. Petrochemicals, for example, are often not amenable to biotransformations, especially on the scale required for some applications, e.g. fuels. Biotransformations can be slow and are often incompatible with the high temperatures employed in traditional chemical synthesis to increase rates: enzymes are generally stable only below 100 °C, and often only well below that. Enzymes, like other catalysts, can be poisoned. In some cases, performance or recyclability can be improved by using immobilized enzymes.
Historical
Wine and beer making are examples of biotransformations that have been practiced since ancient times. Vinegar has long been produced by fermentation, involving the oxidation of ethanol to acetic acid. Cheesemaking traditionally relies on microbes to convert dairy precursors. Yogurt is produced by inoculating heat-treated milk with microorganisms such as Streptococcus thermophilus and Lactobacillus bulgaricus.
Modern examples
Pharmaceuticals
Beta-lactam antibiotics, e.g. penicillin and cephalosporins, are produced by biotransformations in an industry valued at several billion dollars. Processes are conducted in vessels up to 60,000 gal in volume. Sugars, methionine, and ammonium salts are used as carbon, sulfur, and nitrogen sources. Genetically modified Penicillium chrysogenum is employed for penicillin production.
Some steroids are hydroxylated in vitro to give drugs.
Sugars
High fructose corn syrup is generated by biotransformation of corn starch, which is converted to a mixture of glucose and fructose. Glucoamylase is one enzyme used in the process.
Cyclodextrins are produced by transferases.
Amino acids
Amino acids are sometimes produced industrially by transaminases. In other cases, amino acids are obtained by biotransformations of peptides using peptidases.
Acrylamide
With acrylonitrile and water as substrates, nitrile hydratase enzymes are used to produce acrylamide, a valued monomer.
Biofuels
Many kinds of fuels and lubricants are produced by processes that include biotransformations starting from natural precursors such as fats, cellulose, and sugars.
See also
Biotechnology
Biodegradation
References
Bioremediation
Biotechnology
Biodegradation | 0.792485 | 0.98012 | 0.776731 |
Compressibility factor | In thermodynamics, the compressibility factor (Z), also known as the compression factor or the gas deviation factor, describes the deviation of a real gas from ideal gas behaviour. It is simply defined as the ratio of the molar volume of a gas to the molar volume of an ideal gas at the same temperature and pressure. It is a useful thermodynamic property for modifying the ideal gas law to account for the real gas behaviour. In general, deviation from ideal behaviour becomes more significant the closer a gas is to a phase change, the lower the temperature or the larger the pressure. Compressibility factor values are usually obtained by calculation from equations of state (EOS), such as the virial equation which take compound-specific empirical constants as input. For a gas that is a mixture of two or more pure gases (air or natural gas, for example), the gas composition must be known before compressibility can be calculated.
Alternatively, the compressibility factor for specific gases can be read from generalized compressibility charts that plot as a function of pressure at constant temperature.
The compressibility factor should not be confused with the compressibility (also known as coefficient of compressibility or isothermal compressibility) of a material, which is the measure of the relative volume change of a fluid or solid in response to a pressure change.
Definition and physical significance
The compressibility factor is defined in thermodynamics and engineering frequently as:
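In the notation defined in the next sentence, this definition is commonly written as
Z = p / (ρ R_specific T),
which is equivalent to Z = p V_m / (R T), with V_m the molar volume.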
where p is the pressure, ρ is the density of the gas and R_specific = R/M is the specific gas constant, M being the molar mass; T is the absolute temperature (kelvin or Rankine scale).
In statistical mechanics the description is:
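In the variables listed in the following sentence, this amounts to
Z = p V / (n R T).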
where p is the pressure, n is the number of moles of gas, T is the absolute temperature, R is the gas constant, and V is the volume.
For an ideal gas the compressibility factor is Z = 1 by definition. In many real-world applications, requirements for accuracy demand that deviations from ideal gas behaviour, i.e. real gas behaviour, be taken into account. The value of Z generally increases with pressure and decreases with temperature. At high pressures molecules are colliding more often. This allows repulsive forces between molecules to have a noticeable effect, making the molar volume of the real gas greater than the molar volume of the corresponding ideal gas, which causes Z to exceed one. When pressures are lower, the molecules are freer to move; in this case attractive forces dominate, making Z less than one. The closer the gas is to its critical point or its boiling point, the more Z deviates from the ideal case.
Fugacity
The compressibility factor is linked to the fugacity by the relation:
Generalized compressibility factor graphs for pure gases
The unique relationship between the compressibility factor and the reduced temperature, , and the reduced pressure, , was first recognized by Johannes Diderik van der Waals in 1873 and is known as the two-parameter principle of corresponding states. The principle of corresponding states expresses the generalization that the properties of a gas which are dependent on intermolecular forces are related to the critical properties of the gas in a universal way. That provides a most important basis for developing correlations of molecular properties.
As for the compressibility of gases, the principle of corresponding states indicates that any pure gas at the same reduced temperature, , and reduced pressure, , should have the same compressibility factor.
The reduced temperature and pressure are defined by T_r = T / T_c and p_r = p / p_c.
Here T_c and p_c are known as the critical temperature and critical pressure of a gas. They are characteristics of each specific gas, T_c being the temperature above which it is not possible to liquefy a given gas and p_c the minimum pressure required to liquefy a given gas at its critical temperature. Together they define the critical point of a fluid, above which distinct liquid and gas phases of a given fluid do not exist.
The pressure-volume-temperature (PVT) data for real gases varies from one pure gas to another. However, when the compressibility factors of various single-component gases are graphed versus pressure along with temperature isotherms many of the graphs exhibit similar isotherm shapes.
In order to obtain a generalized graph that can be used for many different gases, the reduced pressure and temperature, and , are used to normalize the compressibility factor data. Figure 2 is an example of a generalized compressibility factor graph derived from hundreds of experimental PVT data points of 10 pure gases, namely methane, ethane, ethylene, propane, n-butane, i-pentane, n-hexane, nitrogen, carbon dioxide and steam.
There are more detailed generalized compressibility factor graphs based on as many as 25 or more different pure gases, such as the Nelson-Obert graphs. Such graphs are said to have an accuracy within 1–2 percent for values greater than 0.6 and within 4–6 percent for values of 0.3–0.6.
The generalized compressibility factor graphs may be considerably in error for strongly polar gases which are gases for which the centers of positive and negative charge do not coincide. In such cases the estimate for may be in error by as much as 15–20 percent.
The quantum gases hydrogen, helium, and neon do not conform to the corresponding-states behavior and the reduced pressure and temperature for those three gases should be redefined in the following manner to improve the accuracy of predicting their compressibility factors when using the generalized graphs:
T_r = T / (T_c + 8) and p_r = p / (p_c + 8),
where the temperatures are in kelvins and the pressures are in atmospheres.
Reading a generalized compressibility chart
In order to read a compressibility chart, the reduced pressure and temperature must be known. If either the reduced pressure or temperature is unknown, the reduced specific volume must be found. Unlike the reduced pressure and temperature, the reduced specific volume is not found by using the critical volume. The reduced specific volume is defined by
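a normalization that uses the critical temperature and pressure rather than the critical volume; in its conventional form (often called the pseudo-reduced specific volume)
v_R = v p_c / (R_specific T_c),
with R_specific the specific gas constant introduced earlier.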
where v is the specific volume.
Once two of the three reduced properties are found, the compressibility chart can be used. In a compressibility chart, reduced pressure is on the x-axis and Z is on the y-axis. When given the reduced pressure and temperature, find the given reduced pressure on the x-axis. From there, move up on the chart until the given reduced temperature is found. Z is found by looking at where those two points intersect. The same process can be followed if the reduced specific volume is given with either the reduced pressure or the reduced temperature.
Observations made from a generalized compressibility chart
There are three observations that can be made when looking at a generalized compressibility chart. These observations are:
Gases behave as an ideal gas regardless of temperature when the reduced pressure is much less than one (PR ≪ 1).
When reduced temperature is greater than two (TR > 2), ideal-gas behavior can be assumed regardless of pressure, unless pressure is much greater than one (PR ≫ 1).
Gases deviate from ideal-gas behavior the most in the vicinity of the critical point.
Theoretical models
The virial equation is especially useful to describe the causes of non-ideality at a molecular level (very few gases are mono-atomic) as it is derived directly from statistical mechanics:
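In its volume-explicit (Leiden) form the expansion is usually written as
Z = 1 + B / V_m + C / V_m² + D / V_m³ + …
with V_m the molar volume.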
where the coefficients B, C, ... in the numerators are known as virial coefficients and are functions of temperature.
The virial coefficients account for interactions between successively larger groups of molecules. For example, accounts for interactions between pairs, for interactions between three gas molecules, and so on. Because interactions between large numbers of molecules are rare, the virial equation is usually truncated after the third term.
When this truncation is assumed, the compressibility factor is linked to the intermolecular-force potential φ by:
The Real gas article features more theoretical methods to compute compressibility factors.
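As a rough numerical illustration of the truncated virial approach, the sketch below uses the leading-order pressure form Z ≈ 1 + B·p/(R·T). The second virial coefficient used here is only an order-of-magnitude illustration (roughly what one finds for nitrogen near room temperature), not a reference value.

```python
R = 8.314  # J/(mol K)

def z_second_virial(p_pa, T_kelvin, B_m3_per_mol):
    """Leading-order estimate from the truncated virial equation: Z ~ 1 + B*p/(R*T)."""
    return 1.0 + B_m3_per_mol * p_pa / (R * T_kelvin)

# Illustrative second virial coefficient of about -4e-6 m^3/mol
print(z_second_virial(10e5, 300.0, -4e-6))   # ~0.998 at 10 bar and 300 K
```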
Physical mechanism of temperature and pressure dependence
Deviations of the compressibility factor, Z, from unity are due to attractive and repulsive intermolecular forces. At a given temperature and pressure, repulsive forces tend to make the volume larger than for an ideal gas; when these forces dominate Z is greater than unity. When attractive forces dominate, Z is less than unity. The relative importance of attractive forces decreases as temperature increases (see effect on gases).
As seen above, the behavior of Z is qualitatively similar for all gases. Molecular nitrogen, N2, is used here to further describe and understand that behavior. All data used in this section were obtained from the NIST Chemistry WebBook. It is useful to note that for N2 the normal boiling point of the liquid is 77.4 K and the critical point is at 126.2 K and 34.0 bar.
The figure on the right shows an overview covering a wide temperature range. At low temperature (100 K), the curve has a characteristic check-mark shape; the rising portion of the curve is very nearly directly proportional to pressure. At intermediate temperature (160 K), there is a smooth curve with a broad minimum; although the high pressure portion is again nearly linear, it is no longer directly proportional to pressure. Finally, at high temperature (400 K), Z is above unity at all pressures. For all curves, Z approaches the ideal gas value of unity at low pressure and exceeds that value at very high pressure.
To better understand these curves, a closer look at the behavior for low temperature and pressure is given in the second figure. All of the curves start out with Z equal to unity at zero pressure and Z initially decreases as pressure increases. N2 is a gas under these conditions, so the distance between molecules is large, but becomes smaller as pressure increases. This increases the attractive interactions between molecules, pulling the molecules closer together and causing the volume to be less than for an ideal gas at the same temperature and pressure. Higher temperature reduces the effect of the attractive interactions and the gas behaves in a more nearly ideal manner.
As the pressure increases, the gas eventually reaches the gas-liquid coexistence curve, shown by the dashed line in the figure. When that happens, the attractive interactions have become strong enough to overcome the tendency of thermal motion to cause the molecules to spread out; so the gas condenses to form a liquid. Points on the vertical portions of the curves correspond to N2 being partly gas and partly liquid. On the coexistence curve, there are then two possible values for Z, a larger one corresponding to the gas and a smaller value corresponding to the liquid. Once all the gas has been converted to liquid, the volume decreases only slightly with further increases in pressure; then Z is very nearly proportional to pressure.
As temperature and pressure increase along the coexistence curve, the gas becomes more like a liquid and the liquid becomes more like a gas. At the critical point, the two are the same. So for temperatures above the critical temperature (126.2 K), there is no phase transition; as pressure increases the gas gradually transforms into something more like a liquid. Just above the critical point there is a range of pressure for which Z drops quite rapidly (see the 130 K curve), but at higher temperatures the process is entirely gradual.
The final figure shows the behavior at temperatures well above the critical temperature. The repulsive interactions are essentially unaffected by temperature, but the attractive interactions have less and less influence. Thus, at sufficiently high temperature, the repulsive interactions dominate at all pressures.
This can be seen in the graph showing the high temperature behavior. As temperature increases, the initial slope becomes less negative, the pressure at which Z is a minimum gets smaller, and the pressure at which repulsive interactions start to dominate, i.e. where Z goes from less than unity to greater than unity, gets smaller. At the Boyle temperature (327 K for N2), the attractive and repulsive effects cancel each other at low pressure. Then Z remains at the ideal gas value of unity up to pressures of several tens of bar. Above the Boyle temperature, the compressibility factor is always greater than unity and increases slowly but steadily as pressure increases.
Experimental values
It is extremely difficult to generalize at what pressures or temperatures the deviation from the ideal gas becomes important. As a rule of thumb, the ideal gas law is reasonably accurate up to a pressure of about 2 atm, and even higher for small non-associating molecules. For example, for methyl chloride, a highly polar molecule with significant intermolecular forces, the experimental compressibility factor deviates appreciably from unity at a pressure of 10 atm and a temperature of 100 °C, whereas for air (small non-polar molecules) at approximately the same conditions the compressibility factor stays very close to unity (see table below for 10 bars, 400 K).
Compressibility of air
Normal air comprises, in round numbers, 80 percent nitrogen and 20 percent oxygen. Both molecules are small and non-polar (and therefore non-associating). We can therefore expect that the behaviour of air within broad temperature and pressure ranges can be approximated as an ideal gas with reasonable accuracy. Experimental values for the compressibility factor confirm this.
values are calculated from values of pressure, volume (or density), and temperature in Vasserman, Kazavchinskii, and Rabinovich, "Thermophysical Properties of Air and Air Components;' Moscow, Nauka, 1966, and NBS-NSF Trans. TT 70-50095, 1971: and Vasserman and Rabinovich, "Thermophysical Properties of Liquid Air and Its Component, "Moscow, 1968, and NBS-NSF Trans. 69-55092, 1970.
See also
Fugacity
Real gas
Theorem of corresponding states
Van der Waals equation
References
External links
Compressibility factor (gases) A Citizendium article.
Real Gases includes a discussion of compressibility factors.
Chemical engineering thermodynamics
Gas laws | 0.780849 | 0.994718 | 0.776724 |
Nucleophile | In chemistry, a nucleophile is a chemical species that forms bonds by donating an electron pair. All molecules and ions with a free pair of electrons or at least one pi bond can act as nucleophiles. Because nucleophiles donate electrons, they are Lewis bases.
Nucleophilic describes the affinity of a nucleophile to bond with positively charged atomic nuclei. Nucleophilicity, sometimes referred to as nucleophile strength, refers to a substance's nucleophilic character and is often used to compare the affinity of atoms. Neutral nucleophilic reactions with solvents such as alcohols and water are named solvolysis. Nucleophiles may take part in nucleophilic substitution, whereby a nucleophile becomes attracted to a full or partial positive charge, and nucleophilic addition. Nucleophilicity is closely related to basicity. The difference between the two is that basicity is a thermodynamic property (i.e. relates to an equilibrium state), but nucleophilicity is a kinetic property, which relates to rates of certain chemical reactions.
History and etymology
The terms nucleophile and electrophile were introduced by Christopher Kelk Ingold in 1933, replacing the terms anionoid and cationoid proposed earlier by A. J. Lapworth in 1925. The word nucleophile is derived from nucleus and the Greek word φιλος, philos, meaning friend.
Properties
In general, in a group across the periodic table, the more basic the ion (the higher the pKa of the conjugate acid) the more reactive it is as a nucleophile. Within a series of nucleophiles with the same attacking element (e.g. oxygen), the order of nucleophilicity will follow basicity. Sulfur is in general a better nucleophile than oxygen.
Nucleophilicity
Many schemes attempting to quantify relative nucleophilic strength have been devised. The following empirical data have been obtained by measuring reaction rates for many reactions involving many nucleophiles and electrophiles. Nucleophiles displaying the so-called alpha effect are usually omitted in this type of treatment.
Swain–Scott equation
The first such attempt is found in the Swain–Scott equation derived in 1953:
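In its commonly quoted form (all symbols as defined in the next sentence):
log10(k / k0) = s · n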
This free-energy relationship relates the pseudo first order reaction rate constant (in water at 25 °C), k, of a reaction, normalized to the reaction rate, k0, of a standard reaction with water as the nucleophile, to a nucleophilic constant n for a given nucleophile and a substrate constant s that depends on the sensitivity of a substrate to nucleophilic attack (defined as 1 for methyl bromide).
This treatment results in the following values for typical nucleophilic anions: acetate 2.7, chloride 3.0, azide 4.0, hydroxide 4.2, aniline 4.5, iodide 5.0, and thiosulfate 6.4. Typical substrate constants are 0.66 for ethyl tosylate, 0.77 for β-propiolactone, 1.00 for 2,3-epoxypropanol, 0.87 for benzyl chloride, and 1.43 for benzoyl chloride.
The equation predicts that, in a nucleophilic displacement on benzyl chloride, the azide anion reacts 3000 times faster than water.
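That factor follows directly from the constants quoted above; a quick check using the substrate constant for benzyl chloride and the nucleophilic constant for azide (water, the reference nucleophile, has n = 0):

```python
s_benzyl_chloride = 0.87      # substrate constant quoted above
n_azide, n_water = 4.0, 0.0   # water is the reference nucleophile, so n = 0

rate_ratio = 10 ** (s_benzyl_chloride * (n_azide - n_water))
print(round(rate_ratio))      # roughly 3000
```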
Ritchie equation
The Ritchie equation, derived in 1972, is another free-energy relationship:
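In its usual form (symbols as defined in the next sentence):
log10(k) = log10(k0) + N+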
where N+ is the nucleophile dependent parameter and k0 the reaction rate constant for water. In this equation, a substrate-dependent parameter like s in the Swain–Scott equation is absent. The equation states that two nucleophiles react with the same relative reactivity regardless of the nature of the electrophile, which is in violation of the reactivity–selectivity principle. For this reason, this equation is also called the constant selectivity relationship.
In the original publication the data were obtained by reactions of selected nucleophiles with selected electrophilic carbocations such as tropylium or diazonium cations, or ions based on malachite green. Many other reaction types have since been described.
Typical Ritchie N+ values (in methanol) are: 0.5 for methanol, 5.9 for the cyanide anion, 7.5 for the methoxide anion, 8.5 for the azide anion, and 10.7 for the thiophenol anion. The values for the relative cation reactivities are −0.4 for the malachite green cation, +2.6 for the benzenediazonium cation, and +4.5 for the tropylium cation.
Mayr–Patz equation
In the Mayr–Patz equation (1994):
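The relationship is usually quoted as
log k (20 °C) = s (N + E),
with the symbols defined in the next sentence.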
The second order reaction rate constant k at 20 °C for a reaction is related to a nucleophilicity parameter N, an electrophilicity parameter E, and a nucleophile-dependent slope parameter s. The constant s is defined as 1 with 2-methyl-1-pentene as the nucleophile.
Many of the constants have been derived from reactions of so-called benzhydrylium ions as the electrophiles with a diverse collection of π-nucleophiles.
Typical E values are +6.2 for R = chlorine, +5.90 for R = hydrogen, 0 for R = methoxy and −7.02 for R = dimethylamine.
Typical N values with s in parentheses are −4.47 (1.32) for electrophilic aromatic substitution to toluene (1), −0.41 (1.12) for electrophilic addition to 1-phenyl-2-propene (2), and 0.96 (1) for addition to 2-methyl-1-pentene (3), −0.13 (1.21) for reaction with triphenylallylsilane (4), 3.61 (1.11) for reaction with 2-methylfuran (5), +7.48 (0.89) for reaction with isobutenyltributylstannane (6) and +13.36 (0.81) for reaction with the enamine 7.
The range of organic reactions also includes SN2 reactions.
With E = −9.15 for the S-methyldibenzothiophenium ion, typical nucleophile values N (s) are 15.63 (0.64) for piperidine, 10.49 (0.68) for methoxide, and 5.20 (0.89) for water. In short, nucleophilicities towards sp2 or sp3 centers follow the same pattern.
Unified equation
In an effort to unify the above described equations the Mayr equation is rewritten as:
with sE the electrophile-dependent slope parameter and sN the nucleophile-dependent slope parameter. This equation can be rewritten in several ways:
with sE = 1 for carbocations this equation is equal to the original Mayr–Patz equation of 1994,
with sN = 0.6 for most n nucleophiles the equation becomes
or the original Swain–Scott equation written as:
with sE = 1 for carbocations and sN = 0.6 the equation becomes:
or the original Ritchie equation written as:
Types
Examples of nucleophiles are anions such as Cl−, or compounds with a lone pair of electrons such as NH3 (ammonia) and PR3.
In the example below, the oxygen of the hydroxide ion donates an electron pair to form a new chemical bond with the carbon at the end of the bromopropane molecule. The bond between the carbon and the bromine then undergoes heterolytic fission, with the bromine atom taking both bonding electrons and becoming the bromide ion (Br−), because an SN2 reaction occurs by backside attack. This means that the hydroxide ion attacks the carbon atom from the other side, exactly opposite the bromine. Because of this backside attack, SN2 reactions result in an inversion of configuration at the electrophilic carbon. If the electrophile is chiral, it typically remains chiral, but the product's absolute configuration is inverted compared with that of the original electrophile.
Ambident nucleophile
An ambident nucleophile is one that can attack from two or more places, resulting in two or more products. For example, the thiocyanate ion (SCN−) may attack from either the sulfur or the nitrogen. For this reason, the SN2 reaction of an alkyl halide with SCN− often leads to a mixture of an alkyl thiocyanate (R-SCN) and an alkyl isothiocyanate (R-NCS). Similar considerations apply in the Kolbe nitrile synthesis.
Halogens
While the halogens are not nucleophilic in their diatomic form (e.g. I2 is not a nucleophile), their anions are good nucleophiles. In polar, protic solvents, F− is the weakest nucleophile, and I− the strongest; this order is reversed in polar, aprotic solvents.
Carbon
Carbon nucleophiles are often organometallic reagents such as those found in the Grignard reaction, Blaise reaction, Reformatsky reaction, and Barbier reaction or reactions involving organolithium reagents and acetylides. These reagents are often used to perform nucleophilic additions.
Enols are also carbon nucleophiles. The formation of an enol is catalyzed by acid or base. Enols are ambident nucleophiles, but, in general, nucleophilic at the alpha carbon atom. Enols are commonly used in condensation reactions, including the Claisen condensation and the aldol condensation reactions.
Oxygen
Examples of oxygen nucleophiles are water (H2O), hydroxide anion, alcohols, alkoxide anions, hydrogen peroxide, and carboxylate anions.
Nucleophilic attack does not take place during intermolecular hydrogen bonding.
Sulfur
Of sulfur nucleophiles, hydrogen sulfide and its salts, thiols (RSH), thiolate anions (RS−), anions of thiolcarboxylic acids (RC(O)-S−), and anions of dithiocarbonates (RO-C(S)-S−) and dithiocarbamates (R2N-C(S)-S−) are used most often.
In general, sulfur is very nucleophilic because of its large size, which makes it readily polarizable, and its lone pairs of electrons are readily accessible.
Nitrogen
Nitrogen nucleophiles include ammonia, azide, amines, nitrites, hydroxylamine, hydrazine, carbazide, phenylhydrazine, semicarbazide, and amide.
Metal centers
Although metal centers (e.g., Li+, Zn2+, Sc3+, etc.) are most commonly cationic and electrophilic (Lewis acidic) in nature, certain metal centers (particularly ones in a low oxidation state and/or carrying a negative charge) are among the strongest recorded nucleophiles and are sometimes referred to as "supernucleophiles." For instance, using methyl iodide as the reference electrophile, Ph3Sn– is about 10000 times more nucleophilic than I–, while the Co(I) form of vitamin B12 (vitamin B12s) is about 107 times more nucleophilic. Other supernucleophilic metal centers include low oxidation state carbonyl metalate anions (e.g., CpFe(CO)2–).
Examples
The following table shows the nucleophilicity of some molecules with methanol as the solvent:
See also
References
Physical organic chemistry | 0.782411 | 0.99266 | 0.776668 |
Chemical stability | In chemistry, chemical stability is the thermodynamic stability of a chemical system, in particular a chemical compound or a polymer.
Thermodynamic stability occurs when a system is in its lowest energy state, or in chemical equilibrium with its environment. This may be a dynamic equilibrium in which individual atoms or molecules change form, but their overall number in a particular form is conserved. This type of chemical thermodynamic equilibrium will persist indefinitely unless the system is changed. Chemical systems might undergo changes in the phase of matter or a set of chemical reactions.
State A is said to be more thermodynamically stable than state B if the Gibbs free energy of the change from A to B is positive.
Versus reactivity
Thermodynamic stability applies to a particular system. The reactivity of a chemical substance is a description of how it might react across a variety of potential chemical systems and, for a given system, how fast such a reaction could proceed.
Chemical substances or states can persist indefinitely even though they are not in their lowest energy state if they experience metastability – a state which is stable only if not disturbed too much. A substance (or state) might also be termed "kinetically persistent" if it is changing relatively slowly (and thus is not at thermodynamic equilibrium, but is observed anyway). Metastable and kinetically persistent species or systems are not considered truly stable in chemistry. Therefore, the term chemically stable should not be used by chemists as a synonym of unreactive, because it confuses thermodynamic and kinetic concepts. On the other hand, highly chemically unstable species tend to undergo exothermic unimolecular decompositions at high rates, so high chemical instability often goes hand in hand with rapid decomposition.
Outside chemistry
In everyday language, and often in materials science, a chemical substance is said to be "stable" if it is not particularly reactive in the environment or during normal use, and retains its useful properties on the timescale of its expected usefulness. In particular, the usefulness is retained in the presence of air, moisture or heat, and under the expected conditions of application. In this meaning, the material is said to be unstable if it can corrode, decompose, polymerize, burn or explode under the conditions of anticipated use or normal environmental conditions.
References
Physical chemistry
Materials science | 0.793367 | 0.978937 | 0.776657 |
Thermodynamic activity | In chemical thermodynamics, activity (symbol ) is a measure of the "effective concentration" of a species in a mixture, in the sense that the species' chemical potential depends on the activity of a real solution in the same way that it would depend on concentration for an ideal solution. The term "activity" in this sense was coined by the American chemist Gilbert N. Lewis in 1907.
By convention, activity is treated as a dimensionless quantity, although its value depends on customary choices of standard state for the species. The activity of pure substances in condensed phases (solids and liquids) is taken as unity (a = 1). Activity depends on temperature, pressure and composition of the mixture, among other things. For gases, the activity is the effective partial pressure, and is usually referred to as fugacity.
The difference between activity and other measures of concentration arises because the interactions between different types of molecules in non-ideal gases or solutions are different from interactions between the same types of molecules. The activity of an ion is particularly influenced by its surroundings.
Equilibrium constants should be defined by activities but, in practice, are often defined by concentrations instead. The same is often true of equations for reaction rates. However, there are circumstances where the activity and the concentration are significantly different and, as such, it is not valid to approximate with concentrations where activities are required. Two examples serve to illustrate this point:
In a solution of potassium hydrogen iodate KH(IO3)2 at 0.02 M the activity is 40% lower than the calculated hydrogen ion concentration, resulting in a much higher pH than expected.
When a 0.1 M hydrochloric acid solution containing methyl green indicator is added to a 5 M solution of magnesium chloride, the color of the indicator changes from green to yellow—indicating increasing acidity—when in fact the acid has been diluted. Although at low ionic strength (< 0.1 M) the activity coefficient approaches unity, this coefficient can actually increase with ionic strength in a high ionic strength regime. For hydrochloric acid solutions, the minimum is around 0.4 M.
Definition
The relative activity of a species , denoted , is defined as:
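In the usual notation (the symbols are those defined in the next sentence):
a_i = exp( (μ_i − μ_i°) / (R T) )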
where is the (molar) chemical potential of the species under the conditions of interest, is the (molar) chemical potential of that species under some defined set of standard conditions, is the gas constant, is the thermodynamic temperature and is the exponential constant.
Alternatively, this equation can be written as:
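This is simply the logarithmic rearrangement of the relation above:
μ_i = μ_i° + R T ln(a_i)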
In general, the activity depends on any factor that alters the chemical potential. Such factors may include: concentration, temperature, pressure, interactions between chemical species, electric fields, etc. Depending on the circumstances, some of these factors, in particular concentration and interactions, may be more important than others.
The activity depends on the choice of standard state such that changing the standard state will also change the activity. This means that activity is a relative term that describes how "active" a compound is compared to when it is under the standard state conditions. In principle, the choice of standard state is arbitrary; however, it is often chosen out of mathematical or experimental convenience. Alternatively, it is also possible to define an "absolute activity" (i.e., the fugacity in statistical mechanics), , which is written as:
Note that this definition corresponds to setting as standard state the solution of , if the latter exists.
Activity coefficient
The activity coefficient , which is also a dimensionless quantity, relates the activity to a measured mole fraction (or in the gas phase), molality , mass fraction , molar concentration (molarity) or mass concentration :
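In the usual convention, writing x, b and c for the mole fraction, molality and molar concentration of a species B, and b° and c° for the standard molality and standard concentration, these relations read
a_B = γ_x x_B,  a_B = γ_b b_B / b°,  a_B = γ_c c_B / c°,
and analogous expressions hold for the mass-fraction and mass-concentration scales.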
The division by the standard molality (usually 1 mol/kg) or the standard molar concentration (usually 1 mol/L) is necessary to ensure that both the activity and the activity coefficient are dimensionless, as is conventional.
The activity depends on the chosen standard state and composition scale; for instance, in the dilute limit it approaches the mole fraction, mass fraction, or numerical value of molarity, all of which are different. However, the activity coefficients are similar.
When the activity coefficient is close to 1, the substance shows almost ideal behaviour according to Henry's law (but not necessarily in the sense of an ideal solution). In these cases, the activity can be substituted with the appropriate dimensionless measure of composition. It is also possible to define an activity coefficient in terms of Raoult's law: the International Union of Pure and Applied Chemistry (IUPAC) recommends the symbol f for this activity coefficient, although this should not be confused with fugacity.
Standard states
Gases
In most laboratory situations, the difference in behaviour between a real gas and an ideal gas is dependent only on the pressure and the temperature, not on the presence of any other gases. At a given temperature, the "effective" pressure of a gas is given by its fugacity : this may be higher or lower than its mechanical pressure. By historical convention, fugacities have the dimension of pressure, so the dimensionless activity is given by:
where is the dimensionless fugacity coefficient of the species, is its mole fraction in the gaseous mixture ( for a pure gas) and is the total pressure. The value is the standard pressure: it may be equal to 1 atm (101.325 kPa) or 1 bar (100 kPa) depending on the source of data, and should always be quoted.
Mixtures in general
The most convenient way of expressing the composition of a generic mixture is by using the mole fractions (written in the gas phase) of the different components (or chemical species: atoms or molecules) present in the system, where
with , the number of moles of the component i, and , the total number of moles of all the different components present in the mixture.
The standard state of each component in the mixture is taken to be the pure substance, i.e. the pure substance has an activity of one. When activity coefficients are used, they are usually defined in terms of Raoult's law,
where is the Raoult's law activity coefficient: an activity coefficient of one indicates ideal behaviour according to Raoult's law.
Dilute solutions (non-ionic)
A solute in dilute solution usually follows Henry's law rather than Raoult's law, and it is more usual to express the composition of the solution in terms of the molar concentration (in mol/L) or the molality (in mol/kg) of the solute rather than in mole fractions. The standard state of a dilute solution is a hypothetical solution of concentration = 1 mol/L (or molality = 1 mol/kg) which shows ideal behaviour (also referred to as "infinite-dilution" behaviour). The standard state, and hence the activity, depends on which measure of composition is used. Molalities are often preferred as the volumes of non-ideal mixtures are not strictly additive and are also temperature-dependent: molalities do not depend on volume, whereas molar concentrations do.
The activity of the solute is given by:
Ionic solutions
When the solute undergoes ionic dissociation in solution (for example a salt), the system becomes decidedly non-ideal and we need to take the dissociation process into consideration. One can define activities for the cations and anions separately ( and ).
In a liquid solution the activity coefficient of a given ion (e.g. Ca2+) isn't measurable because it is experimentally impossible to independently measure the electrochemical potential of an ion in solution. (One cannot add cations without putting in anions at the same time). Therefore, one introduces the notions of
mean ionic activity
mean ionic molality
mean ionic activity coefficient
where ν+ and ν− represent the stoichiometric coefficients involved in the ionic dissociation process
Even though and cannot be determined separately, is a measurable quantity that can also be predicted for sufficiently dilute systems using Debye–Hückel theory. For electrolyte solutions at higher concentrations, Debye–Hückel theory needs to be extended and replaced, e.g., by a Pitzer electrolyte solution model (see external links below for examples). For the activity of a strong ionic solute (complete dissociation) we can write:
Measurement
The most direct way of measuring the activity of a volatile species is to measure its equilibrium partial vapor pressure. For water as solvent, the water activity aw is the equilibrated relative humidity. For non-volatile components, such as sucrose or sodium chloride, this approach will not work since they do not have measurable vapor pressures at most temperatures. However, in such cases it is possible to measure the vapor pressure of the solvent instead. Using the Gibbs–Duhem relation it is possible to translate the change in solvent vapor pressures with concentration into activities for the solute.
The simplest way of determining how the activity of a component depends on pressure is by measurement of densities of solution, knowing that real solutions have deviations from the additivity of (molar) volumes of pure components compared to the (molar) volume of the solution. This involves the use of partial molar volumes, which measure the change in chemical potential with respect to pressure.
Another way to determine the activity of a species is through the manipulation of colligative properties, specifically freezing point depression. Using freezing point depression techniques, it is possible to calculate the activity of a weak acid from the relation,
where is the total equilibrium molality of solute determined by any colligative property measurement (in this case ), is the nominal molality obtained from titration and is the activity of the species.
There are also electrochemical methods that allow the determination of activity and its coefficient.
The value of the mean ionic activity coefficient of ions in solution can also be estimated with the Debye–Hückel equation, the Davies equation or the Pitzer equations.
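As a rough illustration of such estimates, the sketch below implements the Davies equation (an empirical extension of the Debye–Hückel limiting law, with A ≈ 0.509 kg^1/2 mol^−1/2 for water at 25 °C); it is an approximation and should not be pushed beyond moderate ionic strengths.

```python
import math

def davies_log10_gamma(z, ionic_strength, A=0.509):
    """Davies equation: log10(gamma) for an ion of charge z at ionic strength I (mol/kg)."""
    sqrt_I = math.sqrt(ionic_strength)
    return -A * z**2 * (sqrt_I / (1.0 + sqrt_I) - 0.3 * ionic_strength)

# Mean ionic activity coefficient of a 1:1 electrolyte at I = 0.1 mol/kg
print(round(10**davies_log10_gamma(1, 0.1), 3))   # about 0.78
```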
Single ion activity measurability revisited
The prevailing view that single ion activities are unmeasurable, or perhaps even physically meaningless, has its roots in the work of Edward A. Guggenheim in the late 1920s. However, chemists have not given up the idea of single ion activities. For example, pH is defined as the negative logarithm of the hydrogen ion activity. By implication, if the prevailing view on the physical meaning and measurability of single ion activities is correct it relegates pH to the category of thermodynamically unmeasurable quantities. For this reason the International Union of Pure and Applied Chemistry (IUPAC) states that the activity-based definition of pH is a notional definition only and further states that the establishment of primary pH standards requires the application of the concept of 'primary method of measurement' tied to the Harned cell. Nevertheless, the concept of single ion activities continues to be discussed in the literature, and at least one author purports to define single ion activities in terms of purely thermodynamic quantities. The same author also proposes a method of measuring single ion activity coefficients based on purely thermodynamic processes.
Use
Chemical activities should be used to define chemical potentials, where the chemical potential depends on the temperature , pressure and the activity according to the formula:
where R is the gas constant and μ° is the value of the chemical potential μ under standard conditions. Note that the choice of concentration scale affects both the activity and the standard state chemical potential, which is especially important when the reference state is the infinite dilution of a solute in a solvent. Chemical potential has units of joules per mole (J/mol), or energy per amount of matter. Chemical potential can be used to characterize the specific Gibbs free energy changes occurring in chemical reactions or other transformations.
Formulae involving activities can be simplified by considering that:
For a chemical solution:
the solvent has an activity of unity (only a valid approximation for rather dilute solutions)
At a low concentration, the activity of a solute can be approximated to the ratio of its concentration over the standard concentration:
Therefore, it is approximately equal to its concentration.
For a mix of gas at low pressure, the activity is equal to the ratio of the partial pressure of the gas over the standard pressure: Therefore, it is equal to the partial pressure in atmospheres (or bars), compared to a standard pressure of 1 atmosphere (or 1 bar).
For a solid body, a uniform, single species solid has an activity of unity at standard conditions. The same thing holds for a pure liquid.
The latter follows from any definition based on Raoult's law, because if we let the solute concentration go to zero, the vapor pressure of the solvent will go to that of the pure solvent. Thus its activity will go to unity. This means that if during a reaction in dilute solution more solvent is generated (the reaction produces water for example) we can typically set its activity to unity.
Solid and liquid activities do not depend very strongly on pressure because their molar volumes are typically small. Graphite at 100 bars has an activity of only 1.01 if we choose the pure solid at 1 bar as the standard state. Only at very high pressures do we need to worry about such changes. Activity expressed in terms of pressure is called fugacity.
Example values
Example values of activity coefficients of sodium chloride in aqueous solution are given in the table. In an ideal solution, these values would all be unity. The deviations tend to become larger with increasing molality and temperature, but with some exceptions.
See also
Fugacity, the equivalent of activity for partial pressure
Chemical equilibrium
Electrochemical potential
Excess chemical potential
Partial molar property
Thermodynamic equilibrium
Thermal expansion
Virial expansion
Water activity
Non-random two-liquid model (NRTL model) – phase equilibrium calculations
UNIQUAC model – phase equilibrium calculations
References
External links
Equivalences among different forms of activity coefficients and chemical potentials
Calculate activity coefficients of common inorganic electrolytes and their mixtures
AIOMFAC online-model: calculator for activity coefficients of inorganic ions, water, and organic compounds in aqueous solutions and multicomponent mixtures with organic compounds.
Dimensionless numbers of chemistry
Physical chemistry
Thermodynamic properties | 0.786173 | 0.987868 | 0.776636 |
Standard electrode potential (data page) | The data below tabulates standard electrode potentials (E°), in volts relative to the standard hydrogen electrode (SHE), at:
Temperature 298.15 K (25.00 °C);
Effective concentration (activity) 1 mol/L for each aqueous or amalgamated (mercury-alloyed) species;
Unit activity for each solvent and pure solid or liquid species; and
Absolute partial pressure 1 atm (101.325 kPa) for each gaseous reagent; this is the convention in most literature data, but not the current standard-state pressure (100 kPa).
Variations from these ideal conditions affect measured voltage via the Nernst equation.
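For a half-reaction in which n electrons are transferred, the correction takes the familiar Nernst form
E = E° − (R T / (n F)) ln Q,
where Q is the reaction quotient, R the gas constant, T the absolute temperature and F the Faraday constant.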
Electrode potentials of successive elementary half-reactions cannot be directly added. However, the corresponding Gibbs free energy changes (∆G°) must satisfy
ΔG° = −n F E°,
where n electrons are transferred, and the Faraday constant F is the conversion factor describing the charge in coulombs transferred per mole of electrons. Those Gibbs free energies can be added.
For example, from Fe2+ + 2 e− → Fe(s) with E° = −0.44 V, the energy to form one neutral atom of Fe(s) from one Fe2+ ion and two electrons is 2 × 0.44 eV = 0.88 eV, or 84 907 J/mol. That value is also the standard formation energy (∆Gf°) for an Fe2+ ion, since e− and Fe(s) both have zero formation energy.
Data from different sources may cause table inconsistencies. For example, from the additivity of Gibbs energies, the potential of a combined half-reaction must equal the electron-number-weighted combination of the potentials of the half-reactions from which it is built; such relations do not always hold exactly with the cited values.
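The bookkeeping works on Gibbs energies rather than on the potentials themselves. The sketch below combines two half-reactions this way; the potentials used are rounded, illustrative textbook values for the iron couples, and small discrepancies of exactly the kind described above are expected.

```python
F = 96485.0  # C/mol, Faraday constant

def combined_potential(half_reactions):
    """Combine half-reactions by adding Gibbs energies (dG = -n*F*E), not potentials."""
    dG_total = sum(-n * F * E for n, E in half_reactions)
    n_total = sum(n for n, _ in half_reactions)
    return -dG_total / (n_total * F)

# Illustrative: Fe3+ + e- -> Fe2+ (about +0.77 V) and Fe2+ + 2e- -> Fe(s) (about -0.44 V)
print(round(combined_potential([(1, 0.77), (2, -0.44)]), 3))
# -> about -0.04 V for Fe3+ + 3e- -> Fe(s), not the naive sum 0.77 + (-0.44)
```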
Table of standard electrode potentials
Legend: (s) – solid; (l) – liquid; (g) – gas; (aq) – aqueous (default for all charged species); (Hg) – amalgam; bold – water electrolysis equations.
See also
Galvanic series lists electrode potentials in saltwater
Standard apparent reduction potentials in biochemistry at pH 7
Reactivity series#Comparison with standard electrode potentials
Notes
References
External links
http://www.jesuitnola.org/upload/clark/Refs/red_pot.htm
https://web.archive.org/web/20150924015049/http://www.fptl.ru/biblioteka/spravo4niki/handbook-of-Chemistry-and-Physics.pdf
http://hyperphysics.phy-astr.gsu.edu/Hbase/tables/electpot.html#c1
Electrochemistry
Electrochemical potentials
Chemistry-related lists | 0.779587 | 0.996209 | 0.776631 |
Reification (fallacy) | Reification (also known as concretism, hypostatization, or the fallacy of misplaced concreteness) is a fallacy of ambiguity, when an abstraction (abstract belief or hypothetical construct) is treated as if it were a concrete real event or physical entity.
In other words, it is the error of treating something that is not concrete, such as an idea, as a concrete thing. A common case of reification is the confusion of a model with reality: "the map is not the territory".
Reification is part of normal usage of natural language, as well as of literature, where a reified abstraction is intended as a figure of speech, and actually understood as such. But the use of reification in logical reasoning or rhetoric is misleading and usually regarded as a fallacy.
A potential consequence of reification is exemplified by Goodhart's law, where changes in the measurement of a phenomenon are mistaken for changes to the phenomenon itself.
Etymology
The term "reification" originates from the combination of the Latin terms res ("thing") and -fication, a suffix related to facere ("to make"). Thus reification can be loosely translated as "thing-making"; the turning of something abstract into a concrete thing or object.
Theory
Reification takes place when natural or social processes are misunderstood or simplified; for example, when human creations are described as "facts of nature, results of cosmic laws, or manifestations of divine will".
Reification may derive from an innate tendency to simplify experience by assuming constancy as much as possible.
Fallacy of misplaced concreteness
According to Alfred North Whitehead, one commits the fallacy of misplaced concreteness when one mistakes an abstract belief, opinion, or concept about the way things are for a physical or "concrete" reality: "There is an error; but it is merely the accidental error of mistaking the abstract for the concrete. It is an example of what might be called the 'Fallacy of Misplaced Concreteness.'" Whitehead proposed the fallacy in a discussion of the relation of spatial and temporal location of objects. He rejects the notion that a concrete physical object in the universe can be ascribed a simple spatial or temporal extension, that is, without reference to its relations to other spatial or temporal extensions.
[...] apart from any essential reference of the relations of [a] bit of matter to other regions of space [...] there is no element whatever which possesses this character of simple location. [... Instead,] I hold that by a process of constructive abstraction we can arrive at abstractions which are the simply located bits of material, and at other abstractions which are the minds included in the scientific scheme. Accordingly, the real error is an example of what I have termed: The Fallacy of Misplaced Concreteness.
Vicious abstractionism
William James used the notion of "vicious abstractionism" and "vicious intellectualism" in various places, especially to criticize Immanuel Kant's and Georg Wilhelm Friedrich Hegel's idealistic philosophies. In The Meaning of Truth, James wrote:
Let me give the name of "vicious abstractionism" to a way of using concepts which may be thus described: We conceive a concrete situation by singling out some salient or important feature in it, and classing it under that; then, instead of adding to its previous characters all the positive consequences which the new way of conceiving it may bring, we proceed to use our concept privatively; reducing the originally rich phenomenon to the naked suggestions of that name abstractly taken, treating it as a case of "nothing but" that concept, and acting as if all the other characters from out of which the concept is abstracted were expunged. Abstraction, functioning in this way, becomes a means of arrest far more than a means of advance in thought. ... The viciously privative employment of abstract characters and class names is, I am persuaded, one of the great original sins of the rationalistic mind.
In a chapter on "The Methods and Snares of Psychology" in The Principles of Psychology, James describes a related fallacy, the psychologist's fallacy, thus: "The great snare of the psychologist is the confusion of his own standpoint with that of the mental fact about which he is making his report. I shall hereafter call this the 'psychologist's fallacy' par excellence" (volume 1, p. 196). John Dewey followed James in describing a variety of fallacies, including "the philosophic fallacy", "the analytic fallacy", and "the fallacy of definition".
Use of constructs in science
The concept of a "construct" has a long history in science; it is used in many, if not most, areas of science. A construct is a hypothetical explanatory variable that is not directly observable. For example, the concepts of motivation in psychology, utility in economics, and gravitational field in physics are constructs; they are not directly observable, but instead are tools to describe natural phenomena.
The degree to which a construct is useful and accepted as part of the current paradigm in a scientific community depends on empirical research that has demonstrated that a scientific construct has construct validity (especially, predictive validity).
Stephen Jay Gould draws heavily on the idea of the fallacy of reification in his book The Mismeasure of Man. He argues that the error in using intelligence quotient scores to judge people's intelligence is the assumption that, because a quantity called "intelligence" or "intelligence quotient" has been defined as something measurable, intelligence must be a real thing; on this basis he denies the validity of the construct "intelligence".
Relation to other fallacies
Pathetic fallacy (also known as anthropomorphic fallacy or anthropomorphization) is a specific type of reification. Just as reification is the attribution of concrete characteristics to an abstract idea, a pathetic fallacy is committed when those characteristics are specifically human characteristics, especially thoughts or feelings. Pathetic fallacy is also related to personification, which is a direct and explicit ascription of life and sentience to the thing in question, whereas the pathetic fallacy is much broader and more allusive.
The animistic fallacy involves attributing personal intention to an event or situation.
Reification fallacy should not be confused with other fallacies of ambiguity:
Accentus, where the ambiguity arises from the emphasis (accent) placed on a word or phrase
Amphiboly, a verbal fallacy arising from ambiguity in the grammatical structure of a sentence
Composition, when one assumes that a whole has a property solely because its various parts have that property
Division, when one assumes that various parts have a property solely because the whole has that same property
Equivocation, the misleading use of a word with more than one meaning
As a rhetorical device
The rhetorical devices of metaphor and personification express a form of reification, but stop short of fallacy. These devices, by definition, do not apply literally and thus exclude any fallacious conclusion that the formal reification is real. For example, the metaphor known as the pathetic fallacy, "the sea was angry", reifies anger, but does not imply that anger is a concrete substance, or that water is sentient. The distinction is that a fallacy lies in faulty reasoning, not in the mere illustration or poetry of rhetoric.
Counterexamples
Reification, while usually fallacious, is sometimes considered a valid argument. Thomas Schelling, a game theorist during the Cold War, argued that, for many purposes, an abstraction shared among disparate people can effectively become real. Some examples include the effect of round numbers in stock prices, the importance placed on the Dow Jones Industrial Average, national borders, preferred numbers, and many others. (Compare the theory of social constructionism.)
See also
All models are wrong
Counterfactual definiteness
Idolatry
Objectification
Philosophical realism
Problem of universals, a debate about the reality of categories
Surrogation
Hypostatic abstraction
References
Informal fallacies
Nernst–Planck equation
The Nernst–Planck equation is a conservation of mass equation used to describe the motion of a charged chemical species in a fluid medium. It extends Fick's law of diffusion for the case where the diffusing particles are also moved with respect to the fluid by electrostatic forces. It is named after Walther Nernst and Max Planck.
Equation
The Nernst–Planck equation is a continuity equation for the time-dependent concentration $c(t, \mathbf{x})$ of a chemical species:

$$\frac{\partial c}{\partial t} + \nabla \cdot \mathbf{J} = 0,$$

where $\mathbf{J}$ is the flux. It is assumed that the total flux is composed of three elements: diffusion, advection, and electromigration. This implies that the concentration is affected by an ionic concentration gradient $\nabla c$, flow velocity $\mathbf{v}$, and an electric field $\mathbf{E}$:

$$\mathbf{J} = -D \nabla c + c \mathbf{v} + \frac{D z e}{k_\mathrm{B} T} c \mathbf{E},$$

where $D$ is the diffusivity of the chemical species, $z$ is the valence of the ionic species, $e$ is the elementary charge, $k_\mathrm{B}$ is the Boltzmann constant, and $T$ is the absolute temperature. The electric field may be further decomposed as:

$$\mathbf{E} = -\nabla \phi - \frac{\partial \mathbf{A}}{\partial t},$$

where $\phi$ is the electric potential and $\mathbf{A}$ is the magnetic vector potential. Therefore, the Nernst–Planck equation is given by:

$$\frac{\partial c}{\partial t} = \nabla \cdot \left[ D \nabla c - c \mathbf{v} + \frac{D z e}{k_\mathrm{B} T} c \left( \nabla \phi + \frac{\partial \mathbf{A}}{\partial t} \right) \right].$$
Simplifications
Assuming that the concentration is at equilibrium $\left(\frac{\partial c}{\partial t} = 0\right)$ and the flow velocity is zero, meaning that only the ion species moves, the Nernst–Planck equation takes the form:

$$\nabla \cdot \left[ D \nabla c + \frac{D z e}{k_\mathrm{B} T} c \left( \nabla \phi + \frac{\partial \mathbf{A}}{\partial t} \right) \right] = 0.$$

Rather than a general electric field, if we assume that only the electrostatic component is significant, the equation is further simplified by removing the time derivative of the magnetic vector potential:

$$\nabla \cdot \left[ D \nabla c + \frac{D z e}{k_\mathrm{B} T} c \nabla \phi \right] = 0.$$

Finally, with the flux expressed in units of mol/(m²·s) and written in terms of the gas constant $R$, one obtains the more familiar form:

$$\nabla \cdot \left[ D \nabla c + \frac{D z F}{R T} c \nabla \phi \right] = 0,$$

where $F$ is the Faraday constant, equal to $N_\mathrm{A} e \approx 96485\ \mathrm{C\,mol^{-1}}$: the product of the Avogadro constant and the elementary charge.
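To get a feel for the relative size of the diffusion and electromigration terms, the following Python sketch evaluates both contributions to the one-dimensional flux for a single ionic species at zero flow velocity. The diffusivity, valence, concentration, and gradients used here are assumed, order-of-magnitude values chosen only for illustration; they do not come from the text above.

```python
# Minimal sketch: diffusive vs. electromigration contributions to the 1-D
# Nernst-Planck flux  J = -D*dc/dx - (D*z*F/(R*T))*c*dphi/dx  (zero flow velocity).
# All numerical inputs are assumed, order-of-magnitude values.

F = 96485.33   # Faraday constant, C/mol
R = 8.314      # gas constant, J/(mol K)
T = 298.15     # absolute temperature, K

D = 2.0e-9       # diffusivity of the ion, m^2/s      (assumed)
z = 1            # valence of the ionic species       (assumed)
c = 10.0         # local concentration, mol/m^3       (assumed)
dc_dx = -1.0e3   # concentration gradient, mol/m^4    (assumed)
dphi_dx = -50.0  # electric potential gradient, V/m   (assumed)

J_diffusion = -D * dc_dx                              # mol/(m^2 s)
J_migration = -(D * z * F / (R * T)) * c * dphi_dx    # mol/(m^2 s)

print(f"diffusive flux:        {J_diffusion:.3e} mol/(m^2 s)")
print(f"electromigration flux: {J_migration:.3e} mol/(m^2 s)")
print(f"total flux:            {J_diffusion + J_migration:.3e} mol/(m^2 s)")
```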
Applications
The Nernst–Planck equation is applied in describing the ion-exchange kinetics in soils. It has also been applied to membrane electrochemistry.
See also
Goldman–Hodgkin–Katz equation
Bioelectrochemistry
References
Walther Nernst
Diffusion
Physical chemistry
Electrochemical equations
Statistical mechanics
Max Planck
Transport phenomena
Electrochemistry | 0.788635 | 0.984719 | 0.776584 |
Photophosphorylation
In the process of photosynthesis, the phosphorylation of ADP to form ATP using the energy of sunlight is called photophosphorylation. Cyclic photophosphorylation occurs in both aerobic and anaerobic conditions, driven by sunlight, the primary source of energy available to living organisms. All organisms produce a phosphate compound, ATP, which is the universal energy currency of life. In photophosphorylation, light energy is used to pump protons across a biological membrane, mediated by flow of electrons through an electron transport chain. This stores energy in a proton gradient. As the protons flow back through an enzyme called ATP synthase, ATP is generated from ADP and inorganic phosphate. ATP is essential in the Calvin cycle to assist in the synthesis of carbohydrates from carbon dioxide and NADPH.
ATP and reactions
Both the structure of ATP synthase and its underlying gene are remarkably similar in all known forms of life. ATP synthase is powered by a transmembrane electrochemical potential gradient, usually in the form of a proton gradient. In all living organisms, a series of redox reactions is used to produce a transmembrane electrochemical potential gradient, or a so-called proton motive force (pmf).
Redox reactions are chemical reactions in which electrons are transferred from a donor molecule to an acceptor molecule. The underlying force driving these reactions is the Gibbs free energy of the reactants relative to the products. If donor and acceptor (the reactants) are of higher free energy than the reaction products, the electron transfer may occur spontaneously. The Gibbs free energy is the energy available ("free") to do work. Any reaction that decreases the overall Gibbs free energy of a system will proceed spontaneously (given that the system is isobaric and also at constant temperature), although the reaction may proceed slowly if it is kinetically inhibited.
The fact that a reaction is thermodynamically possible does not mean that it will actually occur. A mixture of hydrogen gas and oxygen gas does not spontaneously ignite. It is necessary either to supply an activation energy or to lower the intrinsic activation energy of the system, in order to make most biochemical reactions proceed at a useful rate. Living systems use complex macromolecular structures to lower the activation energies of biochemical reactions.
It is possible to couple a thermodynamically favorable reaction (a transition from a high-energy state to a lower-energy state) to a thermodynamically unfavorable reaction (such as a separation of charges, or the creation of an osmotic gradient), in such a way that the overall free energy of the system decreases (making it thermodynamically possible), while useful work is done at the same time. The principle that biological macromolecules catalyze a thermodynamically unfavorable reaction if and only if a thermodynamically favorable reaction occurs simultaneously, underlies all known forms of life.
The transfer of electrons from a donor molecule to an acceptor molecule can be spatially separated into a series of intermediate redox reactions. This is an electron transport chain (ETC). Electron transport chains often produce energy in the form of a transmembrane electrochemical potential gradient. The gradient can be used to transport molecules across membranes. Its energy can be used to produce ATP or to do useful work, for instance the mechanical work of rotating bacterial flagella.
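As a rough numerical illustration of such a transmembrane electrochemical potential gradient, the sketch below evaluates a proton motive force from an assumed membrane potential and pH difference, using the standard relation pmf = Δψ + (2.303 RT/F) ΔpH, and converts it to a free energy per mole of protons. The input values are order-of-magnitude assumptions, not measurements for any particular membrane.

```python
# Minimal sketch: magnitude of a proton motive force (pmf) and the free
# energy it makes available per mole of protons crossing the membrane.
# Sign conventions are omitted for simplicity; all inputs are assumed values.

R = 8.314      # gas constant, J/(mol K)
T = 298.15     # temperature, K
F = 96485.33   # Faraday constant, C/mol

delta_psi = 0.030   # electrical potential difference across the membrane, V (assumed)
delta_pH = 2.5      # pH difference across the membrane (assumed)

pH_term = 2.303 * R * T / F * delta_pH   # chemical contribution expressed in volts
pmf = delta_psi + pH_term                # total proton motive force, V

dG = F * pmf   # free energy released per mole of H+ moving down the gradient, J/mol
print(f"pmf ~ {pmf * 1000:.0f} mV, ~{dG / 1000:.0f} kJ per mole of H+")
```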
Cyclic photophosphorylation
This form of photophosphorylation occurs on the stroma lamellae, or fret channels. In cyclic photophosphorylation, the high-energy electron released from P700, a pigment in a complex called photosystem I, flows in a cyclic pathway. The electron starts in photosystem I, passes from the primary electron acceptor to ferredoxin and then to plastoquinone, next to the cytochrome b6f complex (a complex similar to that found in mitochondria), and finally to plastocyanin before returning to photosystem I. This transport chain produces a proton-motive force, pumping H+ ions across the membrane and producing a concentration gradient that can be used to power ATP synthase during chemiosmosis. This pathway is known as cyclic photophosphorylation, and it produces neither O2 nor NADPH. Unlike non-cyclic photophosphorylation, NADP+ does not accept the electrons; they are instead sent back to the cytochrome b6f complex.
In bacterial photosynthesis, a single photosystem is used, and therefore is involved in cyclic photophosphorylation.
It is favored in anaerobic conditions and in conditions of high irradiance and CO2 compensation points.
Non-cyclic photophosphorylation
The other pathway, non-cyclic photophosphorylation, is a two-stage process involving two different chlorophyll photosystems in the thylakoid membrane. First, a photon is absorbed by chlorophyll pigments surrounding the reaction center of photosystem II. The light excites an electron in the pigment P680 at the core of photosystem II, which is transferred to the primary electron acceptor, pheophytin, leaving behind P680+. The oxidizing power of P680+ is used, in two steps, to split a water molecule into 2H+ + 1/2 O2 + 2e− (photolysis or light-splitting). An electron from the water molecule reduces P680+ back to P680, while the H+ and oxygen are released. The electron transfers from pheophytin to plastoquinone (PQ), which takes 2e− (in two steps) from pheophytin and two H+ ions from the stroma to form PQH2. This plastoquinol is later oxidized back to PQ, releasing the 2e− to the cytochrome b6f complex and the two H+ ions into the thylakoid lumen. The electrons then pass through Cyt b6 and Cyt f to plastocyanin; the energy released by this electron transfer is used to pump hydrogen ions (H+) into the thylakoid space. This creates an H+ gradient, making H+ ions flow back into the stroma of the chloroplast and providing the energy for the (re)generation of ATP.
The photosystem II complex replaces its lost electrons from H2O, so electrons are not returned to photosystem II as they would be in the analogous cyclic pathway. Instead, they are transferred to the photosystem I complex, which boosts their energy to a higher level using a second solar photon. The excited electrons are transferred to a series of acceptor molecules, but this time are passed on to an enzyme called ferredoxin–NADP+ reductase, which uses them to catalyze the reaction

NADP+ + 2H+ + 2e− → NADPH + H+

This consumes the H+ ions produced by the splitting of water, leading to a net production of 1/2 O2, ATP, and NADPH + H+ with the consumption of solar photons and water.
The concentration of NADPH in the chloroplast may help regulate which pathway electrons take through the light reactions. When the chloroplast runs low on ATP for the Calvin cycle, NADPH will accumulate and the plant may shift from noncyclic to cyclic electron flow.
Early history of research
In 1950, the first experimental evidence for the existence of photophosphorylation in vivo was presented by Otto Kandler, using intact Chlorella cells and interpreting his findings as light-dependent ATP formation.
In 1954, Daniel I. Arnon et al. discovered photophosphorylation in vitro in isolated chloroplasts with the help of radioactive phosphorus (32P).
His first review on the early research of photophosphorylation was published in 1956.
References
Fenchel T, King GM, Blackburn TH. Bacterial Biogeochemistry: The Ecophysiology of Mineral Cycling. 2nd ed. Elsevier; 1998.
Lengeler JW, Drews G, Schlegel HG, editors. Biology of the Prokaryotes. Blackwell Sci; 1999.
Nelson DL, Cox MM. Lehninger Principles of Biochemistry. 4th ed. Freeman; 2005.
Stumm W, Morgan JJ. Aquatic Chemistry. 3rd ed. Wiley; 1996.
Thauer RK, Jungermann K, Decker K. Energy Conservation in Chemotrophic Anaerobic Bacteria. Bacteriol. Rev. 41:100–180; 1977.
White D. The Physiology and Biochemistry of Prokaryotes. 2nd ed. Oxford University Press; 2000.
Voet D, Voet JG. Biochemistry. 3rd ed. Wiley; 2004.
Photosynthesis
Light reactions
Metamorphism
Metamorphism is the transformation of existing rock (the protolith) to rock with a different mineral composition or texture. Metamorphism takes place at temperatures in excess of about 150 °C, and often also at elevated pressure or in the presence of chemically active fluids, but the rock remains mostly solid during the transformation. Metamorphism is distinct from weathering or diagenesis, which are changes that take place at or just beneath Earth's surface.
Various forms of metamorphism exist, including regional, contact, hydrothermal, shock, and dynamic metamorphism. These differ in the characteristic temperatures, pressures, and rate at which they take place and in the extent to which reactive fluids are involved. Metamorphism occurring at increasing pressure and temperature conditions is known as prograde metamorphism, while decreasing temperature and pressure characterize retrograde metamorphism.
Metamorphic petrology is the study of metamorphism. Metamorphic petrologists rely heavily on statistical mechanics and experimental petrology to understand metamorphic processes.
Metamorphic processes
Metamorphism is the set of processes by which existing rock is transformed physically or chemically at elevated temperature, without actually melting to any great degree. The importance of heating in the formation of metamorphic rock was first recognized by the pioneering Scottish naturalist, James Hutton, who is often described as the father of modern geology. Hutton wrote in 1795 that some rock beds of the Scottish Highlands had originally been sedimentary rock, but had been transformed by great heat.
Hutton also speculated that pressure was important in metamorphism. This hypothesis was tested by his friend, James Hall, who sealed chalk into a makeshift pressure vessel constructed from a cannon barrel and heated it in an iron foundry furnace. Hall found that this produced a material strongly resembling marble, rather than the usual quicklime produced by heating of chalk in the open air. French geologists subsequently added metasomatism, the circulation of fluids through buried rock, to the list of processes that help bring about metamorphism. However, metamorphism can take place without metasomatism (isochemical metamorphism) or at depths of just a few hundred meters where pressures are relatively low (for example, in contact metamorphism).
Rock can be transformed without melting because heat causes atomic bonds to break, freeing the atoms to move and form new bonds with other atoms. Pore fluid present between mineral grains is an important medium through which atoms are exchanged. This permits recrystallization of existing minerals or crystallization of new minerals with different crystalline structures or chemical compositions (neocrystallization). The transformation converts the minerals in the protolith into forms that are more stable (closer to chemical equilibrium) under the conditions of pressure and temperature at which metamorphism takes place.
Metamorphism is generally regarded as beginning at temperatures of about 150 to 200 °C. This excludes diagenetic changes due to compaction and lithification, which result in the formation of sedimentary rocks. The upper boundary of metamorphic conditions lies at the solidus of the rock, which is the temperature at which the rock begins to melt. At this point, the process becomes an igneous process. The solidus temperature depends on the composition of the rock, the pressure, and whether the rock is saturated with water. Typical solidus temperatures range from about 650 °C for wet granite at a few hundred megapascals (MPa) of pressure to well over 1,000 °C for wet basalt at atmospheric pressure. Migmatites are rocks formed at this upper limit, containing pods and veins of material that has started to melt but has not fully segregated from the refractory residue.
The metamorphic process can occur at almost any pressure, from near surface pressure (for contact metamorphism) to pressures in excess of 16 kbar (1600 MPa).
Recrystallization
The change in the grain size and orientation in the rock during the process of metamorphism is called recrystallization. For instance, the small calcite crystals in the sedimentary rocks limestone and chalk change into larger crystals in the metamorphic rock marble. In metamorphosed sandstone, recrystallization of the original quartz sand grains results in very compact quartzite, also known as metaquartzite, in which the often larger quartz crystals are interlocked. Both high temperatures and pressures contribute to recrystallization. High temperatures allow the atoms and ions in solid crystals to migrate, thus reorganizing the crystals, while high pressures cause solution of the crystals within the rock at their points of contact (pressure solution) and redeposition in pore space.
During recrystallization, the identity of the mineral does not change, only its texture. Recrystallization generally begins when temperatures reach above half the melting point of the mineral on the Kelvin scale.
Pressure solution begins during diagenesis (the process of lithification of sediments into sedimentary rock) but is completed during early stages of metamorphism. For a sandstone protolith, the dividing line between diagenesis and metamorphism can be placed at the point where strained quartz grains begin to be replaced by new, unstrained, small quartz grains, producing a mortar texture that can be identified in thin sections under a polarizing microscope. With increasing grade of metamorphism, further recrystallization produces foam texture, characterized by polygonal grains meeting at triple junctions, and then porphyroblastic texture, characterized by coarse, irregular grains, including some larger grains (porphyroblasts.)
Metamorphic rocks are typically more coarsely crystalline than the protolith from which they formed. Atoms in the interior of a crystal are surrounded by a stable arrangement of neighboring atoms. This is partially missing at the surface of the crystal, producing a surface energy that makes the surface thermodynamically unstable. Recrystallization to coarser crystals reduces the surface area and so minimizes the surface energy.
Although grain coarsening is a common result of metamorphism, rock that is intensely deformed may eliminate strain energy by recrystallizing as a fine-grained rock called mylonite. Certain kinds of rock, such as those rich in quartz, carbonate minerals, or olivine, are particularly prone to form mylonites, while feldspar and garnet are resistant to mylonitization.
Phase change
Phase change metamorphism is the creation of a new mineral with the same chemical formula as a mineral of the protolith. This involves a rearrangement of the atoms in the crystals. An example is provided by the aluminium silicate minerals kyanite, andalusite, and sillimanite. All three have the identical composition, Al2SiO5. Kyanite is stable at surface conditions. However, at atmospheric pressure, kyanite transforms to andalusite as the temperature rises, and andalusite in turn transforms to sillimanite at still higher temperature. At pressures above about 4 kbar (400 MPa), kyanite transforms directly to sillimanite as the temperature increases. A similar phase change is sometimes seen between calcite and aragonite, with calcite transforming to aragonite at elevated pressure and relatively low temperature.
Neocrystallization
Neocrystallization involves the creation of new mineral crystals different from the protolith. Chemical reactions digest the minerals of the protolith which yields new minerals. This is a very slow process as it can also involve the diffusion of atoms through solid crystals.
An example of a neocrystallization reaction is the reaction of fayalite with plagioclase at elevated pressure and temperature to form garnet. Written with anorthite as the plagioclase component and the garnet expressed as its almandine and grossular end-members, one balanced form of the reaction is:

3 Fe2SiO4 + 3 CaAl2Si2O8 → 2 Fe3Al2Si3O12 + Ca3Al2Si3O12
(fayalite + anorthite → almandine + grossular)
Many complex high-temperature reactions may take place between minerals without them melting, and each mineral assemblage produced provides us with a clue as to the temperatures and pressures at the time of metamorphism. These reactions are possible because of rapid diffusion of atoms at elevated temperature. Pore fluid between mineral grains can be an important medium through which atoms are exchanged.
A particularly important group of neocrystallization reactions are those that release volatiles such as water and carbon dioxide. During metamorphism of basalt to eclogite in subduction zones, hydrous minerals break down, producing copious quantities of water. The water rises into the overlying mantle, where it lowers the melting temperature of the mantle rock, generating magma via flux melting. The mantle-derived magmas can ultimately reach the Earth's surface, resulting in volcanic eruptions. The resulting arc volcanoes tend to produce dangerous eruptions, because their high water content makes them extremely explosive.
Examples of dehydration reactions that release water include the breakdown of muscovite in the presence of quartz:

KAl2(AlSi3O10)(OH)2 + SiO2 → KAlSi3O8 + Al2SiO5 + H2O
(muscovite + quartz → potassium feldspar + an aluminium silicate + water)
An example of a decarbonation reaction is the reaction of calcite with quartz to form wollastonite:

CaCO3 + SiO2 → CaSiO3 + CO2
(calcite + quartz → wollastonite + carbon dioxide)
Plastic deformation
In plastic deformation pressure is applied to the protolith, which causes it to shear or bend, but not break. In order for this to happen temperatures must be high enough that brittle fractures do not occur, but not so high that diffusion of crystals takes place.
As with pressure solution, the early stages of plastic deformation begin during diagenesis.
Types
Regional
Regional metamorphism is a general term for metamorphism that affects entire regions of the Earth's crust. It most often refers to dynamothermal metamorphism, which takes place in orogenic belts (regions where mountain building is taking place), but also includes burial metamorphism, which results simply from rock being buried to great depths below the Earth's surface in a subsiding basin.
Dynamothermal
To many geologists, regional metamorphism is practically synonymous with dynamothermal metamorphism. This form of metamorphism takes place at convergent plate boundaries, where two continental plates or a continental plate and an island arc collide. The collision zone becomes a belt of mountain formation called an orogeny. The orogenic belt is characterized by thickening of the Earth's crust, during which the deeply buried crustal rock is subjected to high temperatures and pressures and is intensely deformed. Subsequent erosion of the mountains exposes the roots of the orogenic belt as extensive outcrops of metamorphic rock, characteristic of mountain chains.
Metamorphic rock formed in these settings tends to show well-developed foliation. Foliation develops when a rock is being shortened along one axis during metamorphism. This causes crystals of platy minerals, such as mica and chlorite, to become rotated such that their short axes are parallel to the direction of shortening. This results in a banded, or foliated, rock, with the bands showing the colors of the minerals that formed them. Foliated rock often develops planes of cleavage. Slate is an example of a foliated metamorphic rock, originating from shale, and it typically shows well-developed cleavage that allows slate to be split into thin plates.
The type of foliation that develops depends on the metamorphic grade. For instance, starting with a mudstone, the following sequence develops with increasing temperature: The mudstone is first converted to slate, which is a very fine-grained, foliated metamorphic rock, characteristic of very low grade metamorphism. Slate in turn is converted to phyllite, which is fine-grained and found in areas of low grade metamorphism. Schist is medium to coarse-grained and found in areas of medium grade metamorphism. High-grade metamorphism transforms the rock to gneiss, which is coarse to very coarse-grained.
Rocks that were subjected to uniform pressure from all sides, or those that lack minerals with distinctive growth habits, will not be foliated. Marble lacks platy minerals and is generally not foliated, which allows its use as a material for sculpture and architecture.
Collisional orogenies are preceded by subduction of oceanic crust. The conditions within the subducting slab as it plunges toward the mantle in a subduction zone produce their own distinctive regional metamorphic effects, characterized by paired metamorphic belts.
The pioneering work of George Barrow on regional metamorphism in the Scottish Highlands showed that some regional metamorphism produces well-defined, mappable zones of increasing metamorphic grade. This Barrovian metamorphism is the most recognized metamorphic series in the world. However, Barrovian metamorphism is specific to pelitic rock, formed from mudstone or siltstone, and it is not unique even in pelitic rock. A different sequence in the northeast of Scotland defines Buchan metamorphism, which took place at lower pressure than the Barrovian.
Burial
Burial metamorphism takes place simply through rock being buried to great depths below the Earth's surface in a subsiding basin. Here the rock is subjected to high temperatures and the great pressure caused by the immense weight of the rock layers above. Burial metamorphism tends to produce low-grade metamorphic rock. This shows none of the effects of deformation and folding so characteristic of dynamothermal metamorphism.
Examples of metamorphic rocks formed by burial metamorphism include some of the rocks of the Midcontinent Rift System of North America, such as the Sioux Quartzite, and in the Hamersley Basin of Australia.
Contact
Contact metamorphism occurs typically around intrusive igneous rocks as a result of the temperature increase caused by the intrusion of magma into cooler country rock. The area surrounding the intrusion where the contact metamorphism effects are present is called the metamorphic aureole, the contact aureole, or simply the aureole. Contact metamorphic rocks are usually known as hornfels. Rocks formed by contact metamorphism may not present signs of strong deformation and are often fine-grained and extremely tough. The Yule Marble used on the Lincoln Memorial exterior and the Tomb of the Unknown Soldier in Arlington National Cemetery was formed by contact metamorphism.
Contact metamorphism is greater adjacent to the intrusion and dissipates with distance from the contact. The size of the aureole depends on the heat of the intrusion, its size, and the temperature difference with the wall rocks. Dikes generally have small aureoles with minimal metamorphism, extending not more than one or two dike thicknesses into the surrounding rock, whereas the aureoles around batholiths can be up to several kilometers wide.
The metamorphic grade of an aureole is measured by the peak metamorphic mineral which forms in the aureole. This is usually related to the metamorphic temperatures of pelitic or aluminosilicate rocks and the minerals they form. The metamorphic grades of aureoles at shallow depth are albite-epidote hornfels, hornblende hornfels, pyroxene hornfels, and sillimanite hornfels, in increasing order of temperature of formation. However, the albite-epidote hornfels is often not formed, even though it is the lowest temperature grade.
Magmatic fluids coming from the intrusive rock may also take part in the metamorphic reactions. An extensive addition of magmatic fluids can significantly modify the chemistry of the affected rocks. In this case the metamorphism grades into metasomatism. If the intruded rock is rich in carbonate the result is a skarn. Fluorine-rich magmatic waters which leave a cooling granite may often form greisens within and adjacent to the contact of the granite. Metasomatic altered aureoles can localize the deposition of metallic ore minerals and thus are of economic interest.
Fenitization, or Na-metasomatism, is a distinctive form of contact metamorphism accompanied by metasomatism. It takes place around intrusions of a rare type of magma called a carbonatite that is highly enriched in carbonates and low in silica. Cooling bodies of carbonatite magma give off highly alkaline fluids rich in sodium as they solidify, and the hot, reactive fluid replaces much of the mineral content in the aureole with sodium-rich minerals.
A special type of contact metamorphism, associated with fossil fuel fires, is known as pyrometamorphism.
Hydrothermal
Hydrothermal metamorphism is the result of the interaction of a rock with a high-temperature fluid of variable composition. The difference in composition between an existing rock and the invading fluid triggers a set of metamorphic and metasomatic reactions. The hydrothermal fluid may be magmatic (originate in an intruding magma), circulating groundwater, or ocean water. Convective circulation of hydrothermal fluids in the ocean floor basalts produces extensive hydrothermal metamorphism adjacent to spreading centers and other submarine volcanic areas. The fluids eventually escape through vents on the ocean floor known as black smokers. The patterns of this hydrothermal alteration are used as a guide in the search for deposits of valuable metal ores.
Shock
Shock metamorphism occurs when an extraterrestrial object (a meteorite for instance) collides with the Earth's surface. Impact metamorphism is, therefore, characterized by ultrahigh pressure conditions and low temperature. The resulting minerals (such as SiO2 polymorphs coesite and stishovite) and textures are characteristic of these conditions.
Dynamic
Dynamic metamorphism is associated with zones of high strain such as fault zones. In these environments, mechanical deformation is more important than chemical reactions in transforming the rock. The minerals present in the rock often do not reflect conditions of chemical equilibrium, and the textures produced by dynamic metamorphism are more significant than the mineral makeup.
There are three deformation mechanisms by which rock is mechanically deformed. These are cataclasis, the deformation of rock via the fracture and rotation of mineral grains; plastic deformation of individual mineral crystals; and movement of individual atoms by diffusive processes. The textures of dynamic metamorphic zones are dependent on the depth at which they were formed, as the temperature and confining pressure determine the deformation mechanisms which predominate.
At the shallowest depths, a fault zone will be filled with various kinds of unconsolidated cataclastic rock, such as fault gouge or fault breccia. At greater depths, these are replaced by consolidated cataclastic rock, such as crush breccia, in which the larger rock fragments are cemented together by calcite or quartz. At still greater depths, cataclasites appear; these are quite hard rocks consisting of crushed rock fragments in a flinty matrix, which forms only at elevated temperature. At even greater depths, where temperatures exceed about 300 °C, plastic deformation takes over, and the fault zone is composed of mylonite. Mylonite is distinguished by its strong foliation, which is absent in most cataclastic rock, and it is distinguished from the surrounding rock by its finer grain size.
There is considerable evidence that cataclasites form as much through plastic deformation and recrystallization as through brittle fracture of grains, and that the rock may never fully lose cohesion during the process. Different minerals become ductile at different temperatures, with quartz being among the first to become ductile, and sheared rock composed of different minerals may simultaneously show both plastic deformation and brittle fracture.
The strain rate also affects the way in which rocks deform. Ductile deformation is more likely at low strain rates (less than 10−14 sec−1) in the middle and lower crust, but high strain rates can cause brittle deformation. At the highest strain rates, the rock may be so strongly heated that it briefly melts, forming a glassy rock called pseudotachylite. Pseudotachylites seem to be restricted to dry rock, such as granulite.
Classification of metamorphic rocks
Metamorphic rocks are classified by their protolith, if this can be determined from the properties of the rock itself. For example, if examination of a metamorphic rock shows that its protolith was basalt, it will be described as a metabasalt. When the protolith cannot be determined, the rock is classified by its mineral composition or its degree of foliation.
Metamorphic grades
Metamorphic grade is an informal indication of the amount or degree of metamorphism.
In the Barrovian sequence (described by George Barrow in zones of progressive metamorphism in Scotland), metamorphic grades are also classified by mineral assemblage based on the appearance of key minerals in rocks of pelitic (shaly, aluminous) origin:
Low grade → Intermediate → High grade
Facies: Greenschist → Amphibolite → Granulite
Rock type: Slate → Phyllite → Schist → Gneiss → Migmatite
Chlorite zone
Biotite zone
Garnet zone
Staurolite zone
Kyanite zone
Sillimanite zone
A more complete indication of this intensity or degree is provided by the concept of metamorphic facies.
Metamorphic facies
Metamorphic facies are recognizable terranes or zones with an assemblage of key minerals that were in equilibrium under specific range of temperature and pressure during a metamorphic event. The facies are named after the metamorphic rock formed under those facies conditions from basalt.
The particular mineral assemblage is somewhat dependent on the composition of the protolith, so that (for example) the amphibolite facies of a marble will not be identical with the amphibolite facies of a pelite. However, the facies are defined such that metamorphic rock with as broad a range of compositions as is practical can be assigned to a particular facies. The present definition of metamorphic facies is largely based on the work of the Finnish geologist Pentti Eskola in 1921, with refinements based on subsequent experimental work. Eskola drew upon the zonal schemes, based on index minerals, that were pioneered by the British geologist George Barrow.
The metamorphic facies is not usually considered when classifying metamorphic rock based on protolith, mineral mode, or texture. However, a few metamorphic facies produce rock of such distinctive character that the facies name is used for the rock when more precise classification is not possible. The chief examples are amphibolite and eclogite. The British Geological Survey strongly discourages use of granulite as a classification for rock metamorphosed to the granulite facies. Instead, such rock will often be classified as a granofels. However, this is not universally accepted.
Prograde and retrograde
Metamorphism is further divided into prograde and retrograde metamorphism. Prograde metamorphism involves the change of mineral assemblages (paragenesis) with increasing temperature and (usually) pressure conditions. These are solid state dehydration reactions, and involve the loss of volatiles such as water or carbon dioxide. Prograde metamorphism results in rock characteristic of the maximum pressure and temperature experienced. Metamorphic rocks usually do not undergo further change when they are brought back to the surface.
Retrograde metamorphism involves the reconstitution of a rock via revolatilisation under decreasing temperatures (and usually pressures), allowing the mineral assemblages formed in prograde metamorphism to revert to those more stable at less extreme conditions. This is a relatively uncommon process, because volatiles produced during prograde metamorphism usually migrate out of the rock and are not available to recombine with the rock during cooling. Localized retrograde metamorphism can take place when fractures in the rock provide a pathway for groundwater to enter the cooling rock.
Equilibrium mineral assemblages
Metamorphic processes act to bring the protolith closer to thermodynamic equilibrium, which is its state of maximum stability. For example, shear stress (nonhydrostatic stress) is incompatible with thermodynamic equilibrium, so sheared rock will tend to deform in ways that relieve the shear stress. The most stable assemblage of minerals for a rock of a given composition is that which minimizes the Gibbs free energy

$$G = U + pV - TS$$
where:
U is the internal energy (SI unit: joule),
p is pressure (SI unit: pascal),
V is volume (SI unit: m3),
T is the temperature (SI unit: kelvin),
S is the entropy (SI unit: joule per kelvin).
In other words, a metamorphic reaction will take place only if it lowers the total Gibbs free energy of the protolith. Recrystallization to coarser crystals lowers the Gibbs free energy by reducing surface energy, while phase changes and neocrystallization reduce the bulk Gibbs free energy. A reaction will begin at the temperature and pressure where the Gibbs free energy of the reagents becomes greater than that of the products.
A mineral phase will generally be more stable if it has a lower internal energy, reflecting tighter binding between its atoms. Phases with a higher density (expressed as a lower molar volume V) are more stable at higher pressure, while minerals with a less ordered structure (expressed as a higher entropy S) are favored at high temperature. Thus andalusite is stable only at low pressure, since it has the lowest density of any aluminium silicate polymorph, while sillimanite is the stable form at higher temperatures, since it has the least ordered structure.
The Gibbs free energy of a particular mineral at a specified temperature and pressure can be expressed by various analytic formulas. These are calibrated against experimentally measured properties and phase boundaries of mineral assemblages. The equilibrium mineral assemblage for a given bulk composition of rock at a specified temperature and pressure can then be calculated on a computer.
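The following Python sketch illustrates the principle in the simplest possible way: for two hypothetical polymorphs of the same composition, it approximates each Gibbs free energy at a given pressure and temperature by linearising about a reference state, G(P, T) ≈ G0 + V(P − P0) − S(T − T0), and selects the phase with the lower value. The reference energies, molar volumes, and entropies are invented placeholders, not calibrated thermodynamic data.

```python
# Minimal sketch: choosing the stable one of two polymorphs by comparing
# linearised Gibbs free energies, G(P, T) ~ G0 + V*(P - P0) - S*(T - T0).
# All thermodynamic values below are invented placeholders.

def gibbs(G0, V, S, P, T, P0=1.0e5, T0=298.15):
    """Linearised Gibbs free energy (J/mol) about a reference state (P0, T0)."""
    return G0 + V * (P - P0) - S * (T - T0)

# phase: (G0 [J/mol], molar volume V [m^3/mol], molar entropy S [J/(mol K)])
phases = {
    "low-pressure polymorph":  (0.0,    5.2e-5, 95.0),  # larger volume, higher entropy (assumed)
    "high-pressure polymorph": (1500.0, 4.4e-5, 83.0),  # denser, more ordered (assumed)
}

for P, T in [(1.0e5, 500.0), (8.0e8, 500.0), (8.0e8, 900.0)]:
    stable = min(phases, key=lambda name: gibbs(*phases[name], P, T))
    print(f"P = {P:.1e} Pa, T = {T:.0f} K  ->  stable phase: {stable}")
```

Real petrological software performs the same comparison over many phases and solid solutions, using calibrated free-energy expressions rather than this linearised toy model.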
However, it is often very useful to represent equilibrium mineral assemblages using various kinds of diagrams. These include petrogenetic grids and compatibility diagrams (compositional phase diagrams.)
Petrogenetic grids
A petrogenetic grid is a geologic phase diagram that plots experimentally derived metamorphic reactions at their pressure and temperature conditions for a given rock composition. This allows metamorphic petrologists to determine the pressure and temperature conditions under which rocks metamorphose. The Al2SiO5 nesosilicate phase diagram shown is a very simple petrogenetic grid for rocks that only have a composition consisting of aluminum (Al), silicon (Si), and oxygen (O). As the rock undergoes different temperatures and pressure, it could be any of the three given polymorphic minerals. For a rock that contains multiple phases, the boundaries between many phase transformations may be plotted, though the petrogenetic grid quickly becomes complicated. For example, a petrogenetic grid might show both the aluminium silicate phase transitions and the transition from aluminum silicate plus potassium feldspar to muscovite plus quartz.
Compatibility diagrams
Whereas a petrogenetic grid shows phases for a single composition over a range of temperature and pressure, a compatibility diagram shows how the mineral assemblage varies with composition at a fixed temperature and pressure. Compatibility diagrams provide an excellent way to analyze how variations in the rock's composition affect the mineral paragenesis that develops in a rock at particular pressure and temperature conditions. Because of the difficulty of depicting more than three components (as a ternary diagram), usually only the three most important components are plotted, though occasionally a compatibility diagram for four components is plotted as a projected tetrahedron.
See also
Metamorphosis of snow
Footnotes
References
Eskola P., 1920, The Mineral Facies of Rocks, Norsk. Geol. Tidsskr., 6, 143–194
Further reading
Winter J.D., 2001, An Introduction to Igneous and Metamorphic Petrology, Prentice-Hall.
External links
Recommendations by the IUGS Subcommission on the Systematics of Metamorphic Rocks, 1. How to Name a Metamorphic Rock
Recommendations by the IUGS Subcommission on the Systematics of Metamorphic Rocks, 2. Types, Grade, and Facies of Metamorphism
Recommendations by the IUGS Subcommission on the Systematics of Metamorphic Rocks, 3. Structural terms including fault rock terms
Recommendations by the IUGS Subcommission on the Systematics of Metamorphic Rocks, 4. High P/T Metamorphic Rocks
James Madison University: Metamorphism
Barrovian Metamorphism: Brock Univ.
Metamorphism of Carbonate Rocks: University of Wisconsin – Green Bay
Metamorphic Petrology Database (MetPetDB) – Department of Earth and Environmental Sciences, Rensselaer Polytechnic Institute
Geological processes
Metamorphic petrology
Elemental analysis
Elemental analysis is a process where a sample of some material (e.g., soil, waste or drinking water, bodily fluids, minerals, chemical compounds) is analyzed for its elemental and sometimes isotopic composition. Elemental analysis can be qualitative (determining what elements are present), and it can be quantitative (determining how much of each is present). Elemental analysis falls within the ambit of analytical chemistry, the field concerned with the instruments and methods used to decipher the chemical nature of our world.
History
Antoine Lavoisier is regarded as the inventor of elemental analysis as a quantitative, experimental tool to assess the chemical composition of a compound. At the time, elemental analysis was based on the gravimetric determination of specific absorbent materials before and after selective adsorption of the combustion gases. Today fully automated systems based on thermal conductivity or infrared spectroscopy detection of the combustion gases, or other spectroscopic methods are used.
CHNX analysis
For organic chemists, elemental analysis or "EA" almost always refers to CHNX analysis—the determination of the mass fractions of carbon, hydrogen, nitrogen, and heteroatoms (X) (halogens, sulfur) of a sample. This information is important to help determine the structure of an unknown compound, as well as to help ascertain the structure and purity of a synthesized compound. In present-day organic chemistry, spectroscopic techniques (NMR, both 1H and 13C), mass spectrometry and chromatographic procedures have replaced EA as the primary technique for structural determination. However, it still gives very useful complementary information.
The most common form of elemental analysis, CHNS analysis, is accomplished by combustion analysis. Modern elemental analyzers are also capable of simultaneous determination of sulfur along with CHN in the same measurement run.
Quantitative analysis
Quantitative analysis determines the mass of each element or compound present. Other quantitative methods include gravimetry, optical atomic spectroscopy, and neutron activation analysis.
Gravimetry is where the sample is dissolved, the element of interest is precipitated and its mass measured, or the element of interest is volatilized, and the mass loss is measured.
Optical atomic spectroscopy includes flame atomic absorption, graphite furnace atomic absorption, and inductively coupled plasma atomic emission spectroscopy, which probe the outer electronic structure of atoms.
Neutron activation analysis involves the activation of a sample matrix through the process of neutron capture. The resulting radioactive target nuclei of the sample begin to decay, emitting gamma rays of specific energies that identify the radioisotopes present in the sample. The concentration of each analyte can be determined by comparison to an irradiated standard with known concentrations of each analyte.
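One common way of making that comparison is the relative (comparator) method, in which the analyte concentration follows from the ratio of the sample's measured peak activity per unit mass to that of a co-irradiated standard of known concentration. The sketch below shows only that arithmetic; the count rates, masses, and standard concentration are invented numbers, and real work requires identical irradiation, decay, and counting corrections.

```python
# Minimal sketch of the relative (comparator) method in neutron activation
# analysis. Identical irradiation and counting conditions are assumed, so
# efficiency and decay factors cancel and the concentration scales with the
# measured peak activity per unit mass. All numbers are invented.

def concentration_by_comparator(A_sample, m_sample, A_standard, m_standard, c_standard):
    """Analyte concentration in the sample, in the same units as c_standard."""
    return c_standard * (A_sample / m_sample) / (A_standard / m_standard)

A_sample, m_sample = 1840.0, 0.250      # hypothetical peak count rate (counts/s), mass (g)
A_standard, m_standard = 2630.0, 0.200  # hypothetical values for the standard
c_standard = 50.0                       # known concentration in the standard, mg/kg (assumed)

c = concentration_by_comparator(A_sample, m_sample, A_standard, m_standard, c_standard)
print(f"estimated analyte concentration: {c:.1f} mg/kg")
```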
Qualitative analysis
To qualitatively determine which elements exist in a sample, the methods are mass spectrometric atomic spectroscopy, such as inductively coupled plasma mass spectrometry, which probes the mass of atoms; other spectroscopy, which probes the inner electronic structure of atoms such as X-ray fluorescence, particle-induced X-ray emission, X-ray photoelectron spectroscopy, and Auger electron spectroscopy; and chemical methods such as the sodium fusion test and Schöniger oxidation.
Analysis of results
The analysis of results is performed by determining the ratio of elements within the sample and working out a chemical formula that fits those results. This process is useful as it helps determine whether a submitted sample is the desired compound and confirms its purity. The accepted maximum deviation of measured elemental analysis results from the calculated values is 0.3%.
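As an illustration of that check, the Python sketch below computes the theoretical C, H, and N mass percentages of a candidate formula (caffeine, C8H10N4O2, is used here purely as an example) and tests whether hypothetical measured values fall within the 0.3% criterion; the "found" numbers are invented for the example.

```python
# Minimal sketch: compare calculated CHN mass percentages of a candidate
# formula with (hypothetical) measured values, using the 0.3% criterion
# mentioned above. Atomic masses are rounded standard values.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def theoretical_percentages(formula):
    """Mass percent of each element for a formula given as {element: count}."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return {el: 100.0 * ATOMIC_MASS[el] * n / total for el, n in formula.items()}

caffeine = {"C": 8, "H": 10, "N": 4, "O": 2}      # candidate compound, C8H10N4O2
theory = theoretical_percentages(caffeine)

measured = {"C": 49.28, "H": 5.30, "N": 28.65}    # invented "found" values

for el, found in measured.items():
    calcd = theory[el]
    verdict = "within" if abs(calcd - found) <= 0.3 else "outside"
    print(f"{el}: calcd {calcd:.2f}%, found {found:.2f}%  ->  {verdict} 0.3%")
```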
See also
Dumas method of molecular weight determination
References
Analytical chemistry
Materials science
Thermodynamics
Thermodynamics is a branch of physics that deals with heat, work, and temperature, and their relation to energy, entropy, and the physical properties of matter and radiation. The behavior of these quantities is governed by the four laws of thermodynamics, which convey a quantitative description using measurable macroscopic physical quantities, but may be explained in terms of microscopic constituents by statistical mechanics. Thermodynamics applies to a wide variety of topics in science and engineering, especially physical chemistry, biochemistry, chemical engineering and mechanical engineering, but also in other complex fields such as meteorology.
Historically, thermodynamics developed out of a desire to increase the efficiency of early steam engines, particularly through the work of French physicist Sadi Carnot (1824) who believed that engine efficiency was the key that could help France win the Napoleonic Wars. Scots-Irish physicist Lord Kelvin was the first to formulate a concise definition of thermodynamics in 1854 which stated, "Thermo-dynamics is the subject of the relation of heat to forces acting between contiguous parts of bodies, and the relation of heat to electrical agency." German physicist and mathematician Rudolf Clausius restated Carnot's principle known as the Carnot cycle and gave to the theory of heat a truer and sounder basis. His most important paper, "On the Moving Force of Heat", published in 1850, first stated the second law of thermodynamics. In 1865 he introduced the concept of entropy. In 1870 he introduced the virial theorem, which applied to heat.
The initial application of thermodynamics to mechanical heat engines was quickly extended to the study of chemical compounds and chemical reactions. Chemical thermodynamics studies the nature of the role of entropy in the process of chemical reactions and has provided the bulk of expansion and knowledge of the field. Other formulations of thermodynamics emerged. Statistical thermodynamics, or statistical mechanics, concerns itself with statistical predictions of the collective motion of particles from their microscopic behavior. In 1909, Constantin Carathéodory presented a purely mathematical approach in an axiomatic formulation, a description often referred to as geometrical thermodynamics.
Introduction
A description of any thermodynamic system employs the four laws of thermodynamics that form an axiomatic basis. The first law specifies that energy can be transferred between physical systems as heat, as work, and with transfer of matter. The second law establishes the existence of a quantity called entropy, which describes the direction in which a system can thermodynamically evolve, quantifies the state of order of a system, and can be used to quantify the useful work that can be extracted from the system.
In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of the thermodynamic system and its surroundings. A system is composed of particles, whose average motions define its properties, and those properties are in turn related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. The results of thermodynamics are essential for other fields of physics and for chemistry, chemical engineering, corrosion engineering, aerospace engineering, mechanical engineering, cell biology, biomedical engineering, materials science, and economics, to name a few.
This article is focused mainly on classical thermodynamics which primarily studies systems in thermodynamic equilibrium. Non-equilibrium thermodynamics is often treated as an extension of the classical treatment, but statistical mechanics has brought many advances to that field.
History
The history of thermodynamics as a scientific discipline generally begins with Otto von Guericke who, in 1650, built and designed the world's first vacuum pump and demonstrated a vacuum using his Magdeburg hemispheres. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after Guericke, the Anglo-Irish physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, which states that pressure and volume are inversely proportional. Then, in 1679, based on these concepts, an associate of Boyle's named Denis Papin built a steam digester, which was a closed vessel with a tightly fitting lid that confined steam until a high pressure was generated.
Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and a cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine, followed by Thomas Newcomen in 1712. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time.
The fundamental concepts of heat capacity and latent heat, which were necessary for the development of thermodynamics, were developed by Professor Joseph Black at the University of Glasgow, where James Watt was employed as an instrument maker. Black and Watt performed experiments together, but it was Watt who conceived the idea of the external condenser which resulted in a large increase in steam engine efficiency. Drawing on all the previous work led Sadi Carnot, the "father of thermodynamics", to publish Reflections on the Motive Power of Fire (1824), a discourse on heat, power, energy and engine efficiency. The book outlined the basic energetic relations between the Carnot engine, the Carnot cycle, and motive power. It marked the start of thermodynamics as a modern science.
The first thermodynamic textbook was written in 1859 by William Rankine, originally trained as a physicist and a civil and mechanical engineering professor at the University of Glasgow. The first and second laws of thermodynamics emerged simultaneously in the 1850s, primarily out of the works of William Rankine, Rudolf Clausius, and William Thomson (Lord Kelvin).
The foundations of statistical thermodynamics were set out by physicists such as James Clerk Maxwell, Ludwig Boltzmann, Max Planck, Rudolf Clausius and J. Willard Gibbs.
Clausius, who first stated the basic ideas of the second law in his paper "On the Moving Force of Heat", published in 1850, and is called "one of the founding fathers of thermodynamics", introduced the concept of entropy in 1865.
During the years 1873–76 the American mathematical physicist Josiah Willard Gibbs published a series of three papers, the most famous being On the Equilibrium of Heterogeneous Substances, in which he showed how thermodynamic processes, including chemical reactions, could be graphically analyzed. By studying the energy, entropy, volume, temperature and pressure of the thermodynamic system in this manner, one can determine whether a process would occur spontaneously. Also Pierre Duhem in the 19th century wrote about chemical thermodynamics. During the early 20th century, chemists such as Gilbert N. Lewis, Merle Randall, and E. A. Guggenheim applied the mathematical methods of Gibbs to the analysis of chemical processes.
Etymology
Thermodynamics has an intricate etymology.
By a surface-level analysis, the word consists of two parts that can be traced back to Ancient Greek. Firstly, thermo- ("of heat"; used in words such as thermometer) can be traced back to the root θέρμη therme, meaning "heat". Secondly, the word dynamics ("science of force [or power]") can be traced back to the root δύναμις dynamis, meaning "power".
In 1849, the adjective thermo-dynamic is used by William Thomson.
In 1854, the noun thermo-dynamics is used by Thomson and William Rankine to represent the science of generalized heat engines.
Pierre Perrot claims that the term thermodynamics was coined by James Joule in 1858 to designate the science of relations between heat and power; however, Joule never used that term, but instead used the term perfect thermo-dynamic engine in reference to Thomson's 1849 phraseology.
Branches of thermodynamics
The study of thermodynamical systems has developed into several related branches, each using a different fundamental model as a theoretical or experimental basis, or applying the principles to varying types of systems.
Classical thermodynamics
Classical thermodynamics is the description of the states of thermodynamic systems at near-equilibrium, that uses macroscopic, measurable properties. It is used to model exchanges of energy, work and heat based on the laws of thermodynamics. The qualifier classical reflects the fact that it represents the first level of understanding of the subject as it developed in the 19th century and describes the changes of a system in terms of macroscopic empirical (large scale, and measurable) parameters. A microscopic interpretation of these concepts was later provided by the development of statistical mechanics.
Statistical mechanics
Statistical mechanics, also known as statistical thermodynamics, emerged with the development of atomic and molecular theories in the late 19th century and early 20th century, and supplemented classical thermodynamics with an interpretation of the microscopic interactions between individual particles or quantum-mechanical states. This field relates the microscopic properties of individual atoms and molecules to the macroscopic, bulk properties of materials that can be observed on the human scale, thereby explaining classical thermodynamics as a natural result of statistics, classical mechanics, and quantum theory at the microscopic level.
Chemical thermodynamics
Chemical thermodynamics is the study of the interrelation of energy with chemical reactions or with a physical change of state within the confines of the laws of thermodynamics. The primary objective of chemical thermodynamics is determining the spontaneity of a given transformation.
Equilibrium thermodynamics
Equilibrium thermodynamics is the study of transfers of matter and energy in systems or bodies that, by agencies in their surroundings, can be driven from one state of thermodynamic equilibrium to another. The term 'thermodynamic equilibrium' indicates a state of balance, in which all macroscopic flows are zero; in the case of the simplest systems or bodies, their intensive properties are homogeneous, and their pressures are perpendicular to their boundaries. In an equilibrium state there are no unbalanced potentials, or driving forces, between macroscopically distinct parts of the system. A central aim in equilibrium thermodynamics is: given a system in a well-defined initial equilibrium state, and given its surroundings, and given its constitutive walls, to calculate what will be the final equilibrium state of the system after a specified thermodynamic operation has changed its walls or surroundings.
Non-equilibrium thermodynamics
Non-equilibrium thermodynamics is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium. Most systems found in nature are not in thermodynamic equilibrium because they are not in stationary states, and are continuously and discontinuously subject to flux of matter and energy to and from other systems. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods.
Laws of thermodynamics
Thermodynamics is principally based on a set of four laws which are universally valid when applied to systems that fall within the constraints implied by each. In the various theoretical descriptions of thermodynamics these laws may be expressed in seemingly differing forms, but the most prominent formulations are the following.
Zeroth law
The zeroth law of thermodynamics states: If two systems are each in thermal equilibrium with a third, they are also in thermal equilibrium with each other.
This statement implies that thermal equilibrium is an equivalence relation on the set of thermodynamic systems under consideration. Systems are said to be in equilibrium if the small, random exchanges between them (e.g. Brownian motion) do not lead to a net change in energy. This law is tacitly assumed in every measurement of temperature. Thus, if one seeks to decide whether two bodies are at the same temperature, it is not necessary to bring them into contact and measure any changes of their observable properties in time. The law provides an empirical definition of temperature, and justification for the construction of practical thermometers.
The zeroth law was not initially recognized as a separate law of thermodynamics, as its basis in thermodynamical equilibrium was implied in the other laws. The first, second, and third laws had been explicitly stated already, and found common acceptance in the physics community before the importance of the zeroth law for the definition of temperature was realized. As it was impractical to renumber the other laws, it was named the zeroth law.
First law
The first law of thermodynamics states: In a process without transfer of matter, the change in internal energy, ΔU, of a thermodynamic system is equal to the energy gained as heat, Q, less the thermodynamic work, W, done by the system on its surroundings.
ΔU = Q − W,
where ΔU denotes the change in the internal energy of a closed system (for which heat or work through the system boundary are possible, but matter transfer is not possible), Q denotes the quantity of energy supplied to the system as heat, and W denotes the amount of thermodynamic work done by the system on its surroundings. An equivalent statement is that perpetual motion machines of the first kind are impossible; work done by a system on its surroundings requires that the system's internal energy decrease or be consumed, so that the amount of internal energy lost by that work must be resupplied as heat by an external energy source or as work by an external machine acting on the system (so that the internal energy is recovered) to make the system work continuously.
For processes that include transfer of matter, a further statement is needed: With due account of the respective fiducial reference states of the systems, when two systems, which may be of different chemical compositions, initially separated only by an impermeable wall, and otherwise isolated, are combined into a new system by the thermodynamic operation of removal of the wall, then
U0 = U1 + U2,
where U0 denotes the internal energy of the combined system, and U1 and U2 denote the internal energies of the respective separated systems.
Adapted for thermodynamics, this law is an expression of the principle of conservation of energy, which states that energy can be transformed (changed from one form to another), but cannot be created or destroyed.
Internal energy is a principal property of the thermodynamic state, while heat and work are modes of energy transfer by which a process may change this state. A change of internal energy of a system may be achieved by any combination of heat added or removed and work performed on or by the system. As a function of state, the internal energy does not depend on the manner, or on the path through intermediate steps, by which the system arrived at its state.
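A minimal numerical sketch of the sign convention above, using made-up values for the heat supplied and the work done by the system:

```python
# Minimal numerical sketch of the first law, dU = Q - W, with made-up values.
def internal_energy_change(heat_in_J, work_by_system_J):
    """Change in internal energy of a closed system."""
    return heat_in_J - work_by_system_J

# 500 J of heat is supplied while the system does 200 J of work on its
# surroundings, so its internal energy rises by 300 J.
dU = internal_energy_change(heat_in_J=500.0, work_by_system_J=200.0)
print(f"dU = {dU:.1f} J")  # dU = 300.0 J
```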
Second law
A traditional version of the second law of thermodynamics states: Heat does not spontaneously flow from a colder body to a hotter body.
The second law refers to a system of matter and radiation, initially with inhomogeneities in temperature, pressure, chemical potential, and other intensive properties, that are due to internal 'constraints', or impermeable rigid walls, within it, or to externally imposed forces. The law observes that, when the system is isolated from the outside world and from those forces, there is a definite thermodynamic quantity, its entropy, that increases as the constraints are removed, eventually reaching a maximum value at thermodynamic equilibrium, when the inhomogeneities practically vanish. For systems that are initially far from thermodynamic equilibrium, no general physical principle is known that determines the rates of approach to thermodynamic equilibrium, though several have been proposed, and thermodynamics does not deal with such rates. The many versions of the second law all express the general irreversibility of the transitions involved in systems approaching thermodynamic equilibrium.
In macroscopic thermodynamics, the second law is a basic observation applicable to any actual thermodynamic process; in statistical thermodynamics, the second law is postulated to be a consequence of molecular chaos.
Third law
The third law of thermodynamics states: As the temperature of a system approaches absolute zero, all processes cease and the entropy of the system approaches a minimum value.
This law of thermodynamics is a statistical law of nature regarding entropy and the impossibility of reaching absolute zero of temperature. This law provides an absolute reference point for the determination of entropy. The entropy determined relative to this point is the absolute entropy. Alternate definitions include "the entropy of all systems and of all states of a system is smallest at absolute zero," or equivalently "it is impossible to reach the absolute zero of temperature by any finite number of processes".
Absolute zero, at which all activity would stop if it were possible to achieve, is −273.15 °C (degrees Celsius), or −459.67 °F (degrees Fahrenheit), or 0 K (kelvin), or 0° R (degrees Rankine).
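The equivalence of these absolute-zero values can be checked with the standard scale conversions (K = °C + 273.15, °F = °C × 9/5 + 32, °R = °F + 459.67); a short sketch:

```python
# Temperature-scale conversions consistent with the absolute-zero values quoted
# above: -273.15 degrees Celsius = -459.67 degrees Fahrenheit = 0 K = 0 degrees Rankine.
def celsius_to_kelvin(t_c):
    return t_c + 273.15

def celsius_to_fahrenheit(t_c):
    return t_c * 9.0 / 5.0 + 32.0

def fahrenheit_to_rankine(t_f):
    return t_f + 459.67

t_c = -273.15  # absolute zero on the Celsius scale
print(celsius_to_kelvin(t_c))                             # ~0.0 (kelvin)
print(celsius_to_fahrenheit(t_c))                         # ~-459.67 (degrees Fahrenheit)
print(fahrenheit_to_rankine(celsius_to_fahrenheit(t_c)))  # ~0.0 (degrees Rankine)
```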
System models
An important concept in thermodynamics is the thermodynamic system, which is a precisely defined region of the universe under study. Everything in the universe except the system is called the surroundings. A system is separated from the remainder of the universe by a boundary, which may be physical or notional, but serves to confine the system to a finite volume. Segments of the boundary are often described as walls; they have respective defined 'permeabilities'. Transfers of energy as work, or as heat, or of matter, between the system and the surroundings, take place through the walls, according to their respective permeabilities.
Matter or energy that passes across the boundary so as to effect a change in the internal energy of the system needs to be accounted for in the energy balance equation. The volume contained by the walls can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. The system could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics. When a looser viewpoint is adopted, and the requirement of thermodynamic equilibrium is dropped, the system can be the body of a tropical cyclone, such as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics, or the event horizon of a black hole.
Boundaries are of four types: fixed, movable, real, and imaginary. For example, in an engine, a fixed boundary means the piston is locked at its position, within which a constant volume process might occur. If the piston is allowed to move that boundary is movable while the cylinder and cylinder head boundaries are fixed. For closed systems, boundaries are real while for open systems boundaries are often imaginary. In the case of a jet engine, a fixed imaginary boundary might be assumed at the intake of the engine, fixed boundaries along the surface of the case and a second fixed imaginary boundary across the exhaust nozzle.
Generally, thermodynamics distinguishes three classes of systems, defined in terms of what is allowed to cross their boundaries: isolated systems, which exchange neither matter nor energy with their surroundings; closed systems, which exchange energy but not matter; and open systems, which exchange both matter and energy.
As time passes in an isolated system, internal differences of pressures, densities, and temperatures tend to even out. A system in which all equalizing processes have gone to completion is said to be in a state of thermodynamic equilibrium.
Once in thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. Systems in equilibrium are much simpler and easier to understand than systems which are not in equilibrium. Often, when analysing a dynamic thermodynamic process, the simplifying assumption is made that each intermediate state in the process is at equilibrium; processes which develop so slowly as to allow each intermediate step to be an equilibrium state are said to be reversible processes.
States and processes
When a system is at equilibrium under a given set of conditions, it is said to be in a definite thermodynamic state. The state of the system can be described by a number of state quantities that do not depend on the process by which the system arrived at its state. They are called intensive variables or extensive variables according to how they change when the size of the system changes. The properties of the system can be described by an equation of state which specifies the relationship between these variables. State may be thought of as the instantaneous quantitative description of a system with a set number of variables held constant.
A thermodynamic process may be defined as the energetic evolution of a thermodynamic system proceeding from an initial state to a final state. It can be described by process quantities. Typically, each thermodynamic process is distinguished from other processes in energetic character according to what parameters, such as temperature, pressure, or volume, are held fixed. Furthermore, it is useful to group these processes into pairs, in which each variable held constant is one member of a conjugate pair.
Several commonly studied thermodynamic processes are:
Adiabatic process: occurs without loss or gain of energy by heat
Isenthalpic process: occurs at a constant enthalpy
Isentropic process: a reversible adiabatic process, occurs at a constant entropy
Isobaric process: occurs at constant pressure
Isochoric process: occurs at constant volume (also called isometric/isovolumetric)
Isothermal process: occurs at a constant temperature
Steady state process: occurs without a change in the internal energy
Instrumentation
There are two types of thermodynamic instruments, the meter and the reservoir. A thermodynamic meter is any device which measures any parameter of a thermodynamic system. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. From the ideal gas law pV = nRT, the volume of such a sample can be used as an indicator of temperature; in this manner it defines temperature. Although pressure is defined mechanically, a pressure-measuring device, called a barometer, may also be constructed from a sample of an ideal gas held at a constant temperature. A calorimeter is a device which is used to measure and define the internal energy of a system.
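As an illustration of the idealized gas thermometer described above, the temperature of the sample follows from pV = nRT; the pressure, volume, and amount used below are arbitrary illustrative values:

```python
# Idealized constant-pressure gas thermometer: from pV = nRT, the measured
# volume of a fixed gas sample indicates its temperature, T = pV / (nR).
R = 8.314  # molar gas constant, J/(mol K)

def temperature_from_volume(p_pa, v_m3, n_mol):
    return p_pa * v_m3 / (n_mol * R)

# Illustrative reading: 1 mol of gas at atmospheric pressure occupying 24.8 L.
T = temperature_from_volume(p_pa=101_325.0, v_m3=0.0248, n_mol=1.0)
print(f"T = {T:.1f} K")  # roughly 302 K
```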
A thermodynamic reservoir is a system which is so large that its state parameters are not appreciably altered when it is brought into contact with the system of interest. When the reservoir is brought into contact with the system, the system is brought into equilibrium with the reservoir. For example, a pressure reservoir is a system at a particular pressure, which imposes that pressure upon the system to which it is mechanically connected. The Earth's atmosphere is often used as a pressure reservoir. The ocean can act as temperature reservoir when used to cool power plants.
Conjugate variables
The central concept of thermodynamics is that of energy, the ability to do work. By the First Law, the total energy of a system and its surroundings is conserved. Energy may be transferred into a system by heating, compression, or addition of matter, and extracted from a system by cooling, expansion, or extraction of matter. In mechanics, for example, energy transfer equals the product of the force applied to a body and the resulting displacement.
Conjugate variables are pairs of thermodynamic concepts, with the first being akin to a "force" applied to some thermodynamic system, the second being akin to the resulting "displacement", and the product of the two equaling the amount of energy transferred. The common conjugate variables are (a numerical sketch follows this list):
Pressure-volume (the mechanical parameters);
Temperature-entropy (thermal parameters);
Chemical potential-particle number (material parameters).
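A minimal numerical sketch of energy transfer as the product of a conjugate "force" and "displacement"; all values are illustrative, not measured data:

```python
# Energy transferred as the product of a conjugate "force" and "displacement".
# All numbers are illustrative, not measured data.
def transferred_energy(force_like, displacement_like):
    return force_like * displacement_like

pressure_volume_work = transferred_energy(101_325.0, 1.0e-3)  # p (Pa) * dV (m^3) ~ 101 J
thermal_energy       = transferred_energy(300.0, 0.5)         # T (K)  * dS (J/K) = 150 J
chemical_work        = transferred_energy(-2.0e-20, 1.0e21)   # mu (J) * dN       = -20 J

print(pressure_volume_work, thermal_energy, chemical_work)
```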
Potentials
Thermodynamic potentials are different quantitative measures of the stored energy in a system. Potentials are used to measure the energy changes in systems as they evolve from an initial state to a final state. The potential used depends on the constraints of the system, such as constant temperature or pressure. For example, the Helmholtz and Gibbs energies are the energies available in a system to do useful work when the temperature and volume or the pressure and temperature are fixed, respectively. Thermodynamic potentials cannot be measured in laboratories, but can be computed using molecular thermodynamics.
The five most well known potentials are:
Internal energy U
Helmholtz free energy F = U − TS
Enthalpy H = U + pV
Gibbs free energy G = U + pV − TS
Landau potential (grand potential) Ω = U − TS − Σi μi Ni
where T is the temperature, S the entropy, p the pressure, V the volume, μ the chemical potential, N the number of particles in the system, and i labels the particle types in the system.
Thermodynamic potentials can be derived from the energy balance equation applied to a thermodynamic system. Other thermodynamic potentials can also be obtained through Legendre transformation.
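A small sketch evaluating the potentials listed above for a single particle type, assuming the standard definitions F = U − TS, H = U + pV, G = U + pV − TS and Ω = U − TS − μN; the state values passed in are arbitrary:

```python
# Evaluating the potentials listed above for a single particle type; the state
# values passed in are arbitrary illustrative numbers.
def potentials(U, T, S, p, V, mu, N):
    return {
        "internal energy U": U,
        "Helmholtz free energy F = U - TS": U - T * S,
        "enthalpy H = U + pV": U + p * V,
        "Gibbs free energy G = U + pV - TS": U + p * V - T * S,
        "Landau potential Omega = U - TS - mu*N": U - T * S - mu * N,
    }

state = dict(U=1000.0, T=300.0, S=2.0, p=1.0e5, V=1.0e-3, mu=-0.5, N=100.0)
for name, value in potentials(**state).items():
    print(f"{name}: {value:.1f} J")
```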
Axiomatic thermodynamics
Axiomatic thermodynamics is a mathematical discipline that aims to describe thermodynamics in terms of rigorous axioms, for example by finding a mathematically rigorous way to express the familiar laws of thermodynamics.
The first attempt at an axiomatic theory of thermodynamics was Constantin Carathéodory's 1909 work Investigations on the Foundations of Thermodynamics, which made use of Pfaffian systems and the concept of adiabatic accessibility, a notion that was introduced by Carathéodory himself. In this formulation, thermodynamic concepts such as heat, entropy, and temperature are derived from quantities that are more directly measurable. Theories that came after differed in the sense that they made assumptions regarding thermodynamic processes with arbitrary initial and final states, as opposed to considering only neighboring states.
Applied fields
See also
Thermodynamic process path
Lists and timelines
List of important publications in thermodynamics
List of textbooks on thermodynamics and statistical mechanics
List of thermal conductivities
List of thermodynamic properties
Table of thermodynamic equations
Timeline of thermodynamics
Thermodynamic equations
Notes
References
Further reading
External links
Thermodynamics Data & Property Calculation Websites
Thermodynamics Educational Websites
Biochemistry Thermodynamics
Thermodynamics and Statistical Mechanics
Engineering Thermodynamics – A Graphical Approach
Thermodynamics and Statistical Mechanics by Richard Fitzpatrick
Energy
Chemical engineering
Energy-based model
An energy-based model (EBM) (also called Canonical Ensemble Learning (CEL) or Learning via Canonical Ensemble (LCE)) is an application of the canonical ensemble formulation of statistical physics to learning from data problems. The approach appears prominently in generative models (GMs).
EBMs provide a unified framework for many probabilistic and non-probabilistic approaches to such learning, particularly for training graphical and other structured models.
An EBM learns the characteristics of a target dataset and generates a similar but larger dataset. EBMs detect the latent variables of a dataset and generate new datasets with a similar distribution.
Energy-based generative neural networks is a class of generative models, which aim to learn explicit probability distributions of data in the form of energy-based models whose energy functions are parameterized by modern deep neural networks.
Boltzmann machines are a special form of energy-based models with a specific parametrization of the energy.
Description
For a given input x, the model describes an energy E_θ(x) such that the Boltzmann distribution
P_θ(x) = exp(−β E_θ(x)) / Z(θ)
is a probability (density), and typically β = 1.
Since the normalization constant Z(θ) = ∫ exp(−β E_θ(x)) dx, also known as the partition function, depends on the Boltzmann factors of all possible inputs, it cannot be easily computed or reliably estimated during training simply using standard maximum likelihood estimation.
However, for maximizing the likelihood during training, the gradient of the log likelihood of a single training example x is given, using the chain rule, by
∂_θ log P_θ(x) = E_{x′∼P_θ}[∂_θ E_θ(x′)] − ∂_θ E_θ(x).
The expectation in the above formula for the gradient can be approximately estimated by drawing samples x′ from the distribution P_θ using Markov chain Monte Carlo (MCMC).
Early energy-based models, such as the 2003 Boltzmann machine by Hinton, estimated this expectation using a block Gibbs sampler. Newer approaches make use of more efficient Stochastic Gradient Langevin Dynamics (LD), drawing samples using
x_{k+1} = x_k − (ε/2) ∂E_θ(x_k)/∂x + √ε ω_k,
where ω_k ∼ N(0, I) and ε is the step size. A replay buffer of past values x_k is used with LD to initialize the optimization module.
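A minimal sketch of Langevin-dynamics sampling in Python, assuming a simple analytic energy E(x) = ||x||²/2 (a standard Gaussian) so that the gradient is available in closed form; for a neural-network energy the gradient would instead come from automatic differentiation, and the step size and iteration counts here are arbitrary:

```python
import numpy as np

# Langevin-dynamics sampling from an energy-based model. The energy here is the
# analytic choice E(x) = ||x||^2 / 2 (a standard Gaussian), so grad E(x) = x;
# for a neural-network energy the gradient would come from automatic differentiation.
rng = np.random.default_rng(0)

def grad_energy(x):
    return x  # gradient of ||x||^2 / 2

def langevin_sample(n_steps=1000, step=0.1, dim=2):
    x = rng.normal(size=dim)  # initialization (or a draw from a replay buffer)
    for _ in range(n_steps):
        noise = rng.normal(size=dim)
        x = x - 0.5 * step * grad_energy(x) + np.sqrt(step) * noise
    return x

samples = np.array([langevin_sample() for _ in range(500)])
print(samples.mean(axis=0), samples.var(axis=0))  # should be near 0 and 1
```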
The parameters of the neural network are, therefore, trained in a generative manner by MCMC-based maximum likelihood estimation: the parameters θ are updated by stochastic gradient ascent on log P_θ(x), using the gradient expression above with the expectation replaced by MCMC samples.
The learning process follows an "analysis by synthesis" scheme, where within each learning iteration, the algorithm samples the synthesized examples from the current model by a gradient-based MCMC method, e.g., Langevin dynamics or Hybrid Monte Carlo, and then updates the model parameters based on the difference between the training examples and the synthesized ones, as in the gradient expression above.
This process can be interpreted as an alternating mode seeking and mode shifting process, and also has an adversarial interpretation.
In the end, the model learns a function that associates low energies to correct values, and higher energies to incorrect values.
After training, given a converged energy model E_θ(x), the Metropolis–Hastings algorithm can be used to draw new samples.
The acceptance probability is given by
P_acc(x_i → x*) = min(1, P_θ(x*) / P_θ(x_i)) = min(1, exp(E_θ(x_i) − E_θ(x*))).
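A sketch of Metropolis–Hastings sampling from an energy model with a symmetric Gaussian proposal; only energy differences enter the acceptance ratio, so the unknown partition function cancels. The quadratic energy and proposal width are illustrative choices, not part of the original formulation:

```python
import numpy as np

# Metropolis-Hastings sampling from an energy model with a symmetric Gaussian
# proposal. Only energy differences appear, so the partition function cancels.
rng = np.random.default_rng(1)

def energy(x):
    return 0.5 * float(x @ x)  # illustrative quadratic energy

def metropolis_hastings(n_steps=5000, proposal_scale=0.5, dim=2):
    x = rng.normal(size=dim)
    chain = []
    for _ in range(n_steps):
        proposal = x + proposal_scale * rng.normal(size=dim)  # symmetric proposal
        accept_prob = min(1.0, float(np.exp(energy(x) - energy(proposal))))
        if rng.random() < accept_prob:
            x = proposal
        chain.append(x.copy())
    return np.array(chain)

chain = metropolis_hastings()
print(chain.mean(axis=0), chain.var(axis=0))  # should be near 0 and 1
```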
History
The term "energy-based models" was first coined in a 2003 JMLR paper where the authors defined a generalisation of independent components analysis to the overcomplete setting using EBMs.
Other early work on EBMs proposed models that represented energy as a composition of latent and observable variables.
Characteristics
EBMs demonstrate useful properties:
Simplicity and stability–The EBM is the only object that needs to be designed and trained. Separate networks need not be trained to ensure balance.
Adaptive computation time–An EBM can generate sharp, diverse samples or (more quickly) coarse, less diverse samples. Given infinite time, this procedure produces true samples.
Flexibility–In Variational Autoencoders (VAE) and flow-based models, the generator learns a map from a continuous space to a (possibly) discontinuous space containing different data modes. EBMs can learn to assign low energies to disjoint regions (multiple modes).
Adaptive generation–EBM generators are implicitly defined by the probability distribution, and automatically adapt as the distribution changes (without training), allowing EBMs to address domains where generator training is impractical, as well as minimizing mode collapse and avoiding spurious modes from out-of-distribution samples.
Compositionality–Individual models are unnormalized probability distributions, allowing models to be combined through product of experts or other hierarchical techniques.
Experimental results
On image datasets such as CIFAR-10 and ImageNet 32x32, an EBM model generated high-quality images relatively quickly. It supported combining features learned from one type of image for generating other types of images. It was able to generalize using out-of-distribution datasets, outperforming flow-based and autoregressive models. The EBM was relatively resistant to adversarial perturbations, behaving better under attack than models explicitly trained against them for classification.
Applications
Target applications include natural language processing, robotics and computer vision.
The first energy-based generative neural network is the generative ConvNet proposed in 2016 for image patterns, where the neural network is a convolutional neural network. The model has been generalized to various domains to learn distributions of videos and 3D voxels, and has been made more effective in its variants. Such models have proven useful for data generation (e.g., image synthesis, video synthesis, 3D shape synthesis, etc.), data recovery (e.g., recovering videos with missing pixels or image frames, 3D super-resolution, etc.), and data reconstruction (e.g., image reconstruction and linear interpolation).
Alternatives
EBMs compete with techniques such as variational autoencoders (VAEs), generative adversarial networks (GANs) or normalizing flows.
Extensions
Joint energy-based models
Joint energy-based models (JEM), proposed in 2020 by Grathwohl et al., allow any classifier with softmax output to be interpreted as an energy-based model. The key observation is that such a classifier is trained to predict the conditional probability
p_θ(y | x) = exp(f_θ(x)[y]) / Σ_{y′} exp(f_θ(x)[y′]),
where f_θ(x)[y] is the y-th index of the logits f_θ(x) corresponding to class y.
Without any change to the logits, it was proposed to reinterpret the logits to describe a joint probability density:
p_θ(x, y) = exp(f_θ(x)[y]) / Z(θ),
with unknown partition function Z(θ) and energy E_θ(x, y) = −f_θ(x)[y].
By marginalization over y, we obtain the unnormalized density
p_θ(x) = Σ_y p_θ(x, y) = Σ_y exp(f_θ(x)[y]) / Z(θ),
therefore
E_θ(x) = −log Σ_y exp(f_θ(x)[y]),
so that any classifier can be used to define an energy function E_θ(x).
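A short sketch of this reinterpretation for a single input, using illustrative logits; the softmax of the logits recovers the classifier output, while the negative log-sum-exp of the logits gives the energy E_θ(x):

```python
import numpy as np

# Reinterpreting classifier logits f(x) as an energy-based model, as described
# above. The logits below are illustrative values for a three-class classifier.
logits = np.array([2.0, -1.0, 0.5])  # f(x)[y] for y = 0, 1, 2

# Ordinary classifier output: p(y|x) = softmax(f(x))
shifted = logits - logits.max()  # subtract the max for numerical stability
p_y_given_x = np.exp(shifted) / np.exp(shifted).sum()

# Energies implied by the reinterpretation:
energy_xy = -logits                                          # E(x, y) = -f(x)[y]
energy_x = -(logits.max() + np.log(np.exp(shifted).sum()))   # E(x) = -logsumexp(f(x))

print(p_y_given_x, energy_xy, energy_x)
```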
See also
Empirical likelihood
Posterior predictive distribution
Contrastive learning
Literature
Implicit Generation and Generalization in Energy-Based Models Yilun Du, Igor Mordatch https://arxiv.org/abs/1903.08689
Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One, Will Grathwohl, Kuan-Chieh Wang, Jörn-Henrik Jacobsen, David Duvenaud, Mohammad Norouzi, Kevin Swersky https://arxiv.org/abs/1912.03263
References
External links
Statistical models
Machine learning
Statistical mechanics
Hamiltonian mechanics
Carbonation
Carbonation is the chemical reaction of carbon dioxide to give carbonates, bicarbonates, and carbonic acid. In chemistry, the term is sometimes used in place of carboxylation, which refers to the formation of carboxylic acids.
In inorganic chemistry and geology, carbonation is common. Metal hydroxides (MOH) and metal oxides (M'O) react with CO2 to give bicarbonates and carbonates:
MOH + CO2 → M(HCO3)
M'O + CO2 → M'CO3
Selected carbonations
Carbonic anhydrase
In mammalian physiology, transport of carbon dioxide to the lungs involves a carbonation reaction catalyzed by the enzyme carbonic anhydrase. In the absence of such catalysts, carbon dioxide cannot be expelled at a sufficient rate to support metabolic needs. The enzyme harbors a zinc aquo complex, which captures carbon dioxide to give a zinc bicarbonate complex.
Behavior of concrete
In reinforced concrete, the chemical reaction between carbon dioxide in the air and calcium hydroxide and hydrated calcium silicate in the concrete is known as neutralisation. The similar reaction in which calcium hydroxide from cement reacts with carbon dioxide and forms insoluble calcium carbonate is carbonatation.
Urea production
Carbonation of ammonia is one step in the industrial production of urea:
2 NH3 + CO2 -> [NH2CO2][NH4] (ammonium carbamate)
In 2020, worldwide production capacity was approximately 180 million tonnes. As a fertilizer, urea is a source of nitrogen for plants.
Urea production plants are almost always located adjacent to the site where the ammonia is manufactured.
In the subsequent urea conversion, the ammonium carbamate is decomposed into urea, releasing water:
[NH2CO2][NH4] -> CO(NH2)2 + H2O
Solubility
Henry's law states that P = KB·x, where P is the partial pressure of the gas above the solution, KB is the Henry's law constant, and x is the mole fraction of the gas in the solution. KB increases as temperature increases. According to Henry's law, carbonation of a solution therefore increases as temperature decreases.
Since carbonation is the process of producing dissolved compounds such as carbonic acid (liquid) from CO2 (gas), a greater degree of carbonation corresponds to a smaller ratio of partial pressure to dissolved mole fraction, P/x = KB; because KB decreases as the temperature decreases, cooling a solution favours carbonation.
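A sketch of this trend using Henry's law in the form x = P/KB; the Henry's law constants below are placeholder values chosen only to illustrate that a colder solution (smaller KB) dissolves more CO2 at the same partial pressure:

```python
# Henry's law, x = P / K_B: at fixed partial pressure, a smaller Henry's law
# constant (colder solution) gives a larger dissolved mole fraction of CO2.
# The constants below are placeholder values, not measured data.
P_co2 = 1.0e5  # Pa, partial pressure of CO2 above the solution

henrys_constant = {
    "cold solution": 1.0e8,  # Pa (smaller K_B at lower temperature)
    "warm solution": 2.0e8,  # Pa (larger K_B at higher temperature)
}

for label, K_B in henrys_constant.items():
    x = P_co2 / K_B
    print(f"{label}: mole fraction x = {x:.2e}")
```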
References
Inorganic chemistry
Transition metals
Coordination complexes
In silico
In biology and other experimental sciences, an in silico experiment is one performed on a computer or via computer simulation software. The phrase is pseudo-Latin for 'in silicon' (correct Latin: in silicio), referring to silicon in computer chips. It was coined in 1987 as an allusion to the Latin phrases in vivo, in vitro, and in situ, which are commonly used in biology (especially systems biology). The latter phrases refer, respectively, to experiments done in living organisms, outside living organisms, and where they are found in nature.
History
The earliest known use of the phrase was by Christopher Langton to describe artificial life, in the announcement of a workshop on that subject at the Center for Nonlinear Studies at the Los Alamos National Laboratory in 1987. The expression in silico was first used to characterize biological experiments carried out entirely in a computer in 1989, in the workshop "Cellular Automata: Theory and Applications" in Los Alamos, New Mexico, by Pedro Miramontes, a mathematician from National Autonomous University of Mexico (UNAM), presenting the report "DNA and RNA Physicochemical Constraints, Cellular Automata and Molecular Evolution". The work was later presented by Miramontes as his dissertation.
In silico has been used in white papers written to support the creation of bacterial genome programs by the Commission of the European Community. The first referenced paper where in silico appears was written by a French team in 1991. The first referenced book chapter where in silico appears was written by Hans B. Sieburg in 1990 and presented during a Summer School on Complex Systems at the Santa Fe Institute.
The phrase in silico originally applied only to computer simulations that modeled natural or laboratory processes (in all the natural sciences), and did not refer to calculations done by computer generically.
Drug discovery with virtual screening
In silico study in medicine is thought to have the potential to speed the rate of discovery while reducing the need for expensive lab work and clinical trials. One way to achieve this is by producing and screening drug candidates more effectively. In 2010, for example, using the protein docking algorithm EADock (see Protein-ligand docking), researchers found potential inhibitors to an enzyme associated with cancer activity in silico. Fifty percent of the molecules were later shown to be active inhibitors in vitro. This approach differs from use of expensive high-throughput screening (HTS) robotic labs to physically test thousands of diverse compounds a day, often with an expected hit rate on the order of 1% or less, with still fewer expected to be real leads following further testing (see drug discovery).
As an example, the technique was utilized for a drug repurposing study in order to search for potential cures for COVID-19 (SARS-CoV-2).
Cell models
Efforts have been made to establish computer models of cellular behavior. For example, in 2007 researchers developed an in silico model of tuberculosis to aid in drug discovery, with the prime benefit of its being faster than real time simulated growth rates, allowing phenomena of interest to be observed in minutes rather than months. More work can be found that focus on modeling a particular cellular process such as the growth cycle of Caulobacter crescentus.
These efforts fall far short of an exact, fully predictive computer model of a cell's entire behavior. Limitations in the understanding of molecular dynamics and cell biology, as well as the absence of available computer processing power, force large simplifying assumptions that constrain the usefulness of present in silico cell models.
Genetics
Digital genetic sequences obtained from DNA sequencing may be stored in sequence databases, be analyzed (see Sequence analysis), be digitally altered or be used as templates for creating new actual DNA using artificial gene synthesis.
Other examples
In silico computer-based modeling technologies have also been applied in:
Whole cell analysis of prokaryotic and eukaryotic hosts e.g. E. coli, B. subtilis, yeast, CHO- or human cell lines
Discovery of potential cure for COVID-19.
Bioprocess development and optimization e.g. optimization of product yields
Simulation of oncological clinical trials exploiting grid computing infrastructures, such as the European Grid Infrastructure, for improving the performance and effectiveness of the simulations.
Analysis, interpretation and visualization of heterologous data sets from various sources e.g. genome, transcriptome or proteome data
Validation of taxonomic assignment steps in herbivore metagenomics study.
Protein design. One example is RosettaDesign, a software package under development and free for academic use.
See also
Virtual screening
Computational biology
Computational biomodeling
Computer experiment
Folding@home
Exscalate4Cov
Cellular model
Nonclinical studies
Organ-on-a-chip
In silico molecular design programs
In silico medicine
Dry lab
References
External links
World Wide Words: In silico
CADASTER Seventh Framework Programme project aimed to develop in silico computational methods to minimize experimental tests for REACH Registration, Evaluation, Authorisation and Restriction of Chemicals
In Silico Biology. Journal of Biological Systems Modeling and Simulation
In Silico Pharmacology
Pharmaceutical industry
Latin biological phrases
Alternatives to animal testing
Animal test conditions
Disulfide
In chemistry, a disulfide (or disulphide in British English) is a compound containing an RS−SR′ functional group or the S₂²⁻ anion. The linkage is also called an SS-bond or sometimes a disulfide bridge and is usually derived from two thiol groups.
In inorganic chemistry, the S₂²⁻ anion appears in a few rare minerals, but the functional group has tremendous importance in biochemistry. Disulfide bridges formed between thiol groups in two cysteine residues are an important component of the tertiary and quaternary structure of proteins.
Compounds of the form RS−SH are usually called persulfides instead.
Organic disulfides
Structure
Disulfides have a C-S-S-C dihedral angle approaching 90°. The S-S bond length is 2.03 Å in diphenyl disulfide, similar to that in elemental sulfur.
Two kinds of disulfides are recognized, symmetric and unsymmetric. Symmetrical disulfides are compounds of the formula RS−SR. Most disulfides encountered in organosulfur chemistry are symmetrical disulfides. Unsymmetrical disulfides (also called heterodisulfides or mixed disulfides) are compounds of the formula RS−SR′. Unsymmetrical disulfides are less common in organic chemistry, but many disulfides in nature are unsymmetrical. Illustrative of a symmetric disulfide is cystine.
Properties
The disulfide bonds are strong, with a typical bond dissociation energy of 60 kcal/mol (251 kJ mol−1). However, being about 40% weaker than C−C and C−H bonds, the disulfide bond is often the "weak link" in many molecules. Furthermore, reflecting the polarizability of divalent sulfur, the S−S bond is susceptible to scission by polar reagents, both electrophiles and especially nucleophiles (Nu):
RS-SR + Nu- -> RS-Nu + RS-
The disulfide bond is about 2.05 Å in length, about 0.5 Å longer than a C−C bond. Rotation about the S−S axis is subject to a low barrier. Disulfides show a distinct preference for dihedral angles approaching 90°. When the angle approaches 0° or 180°, then the disulfide is a significantly better oxidant.
Disulfides where the two R groups are the same are called symmetric, examples being diphenyl disulfide and dimethyl disulfide. When the two R groups are not identical, the compound is said to be an asymmetric or mixed disulfide.
Although the hydrogenation of disulfides is usually not practical, the equilibrium constant for the reaction provides a measure of the standard redox potential for disulfides:
RSSR + H2 -> 2 RSH
This value is about −250 mV versus the standard hydrogen electrode (pH = 7). By comparison, the standard reduction potential for ferredoxins is about −430 mV.
Synthesis
Disulfide bonds are usually formed from the oxidation of sulfhydryl groups, especially in biological contexts. The transformation is depicted as follows:
2 RSH <=> RS-SR + 2 H+ + 2 e-
A variety of oxidants participate in this reaction, including oxygen and hydrogen peroxide. Such reactions are thought to proceed via sulfenic acid intermediates. In the laboratory, iodine in the presence of base is commonly employed to oxidize thiols to disulfides. Several metals, such as copper(II) and iron(III) complexes, affect this reaction. Alternatively, disulfide bonds in proteins are often formed by thiol-disulfide exchange:
RS-SR + R'SH <=> R'S-SR + RSH
Such reactions are mediated by enzymes in some cases and in other cases are under equilibrium control, especially in the presence of a catalytic amount of base.
The alkylation of alkali metal di- and polysulfides gives disulfides. "Thiokol" polymers arise when sodium polysulfide is treated with an alkyl dihalide. In the converse reaction, carbanionic reagents react with elemental sulfur to afford mixtures of the thioether, disulfide, and higher polysulfides. These reactions are often unselective but can be optimized for specific applications.
Synthesis of unsymmetrical disulfides (heterodisulfides)
Many specialized methods have been developed for forming unsymmetrical disulfides. Reagents that deliver the equivalent of "RS+" react with thiols to give asymmetrical disulfides:
RSH + R'SNR''_2 -> RS-SR' + HNR''_2
where R''2N is the phthalimido group.
Bunte salts, derivatives of the type RSSO3−Na+, are also used to generate unsymmetrical disulfides:
Na[O3S2R] + NaSR' -> RSSR' + Na2SO3
Reactions
The most important aspect of disulfide bonds is their scission, since the S−S bond is usually the weakest bond in a molecule. Many specialized organic reactions have been developed to cleave the bond.
A variety of reductants reduce disulfides to thiols. Hydride agents are typical reagents, and a common laboratory demonstration "uncooks" eggs with sodium borohydride. Alkali metals effect the same reaction more aggressively: RS-SR + 2 Na -> 2 NaSR, followed by protonation of the resulting metal thiolate: NaSR + HCl -> HSR + NaCl
In biochemistry labwork, thiols such as β-mercaptoethanol (β-ME) or dithiothreitol (DTT) serve as reductants through thiol-disulfide exchange. The thiol reagents are used in excess to drive the equilibrium to the right: RS-SR + 2 HOCH2CH2SH <=> HOCH2CH2S-SCH2CH2OH + 2 RSH
The reductant tris(2-carboxyethyl)phosphine (TCEP) is useful, beside being odorless compared to β-ME and DTT, because it is selective, working at both alkaline and acidic conditions (unlike DTT), is more hydrophilic and more resistant to oxidation in air. Furthermore, it is often not needed to remove TCEP before modification of protein thiols.
In Zincke cleavage, halogens oxidize disulfides to a sulfenyl halide: ArSSAr + Cl2 -> 2 ArSCl
More unusually, oxidation of disulfides gives first thiosulfinates and then thiosulfonates:
RSSR + [O] → RS(=O)SR
RS(=O)SR + [O] → RS(=O)2SR
Thiol-disulfide exchange
In thiol–disulfide exchange, a thiolate group displaces one sulfur atom in a disulfide bond . The original disulfide bond is broken, and its other sulfur atom is released as a new thiolate, carrying away the negative charge. Meanwhile, a new disulfide bond forms between the attacking thiolate and the original sulfur atom.
Thiolates, not thiols, attack disulfide bonds. Hence, thiol–disulfide exchange is inhibited at low pH (typically, below 8) where the protonated thiol form is favored relative to the deprotonated thiolate form. (The pKa of a typical thiol group is roughly 8.3, but can vary due to its environment.)
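A small sketch estimating the reactive thiolate fraction from the Henderson–Hasselbalch relation, using the typical pKa of about 8.3 quoted above; the chosen pH values are illustrative:

```python
# Fraction of a thiol present as the reactive thiolate at a given pH, from the
# Henderson-Hasselbalch relation, using the typical pKa of about 8.3 quoted above.
def thiolate_fraction(pH, pKa=8.3):
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for pH in (6.0, 7.0, 8.3, 9.0):
    print(f"pH {pH}: thiolate fraction = {thiolate_fraction(pH):.3f}")
# At pH 7 only a few percent of the thiol is deprotonated, which is why
# thiol-disulfide exchange slows markedly at low pH.
```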
Thiol–disulfide exchange is the principal reaction by which disulfide bonds are formed and rearranged in a protein. The rearrangement of disulfide bonds within a protein generally occurs via intra-protein thiol–disulfide exchange reactions; a thiolate group of a cysteine residue attacks one of the protein's own disulfide bonds. This process of disulfide rearrangement (known as disulfide shuffling) does not change the number of disulfide bonds within a protein, merely their location (i.e., which cysteines are bonded). Disulfide reshuffling is generally much faster than oxidation/reduction reactions, which change the number of disulfide bonds within a protein. The oxidation and reduction of protein disulfide bonds in vitro also generally occurs via thiol–disulfide exchange reactions. Typically, the thiolate of a redox reagent such as glutathione, dithiothreitol attacks the disulfide bond on a protein forming a mixed disulfide bond between the protein and the reagent. This mixed disulfide bond when attacked by another thiolate from the reagent, leaves the cysteine oxidized. In effect, the disulfide bond is transferred from the protein to the reagent in two steps, both thiol–disulfide exchange reactions.
The in vivo oxidation and reduction of protein disulfide bonds by thiol–disulfide exchange is facilitated by a protein called thioredoxin. This small protein, essential in all known organisms, contains two cysteine amino acid residues in a vicinal arrangement (i.e., next to each other), which allows it to form an internal disulfide bond, or disulfide bonds with other proteins. As such, it can be used as a repository of reduced or oxidized disulfide bond moieties.
Occurrence in biology
Occurrence in proteins
Disulfide bonds can be formed under oxidising conditions and play an important role in the folding and stability of some proteins, usually proteins secreted to the extracellular medium. Since most cellular compartments are reducing environments, in general, disulfide bonds are unstable in the cytosol, with some exceptions as noted below, unless a sulfhydryl oxidase is present.
Disulfide bonds in proteins are formed between the thiol groups of cysteine residues by the process of oxidative folding. The other sulfur-containing amino acid, methionine, cannot form disulfide bonds. A disulfide bond is typically denoted by hyphenating the abbreviations for cysteine, e.g., when referring to ribonuclease A the "Cys26–Cys84 disulfide bond", or the "26–84 disulfide bond", or most simply as "C26–C84" where the disulfide bond is understood and does not need to be mentioned. The prototype of a protein disulfide bond is the two-amino-acid peptide cystine, which is composed of two cysteine amino acids joined by a disulfide bond. The structure of a disulfide bond can be described by its χss dihedral angle between the Cβ−Sγ−Sγ−Cβ atoms, which is usually close to ±90°.
The disulfide bond stabilizes the folded form of a protein in several ways:
It holds two portions of the protein together, biasing the protein towards the folded topology. That is, the disulfide bond destabilizes the unfolded form of the protein by lowering its entropy.
The disulfide bond may form the nucleus of a hydrophobic core of the folded protein, i.e., local hydrophobic residues may condense around the disulfide bond and onto each other through hydrophobic interactions.
Related to 1 and 2, the disulfide bond links two segments of the protein chain, increases the effective local concentration of protein residues, and lowers the effective local concentration of water molecules. Since water molecules attack amide-amide hydrogen bonds and break up secondary structure, a disulfide bond stabilizes secondary structure in its vicinity. For example, researchers have identified several pairs of peptides that are unstructured in isolation, but adopt stable secondary and tertiary structure upon formation of a disulfide bond between them.
A disulfide species is a particular pairing of cysteines in a disulfide-bonded protein and is usually depicted by listing the disulfide bonds in parentheses, e.g., the "(26–84, 58–110) disulfide species". A disulfide ensemble is a grouping of all disulfide species with the same number of disulfide bonds, and is usually denoted as the 1S ensemble, the 2S ensemble, etc. for disulfide species having one, two, etc. disulfide bonds. Thus, the (26–84) disulfide species belongs to the 1S ensemble, whereas the (26–84, 58–110) species belongs to the 2S ensemble. The single species with no disulfide bonds is usually denoted as R for "fully reduced". Under typical conditions, disulfide reshuffling is much faster than the formation of new disulfide bonds or their reduction; hence, the disulfide species within an ensemble equilibrate more quickly than between ensembles.
The native form of a protein is usually a single disulfide species, although some proteins may cycle between a few disulfide states as part of their function, e.g., thioredoxin. In proteins with more than two cysteines, non-native disulfide species may be formed, which are almost always misfolded. As the number of cysteines increases, the number of nonnative species increases factorially.
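As an illustration of this rapid growth, the number of distinct fully paired disulfide species for 2n cysteines is (2n)!/(2^n · n!); a short sketch, counting only species in which every cysteine is paired:

```python
from math import factorial

# Number of distinct ways to pair 2n cysteines into n disulfide bonds,
# (2n)! / (2**n * n!), counting only fully paired species.
def n_pairings(n_cysteines):
    if n_cysteines % 2:
        raise ValueError("an even number of cysteines is needed to pair them all")
    n = n_cysteines // 2
    return factorial(2 * n) // (2 ** n * factorial(n))

for c in (2, 4, 6, 8, 10):
    print(c, n_pairings(c))  # 1, 3, 15, 105, 945
```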
In bacteria and archaea
Disulfide bonds play an important protective role for bacteria as a reversible switch that turns a protein on or off when bacterial cells are exposed to oxidation reactions. Hydrogen peroxide (H2O2) in particular could severely damage DNA and kill the bacterium at low concentrations if not for the protective action of the SS-bond. Archaea typically have fewer disulfides than higher organisms.
In eukaryotes
In eukaryotic cells, in general, stable disulfide bonds are formed in the lumen of the RER (rough endoplasmic reticulum) and the mitochondrial intermembrane space but not in the cytosol. This is due to the more oxidizing environment of the aforementioned compartments and more reducing environment of the cytosol (see glutathione). Thus disulfide bonds are mostly found in secretory proteins, lysosomal proteins, and the exoplasmic domains of membrane proteins.
There are notable exceptions to this rule. For example, many nuclear and cytosolic proteins can become disulfide-crosslinked during necrotic cell death. Similarly, a number of cytosolic proteins have cysteine residues in proximity to each other that function as oxidation sensors or redox catalysts; when the reductive potential of the cell fails, they oxidize and trigger cellular response mechanisms. The virus Vaccinia also produces cytosolic proteins and peptides that have many disulfide bonds; although the reason for this is unknown, presumably they have protective effects against the intracellular proteolysis machinery.
Disulfide bonds are also formed within and between protamines in the sperm chromatin of many mammalian species.
Disulfides in regulatory proteins
As disulfide bonds can be reversibly reduced and re-oxidized, the redox state of these bonds has evolved into a signaling element. In chloroplasts, for example, the enzymatic reduction of disulfide bonds has been linked to the control of numerous metabolic pathways as well as gene expression. The reductive signaling activity has been shown, thus far, to be carried by the ferredoxin-thioredoxin system, channeling electrons from the light reactions of photosystem I to catalytically reduce disulfides in regulated proteins in a light dependent manner. In this way chloroplasts adjust the activity of key processes such as the Calvin–Benson cycle, starch degradation, ATP production and gene expression according to light intensity. Additionally, it has been reported that disulfides play a significant role in the redox-state regulation of two-component systems (TCSs), which are found in certain bacteria, including pathogenic strains. A unique intramolecular cysteine disulfide bond in the ATP-binding domain of the SrrAB TCS found in Staphylococcus aureus is a good example of disulfides in regulatory proteins; the redox state of the SrrB molecule is controlled by cysteine disulfide bonds, leading to modification of SrrA activity, including gene regulation.
In hair and feathers
Over 90% of the dry weight of hair comprises proteins called keratins, which have a high disulfide content, from the amino acid cysteine. The robustness conferred in part by disulfide linkages is illustrated by the recovery of virtually intact hair from ancient Egyptian tombs. Feathers have similar keratins and are extremely resistant to protein digestive enzymes. The stiffness of hair and feather is determined by the disulfide content. Manipulating disulfide bonds in hair is the basis for the permanent wave in hairstyling. Reagents that affect the making and breaking of S−S bonds are key, e.g., ammonium thioglycolate. The high disulfide content of feathers dictates the high sulfur content of bird eggs. The high sulfur content of hair and feathers contributes to the disagreeable odor that results when they are burned.
In disease
Cystinosis is a condition where cystine precipitates as a solid in various organs. This accumulation interferes with bodily function and can be fatal. This disorder can be resolved by treatment with cysteamine. Cysteamine acts to solubilize the cystine by (1) forming the mixed disulfide cysteine-cysteamine, which is more soluble and exportable, and (2) reducing cystine to cysteine.
Inorganic disulfides
The disulfide anion is S₂²⁻, or −S−S−. In disulfide, sulfur exists in the reduced state with oxidation number −1. Its electron configuration then resembles that of a chlorine atom. It thus tends to form a covalent bond with another S− center to form a −S−S− group, similar to elemental chlorine existing as the diatomic Cl2. Oxygen may also behave similarly, e.g. in peroxides such as H2O2. Examples:
Hydrogen disulfide (S2H2), the simplest inorganic disulfide
Disulfur dichloride (S2Cl2), a distillable liquid.
Iron disulfide (FeS2), or pyrite.
Related compounds
Thiosulfoxides are orthogonally isomeric with disulfides, having the second sulfur branching from the first and not partaking in a continuous chain, i.e. >S=S rather than −S−S−.
Disulfide bonds are analogous but more common than related peroxide, thioselenide, and diselenide bonds. Intermediate compounds of these also exist, for example thioperoxides (also known as oxasulfides) such as hydrogen thioperoxide, have the formula R1OSR2 (equivalently R2SOR1). These are isomeric to sulfoxides in a similar manner to the above; i.e. >S=O rather than −S−O−.
Thiuram disulfides, with the formula (R2NCSS)2, are disulfides but they behave distinctly because of the thiocarbonyl group.
Compounds with three sulfur atoms, such as CH3S−S−SCH3, are called trisulfides, or trisulfide bonds.
Misnomers
Disulfide is also used to refer to compounds that contain two sulfide (S2−) centers. The compound carbon disulfide, CS2, is described by the structural formula S=C=S. This molecule is not a disulfide in the sense that it lacks a S-S bond. Similarly, molybdenum disulfide, MoS2, is not a disulfide in the sense again that its sulfur atoms are not linked.
Applications
Rubber manufacturing
The vulcanization of rubber results in crosslinking groups which consist of disulfide (and polysulfide) bonds; in analogy to the role of disulfides in proteins, the S−S linkages in rubber strongly affect the stability and rheology of the material. Although the exact mechanism underlying the vulcanization process is not entirely understood (as multiple reaction pathways are present but the predominant one is unknown), it has been extensively shown that the extent to which the process is allowed to proceed determines the physical properties of the resulting rubber- namely, a greater degree of crosslinking corresponds to a stronger and more rigid material. The current conventional methods of rubber manufacturing are typically irreversible, as the unregulated reaction mechanisms can result in complex networks of sulfide linkages; as such, rubber is considered to be a thermoset material.
Covalent adaptable networks
Due to their relatively weak bond dissociation energy (in comparison to C−C bonds and the like), disulfides have been employed in covalent adaptable network (CAN) systems in order to allow for dynamic breakage and reformation of crosslinks. By incorporating disulfide functional groups as crosslinks between polymer chains, materials can be produced which are stable at room temperature while also allowing for reversible crosslink dissociation upon application of elevated temperature. The mechanism behind this reaction can be attributed to the cleavage of disulfide linkages (RS−SR) into thiyl radicals (2 RS•) which can subsequently reassociate into new bonds, resulting in reprocessability and self-healing characteristics for the bulk material. However, since the bond dissociation energy of the disulfide bond is still fairly high, it is typically necessary to augment the bond with adjacent chemistry that can stabilize the unpaired electron of the intermediate state. As such, studies usually employ aromatic disulfides or disulfidediamine (RNS−SNR) functional groups to encourage the dynamic dissociation of the S−S bond; these chemistries can result in the bond dissociation energy being reduced to half (or even less) of its prior magnitude.
In practical terms, disulfide-containing CANs can be used to impart recyclability to polymeric materials while still exhibiting physical properties similar to that of thermosets. Typically, recyclability is restricted to thermoplastic materials, as said materials consist of polymer chains which are not bonded to each other at the molecular level; as a result, they can be melted down and reformed (as the addition of thermal energy allows the chains to untangle, move past each other, and adopt new configurations), but this comes at the expense of their physical robustness. Meanwhile, conventional thermosets contain permanent crosslinks which bolster their strength, toughness, creep resistance, and the like (as the bonding between chains provides resistance to deformation at the macroscopic level), but due to the permanence of said crosslinks, these materials cannot be reprocessed akin to thermoplastics. However, due to the dynamic nature of the crosslinks in disulfide CANs, they can be designed to exhibit the best attributes of both of the aforementioned material types. Studies have shown that disulfide CANs can be reprocessed multiple times with negligible degradation in performance while also exhibiting creep resistance, glass transition, and dynamic modulus values comparable to those observed in similar conventional thermoset systems.
See also
Diselenides in organoselenium chemistry
References
Further reading
External links
Protein structure
Post-translational modification
Sulfur
Functional groups
Mathematical and theoretical biology
Mathematical and theoretical biology, or biomathematics, is a branch of biology which employs theoretical analysis, mathematical models and abstractions of living organisms to investigate the principles that govern the structure, development and behavior of the systems, as opposed to experimental biology, which deals with conducting experiments to test scientific theories. The field is sometimes called mathematical biology or biomathematics to stress the mathematical side, or theoretical biology to stress the biological side. Theoretical biology focuses more on the development of theoretical principles for biology while mathematical biology focuses on the use of mathematical tools to study biological systems, even though the two terms are sometimes interchanged.
Mathematical biology aims at the mathematical representation and modeling of biological processes, using techniques and tools of applied mathematics. It can be useful in both theoretical and practical research. Describing systems in a quantitative manner means their behavior can be better simulated, and hence properties can be predicted that might not be evident to the experimenter. This requires precise mathematical models.
Because of the complexity of the living systems, theoretical biology employs several fields of mathematics, and has contributed to the development of new techniques.
History
Early history
Mathematics has been used in biology as early as the 13th century, when Fibonacci used the famous Fibonacci series to describe a growing population of rabbits. In the 18th century, Daniel Bernoulli applied mathematics to describe the effect of smallpox on the human population. Thomas Malthus' 1798 essay on the growth of the human population was based on the concept of exponential growth. Pierre François Verhulst formulated the logistic growth model in 1836.
Fritz Müller described the evolutionary benefits of what is now called Müllerian mimicry in 1879, in an account notable for being the first use of a mathematical argument in evolutionary ecology to show how powerful the effect of natural selection would be, unless one includes Malthus's discussion of the effects of population growth that influenced Charles Darwin: Malthus argued that growth would be exponential (he uses the word "geometric") while resources (the environment's carrying capacity) could only grow arithmetically.
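A short sketch contrasting Malthusian (exponential) growth with Verhulst's logistic model, which levels off at a carrying capacity; the growth rate, carrying capacity, and initial population below are illustrative:

```python
import numpy as np

# Malthusian (exponential) growth versus Verhulst's logistic growth, which
# levels off at the carrying capacity K. All parameters are illustrative.
r, K, p0 = 0.5, 1000.0, 10.0          # growth rate, carrying capacity, initial population
t = np.linspace(0.0, 20.0, 5)

exponential = p0 * np.exp(r * t)
logistic = K / (1.0 + ((K - p0) / p0) * np.exp(-r * t))

for ti, pe, pl in zip(t, exponential, logistic):
    print(f"t={ti:5.1f}  exponential={pe:12.1f}  logistic={pl:8.1f}")
```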
The term "theoretical biology" was first used as a monograph title by Johannes Reinke in 1901, and soon after by Jakob von Uexküll in 1920. One founding text is considered to be On Growth and Form (1917) by D'Arcy Thompson, and other early pioneers include Ronald Fisher, Hans Leo Przibram, Vito Volterra, Nicolas Rashevsky and Conrad Hal Waddington.
Recent growth
Interest in the field has grown rapidly from the 1960s onwards. Some reasons for this include:
The rapid growth of data-rich information sets, due to the genomics revolution, which are difficult to understand without the use of analytical tools
Recent development of mathematical tools such as chaos theory to help understand complex, non-linear mechanisms in biology
An increase in computing power, which facilitates calculations and simulations not previously possible
An increasing interest in in silico experimentation due to ethical considerations, risk, unreliability and other complications involved in human and animal research
Areas of research
Several areas of specialized research in mathematical and theoretical biology, as well as external links to related projects at various universities, are concisely presented in the following subsections, together with a large number of validating references drawn from a list of several thousand published authors contributing to this field. Many of the included examples are characterised by highly complex, nonlinear, and supercomplex mechanisms, as it is increasingly recognised that the result of such interactions may only be understood through a combination of mathematical, logical, physical/chemical, molecular and computational models.
Abstract relational biology
Abstract relational biology (ARB) is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)-systems, introduced by Robert Rosen in 1957–1958 as abstract, relational models of cellular and organismal organization.
Other approaches include the notion of autopoiesis developed by Maturana and Varela, Kauffman's Work-Constraints cycles, and more recently the notion of closure of constraints.
Algebraic biology
Algebraic biology (also known as symbolic systems biology) applies the algebraic methods of symbolic computation to the study of biological problems, especially in genomics, proteomics, analysis of molecular structures and study of genes.
Complex systems biology
An elaboration of systems biology aiming to understand the more complex life processes has been developed since 1970 in connection with molecular set theory, relational biology and algebraic biology.
Computer models and automata theory
A monograph on this topic summarizes an extensive amount of published research in this area up to 1986, including subsections in the following areas: computer modeling in biology and medicine, arterial system models, neuron models, biochemical and oscillation networks, quantum automata, quantum computers in molecular biology and genetics, cancer modelling, neural nets, genetic networks, abstract categories in relational biology, metabolic-replication systems, category theory applications in biology and medicine, automata theory, cellular automata, tessellation models and complete self-reproduction, chaotic systems in organisms, relational biology and organismic theories.
Modeling cell and molecular biology
This area has received a boost due to the growing importance of molecular biology.
Mechanics of biological tissues
Theoretical enzymology and enzyme kinetics
Cancer modelling and simulation
Modelling the movement of interacting cell populations
Mathematical modelling of scar tissue formation
Mathematical modelling of intracellular dynamics
Mathematical modelling of the cell cycle
Mathematical modelling of apoptosis
Modelling physiological systems
Modelling of arterial disease
Multi-scale modelling of the heart
Modelling electrical properties of muscle interactions, as in bidomain and monodomain models
Computational neuroscience
Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is the theoretical study of the nervous system.
Evolutionary biology
Ecology and evolutionary biology have traditionally been the dominant fields of mathematical biology.
Evolutionary biology has been the subject of extensive mathematical theorizing. The traditional approach in this area, which includes complications from genetics, is population genetics. Most population geneticists consider the appearance of new alleles by mutation, the appearance of new genotypes by recombination, and changes in the frequencies of existing alleles and genotypes at a small number of gene loci. When infinitesimal effects at a large number of gene loci are considered, together with the assumption of linkage equilibrium or quasi-linkage equilibrium, one derives quantitative genetics. Ronald Fisher made fundamental advances in statistics, such as analysis of variance, via his work on quantitative genetics. Another important branch of population genetics that led to the extensive development of coalescent theory is phylogenetics. Phylogenetics is an area that deals with the reconstruction and analysis of phylogenetic (evolutionary) trees and networks based on inherited characteristics. Traditional population genetic models deal with alleles and genotypes, and are frequently stochastic.
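As a minimal sketch of the kind of model used in population genetics, the following code iterates the standard one-locus, two-allele viability-selection recursion. The fitness values and starting frequency are arbitrary illustrative assumptions, not taken from the text above.

```python
def select(p, w_AA, w_Aa, w_aa):
    """One generation of viability selection at a single diallelic locus.
    p is the frequency of allele A; returns the frequency after selection."""
    q = 1.0 - p
    w_bar = p*p*w_AA + 2*p*q*w_Aa + q*q*w_aa   # mean fitness of the population
    return (p*p*w_AA + p*q*w_Aa) / w_bar       # marginal fitness of A divided by the mean

p = 0.01
for gen in range(100):
    p = select(p, w_AA=1.0, w_Aa=0.98, w_aa=0.95)  # directional selection favouring A
print(round(p, 4))  # the frequency of A increases each generation
```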
Many population genetics models assume that population sizes are constant. Variable population sizes, often in the absence of genetic variation, are treated by the field of population dynamics. Work in this area dates back to the 19th century, and even as far as 1798, when Thomas Malthus formulated the first principle of population dynamics, which later became known as the Malthusian growth model. The Lotka–Volterra predator-prey equations are another famous example. Population dynamics overlaps with another active area of research in mathematical biology: mathematical epidemiology, the study of infectious disease affecting populations. Various models of the spread of infections have been proposed and analyzed, and provide important results that may be applied to health policy decisions.
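A minimal numerical sketch of the Lotka–Volterra predator–prey system mentioned above is given below. The parameter values, initial populations, and the use of SciPy's solve_ivp integrator are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# dx/dt = a*x - b*x*y (prey), dy/dt = d*x*y - c*y (predator)
a, b, c, d = 1.0, 0.1, 1.5, 0.075   # illustrative rate constants

def lotka_volterra(t, z):
    x, y = z
    return [a*x - b*x*y, d*x*y - c*y]

sol = solve_ivp(lotka_volterra, t_span=(0, 50), y0=[10.0, 5.0],
                t_eval=np.linspace(0, 50, 500))
print(sol.y[:, -1])  # prey and predator numbers oscillate rather than settle to a fixed point
```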
In evolutionary game theory, developed first by John Maynard Smith and George R. Price, selection acts directly on inherited phenotypes, without genetic complications. This approach has been mathematically refined to produce the field of adaptive dynamics.
Mathematical biophysics
The earlier stages of mathematical biology were dominated by mathematical biophysics, described as the application of mathematics in biophysics, often involving specific physical/mathematical models of biosystems and their components or compartments.
The following is a list of mathematical descriptions and their assumptions.
Deterministic processes (dynamical systems)
A fixed mapping between an initial state and a final state. Starting from an initial condition and moving forward in time, a deterministic process always generates the same trajectory, and no two trajectories cross in state space.
Difference equations/Maps – discrete time, continuous state space.
Ordinary differential equations – continuous time, continuous state space, no spatial derivatives. See also: Numerical ordinary differential equations.
Partial differential equations – continuous time, continuous state space, spatial derivatives. See also: Numerical partial differential equations.
Logical deterministic cellular automata – discrete time, discrete state space. See also: Cellular automaton.
Stochastic processes (random dynamical systems)
A random mapping between an initial state and a final state, making the state of the system a random variable with a corresponding probability distribution.
Non-Markovian processes – generalized master equation – continuous time with memory of past events, discrete state space, waiting times of events (or transitions between states) occur discretely.
Jump Markov process – master equation – continuous time with no memory of past events, discrete state space, waiting times between events occur discretely and are exponentially distributed. See also: Monte Carlo method for numerical simulation methods, specifically the dynamic Monte Carlo method and the Gillespie algorithm (a minimal sketch follows this list).
Continuous Markov process – stochastic differential equations or a Fokker–Planck equation – continuous time, continuous state space, events occur continuously according to a random Wiener process.
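The Gillespie algorithm referred to above can be illustrated for the simplest case, a birth–death process with a constant birth rate and first-order death. The rate constants, time horizon, and random seed are assumed values chosen for illustration.

```python
import random

random.seed(0)
birth, death = 5.0, 0.2   # illustrative rate constants
n, t, t_end = 0, 0.0, 100.0

while t < t_end:
    a_birth = birth              # propensity of the reaction X -> X + 1
    a_death = death * n          # propensity of the reaction X -> X - 1
    a_total = a_birth + a_death
    t += random.expovariate(a_total)          # exponentially distributed waiting time
    if random.random() * a_total < a_birth:   # choose which event fires
        n += 1
    else:
        n -= 1

print(n)  # fluctuates around the deterministic steady state birth/death = 25
```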
Spatial modelling
One classic work in this area is Alan Turing's paper on morphogenesis entitled The Chemical Basis of Morphogenesis, published in 1952 in the Philosophical Transactions of the Royal Society.
Travelling waves in a wound-healing assay
Swarming behaviour
A mechanochemical theory of morphogenesis
Biological pattern formation
Spatial distribution modeling using plot samples
Turing patterns
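As a minimal sketch of Turing-type pattern formation by reaction and diffusion, the following uses the Gray–Scott model in one spatial dimension as a stand-in; the Gray–Scott equations, all parameter values, and the initial perturbation are assumptions chosen for illustration and are not taken from Turing's paper.

```python
import numpy as np

# Gray-Scott reaction-diffusion in 1-D: u is the substrate, v the autocatalyst.
n, steps = 256, 10000
Du, Dv, F, k, dt = 0.16, 0.08, 0.035, 0.060, 1.0   # illustrative parameters
u = np.ones(n)
v = np.zeros(n)
u[n//2 - 10:n//2 + 10] = 0.50                       # small local perturbation
v[n//2 - 10:n//2 + 10] = 0.25

def laplacian(a):
    """Periodic second difference with unit grid spacing."""
    return np.roll(a, 1) + np.roll(a, -1) - 2.0 * a

for _ in range(steps):
    uvv = u * v * v
    u += dt * (Du * laplacian(u) - uvv + F * (1.0 - u))
    v += dt * (Dv * laplacian(v) + uvv - (F + k) * v)

print(v.min(), v.max())  # a spatially non-uniform profile indicates pattern formation
```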
Mathematical methods
A model of a biological system is converted into a system of equations, although the word 'model' is often used synonymously with the system of corresponding equations. The solution of the equations, by either analytical or numerical means, describes how the biological system behaves either over time or at equilibrium. There are many different types of equations and the type of behavior that can occur is dependent on both the model and the equations used. The model often makes assumptions about the system. The equations may also make assumptions about the nature of what may occur.
Molecular set theory
Molecular set theory (MST) is a mathematical formulation of the wide-sense chemical kinetics of biomolecular reactions in terms of sets of molecules and their chemical transformations represented by set-theoretical mappings between molecular sets. It was introduced by Anthony Bartholomay, and its applications were developed in mathematical biology and especially in mathematical medicine.
In a more general sense, MST is the theory of molecular categories, defined as categories of molecular sets and their chemical transformations represented as set-theoretical mappings of molecular sets. The theory has also contributed to biostatistics and to the mathematical formulation of clinical biochemistry problems concerning pathological, biochemical changes of interest to physiology, clinical biochemistry and medicine.
Organizational biology
Theoretical approaches to biological organization aim to understand the interdependence between the parts of organisms. They emphasize the circularities that these interdependences lead to. Theoretical biologists developed several concepts to formalize this idea.
For example, abstract relational biology (ARB) is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)-systems, introduced by Robert Rosen in 1957–1958 as abstract, relational models of cellular and organismal organization.
Model example: the cell cycle
The eukaryotic cell cycle is very complex and has been the subject of intense study, since its misregulation leads to cancers.
It is a good example of a mathematical model because it deals with relatively simple calculus yet gives valid results. Two research groups have produced several models of the cell cycle simulating several organisms. They have recently produced a generic eukaryotic cell cycle model that can represent a particular eukaryote depending on the values of the parameters, demonstrating that the idiosyncrasies of the individual cell cycles are due to different protein concentrations and affinities, while the underlying mechanisms are conserved (Csikasz-Nagy et al., 2006).
By means of a system of ordinary differential equations these models show the change in time (dynamical system) of the protein inside a single typical cell; this type of model is called a deterministic process (whereas a model describing a statistical distribution of protein concentrations in a population of cells is called a stochastic process).
To obtain these equations, an iterative series of steps must be performed. First, the several models and observations are combined to form a consensus diagram, and the appropriate kinetic laws are chosen to write the differential equations, such as rate kinetics for stoichiometric reactions, Michaelis–Menten kinetics for enzyme–substrate reactions and Goldbeter–Koshland kinetics for ultrasensitive transcription factors. Next, the parameters of the equations (rate constants, enzyme efficiency coefficients and Michaelis constants) are fitted to match observations; when they cannot be fitted, the kinetic equation is revised, and when that is not possible, the wiring diagram is modified. The parameters are fitted and validated using observations of both wild type and mutants, such as protein half-life and cell size.
To fit the parameters, the differential equations must be studied. This can be done either by simulation or by analysis. In a simulation, given a starting vector (list of the values of the variables), the progression of the system is calculated by solving the equations at each time-frame in small increments.
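A minimal sketch of the simulation step described above is given below: forward-Euler integration of a toy protein-activation equation with Michaelis–Menten-type production and first-order degradation. The specific equation, rate constants, and step size are assumptions for illustration, not the published cell-cycle models.

```python
# dP/dt = k_syn * S / (K_m + S) - k_deg * P, integrated with forward Euler
k_syn, K_m, k_deg = 1.0, 0.5, 0.1     # illustrative rate constants
S = 2.0                                # a fixed upstream signal (assumed)
P, dt, t_end = 0.0, 0.01, 100.0

t = 0.0
while t < t_end:
    dPdt = k_syn * S / (K_m + S) - k_deg * P
    P += dt * dPdt                     # advance by one small time increment, as described above
    t += dt

print(round(P, 3))   # approaches the steady state k_syn*S/((K_m+S)*k_deg) = 8.0
```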
In analysis, the properties of the equations are used to investigate the behavior of the system depending on the values of the parameters and variables. A system of differential equations can be represented as a vector field, where each vector describes the change (in the concentrations of two or more proteins), determining where and how fast the trajectory (simulation) is heading. Vector fields can have several special points: a stable point, called a sink, that attracts in all directions (forcing the concentrations to be at a certain value); an unstable point, either a source or a saddle point, which repels (forcing the concentrations to change away from a certain value); and a limit cycle, a closed trajectory towards which several trajectories spiral (making the concentrations oscillate).
A better representation, which handles the large number of variables and parameters, is a bifurcation diagram, using bifurcation theory. The presence of these special steady-state points at certain values of a parameter (e.g. mass) is represented by a point; once the parameter passes a certain value, a qualitative change occurs, called a bifurcation, in which the nature of the space changes, with profound consequences for the protein concentrations. The cell cycle has phases (partially corresponding to G1 and G2) in which mass, via a stable point, controls cyclin levels, and phases (S and M phases) in which the concentrations change independently. Once the phase has changed at a bifurcation event (a cell cycle checkpoint), the system cannot return to its previous levels, since at the current mass the vector field is profoundly different and the mass cannot be reversed back through the bifurcation event, making the checkpoint irreversible. In particular, the S and M checkpoints are regulated by means of special bifurcations called a Hopf bifurcation and an infinite period bifurcation.
See also
Biological applications of bifurcation theory
Biophysics
Biostatistics
Entropy and life
Ewens's sampling formula
Journal of Theoretical Biology
Logistic function
Mathematical modelling of infectious disease
Metabolic network modelling
Molecular modelling
Morphometrics
Population genetics
Spring school on theoretical biology
Statistical genetics
Theoretical ecology
Turing pattern
Notes
References
"Biologist Salary | Payscale". Payscale.Com, 2021, Biologist Salary | PayScale. Accessed 3 May 2021.
Theoretical biology
Further reading
External links
The Society for Mathematical Biology
The Collection of Biostatistics Research Archive | 0.781392 | 0.99322 | 0.776094 |
Research design | Research design refers to the overall strategy utilized to answer research questions. A research design typically outlines the theories and models underlying a project; the research question(s) of a project; a strategy for gathering data and information; and a strategy for producing answers from the data. A strong research design yields valid answers to research questions while weak designs yield unreliable, imprecise or irrelevant answers.
What is incorporated in the design of a research study will depend on the standpoint of the researcher, that is, their beliefs about the nature of knowledge (see epistemology) and reality (see ontology), which are often shaped by the disciplinary areas to which the researcher belongs.
The design of a study defines the study type (descriptive, correlational, semi-experimental, experimental, review, meta-analytic) and sub-type (e.g., descriptive-longitudinal case study), research problem, hypotheses, independent and dependent variables, experimental design, and, if applicable, data collection methods and a statistical analysis plan. A research design is a framework that has been created to find answers to research questions.
Design types and sub-types
There are many ways to classify research designs. Nonetheless, the list below offers a number of useful distinctions between possible research designs. A research design is an arrangement of conditions for the collection and analysis of data.
Descriptive (e.g., case-study, naturalistic observation, survey)
Correlational (e.g., case-control study, observational study)
Experimental (e.g., field experiment, controlled experiment, quasi-experiment)
Review (literature review, systematic review)
Meta-analytic (meta-analysis)
Sometimes a distinction is made between "fixed" and "flexible" designs. In some cases, these types coincide with quantitative and qualitative research designs respectively, though this need not be the case. In fixed designs, the design of the study is fixed before the main stage of data collection takes place. Fixed designs are normally theory-driven; otherwise, it is impossible to know in advance which variables need to be controlled and measured. Often, these variables are measured quantitatively. Flexible designs allow for more freedom during the data collection process. One reason for using a flexible research design can be that the variable of interest is not quantitatively measurable, such as culture. In other cases, the theory might not be available before one starts the research.
Grouping
The choice of how to group participants depends on the research hypothesis and on how the participants are sampled. In a typical experimental study, there will be at least one "experimental" condition (e.g., "treatment") and one "control" condition ("no treatment"), but the appropriate method of grouping may depend on factors such as the duration of measurement phase and participant characteristics:
Cohort study
Cross-sectional study
Cross-sequential study
Longitudinal study
Confirmatory versus exploratory research
Confirmatory research tests a priori hypotheses, that is, outcome predictions that are made before the measurement phase begins. Such a priori hypotheses are usually derived from a theory or the results of previous studies. The advantage of confirmatory research is that the result is more meaningful, in the sense that it is much harder for a coincidental result to be mistakenly reported as one that generalizes beyond the data set. The reason for this is that in confirmatory research, one ideally strives to reduce the probability of falsely reporting a coincidental result as meaningful. This probability is known as the α-level or the probability of a type I error.
Exploratory research, on the other hand, seeks to generate a posteriori hypotheses by examining a data-set and looking for potential relations between variables. It is also possible to have an idea about a relation between variables but to lack knowledge of the direction and strength of the relation. If the researcher does not have any specific hypotheses beforehand, the study is exploratory with respect to the variables in question (although it might be confirmatory for others). The advantage of exploratory research is that it is easier to make new discoveries due to the less stringent methodological restrictions. Here, the researcher does not want to miss a potentially interesting relation and therefore aims to minimize the probability of rejecting a real effect or relation; this probability is sometimes referred to as β, and the associated error is of type II. In other words, if the researcher simply wants to see whether some measured variables could be related, they would want to increase the chances of finding a significant result by lowering the threshold of what is deemed to be significant.
Sometimes, a researcher may conduct exploratory research but report it as if it had been confirmatory ('Hypothesizing After the Results are Known', HARKing—see Hypotheses suggested by the data); this is a questionable research practice bordering on fraud.
State problems versus process problems
A distinction can be made between state problems and process problems. State problems aim to answer what the state of a phenomenon is at a given time, while process problems deal with the change of phenomena over time. Examples of state problems are the level of mathematical skills of sixteen-year-old children, the computer skills of the elderly, the depression level of a person, etc. Examples of process problems are the development of mathematical skills from puberty to adulthood, the change in computer skills when people get older, and how depression symptoms change during therapy.
State problems are easier to measure than process problems. State problems just require one measurement of the phenomena of interest, while process problems always require multiple measurements. Research designs such as repeated measurements and longitudinal study are needed to address process problems.
Examples of fixed designs
Experimental research designs
In an experimental design, the researcher actively tries to change the situation, circumstances, or experience of participants (manipulation), which may lead to a change in behavior or outcomes for the participants of the study. The researcher randomly assigns participants to different conditions, measures the variables of interest, and tries to control for confounding variables. Therefore, experiments are often highly fixed even before the data collection starts.
In a good experimental design, a few things are of great importance. First of all, it is necessary to think of the best way to operationalize the variables that will be measured, as well as which statistical methods would be most appropriate to answer the research question. Thus, the researcher should consider what the expectations of the study are as well as how to analyze any potential results. Finally, in an experimental design, the researcher must think of the practical limitations, including the availability of participants as well as how representative the participants are of the target population. It is important to consider each of these factors before beginning the experiment. Additionally, many researchers employ power analysis before they conduct an experiment, in order to determine how large the sample must be to find an effect of a given size with a given design at the desired probability of making a Type I or Type II error. A further advantage of such planning is that it minimizes the resources spent on the experiment.
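A minimal sketch of the power analysis mentioned above is shown below, using the normal-approximation formula for the per-group sample size of a two-sided, two-sample comparison of means. The effect size, α, and power values are illustrative assumptions.

```python
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for detecting a standardized
    mean difference (Cohen's d) with a two-sided two-sample test."""
    z_alpha = norm.ppf(1.0 - alpha / 2.0)   # critical value tied to the Type I error rate
    z_beta = norm.ppf(power)                # quantile corresponding to 1 - Type II error rate
    return 2.0 * ((z_alpha + z_beta) / effect_size) ** 2

print(round(n_per_group(0.5)))   # roughly 63 participants per group for a medium effect
```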
Non-experimental research designs
Non-experimental research designs do not involve a manipulation of the situation, circumstances or experience of the participants. Non-experimental research designs can be broadly classified into three categories. First, in relational designs, a range of variables are measured. These designs are also called correlation studies because correlation data are most often used in the analysis. Since correlation does not imply causation, such studies simply identify co-movements of variables. Correlational designs are helpful in identifying the relation of one variable to another, and seeing the frequency of co-occurrence in two natural groups (see Correlation and dependence). The second type is comparative research. These designs compare two or more groups on one or more variables, such as the effect of gender on grades. The third type of non-experimental research is a longitudinal design. A longitudinal design examines variables such as performance exhibited by a group or groups over time (see Longitudinal study).
Examples of flexible research designs
Case study
Famous case studies include, for example, Freud's descriptions of his patients, who were thoroughly analysed and described.
Bell (1999) states "a case study approach is particularly appropriate for individual researchers because it gives an opportunity for one aspect of a problem to be studied in some depth within a limited time scale".
Grounded theory study
Grounded theory research is a systematic research process that works to develop "a process, an action or an interaction about a substantive topic".
See also
Bold hypothesis
Clinical study design
Design of experiments
Grey box completion and validation
Research proposal
Royal Commission on Animal Magnetism
References
design | 0.778846 | 0.996344 | 0.775999 |
Signal transduction | Signal transduction is the process by which a chemical or physical signal is transmitted through a cell as a series of molecular events. Proteins responsible for detecting stimuli are generally termed receptors, although in some cases the term sensor is used. The changes elicited by ligand binding (or signal sensing) in a receptor give rise to a biochemical cascade, which is a chain of biochemical events known as a signaling pathway.
When signaling pathways interact with one another they form networks, which allow cellular responses to be coordinated, often by combinatorial signaling events. At the molecular level, such responses include changes in the transcription or translation of genes, and post-translational and conformational changes in proteins, as well as changes in their location. These molecular events are the basic mechanisms controlling cell growth, proliferation, metabolism and many other processes. In multicellular organisms, signal transduction pathways regulate cell communication in a wide variety of ways.
Each component (or node) of a signaling pathway is classified according to the role it plays with respect to the initial stimulus. Ligands are termed first messengers, while receptors are the signal transducers, which then activate primary effectors. Such effectors are typically proteins and are often linked to second messengers, which can activate secondary effectors, and so on. Depending on the efficiency of the nodes, a signal can be amplified (a concept known as signal gain), so that one signaling molecule can generate a response involving hundreds to millions of molecules. As with other signals, the transduction of biological signals is characterised by delay, noise, signal feedback and feedforward and interference, which can range from negligible to pathological. With the advent of computational biology, the analysis of signaling pathways and networks has become an essential tool to understand cellular functions and disease, including signaling rewiring mechanisms underlying responses to acquired drug resistance.
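As a minimal numerical illustration of signal gain, the per-stage amplification factors below are assumed values: if each activated component of a cascade activates many copies of the next component, the overall gain is the product of the per-stage gains.

```python
from math import prod

# Hypothetical per-stage gains: receptor, effector, second messenger, kinase substrates
stage_gains = [1, 20, 100, 500]
print(prod(stage_gains))  # one ligand-bound receptor -> on the order of a million downstream events
```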
Stimuli
The basis for signal transduction is the transformation of a certain stimulus into a biochemical signal. The nature of such stimuli can vary widely, ranging from extracellular cues, such as the presence of EGF, to intracellular events, such as the DNA damage resulting from replicative telomere attrition. Traditionally, signals that reach the central nervous system are classified as senses. These are transmitted from neuron to neuron in a process called synaptic transmission. Many other intercellular signal relay mechanisms exist in multicellular organisms, such as those that govern embryonic development.
Ligands
The majority of signal transduction pathways involve the binding of signaling molecules, known as ligands, to receptors that trigger events inside the cell. The binding of a signaling molecule with a receptor causes a change in the conformation of the receptor, known as receptor activation. Most ligands are soluble molecules from the extracellular medium which bind to cell surface receptors. These include growth factors, cytokines and neurotransmitters. Components of the extracellular matrix such as fibronectin and hyaluronan can also bind to such receptors (integrins and CD44, respectively). In addition, some molecules such as steroid hormones are lipid-soluble and thus cross the plasma membrane to reach cytoplasmic or nuclear receptors. In the case of steroid hormone receptors, their stimulation leads to binding to the promoter region of steroid-responsive genes.
Not all classifications of signaling molecules take into account the molecular nature of each class member. For example, odorants belong to a wide range of molecular classes, as do neurotransmitters, which range in size from small molecules such as dopamine to neuropeptides such as endorphins. Moreover, some molecules may fit into more than one class, e.g. epinephrine is a neurotransmitter when secreted by the central nervous system and a hormone when secreted by the adrenal medulla.
Some receptors such as HER2 are capable of ligand-independent activation when overexpressed or mutated. This leads to constitutive activation of the pathway, which may or may not be overturned by compensation mechanisms. In the case of HER2, which acts as a dimerization partner of other EGFRs, constitutive activation leads to hyperproliferation and cancer.
Mechanical forces
The prevalence of basement membranes in the tissues of Eumetazoans means that most cell types require attachment to survive. This requirement has led to the development of complex mechanotransduction pathways, allowing cells to sense the stiffness of the substratum. Such signaling is mainly orchestrated in focal adhesions, regions where the integrin-bound actin cytoskeleton detects changes and transmits them downstream through YAP1. Calcium-dependent cell adhesion molecules such as cadherins and selectins can also mediate mechanotransduction. Specialised forms of mechanotransduction within the nervous system are responsible for mechanosensation: hearing, touch, proprioception and balance.
Osmolarity
Cellular and systemic control of osmotic pressure (the difference in osmolarity between the cytosol and the extracellular medium) is critical for homeostasis. There are three ways in which cells can detect osmotic stimuli: as changes in macromolecular crowding, in ionic strength, or in the properties of the plasma membrane or cytoskeleton (the latter being a form of mechanotransduction). These changes are detected by proteins known as osmosensors or osmoreceptors. In humans, the best characterised osmosensors are transient receptor potential channels present in the primary cilium of human cells. In yeast, the HOG pathway has been extensively characterised.
Temperature
The sensing of temperature in cells is known as thermoception and is primarily mediated by transient receptor potential channels. Additionally, animal cells contain a conserved mechanism to prevent high temperatures from causing cellular damage, the heat-shock response. Such response is triggered when high temperatures cause the dissociation of inactive HSF1 from complexes with heat shock proteins Hsp40/Hsp70 and Hsp90. With help from the ncRNA hsr1, HSF1 then trimerizes, becoming active and upregulating the expression of its target genes. Many other thermosensory mechanisms exist in both prokaryotes and eukaryotes.
Light
In mammals, light controls the sense of sight and the circadian clock by activating light-sensitive proteins in photoreceptor cells in the eye's retina. In the case of vision, light is detected by rhodopsin in rod and cone cells. In the case of the circadian clock, a different photopigment, melanopsin, is responsible for detecting light in intrinsically photosensitive retinal ganglion cells.
Receptors
Receptors can be roughly divided into two major classes: intracellular and extracellular receptors.
Extracellular receptors
Extracellular receptors are integral transmembrane proteins and make up most receptors. They span the plasma membrane of the cell, with one part of the receptor on the outside of the cell and the other on the inside. Signal transduction occurs as a result of a ligand binding to the outside region of the receptor (the ligand does not pass through the membrane). Ligand-receptor binding induces a change in the conformation of the inside part of the receptor, a process sometimes called "receptor activation". This results in either the activation of an enzyme domain of the receptor or the exposure of a binding site for other intracellular signaling proteins within the cell, eventually propagating the signal through the cytoplasm.
In eukaryotic cells, most intracellular proteins activated by a ligand/receptor interaction possess an enzymatic activity; examples include tyrosine kinases and phosphatases. Often such enzymes are covalently linked to the receptor. Some of them create second messengers such as cyclic AMP and IP3, the latter controlling the release of intracellular calcium stores into the cytoplasm. Other activated proteins interact with adaptor proteins that facilitate signaling protein interactions and coordination of signaling complexes necessary to respond to a particular stimulus. Enzymes and adaptor proteins are both responsive to various second messenger molecules.
Many adaptor proteins and enzymes activated as part of signal transduction possess specialized protein domains that bind to specific secondary messenger molecules. For example, calcium ions bind to the EF hand domains of calmodulin, allowing it to bind and activate calmodulin-dependent kinase. PIP3 and other phosphoinositides do the same thing to the Pleckstrin homology domains of proteins such as the kinase protein AKT.
G protein–coupled receptors
G protein–coupled receptors (GPCRs) are a family of integral transmembrane proteins that possess seven transmembrane domains and are linked to a heterotrimeric G protein. With nearly 800 members, this is the largest family of membrane proteins and receptors in mammals. Counting all animal species, they add up to over 5000. Mammalian GPCRs are classified into 5 major families: rhodopsin-like, secretin-like, metabotropic glutamate, adhesion and frizzled/smoothened, with a few GPCR groups being difficult to classify due to low sequence similarity, e.g. vomeronasal receptors. Other classes exist in eukaryotes, such as the Dictyostelium cyclic AMP receptors and fungal mating pheromone receptors.
Signal transduction by a GPCR begins with an inactive G protein coupled to the receptor; the G protein exists as a heterotrimer consisting of Gα, Gβ, and Gγ subunits. Once the GPCR recognizes a ligand, the conformation of the receptor changes to activate the G protein, causing Gα to bind a molecule of GTP and dissociate from the other two G-protein subunits. The dissociation exposes sites on the subunits that can interact with other molecules. The activated G protein subunits detach from the receptor and initiate signaling from many downstream effector proteins such as phospholipases and ion channels, the latter permitting the release of second messenger molecules. The total strength of signal amplification by a GPCR is determined by the lifetimes of the ligand-receptor complex and receptor-effector protein complex and the deactivation time of the activated receptor and effectors through intrinsic enzymatic activity, e.g. via protein kinase phosphorylation or β-arrestin-dependent internalization.
A study was conducted in which a point mutation was inserted into the gene encoding the chemokine receptor CXCR2; the mutated cells underwent a malignant transformation due to the expression of CXCR2 in an active conformation despite the absence of chemokine binding. This showed that chemokine receptors can contribute to cancer development.
Tyrosine, Ser/Thr and Histidine-specific protein kinases
Receptor tyrosine kinases (RTKs) are transmembrane proteins with an intracellular kinase domain and an extracellular domain that binds ligands; examples include growth factor receptors such as the insulin receptor. To perform signal transduction, RTKs need to form dimers in the plasma membrane; the dimer is stabilized by ligands binding to the receptor. The interaction between the cytoplasmic domains stimulates the autophosphorylation of tyrosine residues within the intracellular kinase domains of the RTKs, causing conformational changes. Subsequent to this, the receptors' kinase domains are activated, initiating phosphorylation signaling cascades of downstream cytoplasmic molecules that facilitate various cellular processes such as cell differentiation and metabolism. Many Ser/Thr and dual-specificity protein kinases are important for signal transduction, either acting downstream of receptor tyrosine kinases, or as membrane-embedded or cell-soluble versions in their own right. The process of signal transduction involves around 560 known protein kinases and pseudokinases, encoded by the human kinome.
As is the case with GPCRs, proteins that bind GTP play a major role in signal transduction from the activated RTK into the cell. In this case, the G proteins are members of the Ras, Rho, and Raf families, referred to collectively as small G proteins. They act as molecular switches usually tethered to membranes by isoprenyl groups linked to their carboxyl ends. Upon activation, they assign proteins to specific membrane subdomains where they participate in signaling. Activated RTKs in turn activate guanine nucleotide exchange factors such as SOS1, which activate small G proteins. Once activated, these exchange factors can activate more small G proteins, thus amplifying the receptor's initial signal. The mutation of certain RTK genes, as with that of GPCRs, can result in the expression of receptors that exist in a constitutively activated state; such mutated genes may act as oncogenes.
Histidine-specific protein kinases are structurally distinct from other protein kinases and are found in prokaryotes, fungi, and plants as part of a two-component signal transduction mechanism: a phosphate group from ATP is first added to a histidine residue within the kinase, then transferred to an aspartate residue on a receiver domain on a different protein or the kinase itself, thus activating the aspartate residue.
Integrins
Integrins are produced by a wide variety of cells; they play a role in cell attachment to other cells and the extracellular matrix and in the transduction of signals from extracellular matrix components such as fibronectin and collagen. Ligand binding to the extracellular domain of integrins changes the protein's conformation, clustering it at the cell membrane to initiate signal transduction. Integrins lack kinase activity; hence, integrin-mediated signal transduction is achieved through a variety of intracellular protein kinases and adaptor molecules, the main coordinator being integrin-linked kinase. Cooperative integrin-RTK signaling determines the timing of cellular survival, apoptosis, proliferation, and differentiation.
Important differences exist between integrin-signaling in circulating blood cells and non-circulating cells such as epithelial cells; integrins of circulating cells are normally inactive. For example, cell membrane integrins on circulating leukocytes are maintained in an inactive state to avoid epithelial cell attachment; they are activated only in response to stimuli such as those received at the site of an inflammatory response. In a similar manner, integrins at the cell membrane of circulating platelets are normally kept inactive to avoid thrombosis. Epithelial cells (which are non-circulating) normally have active integrins at their cell membrane, helping maintain their stable adhesion to underlying stromal cells that provide signals to maintain normal functioning.
In plants, no bona fide integrin receptors have been identified to date; nevertheless, several integrin-like proteins have been proposed based on structural homology with the metazoan receptors. Plants contain integrin-linked kinases that are very similar in their primary structure to the animal ILKs. In the experimental model plant Arabidopsis thaliana, one of the integrin-linked kinase genes, ILK1, has been shown to be a critical element in the plant immune response to signal molecules from bacterial pathogens and in plant sensitivity to salt and osmotic stress. The ILK1 protein interacts with the high-affinity potassium transporter HAK5 and with the calcium sensor CML9.
Toll-like receptors
When activated, toll-like receptors (TLRs) recruit adapter molecules within the cytoplasm of cells in order to propagate a signal. Four adapter molecules are known to be involved in signaling: Myd88, TIRAP, TRIF, and TRAM. These adapters activate other intracellular molecules such as IRAK1, IRAK4, TBK1, and IKKi that amplify the signal, eventually leading to the induction or suppression of genes that cause certain responses. Thousands of genes are activated by TLR signaling, implying that this method constitutes an important gateway for gene modulation.
Ligand-gated ion channels
A ligand-gated ion channel, upon binding with a ligand, changes conformation to open a channel in the cell membrane through which ions relaying signals can pass. An example of this mechanism is found in the receiving cell of a neural synapse. The influx of ions that occurs in response to the opening of these channels induces action potentials, such as those that travel along nerves, by depolarizing the membrane of post-synaptic cells, resulting in the opening of voltage-gated ion channels.
An example of an ion allowed into the cell during a ligand-gated ion channel opening is Ca2+; it acts as a second messenger initiating signal transduction cascades and altering the physiology of the responding cell. This results in amplification of the synapse response between synaptic cells by remodelling the dendritic spines involved in the synapse.
Intracellular receptors
Intracellular receptors, such as nuclear receptors and cytoplasmic receptors, are soluble proteins localized within their respective areas. The typical ligands for nuclear receptors are non-polar hormones like the steroid hormones testosterone and progesterone and derivatives of vitamins A and D. To initiate signal transduction, the ligand must pass through the plasma membrane by passive diffusion. On binding with the receptor, the ligands pass through the nuclear membrane into the nucleus, altering gene expression.
Activated nuclear receptors attach to the DNA at receptor-specific hormone-responsive element (HRE) sequences, located in the promoter region of the genes activated by the hormone-receptor complex. Because they enable gene transcription, they are alternatively called inducers of gene expression. All hormones that act by regulation of gene expression have two consequences in their mechanism of action: their effects are produced after a characteristically long period of time, and their effects persist for another long period of time, even after their concentration has been reduced to zero, due to the relatively slow turnover of most enzymes and proteins that would either deactivate or terminate ligand binding onto the receptor.
Nuclear receptors have DNA-binding domains containing zinc fingers and a ligand-binding domain; the zinc fingers stabilize DNA binding by holding its phosphate backbone. DNA sequences that match the receptor are usually hexameric repeats of any kind; the sequences are similar, but their orientation and spacing differentiate them. The ligand-binding domain is additionally responsible for dimerization of nuclear receptors prior to binding and for providing structures for transactivation used for communication with the transcriptional apparatus.
Steroid receptors are a subclass of nuclear receptors located primarily within the cytosol. In the absence of steroids, they associate in an aporeceptor complex containing chaperone or heat shock proteins (HSPs). The HSPs are necessary to activate the receptor by assisting the protein to fold in such a way that the signal sequence enabling its passage into the nucleus is accessible. Steroid receptors may, on the other hand, repress gene expression when their transactivation domain is hidden. Receptor activity can be enhanced by phosphorylation of serine residues at their N-terminal as a result of another signal transduction pathway, a process called crosstalk.
Retinoic acid receptors are another subset of nuclear receptors. They can be activated by an endocrine-synthesized ligand that enters the cell by diffusion, a ligand synthesised from a precursor like retinol brought to the cell through the bloodstream, or a completely intracellularly synthesised ligand like prostaglandin. These receptors are located in the nucleus and are not accompanied by HSPs. They repress their target genes by binding to their specific DNA sequence when no ligand binds to them, and vice versa.
Certain intracellular receptors of the immune system are cytoplasmic receptors; recently identified NOD-like receptors (NLRs) reside in the cytoplasm of some eukaryotic cells and interact with ligands using a leucine-rich repeat (LRR) motif similar to TLRs. Some of these molecules like NOD2 interact with RIP2 kinase that activates NF-κB signaling, whereas others like NALP3 interact with inflammatory caspases and initiate processing of particular cytokines like interleukin-1β.
Second messengers
First messengers are the signaling molecules (hormones, neurotransmitters, and paracrine/autocrine agents) that reach the cell from the extracellular fluid and bind to their specific receptors. Second messengers are the substances that enter the cytoplasm and act within the cell to trigger a response. In essence, second messengers serve as chemical relays from the plasma membrane to the cytoplasm, thus carrying out intracellular signal transduction.
Calcium
The release of calcium ions from the endoplasmic reticulum into the cytosol results in their binding to signaling proteins that are then activated; calcium is subsequently sequestered in the smooth endoplasmic reticulum and the mitochondria. Two combined receptor/ion channel proteins control the transport of calcium: the InsP3 receptor, which transports calcium upon interaction with inositol triphosphate on its cytosolic side, and the ryanodine receptor, named after the alkaloid ryanodine, which is similar to the InsP3 receptor but has a feedback mechanism that releases more calcium upon binding with it. Calcium is active in the cytosol for only a very short time: its free concentration is kept very low, and it is mostly bound to organelle molecules like calreticulin when inactive.
Calcium is used in many processes including muscle contraction, neurotransmitter release from nerve endings, and cell migration. The three main pathways that lead to its activation are GPCR pathways, RTK pathways, and gated ion channels; it regulates proteins either directly or by binding to an enzyme.
Lipid messengers
Lipophilic second messenger molecules are derived from lipids residing in cellular membranes; enzymes stimulated by activated receptors activate the lipids by modifying them. Examples include diacylglycerol and ceramide, the former required for the activation of protein kinase C.
Nitric oxide
Nitric oxide (NO) acts as a second messenger because it is a free radical that can diffuse through the plasma membrane and affect nearby cells. It is synthesised from arginine and oxygen by NO synthase and works through activation of soluble guanylyl cyclase, which when activated produces another second messenger, cGMP. NO can also act through covalent modification of proteins or their metal co-factors; some of these modifications have a redox mechanism and are reversible. It is toxic in high concentrations and causes damage during stroke, but it also mediates many other functions, such as the relaxation of blood vessels, apoptosis, and penile erection.
Redox signaling
In addition to nitric oxide, other electronically activated species are also signal-transducing agents in a process called redox signaling. Examples include superoxide, hydrogen peroxide, carbon monoxide, and hydrogen sulfide. Redox signaling also includes active modulation of electronic flows in semiconductive biological macromolecules.
Cellular responses
Gene activations and metabolism alterations are examples of cellular responses to extracellular stimulation that require signal transduction. Gene activation leads to further cellular effects, since the products of responding genes include instigators of activation; transcription factors produced as a result of a signal transduction cascade can activate even more genes. Hence, an initial stimulus can trigger the expression of a large number of genes, leading to physiological events like the increased uptake of glucose from the blood stream and the migration of neutrophils to sites of infection. The set of genes and their activation order to certain stimuli is referred to as a genetic program.
Mammalian cells require stimulation for cell division and survival; in the absence of growth factor, apoptosis ensues. Such requirements for extracellular stimulation are necessary for controlling cell behavior in unicellular and multicellular organisms; signal transduction pathways are perceived to be so central to biological processes that a large number of diseases are attributed to their dysregulation.
Three basic signals determine cellular growth:
Stimulatory (growth factors)
Transcription-dependent response. For example, steroids act directly as transcription factors; this gives a slow response, as the transcription factor must bind DNA and the target gene must be transcribed, the resulting mRNA must be translated, and the produced protein/peptide can additionally undergo post-translational modification (PTM).
Transcription-independent response. For example, epidermal growth factor (EGF) binds the epidermal growth factor receptor (EGFR), which causes dimerization and autophosphorylation of the EGFR, which in turn activates the intracellular signaling pathway.
Inhibitory (cell-cell contact)
Permissive (cell-matrix interactions)
The combination of these signals is integrated into altered cytoplasmic machinery which leads to altered cell behaviour.
Major pathways
Following are some major signaling pathways, demonstrating how ligands binding to their receptors can affect second messengers and eventually result in altered cellular responses; a minimal cascade model is sketched after the list.
MAPK/ERK pathway: A pathway that couples intracellular responses to the binding of growth factors to cell surface receptors. This pathway is very complex and includes many protein components. In many cell types, activation of this pathway promotes cell division, and many forms of cancer are associated with aberrations in it.
cAMP-dependent pathway: In humans, cAMP works by activating protein kinase A (PKA, the cAMP-dependent protein kinase), and thus further effects, which vary based on the type of cell, depend mainly on cAMP-dependent protein kinase.
IP3/DAG pathway: PLC cleaves the phospholipid phosphatidylinositol 4,5-bisphosphate (PIP2), yielding diacylglycerol (DAG) and inositol 1,4,5-trisphosphate (IP3). DAG remains bound to the membrane, and IP3 is released as a soluble structure into the cytosol. IP3 then diffuses through the cytosol to bind to IP3 receptors, particular calcium channels in the endoplasmic reticulum (ER). These channels are specific to calcium and allow only calcium to pass through. This causes the cytosolic concentration of calcium to increase, triggering a cascade of intracellular changes and activity. In addition, calcium and DAG together work to activate PKC, which goes on to phosphorylate other molecules, leading to altered cellular activity. End-effects include taste, manic depression, tumor promotion, etc.
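The following is a minimal, generic sketch of a three-tier activation cascade of the kind outlined above (a stimulus activates tier 1, which activates tier 2, which activates tier 3). It is not a model of any specific pathway, and all rate constants, totals, and the stimulus strength are assumed for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

S = 0.1                      # strength of the upstream stimulus (assumed)
k_act, k_deact = 5.0, 1.0    # activation / deactivation rate constants (assumed)
totals = [1.0, 1.0, 1.0]     # total amount of each component (active + inactive)

def cascade(t, x):
    x1, x2, x3 = x           # active fractions of the three tiers
    inputs = [S, x1, x2]     # each tier is activated by the one above it
    return [k_act * u * (tot - xi) - k_deact * xi
            for u, tot, xi in zip(inputs, totals, x)]

sol = solve_ivp(cascade, (0, 20), [0.0, 0.0, 0.0], t_eval=[20])
print(np.round(sol.y[:, -1], 3))   # with these parameters, a weak stimulus yields
                                   # progressively higher activation down the cascade
```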
History
The earliest notion of signal transduction can be traced back to 1855, when Claude Bernard proposed that ductless glands such as the spleen, the thyroid and adrenal glands, were responsible for the release of "internal secretions" with physiological effects. Bernard's "secretions" were later named "hormones" by Ernest Starling in 1905. Together with William Bayliss, Starling had discovered secretin in 1902. Although many other hormones, most notably insulin, were discovered in the following years, the mechanisms remained largely unknown.
The discovery of nerve growth factor by Rita Levi-Montalcini in 1954, and epidermal growth factor by Stanley Cohen in 1962, led to more detailed insights into the molecular basis of cell signaling, in particular growth factors. Their work, together with Earl Wilbur Sutherland's discovery of cyclic AMP in 1956, prompted the redefinition of endocrine signaling to include only signaling from glands, while the terms autocrine and paracrine began to be used. Sutherland was awarded the 1971 Nobel Prize in Physiology or Medicine, while Levi-Montalcini and Cohen shared it in 1986.
In 1970, Martin Rodbell examined the effects of glucagon on a rat's liver cell membrane receptor. He noted that guanosine triphosphate dissociated glucagon from this receptor and stimulated the G-protein, which strongly influenced the cell's metabolism. Thus, he deduced that the G-protein is a transducer that accepts glucagon molecules and affects the cell. For this, he shared the 1994 Nobel Prize in Physiology or Medicine with Alfred G. Gilman. Thus, the characterization of RTKs and GPCRs led to the formulation of the concept of "signal transduction", a term first used in 1972. Some early articles used the terms signal transmission and sensory transduction. In 2007, a total of 48,377 scientific papers (including 11,211 review papers) were published on the subject. The term first appeared in a paper's title in 1979. Widespread use of the term has been traced to a 1980 review article by Rodbell. Research papers focusing on signal transduction first appeared in large numbers in the late 1980s and early 1990s.
Signal transduction in immunology
The purpose of this section is to briefly describe some developments in immunology in the 1960s and 1970s, relevant to the initial stages of transmembrane signal transduction, and how they impacted our understanding of immunology, and ultimately of other areas of cell biology.
The relevant events begin with the sequencing of myeloma protein light chains, which are found in abundance in the urine of individuals with multiple myeloma. Biochemical experiments revealed that these so-called Bence Jones proteins consisted of two discrete domains: one that varied from one molecule to the next (the V domain) and one that did not (the Fc domain or fragment crystallizable region). An analysis of multiple V region sequences by Wu and Kabat identified locations within the V region that were hypervariable and which, they hypothesized, combined in the folded protein to form the antigen recognition site. Thus, within a relatively short time a plausible model was developed for the molecular basis of immunological specificity, and for mediation of biological function through the Fc domain. Crystallization of an IgG molecule soon followed, confirming the inferences based on sequencing and providing an understanding of immunological specificity at the highest level of resolution.
The biological significance of these developments was encapsulated in the theory of clonal selection which holds that a B cell has on its surface immunoglobulin receptors whose antigen-binding site is identical to that of antibodies that are secreted by the cell when it encounters an antigen, and more specifically a particular B cell clone secretes antibodies with identical sequences. The final piece of the story, the Fluid mosaic model of the plasma membrane provided all the ingredients for a new model for the initiation of signal transduction; viz, receptor dimerization.
The first hints of this were obtained by Becker et al., who demonstrated that the extent to which human basophils, for which bivalent immunoglobulin E (IgE) functions as a surface receptor, degranulate depends on the concentration of anti-IgE antibodies to which they are exposed, and results in a redistribution of surface molecules, which is absent when monovalent ligand is used. The latter observation was consistent with earlier findings by Fanger et al. These observations tied a biological response to events and structural details of molecules on the cell surface. A preponderance of evidence soon developed that receptor dimerization initiates responses in a variety of cell types, including B cells.
Such observations led to a number of theoretical (mathematical) developments. The first of these was a simple model proposed by Bell which resolved an apparent paradox: clustering forms stable networks; i.e. binding is essentially irreversible, whereas the affinities of antibodies secreted by B cells increase as the immune response progresses. A theory of the dynamics of cell surface clustering on lymphocyte membranes was developed by DeLisi and Perelson who found the size distribution of clusters as a function of time, and its dependence on the affinity and valence of the ligand. Subsequent theories for basophils and mast cells were developed by Goldstein and Sobotka and their collaborators, all aimed at the analysis of dose-response patterns of immune cells and their biological correlates. For a recent review of clustering in immunological systems see.
Ligand binding to cell surface receptors is also critical to motility, a phenomenon that is best understood in single-celled organisms. An example is the detection of, and response to, concentration gradients by bacteria, for which a classic mathematical theory was developed and later extended in more recent accounts.
See also
Adaptor protein
Scaffold protein
Biosemiotics
Cell signaling
Gene regulatory network
Hormonal imprinting
Metabolic pathway
Protein–protein interaction
Two-component regulatory system
References
External links
Netpath - A curated resource of signal transduction pathways in humans
Signal Transduction - The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
TRANSPATH(R) - A database about signal transduction pathways
Science's STKE - Signal Transduction Knowledge Environment (https://www.science.org/journal/signaling), from the journal Science, published by AAAS.
UCSD-Nature Signaling Gateway , from Nature Publishing Group
LitInspector - Signal transduction pathway mining in PubMed abstracts
Huaxian Chen, et al. A Cell Based Immunocytochemical Assay For Monitoring Kinase Signaling Pathways And Drug Efficacy (PDF) Analytical Biochemistry 338 (2005) 136-142
www.Redoxsignaling.com
Signaling PAthway Database - Kyushu University
Cell cycle - Homo sapiens (human) - KEGG PATHWAY
Pathway Interaction Database - NCI
Literature-curated human signaling network, the largest human signaling network database
Cell biology
Cell signaling
Neurochemistry
Non-equilibrium thermodynamics
Non-equilibrium thermodynamics is a branch of thermodynamics that deals with physical systems that are not in thermodynamic equilibrium but can be described in terms of macroscopic quantities (non-equilibrium state variables) that represent an extrapolation of the variables used to specify the system in thermodynamic equilibrium. Non-equilibrium thermodynamics is concerned with transport processes and with the rates of chemical reactions.
Almost all systems found in nature are not in thermodynamic equilibrium, for they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems and to chemical reactions. Many systems and processes can, however, be considered to be in equilibrium locally, thus allowing description by currently known equilibrium thermodynamics. Nevertheless, some natural systems and processes remain beyond the scope of equilibrium thermodynamic methods due to the existence of non-variational dynamics, where the concept of free energy is lost.
The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. One fundamental difference between equilibrium thermodynamics and non-equilibrium thermodynamics lies in the behaviour of inhomogeneous systems, which require for their study knowledge of rates of reaction which are not considered in equilibrium thermodynamics of homogeneous systems. This is discussed below. Another fundamental and very important difference is the difficulty in defining entropy at an instant of time in macroscopic terms for systems not in thermodynamic equilibrium. However, it can be done locally, and the macroscopic entropy will then be given by the integral of the locally defined entropy density. It has been found that many systems far outside global equilibrium still obey the concept of local equilibrium.
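As a sketch of the last statement, assuming the locally defined entropy density is written s(r, t) and the system occupies a volume V (the notation is chosen here for illustration, not taken from the text):

```latex
S(t) \;=\; \int_{V} s(\mathbf{r}, t)\, \mathrm{d}V
```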
Scope
Difference between equilibrium and non-equilibrium thermodynamics
A profound difference separates equilibrium from non-equilibrium thermodynamics. Equilibrium thermodynamics ignores the time-courses of physical processes. In contrast, non-equilibrium thermodynamics attempts to describe their time-courses in continuous detail.
Equilibrium thermodynamics restricts its considerations to processes that have initial and final states of thermodynamic equilibrium; the time-courses of processes are deliberately ignored. Non-equilibrium thermodynamics, on the other hand, attempting to describe continuous time-courses, needs its state variables to have a very close connection with those of equilibrium thermodynamics. This conceptual issue is overcome under the assumption of local equilibrium, which entails that the relationships that hold between macroscopic state variables at equilibrium hold locally, also outside equilibrium. Throughout the past decades, the assumption of local equilibrium has been tested, and found to hold, under increasingly extreme conditions, such as in the shock front of violent explosions, on reacting surfaces, and under extreme thermal gradients.
Thus, non-equilibrium thermodynamics provides a consistent framework for modelling not only the initial and final states of a system, but also the evolution of the system in time. Together with the concept of entropy production, this provides a powerful tool in process optimisation, and provides a theoretical foundation for exergy analysis.
Non-equilibrium state variables
The suitable relationship that defines non-equilibrium thermodynamic state variables is as follows. When the system is in local equilibrium, non-equilibrium state variables are such that they can be measured locally with sufficient accuracy by the same techniques as are used to measure thermodynamic state variables, or by corresponding time and space derivatives, including fluxes of matter and energy. In general, non-equilibrium thermodynamic systems are spatially and temporally non-uniform, but their non-uniformity still has a sufficient degree of smoothness to support the existence of suitable time and space derivatives of non-equilibrium state variables.
Because of the spatial non-uniformity, non-equilibrium state variables that correspond to extensive thermodynamic state variables have to be defined as spatial densities of the corresponding extensive equilibrium state variables. When the system is in local equilibrium, intensive non-equilibrium state variables, for example temperature and pressure, correspond closely with equilibrium state variables. It is necessary that measuring probes be small enough, and rapidly enough responding, to capture relevant non-uniformity. Further, the non-equilibrium state variables are required to be mathematically functionally related to one another in ways that suitably resemble corresponding relations between equilibrium thermodynamic state variables. In reality, these requirements, although strict, have been shown to be fulfilled even under extreme conditions, such as during phase transitions, at reacting interfaces, and in plasma droplets surrounded by ambient air. There are, however, situations where there are appreciable non-linear effects even at the local scale.
Overview
Some concepts of particular importance for non-equilibrium thermodynamics include time rate of dissipation of energy (Rayleigh 1873, Onsager 1931, also), time rate of entropy production (Onsager 1931), thermodynamic fields, dissipative structure, and non-linear dynamical structure.
One problem of interest is the thermodynamic study of non-equilibrium steady states, in which entropy production and some flows are non-zero, but there is no time variation of physical variables.
One initial approach to non-equilibrium thermodynamics is sometimes called 'classical irreversible thermodynamics'. There are other approaches to non-equilibrium thermodynamics, for example extended irreversible thermodynamics, and generalized thermodynamics, but they are hardly touched on in the present article.
Quasi-radiationless non-equilibrium thermodynamics of matter in laboratory conditions
According to Wildt (see also Essex), current versions of non-equilibrium thermodynamics ignore radiant heat; they can do so because they refer to laboratory quantities of matter under laboratory conditions with temperatures well below those of stars. At laboratory temperatures, in laboratory quantities of matter, thermal radiation is weak and can be practically nearly ignored. But, for example, atmospheric physics is concerned with large amounts of matter, occupying cubic kilometers, that, taken as a whole, are not within the range of laboratory quantities; then thermal radiation cannot be ignored.
Local equilibrium thermodynamics
The terms 'classical irreversible thermodynamics' and 'local equilibrium thermodynamics' are sometimes used to refer to a version of non-equilibrium thermodynamics that demands certain simplifying assumptions, as follows. The assumptions have the effect of making each very small volume element of the system effectively homogeneous, or well-mixed, or without an effective spatial structure. Even within the thought-frame of classical irreversible thermodynamics, care is needed in choosing the independent variables for systems. In some writings, it is assumed that the intensive variables of equilibrium thermodynamics are sufficient as the independent variables for the task (such variables are considered to have no 'memory', and do not show hysteresis); in particular, local flow intensive variables are not admitted as independent variables; local flows are considered as dependent on quasi-static local intensive variables.
Also it is assumed that the local entropy density is the same function of the other local intensive variables as in equilibrium; this is called the local thermodynamic equilibrium assumption (see also Keizer (1987)). Radiation is ignored because it is transfer of energy between regions, which can be remote from one another. In the classical irreversible thermodynamic approach, there is allowed spatial variation from infinitesimal volume element to adjacent infinitesimal volume element, but it is assumed that the global entropy of the system can be found by simple spatial integration of the local entropy density. This approach assumes spatial and temporal continuity and even differentiability of locally defined intensive variables such as temperature and internal energy density. While these demands may appear severely constrictive, it has been found that the assumptions of local equilibrium hold for a wide variety of systems, including reacting interfaces, on the surfaces of catalysts, in confined systems such as zeolites, under temperature gradients as large as K m, and even in shock fronts moving at up to six times the speed of sound.
In other writings, local flow variables are considered; these might be considered as classical by analogy with the time-invariant long-term time-averages of flows produced by endlessly repeated cyclic processes; examples with flows are in the thermoelectric phenomena known as the Seebeck and the Peltier effects, considered by Kelvin in the nineteenth century and by Lars Onsager in the twentieth. These effects occur at metal junctions, which were originally effectively treated as two-dimensional surfaces, with no spatial volume, and no spatial variation.
Local equilibrium thermodynamics with materials with "memory"
A further extension of local equilibrium thermodynamics is to allow that materials may have "memory", so that their constitutive equations depend not only on present values but also on past values of local equilibrium variables. Thus time comes into the picture more deeply than for time-dependent local equilibrium thermodynamics with memoryless materials, but fluxes are not independent variables of state.
Extended irreversible thermodynamics
Extended irreversible thermodynamics is a branch of non-equilibrium thermodynamics that goes outside the restriction to the local equilibrium hypothesis. The space of state variables is enlarged by including the fluxes of mass, momentum and energy and eventually higher order fluxes.
The formalism is well-suited for describing high-frequency processes and small-length scales materials.
Basic concepts
There are many examples of stationary non-equilibrium systems, some very simple, like a system confined between two thermostats at different temperatures or the ordinary Couette flow, a fluid enclosed between two flat walls moving in opposite directions and defining non-equilibrium conditions at the walls. Laser action is also a non-equilibrium process, but it depends on departure from local thermodynamic equilibrium and is thus beyond the scope of classical irreversible thermodynamics; here a strong temperature difference is maintained between two molecular degrees of freedom (in a molecular laser, vibrational and rotational molecular motion), and the requirement for two component 'temperatures' in one small region of space precludes local thermodynamic equilibrium, which demands that only one temperature be needed. Damping of acoustic perturbations or shock waves are non-stationary non-equilibrium processes. Driven complex fluids, turbulent systems and glasses are other examples of non-equilibrium systems.
The mechanics of macroscopic systems depends on a number of extensive quantities. It should be stressed that all systems are permanently interacting with their surroundings, thereby causing unavoidable fluctuations of extensive quantities. Equilibrium conditions of thermodynamic systems are related to the maximum property of the entropy. If the only extensive quantity that is allowed to fluctuate is the internal energy, all the other ones being kept strictly constant, the temperature of the system is measurable and meaningful. The system's properties are then most conveniently described using the thermodynamic potential Helmholtz free energy (A = U - TS), a Legendre transformation of the energy. If, next to fluctuations of the energy, the macroscopic dimensions (volume) of the system are left fluctuating, we use the Gibbs free energy (G = U + PV - TS), where the system's properties are determined both by the temperature and by the pressure.
Non-equilibrium systems are much more complex and they may undergo fluctuations of more extensive quantities. The boundary conditions impose on them particular intensive variables, like temperature gradients or distorted collective motions (shear motions, vortices, etc.), often called thermodynamic forces. If free energies are very useful in equilibrium thermodynamics, it must be stressed that there is no general law defining stationary non-equilibrium properties of the energy as is the second law of thermodynamics for the entropy in equilibrium thermodynamics. That is why in such cases a more generalized Legendre transformation should be considered. This is the extended Massieu potential.
By definition, the entropy S is a function of the collection of extensive quantities E_i. Each extensive quantity E_i has a conjugate intensive variable I_i (a restricted definition of intensive variable is used here by comparison to the usual equilibrium definition) so that:

I_i = ∂S/∂E_i

We then define the extended Massieu function M as follows:

k_B M = S − Σ_i (I_i E_i),

where k_B is the Boltzmann constant, whence

k_B dM = − Σ_i (E_i dI_i)

The independent variables are the intensities.
Intensities are global values, valid for the system as a whole. When boundaries impose on the system different local conditions (e.g. temperature differences), there are intensive variables representing the average value and others representing gradients or higher moments. The latter are the thermodynamic forces driving fluxes of extensive properties through the system.
It may be shown that the Legendre transformation changes the maximum condition of the entropy (valid at equilibrium) into a minimum condition of the extended Massieu function for stationary states, no matter whether at equilibrium or not.
Stationary states, fluctuations, and stability
In thermodynamics one is often interested in a stationary state of a process, allowing that the stationary state include the occurrence of unpredictable and experimentally unreproducible fluctuations in the state of the system. The fluctuations are due to the system's internal sub-processes and to exchange of matter or energy with the system's surroundings that create the constraints that define the process.
If the stationary state of the process is stable, then the unreproducible fluctuations involve local transient decreases of entropy. The reproducible response of the system is then to increase the entropy back to its maximum by irreversible processes: the fluctuation cannot be reproduced with a significant level of probability. Fluctuations about stable stationary states are extremely small except near critical points (Kondepudi and Prigogine 1998, page 323). The stable stationary state has a local maximum of entropy and is locally the most reproducible state of the system. There are theorems about the irreversible dissipation of fluctuations. Here 'local' means local with respect to the abstract space of thermodynamic coordinates of state of the system.
If the stationary state is unstable, then any fluctuation will almost surely trigger the virtually explosive departure of the system from the unstable stationary state. This can be accompanied by increased export of entropy.
Local thermodynamic equilibrium
The scope of present-day non-equilibrium thermodynamics does not cover all physical processes. A condition for the validity of many studies in non-equilibrium thermodynamics of matter is that they deal with what is known as local thermodynamic equilibrium.
Ponderable matter
Local thermodynamic equilibrium of matter (see also Keizer (1987)) means that conceptually, for study and analysis, the system can be spatially and temporally divided into 'cells' or 'micro-phases' of small (infinitesimal) size, in which classical thermodynamical equilibrium conditions for matter are fulfilled to good approximation. These conditions are unfulfilled, for example, in very rarefied gases, in which molecular collisions are infrequent; and in the boundary layers of a star, where radiation is passing energy to space; and for interacting fermions at very low temperature, where dissipative processes become ineffective. When these 'cells' are defined, one admits that matter and energy may pass freely between contiguous 'cells', slowly enough to leave the 'cells' in their respective individual local thermodynamic equilibria with respect to intensive variables.
One can think here of two 'relaxation times' separated by order of magnitude. The longer relaxation time is of the order of magnitude of times taken for the macroscopic dynamical structure of the system to change. The shorter is of the order of magnitude of times taken for a single 'cell' to reach local thermodynamic equilibrium. If these two relaxation times are not well separated, then the classical non-equilibrium thermodynamical concept of local thermodynamic equilibrium loses its meaning and other approaches have to be proposed, see for instance Extended irreversible thermodynamics. For example, in the atmosphere, the speed of sound is much greater than the wind speed; this favours the idea of local thermodynamic equilibrium of matter for atmospheric heat transfer studies at altitudes below about 60 km where sound propagates, but not above 100 km, where, because of the paucity of intermolecular collisions, sound does not propagate.
Milne's definition in terms of radiative equilibrium
Edward A. Milne, thinking about stars, gave a definition of 'local thermodynamic equilibrium' in terms of the thermal radiation of the matter in each small local 'cell'. He defined 'local thermodynamic equilibrium' in a 'cell' by requiring that it macroscopically absorb and spontaneously emit radiation as if it were in radiative equilibrium in a cavity at the temperature of the matter of the 'cell'. Then it strictly obeys Kirchhoff's law of equality of radiative emissivity and absorptivity, with a black body source function. The key to local thermodynamic equilibrium here is that the rate of collisions of ponderable matter particles such as molecules should far exceed the rates of creation and annihilation of photons.
Entropy in evolving systems
It is pointed out by W.T. Grandy Jr that entropy, though it may be defined for a non-equilibrium system, is—when strictly considered—only a macroscopic quantity that refers to the whole system, and is not a dynamical variable and in general does not act as a local potential that describes local physical forces. Under special circumstances, however, one can metaphorically think as if the thermal variables behaved like local physical forces. The approximation that constitutes classical irreversible thermodynamics is built on this metaphoric thinking.
This point of view shares many points in common with the concept and the use of entropy in continuum thermomechanics, which evolved completely independently of statistical mechanics and maximum-entropy principles.
Entropy in non-equilibrium
To describe deviation of the thermodynamic system from equilibrium, in addition to constitutive variables that are used to fix the equilibrium state, as was described above, a set of variables that are called internal variables have been introduced. The equilibrium state is considered to be stable, and the main property of the internal variables ξ_i, as measures of non-equilibrium of the system, is their tendency to disappear; the local law of disappearance can be written as a relaxation equation for each internal variable

dξ_i/dt = − (ξ_i − ξ_i^(0)) / τ_i,

where τ_i is the relaxation time of the corresponding variable. It is convenient to consider the initial values ξ_i^(0) equal to zero. The above equation is valid for small deviations from equilibrium; the dynamics of internal variables in the general case is considered by Pokrovskii.
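A minimal worked consequence of the relaxation equation above, assuming as stated that the reference values ξ_i^(0) are zero and that each relaxation time τ_i is constant, is simple exponential decay of the internal variables:

```latex
\frac{\mathrm{d}\xi_i}{\mathrm{d}t} = -\frac{\xi_i}{\tau_i}
\qquad\Longrightarrow\qquad
\xi_i(t) = \xi_i(0)\, e^{-t/\tau_i}
```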
The entropy of the system in non-equilibrium is a function of the total set of variables, that is, of the constitutive variables together with the internal variables ξ_i.
The essential contribution to the thermodynamics of non-equilibrium systems was brought by Prigogine, when he and his collaborators investigated systems of chemically reacting substances. The stationary states of such systems exist due to exchange of both particles and energy with the environment. In section 8 of the third chapter of his book, Prigogine specified three contributions to the variation of entropy of the considered system at a given volume and constant temperature. The increment of entropy can be calculated as the sum of these three contributions (see the sketch after the next paragraph).
The first term on the right-hand side of the equation represents a stream of thermal energy into the system; the last term represents the part of the energy stream coming into the system with the stream of particles of substances, which can be positive or negative, with μ_α denoting the chemical potential of substance α. The middle term in (1) depicts energy dissipation (entropy production) due to the relaxation of the internal variables ξ_i. In the case of chemically reacting substances, which was investigated by Prigogine, the internal variables appear to be measures of incompleteness of chemical reactions, that is, measures of how far the considered system with chemical reactions is from equilibrium. The theory can be generalised to consider any deviation from the equilibrium state as an internal variable, so that the set of internal variables in equation (1) consists of the quantities defining not only degrees of completeness of all chemical reactions occurring in the system, but also the structure of the system, gradients of temperature, differences of concentrations of substances, and so on.
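A minimal sketch of the three-term decomposition described above, in assumed notation (ΔQ the thermal energy supplied to the system, ξ_j the internal variables with conjugate affinities Ξ_j, μ_α and ΔN_α the chemical potential and particle-number increment of substance α); the symbols are illustrative rather than taken verbatim from the source:

```latex
% entropy increment at constant temperature T and volume, cf. equation (1) in the text
T\,\Delta S \;=\; \Delta Q \;-\; \sum_{j} \Xi_{j}\,\Delta\xi_{j} \;+\; \sum_{\alpha} \mu_{\alpha}\,\Delta N_{\alpha}
```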
Flows and forces
The fundamental relation of classical equilibrium thermodynamics

dS = (1/T) dU + (p/T) dV − Σ_i (μ_i/T) dN_i

expresses the change in entropy dS of a system as a function of the intensive quantities temperature T, pressure p and i-th chemical potential μ_i, and of the differentials of the extensive quantities energy U, volume V and i-th particle number N_i.

Following Onsager (1931, I), let us extend our considerations to thermodynamically non-equilibrium systems. As a basis, we need locally defined versions of the extensive macroscopic quantities U, V and N_i and of the intensive macroscopic quantities T, p and μ_i.
For classical non-equilibrium studies, we will consider some new locally defined intensive macroscopic variables. We can, under suitable conditions, derive these new variables by locally defining the gradients and flux densities of the basic locally defined macroscopic quantities.
Such locally defined gradients of intensive macroscopic variables are called 'thermodynamic forces'. They 'drive' flux densities, perhaps misleadingly often called 'fluxes', which are dual to the forces. These quantities are defined in the article on Onsager reciprocal relations.
Establishing the relation between such forces and flux densities is a problem in statistical mechanics. Flux densities may be coupled. The article on Onsager reciprocal relations considers the stable near-steady thermodynamically non-equilibrium regime, which has dynamics linear in the forces and flux densities.
In stationary conditions, such forces and associated flux densities are by definition time invariant, as also are the system's locally defined entropy and rate of entropy production. Notably, according to Ilya Prigogine and others, when an open system is in conditions that allow it to reach a stable stationary thermodynamically non-equilibrium state, it organizes itself so as to minimize total entropy production defined locally. This is considered further below.
One wants to take the analysis to the further stage of describing the behaviour of surface and volume integrals of non-stationary local quantities; these integrals are macroscopic fluxes and production rates. In general the dynamics of these integrals are not adequately described by linear equations, though in special cases they can be so described.
Onsager reciprocal relations
Following Section III of Rayleigh (1873), Onsager (1931, I) showed that in the regime where both the flows J_i are small and the thermodynamic forces F_i vary slowly, the rate of creation of entropy σ is linearly related to the flows:

σ = Σ_i J_i ∂F_i/∂x_i

and the flows are related to the gradient of the forces, parametrized by a matrix of coefficients conventionally denoted L:

J_i = Σ_j L_{ij} ∂F_j/∂x_j

from which it follows that:

σ = Σ_{i,j} L_{ij} (∂F_i/∂x_i)(∂F_j/∂x_j)
The second law of thermodynamics requires that the matrix be positive definite. Statistical mechanics considerations involving microscopic reversibility of dynamics imply that the matrix is symmetric. This fact is called the Onsager reciprocal relations.
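As a numerical illustration of the linear flux–force structure and of the symmetry and positivity properties just described, here is a small sketch in Python; the matrix entries and force values are arbitrary, chosen only for demonstration, and are not taken from any physical system discussed in the text:

```python
import numpy as np

# Illustrative phenomenological coefficients L (symmetric, positive definite)
# and a vector of small thermodynamic forces X; the numbers are arbitrary.
L = np.array([[2.0, 0.5],
              [0.5, 1.0]])                  # Onsager reciprocity: L[i, j] == L[j, i]
X = np.array([0.3, -0.7])

J = L @ X                                   # linear flux-force relations: J_i = sum_j L_ij X_j
sigma = J @ X                               # rate of entropy creation: sigma = sum_i J_i X_i

assert np.allclose(L, L.T)                  # symmetry (reciprocal relations)
assert np.all(np.linalg.eigvalsh(L) > 0)    # positive definiteness (second law)
assert sigma >= 0                           # hence sigma is non-negative for any choice of X
print("fluxes:", J, "entropy production rate:", sigma)
```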
The generalization of the above equations for the rate of creation of entropy was given by Pokrovskii.
Speculated extremal principles for non-equilibrium processes
Until recently, prospects for useful extremal principles in this area have seemed clouded. Nicolis (1999) concludes that one model of atmospheric dynamics has an attractor which is not a regime of maximum or minimum dissipation; she says this seems to rule out the existence of a global organizing principle, and comments that this is to some extent disappointing; she also points to the difficulty of finding a thermodynamically consistent form of entropy production. Another top expert offers an extensive discussion of the possibilities for principles of extrema of entropy production and of dissipation of energy: Chapter 12 of Grandy (2008) is very cautious, and finds difficulty in defining the 'rate of internal entropy production' in many cases, and finds that sometimes for the prediction of the course of a process, an extremum of the quantity called the rate of dissipation of energy may be more useful than that of the rate of entropy production; this quantity appeared in Onsager's 1931 origination of this subject. Other writers have also felt that prospects for general global extremal principles are clouded. Such writers include Glansdorff and Prigogine (1971), Lebon, Jou and Casas-Vásquez (2008), and Šilhavý (1997).
There is good experimental evidence that heat convection does not obey extremal principles for time rate of entropy production. Theoretical analysis shows that chemical reactions do not obey extremal principles for the second differential of time rate of entropy production. The development of a general extremal principle seems infeasible in the current state of knowledge.
Applications
Non-equilibrium thermodynamics has been successfully applied to describe biological processes such as protein folding/unfolding and transport through membranes.
It is also used to give a description of the dynamics of nanoparticles, which can be out of equilibrium in systems where catalysis and electrochemical conversion are involved.
Also, ideas from non-equilibrium thermodynamics and the informatic theory of entropy have been adapted to describe general economic systems.
See also
Time crystal
Dissipative system
Entropy production
Extremal principles in non-equilibrium thermodynamics
Self-organization
Autocatalytic reactions and order creation
Self-organizing criticality
Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy of equations
Boltzmann equation
Vlasov equation
Maxwell's demon
Information entropy
Spontaneous symmetry breaking
Autopoiesis
Maximum power principle
References
Sources
Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, .
Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, .
Glansdorff, P., Prigogine, I. (1971). Thermodynamic Theory of Structure, Stability, and Fluctuations, Wiley-Interscience, London, 1971, .
Grandy, W.T. Jr (2008). Entropy and the Time Evolution of Macroscopic Systems. Oxford University Press. .
Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated from the Hungarian (1967) by E. Gyarmati and W.F. Heinz, Springer, Berlin.
Lieb, E.H., Yngvason, J. (1999). 'The physics and mathematics of the second law of thermodynamics', Physics Reports, 310: 1–96.
Further reading
Ziegler, Hans (1977): An introduction to Thermomechanics. North Holland, Amsterdam. . Second edition (1983) .
Kleidon, A., Lorenz, R.D., editors (2005). Non-equilibrium Thermodynamics and the Production of Entropy, Springer, Berlin. .
Prigogine, I. (1955/1961/1967). Introduction to Thermodynamics of Irreversible Processes. 3rd edition, Wiley Interscience, New York.
Zubarev D. N. (1974): Nonequilibrium Statistical Thermodynamics. New York, Consultants Bureau. ; .
Keizer, J. (1987). Statistical Thermodynamics of Nonequilibrium Processes, Springer-Verlag, New York, .
Zubarev D. N., Morozov V., Ropke G. (1996): Statistical Mechanics of Nonequilibrium Processes: Basic Concepts, Kinetic Theory. John Wiley & Sons. .
Zubarev D. N., Morozov V., Ropke G. (1997): Statistical Mechanics of Nonequilibrium Processes: Relaxation and Hydrodynamic Processes. John Wiley & Sons. .
Tuck, Adrian F. (2008). Atmospheric turbulence : a molecular dynamics perspective. Oxford University Press. .
Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures. John Wiley & Sons, Chichester. .
de Groot S.R., Mazur P. (1984). Non-Equilibrium Thermodynamics (Dover).
Ramiro Augusto Salazar La Rotta. (2011). The Non-Equilibrium Thermodynamics, Perpetual
External links
Stephan Herminghaus' Dynamics of Complex Fluids Department at the Max Planck Institute for Dynamics and Self Organization
Non-equilibrium Statistical Thermodynamics applied to Fluid Dynamics and Laser Physics - 1992- book by Xavier de Hemptinne.
Nonequilibrium Thermodynamics of Small Systems - PhysicsToday.org
Into the Cool - 2005 book by Dorion Sagan and Eric D. Schneider, on nonequilibrium thermodynamics and evolutionary theory.
"Thermodynamics "beyond" local equilibrium"
Branches of thermodynamics
Standard enthalpy of reaction
The standard enthalpy of reaction (denoted ΔH°_rxn) for a chemical reaction is the difference between total product and total reactant molar enthalpies, calculated for substances in their standard states. The value can be approximately interpreted in terms of the total of the chemical bond energies for bonds broken and bonds formed.
For a generic chemical reaction

ν_A A + ν_B B + … → ν_X X + ν_Y Y + …

the standard enthalpy of reaction is related to the standard enthalpy of formation values of the reactants and products by the following equation:

ΔH°_rxn = Σ_products ν ΔH°_f − Σ_reactants ν ΔH°_f

In this equation, ν are the stoichiometric coefficients of each product and reactant. The standard enthalpy of formation, which has been determined for a vast number of substances, is the change of enthalpy during the formation of 1 mole of the substance from its constituent elements, with all substances in their standard states.
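As a worked example of this formula, consider the combustion of methane, CH4 (g) + 2 O2 (g) → CO2 (g) + 2 H2O (l). Using commonly tabulated standard enthalpies of formation (approximately −74.8 kJ/mol for CH4(g), −393.5 kJ/mol for CO2(g), −285.8 kJ/mol for H2O(l), and zero for elemental O2(g); the values quoted are approximate):

```latex
\Delta H^{\circ}_{\mathrm{rxn}}
  = \big[(-393.5) + 2(-285.8)\big] - \big[(-74.8) + 2(0)\big]
  \approx -890\ \mathrm{kJ/mol}
```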
Standard states can be defined at any temperature and pressure, so both the standard temperature and pressure must always be specified. Most values of standard thermochemical data are tabulated at either (25°C, 1 bar) or (25°C, 1 atm).
For ions in aqueous solution, the standard state is often chosen such that the aqueous H+ ion at a concentration of exactly 1 mole/liter has a standard enthalpy of formation equal to zero, which makes possible the tabulation of standard enthalpies for cations and anions at the same standard concentration. This convention is consistent with the use of the standard hydrogen electrode in the field of electrochemistry. However, there are other common choices in certain fields, including a standard concentration for H+ of exactly 1 mole/(kg solvent) (widely used in chemical engineering) and 10^−7 mole/L (used in the field of biochemistry). For this reason it is important to note which standard concentration value is being used when consulting tables of enthalpies of formation.
Introduction
Two initial thermodynamic systems, each isolated in their separate states of internal thermodynamic equilibrium, can, by a thermodynamic operation, be coalesced into a single new final isolated thermodynamic system. If the initial systems differ in chemical constitution, then the eventual thermodynamic equilibrium of the final system can be the result of chemical reaction. Alternatively, an isolated thermodynamic system, in the absence of some catalyst, can be in a metastable equilibrium; introduction of a catalyst, or some other thermodynamic operation, such as release of a spark, can trigger a chemical reaction. The chemical reaction will, in general, transform some chemical potential energy into thermal energy. If the joint system is kept isolated, then its internal energy remains unchanged. Such thermal energy manifests itself, however, in changes in the non-chemical state variables (such as temperature, pressure, volume) of the joint systems, as well as the changes in the mole numbers of the chemical constituents that describe the chemical reaction.
Internal energy is defined with respect to some standard state. Subject to suitable thermodynamic operations, the chemical constituents of the final system can be brought to their respective standard states, along with transfer of energy as heat or through thermodynamic work, which can be measured or calculated from measurements of non-chemical state variables. Accordingly, the calculation of standard enthalpy of reaction is the most established way of quantifying the conversion of chemical potential energy into thermal energy.
Enthalpy of reaction for standard conditions defined and measured
The standard enthalpy of a reaction is defined so as to depend upon the standard conditions that are specified for it, not on the conditions under which the reaction actually occurs. There are two general conditions under which thermochemical measurements are actually made.
(a) Constant volume and temperature: heat q_V = ΔU, where U (sometimes written as E) is the internal energy of the system
(b) Constant pressure and temperature: heat q_P = ΔH, where H is the enthalpy of the system
The magnitudes of the heat effects in these two conditions are different. In the first case the volume of the system is kept constant during the course of the measurement by carrying out the reaction in a closed and rigid container, and as there is no change in the volume no work is involved. From the first law of thermodynamics, ΔU = q − W, where W is the work done by the system. When only expansion work is possible for a process we have, at constant volume, W = 0 and hence q_V = ΔU; this implies that the heat of reaction at constant volume is equal to the change in the internal energy of the reacting system.
The thermal change that occurs in a chemical reaction is only due to the difference between the sum of internal energy of the products and the sum of the internal energy of reactants. We have

q_V = ΔU = U_products − U_reactants
This also signifies that the amount of heat absorbed at constant volume could be identified with the change in the thermodynamic quantity internal energy.
At constant pressure on the other hand, the system is either kept open to the atmosphere or confined within a container on which a constant external pressure is exerted and under these conditions the volume of the system changes.
The thermal change at a constant pressure not only involves the change in the internal energy of the system but also the work performed either in expansion or contraction of the system. In general the first law requires that

q = ΔU + W (work)

If W is only pressure–volume work, then at constant pressure

q_P = ΔU + P ΔV

Assuming that the change in state variables is due solely to a chemical reaction, we have

q_P = (U_products − U_reactants) + P (V_products − V_reactants)

As enthalpy or heat content is defined by H = U + PV, we have

q_P = H_products − H_reactants = ΔH
By convention, the enthalpy of each element in its standard state is assigned a value of zero. If pure preparations of compounds or ions are not possible, then special further conventions are defined. Regardless, if each reactant and product can be prepared in its respective standard state, then the contribution of each species is equal to its molar enthalpy of formation multiplied by its stoichiometric coefficient in the reaction, and the enthalpy of reaction at constant (standard) pressure and constant temperature (usually 298 K) may be written as

ΔH°_rxn = Σ_products ν ΔH°_f − Σ_reactants ν ΔH°_f
As shown above, at constant pressure the heat of the reaction is exactly equal to the enthalpy change, ΔH, of the reacting system.
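A standard relation, implicit in the comparison above, connects the constant-pressure and constant-volume heats of reaction when the gaseous species behave ideally: the PΔV term reduces to Δn_g RT, where Δn_g is the change in the number of moles of gas. For example, for N2 (g) + 3 H2 (g) → 2 NH3 (g), Δn_g = −2, so at 298 K:

```latex
\Delta H = \Delta U + \Delta n_{g} R T
         = \Delta U - 2 \times (8.314\ \mathrm{J\,mol^{-1}\,K^{-1}}) \times (298\ \mathrm{K})
         \approx \Delta U - 5.0\ \mathrm{kJ/mol}
```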
Variation with temperature or pressure
The variation of the enthalpy of reaction with temperature is given by Kirchhoff's Law of Thermochemistry, which states that the temperature derivative of ΔH for a chemical reaction is given by the difference in heat capacity (at constant pressure) between products and reactants:
(∂(ΔH)/∂T)_p = ΔC_p.
Integration of this equation permits the evaluation of the heat of reaction at one temperature from measurements at another temperature.
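A brief sketch of that integration, under the simplifying assumption that ΔC_p is roughly constant over the temperature interval:

```latex
\Delta H^{\circ}(T_2) \;=\; \Delta H^{\circ}(T_1) + \int_{T_1}^{T_2} \Delta C_p\, \mathrm{d}T
\;\approx\; \Delta H^{\circ}(T_1) + \Delta C_p\,(T_2 - T_1)
```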
Pressure variation effects and corrections due to mixing are generally minimal unless a reaction involves non-ideal gases and/or solutes, or is carried out at extremely high pressures. The enthalpy of mixing for a solution of ideal gases is exactly zero; the same is true for a reaction where the reactants and products are pure, unmixed components. Contributions to reaction enthalpies due to concentration variations for solutes in solution generally must be experimentally determined on a case by case basis, but would be exactly zero for ideal solutions since no change in the solution's average intermolecular forces as a function of concentration is possible in an ideal solution.
Subcategories
In each case the word standard implies that all reactants and products are in their standard states.
Standard enthalpy of combustion is the enthalpy change when one mole of an organic compound reacts with molecular oxygen (O2) to form carbon dioxide and liquid water. For example, the standard enthalpy of combustion of ethane gas refers to the reaction C2H6 (g) + (7/2) O2 (g) → 2 CO2 (g) + 3 H2O (l).
Standard enthalpy of formation is the enthalpy change when one mole of any compound is formed from its constituent elements in their standard states. The enthalpy of formation of one mole of ethane gas refers to the reaction 2 C (graphite) + 3 H2 (g) → C2H6 (g).
Standard enthalpy of hydrogenation is defined as the enthalpy change observed when one mole of an unsaturated compound reacts with an excess of hydrogen to become fully saturated. The hydrogenation of one mole of acetylene yields ethane as a product and is described by the equation C2H2 (g) + 2 H2 (g) → C2H6 (g).
Standard enthalpy of neutralization is the change in enthalpy that occurs when an acid and base undergo a neutralization reaction to form one mole of water. For example in aqueous solution, the standard enthalpy of neutralization of hydrochloric acid and the base magnesium hydroxide refers to the reaction HCl (aq) + 1/2 Mg(OH)2 → 1/2 MgCl2 (aq) + H2O(l).
Evaluation of reaction enthalpies
There are several methods of determining the values of reaction enthalpies, involving either measurements on the reaction of interest or calculations from data for related reactions.
For reactions which go rapidly to completion, it is often possible to measure the heat of reaction directly using a calorimeter. One large class of reactions for which such measurements are common is the combustion of organic compounds by reaction with molecular oxygen (O2) to form carbon dioxide and water (H2O). The heat of combustion can be measured with a so-called bomb calorimeter, in which the heat released by combustion at high temperature is lost to the surroundings as the system returns to its initial temperature. Since enthalpy is a state function, its value is the same for any path between given initial and final states, so that the measured ΔH is the same as if the temperature stayed constant during the combustion.
For reactions which are incomplete, the equilibrium constant can be determined as a function of temperature. The enthalpy of reaction is then found from the van 't Hoff equation as ΔH°_rxn = RT² (d ln K / dT). A closely related technique is the use of an electroanalytical voltaic cell, which can be used to measure the Gibbs energy for certain reactions as a function of temperature, yielding ΔG°_rxn(T) and thereby ΔH°_rxn.
It is also possible to evaluate the enthalpy of one reaction from the enthalpies of a number of other reactions whose sum is the reaction of interest, and these need not be formation reactions. This method is based on Hess's law, which states that the enthalpy change is the same for a chemical reaction which occurs as a single reaction or in several steps. If the enthalpies for each step can be measured, then their sum gives the enthalpy of the overall single reaction.
Finally the reaction enthalpy may be estimated using bond energies for the bonds which are broken and formed in the reaction of interest. This method is only approximate, however, because a reported bond energy is only an average value for different molecules with bonds between the same elements.
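A brief illustration of the bond-energy estimate, using typical average bond enthalpies from standard tables (roughly 436 kJ/mol for H–H, 243 kJ/mol for Cl–Cl, and 431 kJ/mol for H–Cl; the values are approximate), for the reaction H2 + Cl2 → 2 HCl:

```latex
\Delta H \;\approx\; \underbrace{(436 + 243)}_{\text{bonds broken}} \;-\; \underbrace{2\,(431)}_{\text{bonds formed}}
\;=\; -183\ \mathrm{kJ/mol}
```

The calorimetric value is close to −185 kJ/mol, which illustrates both the usefulness and the approximate character of the method.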
References
Enthalpy
Thermochemistry
Thermodynamics
Gaussian (software)
Gaussian is a general purpose computational chemistry software package initially released in 1970 by John Pople and his research group at Carnegie Mellon University as Gaussian 70. It has been continuously updated since then. The name originates from Pople's use of Gaussian orbitals to speed up molecular electronic structure calculations as opposed to using Slater-type orbitals, a choice made to improve performance on the limited computing capacities of then-current computer hardware for Hartree–Fock calculations. The current version of the program is Gaussian 16. Originally available through the Quantum Chemistry Program Exchange, it was later licensed out of Carnegie Mellon University, and since 1987 has been developed and licensed by Gaussian, Inc.
Standard abilities
According to the most recent Gaussian manual, the package can do the following (a sketch of a minimal input file is shown after this list):
Molecular mechanics
AMBER
Universal force field (UFF)
DREIDING force field
Semi-empirical quantum chemistry method calculations
Austin Model 1 (AM1), PM3, CNDO, INDO, MINDO/3, MNDO
Self-consistent field (SCF methods)
Hartree–Fock method: restricted, unrestricted, and restricted open-shell
Møller–Plesset perturbation theory (MP2, MP3, MP4, MP5).
Built-in density functional theory (DFT) methods
B3LYP and other hybrid functionals
Exchange functionals: PBE, MPW, PW91, Slater, X-alpha, Gill96, TPSS.
Correlation functionals: PBE, TPSS, VWN, PW91, LYP, PL, P86, B95
ONIOM (QM/MM method) up to three layers
Complete active space (CAS) and multi-configurational self-consistent field calculations
Coupled cluster calculations
Quadratic configuration interaction (QCI) methods
Quantum chemistry composite methods – CBS-QB3, CBS-4, CBS-Q, CBS-Q/APNO, G1, G2, G3, W1 high-accuracy methods
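To make the list above concrete, here is a minimal sketch of a Gaussian input deck for a B3LYP/6-31G(d) geometry optimization of water, written out from Python. The route-section keywords (B3LYP, 6-31G(d), Opt) are standard Gaussian keywords named above; the file names, the starting geometry, and the helper script itself are illustrative assumptions, not taken from the Gaussian manual.

```python
# Write a minimal Gaussian input file (water, B3LYP/6-31G(d) geometry optimization).
# The Cartesian coordinates below are an approximate starting structure in angstroms.
input_deck = """\
%chk=water.chk
# B3LYP/6-31G(d) Opt

Water geometry optimization (illustrative example)

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200

"""

with open("water.gjf", "w") as f:
    f.write(input_deck)  # the job is then run on this file, e.g. with "g16 water.gjf"
```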
Official release history
Gaussian 70, Gaussian 76, Gaussian 80, Gaussian 82, Gaussian 86, Gaussian 88, Gaussian 90, Gaussian 92, Gaussian 92/DFT, Gaussian 94, and Gaussian 98, Gaussian 03, Gaussian 09, Gaussian 16.
Other programs named 'Gaussian XX' were placed among the holdings of the Quantum Chemistry Program Exchange. These were unofficial, unverified ports of the program to other computer platforms.
License controversy
In the past, Gaussian, Inc. has attracted controversy for its licensing terms that stipulate that researchers who develop competing software packages are not permitted to use the software. Some scientists consider these terms overly restrictive. The anonymous group bannedbygaussian.org has published a list of scientists whom it claims are not permitted to use GAUSSIAN software. These assertions were repeated by Jim Giles in 2004 in Nature. The controversy was also noted in 1999 by Chemical and Engineering News (repeated without additional content in 2004), and in 2000, the World Association of Theoretically Oriented Chemists Scientific Board held a referendum of its executive board members on this issue with a majority (23 of 28) approving the resolution opposing the restrictive licenses.
Gaussian, Inc. disputes the accuracy of these descriptions of its policy and actions, noting that all of the listed institutions do in fact have licenses for everyone but directly competing researchers. They also claim that not licensing competitors is standard practice in the software industry and members of the Gaussian collaboration community have been refused licenses from competing institutions.
See also
List of quantum chemistry and solid-state physics software
References
External links
Computational chemistry software
Catenation
In chemistry, catenation is the bonding of atoms of the same element into a series, called a chain. A chain or a ring shape may be open if its ends are not bonded to each other (an open-chain compound), or closed if they are bonded in a ring (a cyclic compound). The words to catenate and catenation reflect the Latin root catena, "chain".
Carbon
Catenation occurs most readily with carbon, which forms covalent bonds with other carbon atoms to form longer chains and structures. This is the reason for the presence of the vast number of organic compounds in nature. Carbon is most well known for its properties of catenation, with organic chemistry essentially being the study of catenated carbon structures (and known as catenae). Carbon chains in biochemistry combine any of various other elements, such as hydrogen, oxygen, and biometals, onto the backbone of carbon.
However, carbon is by no means the only element capable of forming such catenae, and several other main-group elements are capable of forming an expansive range of catenae, including hydrogen, boron, silicon, phosphorus, sulfur and halogens.
The ability of an element to catenate is primarily based on the bond energy of the element to itself, which decreases with more diffuse orbitals (those with higher azimuthal quantum number) overlapping to form the bond. Hence, carbon, with the least diffuse valence shell p orbital is capable of forming longer p-p sigma bonded chains of atoms than heavier elements which bond via higher valence shell orbitals. Catenation ability is also influenced by a range of steric and electronic factors, including the electronegativity of the element in question, the molecular orbital n and the ability to form different kinds of covalent bonds. For carbon, the sigma overlap between adjacent atoms is sufficiently strong that perfectly stable chains can be formed. With other elements this was once thought to be extremely difficult in spite of plenty of evidence to the contrary.
Hydrogen
Theories of the structure of water involve three-dimensional networks of tetrahedra and chains and rings, linked via hydrogen bonding.
A polycatenated network, with rings formed from metal-templated hemispheres linked by hydrogen bonds, was reported in 2008.
In organic chemistry, hydrogen bonding is known to facilitate the formation of chain structures. 4-tricyclanol C10H16O, for example, shows catenated hydrogen bonding between the hydroxyl groups, leading to the formation of helical chains; crystalline isophthalic acid C8H6O4 is built up from molecules connected by hydrogen bonds, forming infinite chains.
In unusual conditions, a 1-dimensional series of hydrogen molecules confined within a single wall carbon nanotube is expected to become metallic at a relatively low pressure of 163.5 GPa. This is about 40% of the ~400 GPa thought to be required to metallize ordinary hydrogen, a pressure which is difficult to access experimentally.
Silicon
Silicon can form sigma bonds to other silicon atoms (and disilane is the parent of this class of compounds). However, it is difficult to prepare and isolate SinH2n+2 (analogous to the saturated alkane hydrocarbons) with n greater than about 8, as their thermal stability decreases with increases in the number of silicon atoms. Silanes higher in molecular weight than disilane decompose to polymeric polysilicon hydride and hydrogen. But with a suitable pair of organic substituents in place of hydrogen on each silicon it is possible to prepare polysilanes (sometimes erroneously called polysilenes) that are analogues of alkanes. These long chain compounds have surprising electronic properties - high electrical conductivity, for example - arising from sigma delocalization of the electrons in the chain.
Even silicon–silicon pi bonds are possible. However, these bonds are less stable than the carbon analogues. Disilane and longer silanes are quite reactive compared to alkanes. Disilene and disilynes are quite rare, unlike alkenes and alkynes. Examples of disilynes, long thought to be too unstable to be isolated were reported in 2004.
Boron
In dodecaborate(12) anion, twelve boron atoms covalently link to each other to form an icosahedral structure. Various other similar motifs are also well studied, such as boranes, carboranes and metal dicarbollides.
Nitrogen
Nitrogen, unlike its neighbor carbon, is much less likely to form chains that are stable at room temperature; examples of such chains include solid nitrogen, triazane, the azide anion and triazoles. Even longer series with eight nitrogen atoms or more, such as 1,1'-azobis-1,2,3-triazole, have been synthesized. These compounds have potential use as a convenient way to store large amounts of energy.
Phosphorus
Phosphorus chains (with organic substituents) have been prepared, although these tend to be quite fragile. Small rings or clusters are more common.
Sulfur
The versatile chemistry of elemental sulfur is largely due to catenation. In the native state, sulfur exists as S8 molecules. On heating these rings open and link together giving rise to increasingly long chains, as evidenced by the progressive increase in viscosity as the chains lengthen. Also, sulfur polycations, sulfur polyanions (polysulfides) and lower sulfur oxides are all known. Furthermore, selenium and tellurium show variants of these structural motifs.
Semimetallic elements
In recent years a variety of double and triple bonds between the semi-metallic elements have been reported, including silicon, germanium, arsenic, bismuth and so on. The ability of certain main group elements to catenate is currently the subject of research into inorganic polymers.
Halogen elements
Except for fluorine, which can only form unstable polyfluorides at low temperature, all other stable halogens (Cl, Br, I) can form several isopolyhalogen anions that are stable at room temperature, the most prominent example being triiodide. In all these anions, the halogen atoms of the same element bond to each other.
See also
Backbone chain
Chain-growth polymerization
Macromolecule
Aromaticity
Polyhalogen ions
Polysulfides
Superatom
Inorganic polymer
Self-assembly
References
Bibliography
Organic chemistry
Inorganic chemistry
Enculturation
Enculturation is the process by which people learn the dynamics of their surrounding culture and acquire values and norms appropriate or necessary to that culture and its worldviews.
Definition and history of research
The term enculturation was used first by sociologist of science Harry Collins to describe one of the models whereby scientific knowledge is communicated among scientists, and is contrasted with the 'algorithmic' mode of communication.
The ingredients discussed by Collins for enculturation are
Learning by Immersion: whereby aspiring scientists learn by engaging in the daily activities of the laboratory, interacting with other scientists, and participating in experiments and discussions.
Tacit Knowledge: highlighting the importance of tacit knowledge—knowledge that is not easily codified or written down but is acquired through experience and practice.
Socialization: where individuals learn the social norms, values, and behaviours expected within the scientific community.
Language and Discourse: Scientists must become fluent in the terminology, theoretical frameworks, and modes of argumentation specific to their discipline.
Community Membership: recognition of the individual as a legitimate member of the scientific community.
The problem tackled in the article of Harry Collins was the early experiments for the detection of gravitational waves.
Enculturation is mostly studied in sociology and anthropology. The influences that limit, direct, or shape the individual (whether deliberately or not) include parents, other adults, and peers. If successful, enculturation results in competence in the language, values, and rituals of the culture. Growing up, everyone goes through their own version of enculturation. Enculturation helps form an individual into an acceptable citizen. Culture influences everything an individual does, whether or not they are aware of it. Enculturation is a deep-rooted process that binds individuals together. Even as a culture undergoes changes, elements such as central beliefs, values, perspectives, and child-rearing practices remain similar. Enculturation paves the way for tolerance, which is needed for peaceful coexistence.
The process of enculturation, most commonly discussed in the field of anthropology, is closely related to socialization, a concept central to the field of sociology. Both roughly describe the adaptation of an individual into social groups by absorbing the ideas, beliefs and practices surrounding them. In some disciplines, socialization refers to the deliberate shaping of the individual. As such, the term may cover both deliberate and informal enculturation.
The process of learning and absorbing culture need not be social, direct or conscious. Cultural transmission can occur in various forms, though the most common social methods include observing other individuals, being taught or being instructed. Less obvious mechanisms include learning one's culture from the media, the information environment and various social technologies, which can lead to cultural transmission and adaptation across societies. A good example of this is the diffusion of hip-hop culture into states and communities beyond its American origins.
Enculturation has often been studied in the context of non-immigrant African Americans.
Conrad Phillip Kottak (in Window on Humanity) writes:
Enculturation is referred to as acculturation in some academic literature. However, more recent literature has signalled a difference in meaning between the two. Whereas enculturation describes the process of learning one's own culture, acculturation denotes learning a different culture, for example, that of a host. The latter can be linked to the idea of culture shock, which describes an emotionally jarring disconnect between one's old and new culture cues.
Famously, the sociologist Talcott Parsons once described children as "barbarians" of a sort, since they are fundamentally uncultured.
How enculturation occurs
When members of minority groups come to the U.S., they may identify fully with their own cultural heritage before taking part in the process of enculturation. Enculturation can happen in several ways. Direct teaching means that family, instructors, or other members of society explicitly teach certain beliefs, values, or expected standards of behavior. Parents may play a vital role in teaching their children standard behavior for their culture, including table manners and some aspects of polite social interaction. Strict familial and societal teaching, which often uses different forms of positive and negative reinforcement to shape behavior, can lead a person to adhere closely to their religious convictions and customs. Schools also provide a formal setting to learn national values, such as honoring a country's flag, national anthem, and other significant patriotic symbols.
Participatory learning occurs as individuals take an active role in interacting with their environment and culture. Through their own engagement in meaningful activities, they learn the socio-cultural norms of their community and may adopt related qualities and values. For example, if a school organizes an outing to collect trash at a public park, the activity helps instill the values of respect for nature and environmental protection. Religious traditions also frequently emphasize participatory learning: for example, children who take part in the singing of hymns during Christmas absorb the values and practices of the occasion.
Observational learning is when knowledge is gained primarily by watching and emulating others. As long as an individual identifies with a model, believes that emulating the model will lead to good outcomes, and feels capable of imitating the behavior, learning can occur without any explicit instruction. For example, a child fortunate enough to be born to parents in a caring relationship will learn how to be affectionate and attentive in their future relationships.
See also
Civil society
Dual inheritance theory
Education
Educational anthropology
Ethnocentrism
Indoctrination
Intercultural competence
Mores
Norm (philosophy)
Norm (sociology)
Peer pressure
Transculturation
References
Bibliography
Further reading
External links
Enculturation and Acculturation
Community empowerment
Concepts of moral character, historical and contemporary (Stanford Encyclopedia of Philosophy)
Cultural concepts
Cultural studies
Interculturalism
Sublimation (phase transition)
Sublimation is the transition of a substance directly from the solid to the gas state, without passing through the liquid state. The verb form of sublimation is sublime, or less preferably, sublimate. Sublimate also refers to the product obtained by sublimation. The point at which sublimation occurs rapidly (for further details, see below) is called the critical sublimation point, or simply the sublimation point. Notable examples include sublimation of dry ice at room temperature and atmospheric pressure, and that of solid iodine with heating.
The reverse process of sublimation is deposition (also called desublimation), in which a substance passes directly from a gas to a solid phase, without passing through the liquid state.
All solids sublime, though most sublime at extremely low rates that are hardly detectable. At normal pressures, most chemical compounds and elements possess three different states at different temperatures. In these cases, the transition from the solid to the gas state requires an intermediate liquid state. The pressure referred to is the partial pressure of the substance, not the total (e.g. atmospheric) pressure of the entire system. Thus, any solid can sublime if its vapour pressure is higher than the surrounding partial pressure of the same substance, and in some cases, sublimes at an appreciable rate (e.g. water ice just below 0 °C).
For some substances, such as carbon and arsenic, sublimation from the solid state is much more readily achieved than evaporation from the liquid state, and it is difficult to obtain them as liquids. This is because the pressure of their triple point in their phase diagrams (which corresponds to the lowest pressure at which the substance can exist as a liquid) is very high.
Sublimation is caused by the absorption of heat which provides enough energy for some molecules to overcome the attractive forces of their neighbors and escape into the vapor phase. Since the process requires additional energy, sublimation is an endothermic change. The enthalpy of sublimation (also called heat of sublimation) can be calculated by adding the enthalpy of fusion and the enthalpy of vaporization.
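As a rough worked example (approximate literature values, quoted only for illustration): for water near 0 °C, the enthalpy of fusion is about 6.0 kJ/mol and the enthalpy of vaporization about 45 kJ/mol, so
ΔH_sub ≈ ΔH_fus + ΔH_vap ≈ 6.0 kJ/mol + 45 kJ/mol ≈ 51 kJ/mol,
which is close to the measured enthalpy of sublimation of ice.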
Confusions
While the definition of sublimation is simple, there is often confusion as to what counts as a sublimation.
False correspondence with vaporization
Vaporization (from liquid to gas) is divided into two types: vaporization on the surface of the liquid is called evaporation, and vaporization at the boiling point with formation of bubbles in the interior of the liquid is called boiling. However there is no such distinction for the solid-to-gas transition, which is always called sublimation in both corresponding cases.
Potential distinction
For clarification, a distinction between the two corresponding cases is needed. With reference to a phase diagram, the sublimation that occurs left of the solid-gas boundary, the triple point or the solid-liquid boundary (corresponding to evaporation in vaporization) may be called gradual sublimation; and the substance sublimes gradually, regardless of rate. The sublimation that occurs at the solid-gas boundary (critical sublimation point) (corresponding to boiling in vaporization) may be called rapid sublimation, and the substance sublimes rapidly. The words "gradual" and "rapid" have acquired special meanings in this context and no longer describe the rate of sublimation.
Misuse for chemical reaction
The term sublimation refers specifically to a physical change of state and is not used to describe the transformation of a solid to a gas in a chemical reaction. For example, the dissociation on heating of solid ammonium chloride into hydrogen chloride and ammonia is not sublimation but a chemical reaction. Similarly the combustion of candles, containing paraffin wax, to carbon dioxide and water vapor is not sublimation but a chemical reaction with oxygen.
Historical definition
Sublimation is historically used as a generic term to describe a two-step phase transition ― a solid-to-gas transition (sublimation in a more precise definition) followed by a gas-to-solid transition (deposition). (See below)
Examples
The examples shown are substances that noticeably sublime under certain conditions.
Carbon dioxide
Solid carbon dioxide (dry ice) sublimes rapidly along the solid-gas boundary (sublimation point) below the triple point (e.g., at the temperature of −78.5 °C, at atmospheric pressure), whereas its melting into liquid CO2 can occur along the solid-liquid boundary (melting point) at pressures and temperatures above the triple point (i.e., 5.1 atm, −56.6 °C).
Water
Snow and ice sublime gradually at temperatures below the solid-liquid boundary (melting point) (generally 0 °C), and at partial pressures below the triple point pressure of 612 Pa (0.0060 atm), at a low rate. In freeze-drying, the material to be dehydrated is frozen and its water is allowed to sublime under reduced pressure or vacuum. The loss of snow from a snowfield during a cold spell is often caused by sunshine acting directly on the upper layers of the snow. Sublimation of ice is a factor in the erosive wear of glacier ice, also called ablation in glaciology.
Naphthalene
Naphthalene, an organic compound commonly found in pesticides such as mothballs, sublimes easily because it is made of non-polar molecules that are held together only by van der Waals intermolecular forces. Naphthalene is a solid that sublimes gradually at standard temperature and pressure at an appreciable rate, with the critical sublimation point at around 80 °C (176 °F). At low temperature, its vapour pressure is high enough (1 mmHg at 53 °C) for solid naphthalene to pass directly into the gas phase. On cool surfaces, the naphthalene vapours will solidify to form needle-like crystals.
Iodine
Iodine sublimes gradually and produces visible fumes on gentle heating at standard atmospheric temperature. It is possible to obtain liquid iodine at atmospheric pressure by controlling the temperature at just between the melting point and the boiling point of iodine. In forensic science, iodine vapor can reveal latent fingerprints on paper.
Other substances
Arsenic sublimes gradually upon heating at atmospheric pressure, and sublimes rapidly at 615 °C.
Cadmium and zinc sublime much more readily than most other common materials, so they are not suitable for use in vacuum systems.
Purification by sublimation
Sublimation is a technique used by chemists to purify compounds. A solid is typically placed in a sublimation apparatus and heated under vacuum. Under this reduced pressure, the solid volatilizes and condenses as a purified compound on a cooled surface (cold finger), leaving a non-volatile residue of impurities behind. Once heating ceases and the vacuum is removed, the purified compound may be collected from the cooling surface.
For even higher purification efficiencies, a temperature gradient is applied, which also allows for the separation of different fractions. Typical setups use an evacuated glass tube that is heated gradually in a controlled manner. The material flow is from the hot end, where the initial material is placed, to the cold end that is connected to a pump stand. By controlling temperatures along the length of the tube, the operator can control the zones of re-condensation, with very volatile compounds being pumped out of the system completely (or caught by a separate cold trap), moderately volatile compounds re-condensing along the tube according to their different volatilities, and non-volatile compounds remaining in the hot end.
Vacuum sublimation of this type is also the method of choice for purification of organic compounds for use in the organic electronics industry, where very high purities (often > 99.99%) are needed to satisfy the standards for consumer electronics and other applications.
Historical usage
In ancient alchemy, a protoscience that contributed to the development of modern chemistry and medicine, alchemists developed a structure of basic laboratory techniques, theory, terminology, and experimental methods. Sublimation was used to refer to the process in which a substance is heated to a vapor, then immediately collects as sediment on the upper portion and neck of the heating medium (typically a retort or alembic), but can also be used to describe other similar non-laboratory transitions. It was mentioned by alchemical authors such as Basil Valentine and George Ripley, and in the Rosarium philosophorum, as a process necessary for the completion of the magnum opus. Here, the word sublimation was used to describe an exchange of "bodies" and "spirits" similar to laboratory phase transition between solids and gases. Valentine, in his Le char triomphal de l'antimoine (Triumphal Chariot of Antimony, published 1646) made a comparison to spagyrics in which a vegetable sublimation can be used to separate the spirits in wine and beer. Ripley used language more indicative of the mystical implications of sublimation, indicating that the process has a double aspect in the spiritualization of the body and the corporalizing of the spirit. He writes:
And Sublimations we make for three causes,
The first cause is to make the body spiritual.
The second is that the spirit may be corporeal,
And become fixed with it and consubstantial.
The third cause is that from its filthy original.
It may be cleansed, and its saltiness sulphurious
May be diminished in it, which is infectious.
Sublimation predictions
The enthalpy of sublimation has commonly been predicted using the equipartition theorem. If the lattice energy is assumed to be approximately half the packing energy, then the following thermodynamic corrections can be applied to predict the enthalpy of sublimation. Assuming a 1 molar ideal gas gives a correction for the thermodynamic environment (pressure and volume) in which pV = RT, hence a correction of 1RT. Additional corrections for the vibrations, rotations and translation then need to be applied. From the equipartition theorem gaseous rotation and translation contribute 1.5RT each to the final state, therefore a +3RT correction. Crystalline vibrations and rotations contribute 3RT each to the initial state, hence −6RT. Summing the RT corrections: −6RT + 3RT + RT = −2RT. This leads to the following approximate sublimation enthalpy:
ΔH_sublimation ≈ −U_lattice − 2RT
A similar approximation can be found for the entropy term if rigid bodies are assumed.
Dye-sublimation printing
Dye-sub printing is a digital printing technology using full color artwork that works with polyester and polymer-coated substrates. Also referred to as digital sublimation, the process is commonly used for decorating apparel, signs and banners, as well as novelty items such as cell phone covers, plaques, coffee mugs, and other items with sublimation-friendly surfaces. The process uses the science of sublimation, in which heat and pressure are applied to a solid, turning it into a gas through an endothermic transition without passing through the liquid phase.
In sublimation printing, unique sublimation dyes are transferred to sheets of “transfer” paper via liquid gel ink through a piezoelectric print head. The ink is deposited on these high-release inkjet papers, which are used for the next step of the sublimation printing process. After the digital design is printed onto sublimation transfer sheets, it is placed on a heat press along with the substrate to be sublimated.
In order to transfer the image from the paper to the substrate, it requires a heat press process that is a combination of time, temperature and pressure. The heat press applies this special combination, which can change depending on the substrate, to “transfer” the sublimation dyes at the molecular level into the substrate. The most common dyes used for sublimation activate at 350 degrees Fahrenheit. However, a range of 380 to 420 degrees Fahrenheit is normally recommended for optimal color.
The result of the sublimation process is a nearly permanent, high resolution, full color print. Because the dyes are infused into the substrate at the molecular level, rather than applied at a topical level (such as with screen printing and direct to garment printing), the prints will not crack, fade or peel from the substrate under normal conditions.
See also
Ablation
Enthalpy of sublimation
Freeze-drying
Freezer burn – common process involving sublimation
Phase diagram
Phase transitions
Table of phase transitions of matter
References
External links
Alchemical processes
Atmospheric thermodynamics
Chemical processes
Gases
Laboratory techniques
Phase transitions
Separation processes
Theoretical physics
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain, and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena.
The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. For example, while developing special relativity, Albert Einstein was concerned with the Lorentz transformation which left Maxwell's equations invariant, but was apparently uninterested in the Michelson–Morley experiment on Earth's drift through a luminiferous aether. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation.
Overview
A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theorem in that while both are based on some form of axioms, judgment of mathematical applicability is not based on agreement with any experimental results. A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms.
A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water, and Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles and the quantum mechanical idea that (action and) energy are not continuously variable.
Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance: "phenomenologists" might employ (semi-) empirical formulas and heuristics to agree with experimental results, often without deep physical understanding. "Modelers" (also called "model-builders") often appear much like phenomenologists, but try to model speculative theories that have certain desirable features (rather than on experimental data), or apply the techniques of mathematical modeling to physics problems. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled; e.g., the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics.
Theoretical advances may consist in setting aside old, incorrect paradigms (e.g., aether theory of light propagation, caloric theory of heat, burning consisting of evolving phlogiston, or astronomical bodies revolving around the Earth) or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result. Sometimes though, advances may proceed along different paths. For example, an essentially correct theory may need some conceptual or factual revisions; atomic theory, first postulated millennia ago (by several thinkers in Greece and India), and the two-fluid theory of electricity are two cases in point. However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models via the Bohr complementarity principle.
Physical theories become accepted if they are able to make correct predictions and no (or few) incorrect ones. The theory should have, at least as a secondary objective, a certain economy and elegance (compare to mathematical beauty), a notion sometimes called "Occam's razor" after the 14th-century English philosopher William of Occam (or Ockham), in which the simpler of two theories that describe the same matter just as adequately is preferred (but conceptual simplicity may mean mathematical complexity). They are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method.
Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories.
History
Theoretical physics began at least 2,300 years ago, with Pre-Socratic philosophy, and was continued by Plato and Aristotle, whose views held sway for a millennium. During the rise of medieval universities, the only acknowledged intellectual disciplines were the seven liberal arts: the Trivium (grammar, logic, and rhetoric) and the Quadrivium (arithmetic, geometry, music and astronomy). During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon. As the Scientific Revolution gathered pace, the concepts of matter, energy, space, time and causality slowly began to acquire the form we know today, and other sciences spun off from the rubric of natural philosophy. Thus began the modern era of theory with the Copernican paradigm shift in astronomy, soon followed by Johannes Kepler's expressions for planetary orbits, which summarized the meticulous observations of Tycho Brahe; the works of these men (alongside Galileo's) can perhaps be considered to constitute the Scientific Revolution.
The great push toward the modern concept of explanation started with Galileo, one of the few physicists who was both a consummate theoretician and a great experimentalist. The analytic geometry and mechanics of Descartes were incorporated into the calculus and mechanics of Isaac Newton, another theoretician/experimentalist of the highest order, who wrote the Principia Mathematica. It contained a grand synthesis of the work of Copernicus, Galileo and Kepler, as well as Newton's theories of mechanics and gravitation, which held sway as worldviews until the early 20th century. Simultaneously, progress was also made in optics (in particular colour theory and the ancient science of geometrical optics), courtesy of Newton, Descartes and the Dutchmen Snell and Huygens. In the 18th and 19th centuries Joseph-Louis Lagrange, Leonhard Euler and William Rowan Hamilton would extend the theory of classical mechanics considerably. They picked up the interactive intertwining of mathematics and physics begun two millennia earlier by Pythagoras.
Among the great conceptual achievements of the 19th and 20th centuries were the consolidation of the idea of energy (as well as its global conservation) by the inclusion of heat, electricity and magnetism, and then light. The laws of thermodynamics, and most importantly the introduction of the singular concept of entropy began to provide a macroscopic explanation for the properties of matter. Statistical mechanics (followed by statistical physics and Quantum statistical mechanics) emerged as an offshoot of thermodynamics late in the 19th century. Another important event in the 19th century was the discovery of electromagnetic theory, unifying the previously separate phenomena of electricity, magnetism and light.
The pillars of modern physics, and perhaps the most revolutionary theories in the history of physics, have been relativity theory and quantum mechanics. Newtonian mechanics was subsumed under special relativity and Newton's gravity was given a kinematic explanation by general relativity. Quantum mechanics led to an understanding of blackbody radiation (which indeed, was an original motivation for the theory) and of anomalies in the specific heats of solids, and finally to an understanding of the internal structures of atoms and molecules. Quantum mechanics soon gave way to the formulation of quantum field theory (QFT), begun in the late 1920s. In the aftermath of World War II, further progress brought much renewed interest in QFT, which had stagnated since the early efforts. The same period also saw fresh attacks on the problems of superconductivity and phase transitions, as well as the first applications of QFT in the area of theoretical condensed matter. The 1960s and 70s saw the formulation of the Standard Model of particle physics using QFT and progress in condensed matter physics (theoretical foundations of superconductivity and critical phenomena, among others), in parallel with the applications of relativity to problems in astronomy and cosmology.
All of these achievements depended on the theoretical physics as a moving force both to suggest experiments and to consolidate results — often by ingenious application of existing mathematics, or, as in the case of Descartes and Newton (with Leibniz), by inventing new mathematics. Fourier's studies of heat conduction led to a new branch of mathematics: infinite, orthogonal series.
Modern theoretical physics attempts to unify theories and explain phenomena in further attempts to understand the Universe, from the cosmological to the elementary particle scale. Where experimentation cannot be done, theoretical physics still tries to advance through the use of mathematical models.
Mainstream theories
Mainstream theories (sometimes referred to as central theories) are the body of knowledge of both factual and scientific views; they possess the usual scientific qualities of repeatability and consistency with existing well-established science and experimentation. Some mainstream theories are generally accepted based solely upon their effects explaining a wide variety of data, although the detection, explanation, and possible composition of the entities involved remain subjects of debate.
Examples
Big Bang
Chaos theory
Classical mechanics
Classical field theory
Dynamo theory
Field theory
Ginzburg–Landau theory
Kinetic theory of gases
Classical electromagnetism
Perturbation theory (quantum mechanics)
Physical cosmology
Quantum chromodynamics
Quantum complexity theory
Quantum electrodynamics
Quantum field theory
Quantum field theory in curved spacetime
Quantum information theory
Quantum mechanics
Quantum thermodynamics
Relativistic quantum mechanics
Scattering theory
Standard Model
Statistical physics
Theory of relativity
Wave–particle duality
Proposed theories
The proposed theories of physics are usually relatively new theories which deal with the study of physics which include scientific approaches, means for determining the validity of models and new types of reasoning used to arrive at the theory. However, some proposed theories include theories that have been around for decades and have eluded methods of discovery and testing. Proposed theories can include fringe theories in the process of becoming established (and, sometimes, gaining wider acceptance). Proposed theories usually have not been tested. In addition to the theories like those listed below, there are also different interpretations of quantum mechanics, which may or may not be considered different theories since it is debatable whether they yield different predictions for physical experiments, even in principle. Examples of proposed theories include the AdS/CFT correspondence, Chern–Simons theory, the graviton, the magnetic monopole, string theory, and a theory of everything.
Fringe theories
Fringe theories include any new area of scientific endeavor in the process of becoming established and some proposed theories. It can include speculative sciences. This includes physics fields and physical theories presented in accordance with known evidence, and a body of associated predictions have been made according to that theory.
Some fringe theories go on to become a widely accepted part of physics. Other fringe theories end up being disproven. Some fringe theories are a form of protoscience and others are a form of pseudoscience. The falsification of the original theory sometimes leads to reformulation of the theory.
Examples
Aether (classical element)
Luminiferous aether
Digital physics
Electrogravitics
Stochastic electrodynamics
Tesla's dynamic theory of gravity
Thought experiments vs real experiments
"Thought" experiments are situations created in one's mind, asking a question akin to "suppose you are in this situation, assuming such is true, what would follow?". They are usually created to investigate phenomena that are not readily experienced in every-day situations. Famous examples of such thought experiments are Schrödinger's cat, the EPR thought experiment, simple illustrations of time dilation, and so on. These usually lead to real experiments designed to verify that the conclusion (and therefore the assumptions) of the thought experiments are correct. The EPR thought experiment led to the Bell inequalities, which were then tested to various degrees of rigor, leading to the acceptance of the current formulation of quantum mechanics and probabilism as a working hypothesis.
See also
List of theoretical physicists
Philosophy of physics
Symmetry in quantum mechanics
Timeline of developments in theoretical physics
Double field theory
Notes
References
Further reading
Duhem, Pierre. La théorie physique - Son objet, sa structure, (in French). 2nd edition - 1914. English translation: The physical theory - its purpose, its structure. Republished by Joseph Vrin philosophical bookstore (1981), .
Feynman, et al. The Feynman Lectures on Physics (3 vol.). First edition: Addison–Wesley, (1964, 1966).
Bestselling three-volume textbook covering the span of physics. Reference for both (under)graduate student and professional researcher alike.
Landau et al. Course of Theoretical Physics.
Famous series of books dealing with theoretical concepts in physics covering 10 volumes, translated into many languages and reprinted over many editions. Often known simply as "Landau and Lifschits" or "Landau-Lifschits" in the literature.
Longair, MS. Theoretical Concepts in Physics: An Alternative View of Theoretical Reasoning in Physics. Cambridge University Press; 2d edition (4 Dec 2003). .
Planck, Max (1909). Eight Lectures on theoretical physics. Library of Alexandria. , .
A set of lectures given in 1909 at Columbia University.
Sommerfeld, Arnold. Vorlesungen über theoretische Physik (Lectures on Theoretical Physics); German, 6 volumes.
A series of lessons from a master educator of theoretical physicists.
External links
MIT Center for Theoretical Physics
How to become a GOOD Theoretical Physicist, a website made by Gerard 't Hooft
Substrate (chemistry)
In chemistry, the term substrate is highly context-dependent. Broadly speaking, it can refer either to a chemical species being observed in a chemical reaction, or to a surface on which other chemical reactions or microscopy are performed.
In the former sense, a reagent is added to the substrate to generate a product through a chemical reaction. The term is used in a similar sense in synthetic and organic chemistry, where the substrate is the chemical of interest that is being modified. In biochemistry, an enzyme substrate is the material upon which an enzyme acts. When referring to Le Chatelier's principle, the substrate is the reagent whose concentration is changed.
Spontaneous reaction
S -> P
where S is substrate and P is product.
Catalysed reaction
S + C -> P + C
where S is substrate, P is product and C is catalyst.
In the latter sense, it may refer to a surface on which other chemical reactions are performed or play a supporting role in a variety of spectroscopic and microscopic techniques, as discussed in the first few subsections below.
Microscopy
In three of the most common nano-scale microscopy techniques, atomic force microscopy (AFM), scanning tunneling microscopy (STM), and transmission electron microscopy (TEM), a substrate is required for sample mounting. Substrates are often thin and relatively free of chemical features or defects. Typically silver, gold, or silicon wafers are used due to their ease of manufacturing and lack of interference in the microscopy data. Samples are deposited onto the substrate in fine layers where it can act as a solid support of reliable thickness and malleability. Smoothness of the substrate is especially important for these types of microscopy because they are sensitive to very small changes in sample height.
Various other substrates are used in specific cases to accommodate a wide variety of samples. Thermally-insulating substrates are required for AFM of graphite flakes for instance, and conductive substrates are required for TEM. In some contexts, the word substrate can be used to refer to the sample itself, rather than the solid support on which it is placed.
Spectroscopy
Various spectroscopic techniques also require samples to be mounted on substrates, such as powder diffraction. This type of diffraction, which involves directing high-powered X-rays at powder samples to deduce crystal structures, is often performed with an amorphous substrate such that it does not interfere with the resulting data collection. Silicon substrates are also commonly used because of their cost-effective nature and relatively little data interference in X-ray collection.
Single-crystal substrates are useful in powder diffraction because they are distinguishable from the sample of interest in diffraction patterns by differentiating by phase.
Atomic layer deposition
In atomic layer deposition, the substrate acts as an initial surface on which reagents can combine to precisely build up chemical structures. A wide variety of substrates are used depending on the reaction of interest, but they frequently bind the reagents with some affinity to allow sticking to the substrate.
The substrate is exposed to different reagents sequentially and washed in between to remove excess. A substrate is critical in this technique because the first layer needs a place to bind to such that it is not lost when exposed to the second or third set of reagents.
Biochemistry
In biochemistry, the substrate is a molecule upon which an enzyme acts. Enzymes catalyze chemical reactions involving the substrate(s). In the case of a single substrate, the substrate bonds with the enzyme active site, and an enzyme-substrate complex is formed. The substrate is transformed into one or more products, which are then released from the active site. The active site is then free to accept another substrate molecule. In the case of more than one substrate, these may bind in a particular order to the active site, before reacting together to produce products. A substrate is called 'chromogenic' if it gives rise to a coloured product when acted on by an enzyme. In histological enzyme localization studies, the colored product of enzyme action can be viewed under a microscope, in thin sections of biological tissues. Similarly, a substrate is called 'fluorogenic' if it gives rise to a fluorescent product when acted on by an enzyme.
For example, curd formation (rennet coagulation) is a reaction that occurs upon adding the enzyme rennin to milk. In this reaction, the substrate is a milk protein (e.g., casein) and the enzyme is rennin. The products are two polypeptides that have been formed by the cleavage of the larger peptide substrate. Another example is the chemical decomposition of hydrogen peroxide carried out by the enzyme catalase. As enzymes are catalysts, they are not changed by the reactions they carry out. The substrate(s), however, is/are converted to product(s). Here, hydrogen peroxide is converted to water and oxygen gas.
E + S <=> ES <=> EP <=> E + P
where E is enzyme, S is substrate, and P is product (ES and EP are the enzyme-bound substrate and product complexes).
While the first (binding) and third (unbinding) steps are, in general, reversible, the middle step may be irreversible (as in the rennin and catalase reactions just mentioned) or reversible (e.g. many reactions in the glycolysis metabolic pathway).
By increasing the substrate concentration, the rate of reaction will increase due to the likelihood that the number of enzyme-substrate complexes will increase; this occurs until the enzyme concentration becomes the limiting factor.
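This saturation behaviour is commonly modelled by Michaelis–Menten kinetics (not derived in this article). The short Python sketch below, with purely illustrative values for Vmax and Km, shows how the rate levels off once the enzyme becomes the limiting factor:

# Minimal sketch of Michaelis-Menten saturation kinetics.
# v = Vmax * [S] / (Km + [S]); Vmax and Km below are illustrative values.

def michaelis_menten_rate(s, vmax=1.0, km=0.5):
    """Reaction rate as a function of substrate concentration s."""
    return vmax * s / (km + s)

for s in [0.1, 0.5, 1.0, 5.0, 50.0]:
    print(f"[S] = {s:6.1f}  ->  v = {michaelis_menten_rate(s):.3f}")
# As [S] grows far beyond Km, v approaches Vmax: the enzyme is saturated
# and the reaction becomes effectively zero order in substrate.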
Substrate promiscuity
Although enzymes are typically highly specific, some are able to perform catalysis on more than one substrate, a property termed enzyme promiscuity. An enzyme may have many native substrates and broad specificity (e.g. oxidation by cytochrome p450s) or it may have a single native substrate with a set of similar non-native substrates that it can catalyse at some lower rate. The substrates that a given enzyme may react with in vitro, in a laboratory setting, may not necessarily reflect the physiological, endogenous substrates of the enzyme's reactions in vivo. That is to say that enzymes do not necessarily perform all the reactions in the body that may be possible in the laboratory. For example, while fatty acid amide hydrolase (FAAH) can hydrolyze the endocannabinoids 2-arachidonoylglycerol (2-AG) and anandamide at comparable rates in vitro, genetic or pharmacological disruption of FAAH elevates anandamide but not 2-AG, suggesting that 2-AG is not an endogenous, in vivo substrate for FAAH. In another example, the N-acyl taurines (NATs) are observed to increase dramatically in FAAH-disrupted animals, but are actually poor in vitro FAAH substrates.
Sensitivity
Sensitive substrates also known as sensitive index substrates are drugs that demonstrate an increase in AUC of ≥5-fold with strong index inhibitors of a given metabolic pathway in clinical drug-drug interaction (DDI) studies.
Moderate sensitive substrates are drugs that demonstrate an increase in AUC of ≥2 to <5-fold with strong index inhibitors of a given metabolic pathway in clinical DDI studies.
Interaction between substrates
Metabolism by the same cytochrome P450 isozyme can result in several clinically significant drug-drug interactions.
See also
Limiting reagent
Reaction progress kinetic analysis
Solvent
References
Chemical reactions
Enzyme kinetics
Catalysis
Rate equation
In chemistry, the rate equation (also known as the rate law or empirical differential rate equation) is an empirical differential mathematical expression for the reaction rate of a given reaction in terms of concentrations of chemical species and constant parameters (normally rate coefficients and partial orders of reaction) only. For many reactions, the initial rate is given by a power law such as
v_0 = k[A]^x [B]^y
where [A] and [B] are the molar concentrations of the species A and B, usually in moles per liter (molarity, M). The exponents x and y are the partial orders of reaction for A and B, and the overall reaction order is the sum of the exponents. These are often positive integers, but they may also be zero, fractional, or negative. The order of reaction is a number which quantifies the degree to which the rate of a chemical reaction depends on concentrations of the reactants. In other words, the order of reaction is the exponent to which the concentration of a particular reactant is raised. The constant k is the reaction rate constant or rate coefficient, occasionally also called the velocity constant or specific rate of reaction. Its value may depend on conditions such as temperature, ionic strength, surface area of an adsorbent, or light irradiation. If the reaction goes to completion, the rate equation for the reaction rate applies throughout the course of the reaction.
Elementary (single-step) reactions and reaction steps have reaction orders equal to the stoichiometric coefficients for each reactant. The overall reaction order, i.e. the sum of stoichiometric coefficients of reactants, is always equal to the molecularity of the elementary reaction. However, complex (multi-step) reactions may or may not have reaction orders equal to their stoichiometric coefficients. This implies that the order and the rate equation of a given reaction cannot be reliably deduced from the stoichiometry and must be determined experimentally, since an unknown reaction mechanism could be either elementary or complex. When the experimental rate equation has been determined, it is often of use for deduction of the reaction mechanism.
The rate equation of a reaction with an assumed multi-step mechanism can often be derived theoretically using quasi-steady state assumptions from the underlying elementary reactions, and compared with the experimental rate equation as a test of the assumed mechanism. The equation may involve a fractional order, and may depend on the concentration of an intermediate species.
A reaction can also have an undefined reaction order with respect to a reactant if the rate is not simply proportional to some power of the concentration of that reactant; for example, one cannot talk about reaction order in the rate equation for a bimolecular reaction between adsorbed molecules:
v_0 = k \frac{K_1 K_2 C_A C_B}{(1 + K_1 C_A + K_2 C_B)^2}
Definition
Consider a typical chemical reaction in which two reactants A and B combine to form a product C:
A + 2B -> 3C
This can also be written
0 = −A − 2B + 3C
The prefactors −1, −2 and 3 (with negative signs for reactants because they are consumed) are known as stoichiometric coefficients. One molecule of A combines with two of B to form 3 of C, so if we use the symbol [X] for the molar concentration of chemical X,
-\frac{d[A]}{dt} = -\frac{1}{2}\frac{d[B]}{dt} = \frac{1}{3}\frac{d[C]}{dt}
If the reaction takes place in a closed system at constant temperature and volume, without a build-up of reaction intermediates, the reaction rate is defined as
v = \frac{1}{\nu_i}\frac{d[X_i]}{dt}
where \nu_i is the stoichiometric coefficient for chemical X_i, with a negative sign for a reactant.
The initial reaction rate v_0 has some functional dependence on the concentrations of the reactants,
v_0 = f([A], [B], \ldots)
and this dependence is known as the rate equation or rate law. This law generally cannot be deduced from the chemical equation and must be determined by experiment.
Power laws
A common form for the rate equation is a power law:
v = k[A]^x [B]^y \cdots
The constant k is called the rate constant. The exponents, which can be fractional, are called partial orders of reaction and their sum is the overall order of reaction.
In a dilute solution, an elementary reaction (one having a single step with a single transition state) is empirically found to obey the law of mass action. This predicts that the rate depends only on the concentrations of the reactants, raised to the powers of their stoichiometric coefficients.
The differential rate equation for an elementary reaction using mathematical product notation is (a short numerical sketch follows the definitions below):
-\frac{d[A]}{dt} = k \prod_i [X_i]^{\nu_i}
Where:
\frac{d[A]}{dt} is the rate of change of reactant concentration with respect to time (negative, since A is consumed).
k is the rate constant of the reaction.
\prod_i [X_i]^{\nu_i} represents the concentrations of the reactants, raised to the powers of their stoichiometric coefficients and multiplied together.
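As an illustration of this product form, the short Python sketch below evaluates a mass-action rate for a hypothetical elementary step A + 2B -> products; the rate constant and concentrations are made-up numbers, used only to show the bookkeeping:

# Mass-action rate for an elementary reaction: rate = k * prod_i [X_i]**nu_i
import math

def mass_action_rate(k, concentrations, orders):
    """k: rate constant; concentrations and orders: parallel lists for each reactant."""
    return k * math.prod(c ** n for c, n in zip(concentrations, orders))

# Hypothetical elementary step A + 2B -> products, so the orders are 1 and 2.
k = 0.3                      # illustrative rate constant
conc = [0.10, 0.20]          # [A], [B] in mol/L (illustrative)
print(mass_action_rate(k, conc, [1, 2]))   # 0.3 * 0.10 * 0.20**2 = 1.2e-3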
Determination of reaction order
Method of initial rates
The natural logarithm of the power-law rate equation is
\ln v_0 = \ln k + x\ln[A]_0 + y\ln[B]_0 + \cdots
This can be used to estimate the order of reaction of each reactant. For example, the initial rate can be measured in a series of experiments at different initial concentrations of reactant A with all other concentrations kept constant, so that
\ln v_0 = x\ln[A]_0 + \text{constant}
The slope of a graph of \ln v_0 as a function of \ln[A]_0 then corresponds to the order x with respect to reactant A.
However, this method is not always reliable because
measurement of the initial rate requires accurate determination of small changes in concentration in short times (compared to the reaction half-life) and is sensitive to errors, and
the rate equation will not be completely determined if the rate also depends on substances not present at the beginning of the reaction, such as intermediates or products.
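Despite these caveats, the procedure itself is straightforward; the minimal Python sketch below uses synthetic data generated under an assumed true order of 2 (not measurements from any real experiment) and recovers the order from the slope of a log-log fit:

# Estimate a partial order from initial rates via a log-log fit.
import numpy as np

true_k, true_order = 0.05, 2.0              # assumed "true" values for the synthetic data
A0 = np.array([0.05, 0.1, 0.2, 0.4, 0.8])   # initial concentrations, other reactants in excess
v0 = true_k * A0 ** true_order              # synthetic initial rates

order, ln_k = np.polyfit(np.log(A0), np.log(v0), 1)
print(f"estimated order = {order:.2f}, estimated k = {np.exp(ln_k):.3f}")
# Prints order ~ 2.00 and k ~ 0.050, recovering the assumed values.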
Integral method
The tentative rate equation determined by the method of initial rates is therefore normally verified by comparing the concentrations measured over a longer time (several half-lives) with the integrated form of the rate equation; this assumes that the reaction goes to completion.
For example, the integrated rate law for a first-order reaction is
\ln[A] = \ln[A]_0 - kt
where [A] is the concentration at time t and [A]_0 is the initial concentration at zero time. The first-order rate law is confirmed if \ln[A] is in fact a linear function of time. In this case the rate constant k is equal to the slope with sign reversed.
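A minimal sketch of the integral method for assumed first-order data (the rate constant and sampling times below are illustrative): if ln[A] is linear in t, the slope of the fit gives -k.

# Integral method: fit ln[A] versus t for assumed first-order data.
import numpy as np

k_true = 0.15                       # illustrative rate constant, 1/s
t = np.linspace(0.0, 20.0, 11)      # sampling times, s
A = 1.0 * np.exp(-k_true * t)       # first-order decay from [A]0 = 1.0 mol/L

slope, intercept = np.polyfit(t, np.log(A), 1)
print(f"fitted k = {-slope:.3f} 1/s, fitted [A]0 = {np.exp(intercept):.2f} mol/L")
# A straight-line fit of ln[A] vs t confirms first-order kinetics; slope = -k.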
Method of flooding
The partial order with respect to a given reactant can be evaluated by the method of flooding (or of isolation) of Ostwald. In this method, the concentration of one reactant is measured with all other reactants in large excess so that their concentration remains essentially constant. For a reaction with rate law v = k[A]^x [B]^y, the partial order x with respect to A is determined using a large excess of B. In this case
v = k'[A]^x with k' = k[B]^y,
and x may be determined by the integral method. The order y with respect to B under the same conditions (with B in excess) is determined by a series of similar experiments with a range of initial concentration [B]_0 so that the variation of k' can be measured.
Zero order
For zero-order reactions, the reaction rate is independent of the concentration of a reactant, so that changing its concentration has no effect on the rate of the reaction. Thus, the concentration changes linearly with time. The rate law for a zero-order reaction is
v = k
The unit of k is mol dm^{-3} s^{-1}. This may occur when there is a bottleneck which limits the number of reactant molecules that can react at the same time, for example if the reaction requires contact with an enzyme or a catalytic surface.
Many enzyme-catalyzed reactions are zero order, provided that the reactant concentration is much greater than the enzyme concentration which controls the rate, so that the enzyme is saturated. For example, the biological oxidation of ethanol to acetaldehyde by the enzyme liver alcohol dehydrogenase (LADH) is zero order in ethanol.
Similarly reactions with heterogeneous catalysis can be zero order if the catalytic surface is saturated. For example, the decomposition of phosphine on a hot tungsten surface at high pressure is zero order in phosphine, which decomposes at a constant rate.
In homogeneous catalysis zero order behavior can come about from reversible inhibition. For example, ring-opening metathesis polymerization using third-generation Grubbs catalyst exhibits zero order behavior in catalyst due to the reversible inhibition that occurs between pyridine and the ruthenium center.
First order
A first order reaction depends on the concentration of only one reactant (a unimolecular reaction). Other reactants can be present, but their concentration has no effect on the rate. The rate law for a first order reaction is
v = k[A]
The unit of k is s^{-1}. Although not affecting the above math, the majority of first order reactions proceed via intermolecular collisions. Such collisions, which contribute the energy to the reactant, are necessarily second order. The rate of these collisions is, however, masked by the fact that the rate determining step remains the unimolecular breakdown of the energized reactant.
The half-life is independent of the starting concentration and is given by t_{1/2} = \frac{\ln(2)}{k}. The mean lifetime is τ = 1/k.
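As a quick worked example with an illustrative value: a first-order rate constant of k = 0.0231 s^{-1} gives
t_{1/2} = \ln 2 / k ≈ 0.693 / (0.0231 s^{-1}) ≈ 30 s,
no matter how much reactant is present initially.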
Examples of such reactions are:
2N2O5 -> 4NO2 + O2
[CoCl(NH3)5]^2+ + H2O -> [Co(H2O)(NH3)5]^3+ + Cl-
H2O2 -> H2O + 1/2O2
In organic chemistry, the class of SN1 (nucleophilic substitution unimolecular) reactions consists of first-order reactions. For example, in the reaction of aryldiazonium ions with nucleophiles in aqueous solution, ArN2+ + X- -> ArX + N2, the rate equation is v = k[ArN2+], where Ar indicates an aryl group.
Second order
A reaction is said to be second order when the overall order is two. The rate of a second-order reaction may be proportional to one concentration squared, or (more commonly) to the product of two concentrations. As an example of the first type, the reaction NO2 + CO -> NO + CO2 is second-order in the reactant NO2 and zero order in the reactant CO. The observed rate is given by
v = k[NO2]^2,
and is independent of the concentration of CO.
For the rate proportional to a single concentration squared, the time dependence of the concentration is given by
\frac{1}{[A]} = \frac{1}{[A]_0} + kt
The unit of k is mol^{-1} dm^3 s^{-1}.
The time dependence for a rate proportional to two unequal concentrations is
\ln\frac{[A]}{[B]} = \ln\frac{[A]_0}{[B]_0} + ([A]_0 - [B]_0)kt;
if the concentrations are equal, they satisfy the previous equation.
The second type includes nucleophilic addition-elimination reactions, such as the alkaline hydrolysis of ethyl acetate:
CH3COOC2H5 + OH- -> CH3COO- + C2H5OH
This reaction is first-order in each reactant and second-order overall:
v = k[CH3COOC2H5][OH-]
If the same hydrolysis reaction is catalyzed by imidazole, the rate equation becomes
v = k[imidazole][CH3COOC2H5]
The rate is first-order in one reactant (ethyl acetate), and also first-order in imidazole, which as a catalyst does not appear in the overall chemical equation.
Another well-known class of second-order reactions are the SN2 (bimolecular nucleophilic substitution) reactions, such as the reaction of n-butyl bromide with sodium iodide in acetone:
CH3CH2CH2CH2Br + NaI -> CH3CH2CH2CH2I + NaBr(v)
This same compound can be made to undergo a bimolecular (E2) elimination reaction, another common type of second-order reaction, if the sodium iodide and acetone are replaced with sodium tert-butoxide as the salt and tert-butanol as the solvent:
CH3CH2CH2CH2Br + NaOt-Bu -> CH3CH2CH=CH2 + NaBr + HOt-Bu
Pseudo-first order
If the concentration of a reactant remains constant (because it is a catalyst, or because it is in great excess with respect to the other reactants), its concentration can be included in the rate constant, leading to a pseudo–first-order (or occasionally pseudo–second-order) rate equation. For a typical second-order reaction with rate equation v = k[A][B], if the concentration of reactant B is constant then
v = k[A][B] = k'[A],
where the pseudo–first-order rate constant k' = k[B]. The second-order rate equation has been reduced to a pseudo–first-order rate equation, which makes the treatment to obtain an integrated rate equation much easier.
One way to obtain a pseudo-first order reaction is to use a large excess of one reactant (say, [B]≫[A]) so that, as the reaction progresses, only a small fraction of the reactant in excess (B) is consumed, and its concentration can be considered to stay constant. For example, the hydrolysis of esters by dilute mineral acids follows pseudo-first order kinetics, where the concentration of water is constant because it is present in large excess:
CH3COOCH3 + H2O -> CH3COOH + CH3OH
The hydrolysis of sucrose (C12H22O11) in acid solution is often cited as a first-order reaction with rate v = k[C12H22O11]. The true rate equation is third-order, v = k[C12H22O11][H+][H2O]; however, the concentrations of both the catalyst H+ and the solvent H2O are normally constant, so that the reaction is pseudo–first-order.
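A small numerical sketch (illustrative rate constant and concentrations, using SciPy) comparing the full second-order treatment of A + B -> products with the pseudo-first-order approximation k' = k[B]_0 when B is in large excess:

# Pseudo-first-order behaviour: A + B -> products with [B]0 >> [A]0.
import numpy as np
from scipy.integrate import solve_ivp

k = 0.5                      # second-order rate constant, L/(mol s), illustrative
A0, B0 = 0.01, 1.0           # B is in 100-fold excess

def rhs(t, y):
    A, B = y
    rate = k * A * B
    return [-rate, -rate]

sol = solve_ivp(rhs, (0.0, 10.0), [A0, B0], t_eval=np.linspace(0, 10, 6))
A_pseudo = A0 * np.exp(-k * B0 * sol.t)     # pseudo-first-order prediction with k' = k*[B]0
print(np.max(np.abs(sol.y[0] - A_pseudo)))  # small: the two treatments nearly coincide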
Summary for reaction orders 0, 1, 2, and n
Elementary reaction steps with order 3 (called ternary reactions) are rare and unlikely to occur. However, overall reactions composed of several elementary steps can, of course, be of any (including non-integer) order.
Here [A] stands for concentration in molarity (mol · L−1), t for time, and k for the reaction rate constant. The half-life of a first-order reaction is often expressed as t_{1/2} = 0.693/k (as ln(2) ≈ 0.693).
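For reference, the standard textbook results for a single reactant A can be summarized as follows (a sketch, not reproduced from this article; the n-th order formulas assume n ≠ 1):
Zero order: v = k; [A] = [A]_0 - kt; t_{1/2} = \frac{[A]_0}{2k}
First order: v = k[A]; \ln[A] = \ln[A]_0 - kt; t_{1/2} = \frac{\ln 2}{k}
Second order: v = k[A]^2; \frac{1}{[A]} = \frac{1}{[A]_0} + kt; t_{1/2} = \frac{1}{k[A]_0}
Order n: v = k[A]^n; \frac{1}{[A]^{n-1}} = \frac{1}{[A]_0^{n-1}} + (n-1)kt; t_{1/2} = \frac{2^{n-1} - 1}{(n-1)k[A]_0^{n-1}}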
Fractional order
In fractional order reactions, the order is a non-integer, which often indicates a chemical chain reaction or other complex reaction mechanism. For example, the pyrolysis of acetaldehyde (CH3CHO) into methane and carbon monoxide proceeds with an order of 1.5 with respect to acetaldehyde:
v = k[CH3CHO]^{3/2}
The decomposition of phosgene (COCl2) to carbon monoxide and chlorine has order 1 with respect to phosgene itself and order 0.5 with respect to chlorine:
v = k[COCl2][Cl2]^{1/2}
The order of a chain reaction can be rationalized using the steady state approximation for the concentration of reactive intermediates such as free radicals. For the pyrolysis of acetaldehyde, the Rice-Herzfeld mechanism is
Initiation CH3CHO -> .CH3 + .CHO
Propagation .CH3 + CH3CHO -> CH3CO. + CH4
CH3CO. -> .CH3 + CO
Termination 2 .CH3 -> C2H6
where • denotes a free radical. To simplify the theory, the reactions of the .CHO radical to form a second .CH3 are ignored.
In the steady state, the rates of formation and destruction of methyl radicals are equal, so that
k_i[CH3CHO] = 2k_t[.CH3]^2,
where k_i and k_t are the rate constants of the initiation and termination steps, so that the concentration of methyl radical satisfies
[.CH3] \propto [CH3CHO]^{1/2}.
The reaction rate equals the rate of the propagation steps which form the main reaction products CH4 and CO:
v = \frac{d[CH4]}{dt} = k_p[.CH3][CH3CHO] \propto [CH3CHO]^{3/2}
in agreement with the experimental order of 3/2. Here k_p is the rate constant of the first propagation step.
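The same steady-state algebra can be checked symbolically; the short SymPy sketch below (the symbol names k_i, k_p, k_t are the labels used above) substitutes the steady-state radical concentration into the propagation rate:

# Steady-state treatment of the Rice-Herzfeld mechanism for CH3CHO pyrolysis.
import sympy as sp

k_i, k_p, k_t, CH3CHO = sp.symbols('k_i k_p k_t CH3CHO', positive=True)

# Steady state: k_i*[CH3CHO] = 2*k_t*[CH3]**2  =>  [CH3] = sqrt(k_i*[CH3CHO]/(2*k_t))
CH3_ss = sp.sqrt(k_i * CH3CHO / (2 * k_t))

rate = k_p * CH3_ss * CH3CHO      # propagation rate forming CH4
print(sp.simplify(rate))          # k_p*sqrt(k_i/(2*k_t)) * CH3CHO**(3/2): overall order 3/2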
Complex laws
Mixed order
More complex rate laws have been described as being mixed order if they approximate to the laws for more than one order at different concentrations of the chemical species involved. For example, a rate law of the form v = k_1[A] + k_2[A]^2 represents concurrent first-order and second-order reactions (or, more often, concurrent pseudo-first-order and second-order reactions), and can be described as mixed first and second order. For sufficiently large values of [A] such a reaction will approximate second order kinetics, but for smaller [A] the kinetics will approximate first order (or pseudo-first order). As the reaction progresses, the reaction can change from second order to first order as reactant is consumed.
Another type of mixed-order rate law has a denominator of two or more terms, often because the identity of the rate-determining step depends on the values of the concentrations. An example is the oxidation of an alcohol to a ketone by hexacyanoferrate (III) ion [Fe(CN)63−] with ruthenate (VI) ion (RuO42−) as catalyst. For this reaction, the rate of disappearance of hexacyanoferrate (III) follows a rate law whose denominator is the sum of two concentration-dependent terms.
This is zero-order with respect to hexacyanoferrate (III) at the onset of the reaction (when its concentration is high and the ruthenium catalyst is quickly regenerated), but changes to first-order when its concentration decreases and the regeneration of catalyst becomes rate-determining.
Notable mechanisms with mixed-order rate laws with two-term denominators include:
Michaelis–Menten kinetics for enzyme-catalysis: first-order in substrate (second-order overall) at low substrate concentrations, zero order in substrate (first-order overall) at higher substrate concentrations; and
the Lindemann mechanism for unimolecular reactions: second-order at low pressures, first-order at high pressures.
Negative order
A reaction rate can have a negative partial order with respect to a substance. For example, the conversion of ozone (O3) to oxygen follows the rate equation v = k\frac{[O3]^2}{[O2]} in an excess of oxygen. This corresponds to second order in ozone and order (−1) with respect to oxygen.
When a partial order is negative, the overall order is usually considered as undefined. In the above example, for instance, the reaction is not described as first order even though the sum of the partial orders is 2 + (−1) = 1, because the rate equation is more complex than that of a simple first-order reaction.
Opposed reactions
A pair of forward and reverse reactions may occur simultaneously with comparable speeds. For example, A and B react into products P and Q and vice versa (a, b, p, and q are the stoichiometric coefficients):
aA + bB <=> pP + qQ
The reaction rate expression for the above reactions (assuming each one is elementary) can be written as:
v = k_1[A]^a[B]^b - k_{-1}[P]^p[Q]^q
where: k1 is the rate coefficient for the reaction that consumes A and B; k−1 is the rate coefficient for the backwards reaction, which consumes P and Q and produces A and B.
The constants k1 and k−1 are related to the equilibrium coefficient for the reaction (K) by the following relationship (set v = 0 in balance):
K = \frac{k_1}{k_{-1}} = \frac{[P]^p[Q]^q}{[A]^a[B]^b}
Simple example
In a simple equilibrium between two species:
A <=> P
where the reaction starts with an initial concentration of reactant A, [A]0, and an initial concentration of 0 for product P at time t=0.
Then the equilibrium constant K is expressed as:
K = \frac{[P]_e}{[A]_e}
where [A]_e and [P]_e are the concentrations of A and P at equilibrium, respectively.
The concentration of A at time t, [A]_t, is related to the concentration of P at time t, [P]_t, by the equilibrium reaction equation:
[A]_t = [A]_0 - [P]_t
The term [P]0 is not present because, in this simple example, the initial concentration of P is 0.
This applies even when time t is at infinity; i.e., equilibrium has been reached:
[A]_e = [A]_0 - [P]_e
then it follows, by the definition of K, that
K = \frac{[A]_0 - [A]_e}{[A]_e}
and, therefore,
[A]_e = \frac{[A]_0}{K + 1}
These equations allow us to uncouple the system of differential equations, and allow us to solve for the concentration of A alone.
The reaction rate expression was given previously as:
v = k_1[A]^a[B]^b - k_{-1}[P]^p[Q]^q
For A <=> P this is simply
-\frac{d[A]}{dt} = k_1[A]_t - k_{-1}[P]_t
The derivative is negative because this is the rate of the reaction going from A to P, and therefore the concentration of A is decreasing. To simplify notation, let x be [A]_t, the concentration of A at time t. Let x_e be the concentration of A at equilibrium. Then:
-\frac{dx}{dt} = k_1 x - k_{-1}([A]_0 - x)
Since:
k_1 x_e = k_{-1}([A]_0 - x_e)
the reaction rate becomes:
-\frac{dx}{dt} = (k_1 + k_{-1})(x - x_e)
which results in:
\ln\frac{[A]_0 - [A]_e}{[A]_t - [A]_e} = (k_1 + k_{-1})t.
A plot of the negative natural logarithm of the concentration of A in time minus the concentration at equilibrium versus time t gives a straight line with slope k1 + k−1. By measurement of [A]e and [P]e the values of K and the two reaction rate constants will be known.
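A numerical sketch of that procedure (synthetic data generated from assumed constants k1 = 0.3 and k-1 = 0.1, not experimental values): the slope of -ln([A]_t - [A]_e) versus t gives k1 + k-1, and combining it with K = [P]_e/[A]_e separates the two constants.

# Opposed first-order reactions A <=> P: extract k1 and k_-1 from relaxation data.
import numpy as np

k1, k_1m, A0 = 0.3, 0.1, 1.0                 # assumed "true" values for the synthetic data
t = np.linspace(0.0, 20.0, 21)
A_e = A0 * k_1m / (k1 + k_1m)                # equilibrium concentration of A
A_t = A_e + (A0 - A_e) * np.exp(-(k1 + k_1m) * t)

slope, _ = np.polyfit(t, -np.log(A_t - A_e), 1)   # slope = k1 + k_-1
K = (A0 - A_e) / A_e                              # K = [P]e/[A]e = k1/k_-1
k1_fit = slope * K / (1 + K)
print(f"k1 + k_-1 = {slope:.3f}, K = {K:.3f}, k1 = {k1_fit:.3f}, k_-1 = {slope - k1_fit:.3f}")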
Generalization of simple example
If the concentration at the time t = 0 is different from above, the simplifications above are invalid, and a system of differential equations must be solved. However, this system can also be solved exactly to yield the following generalized expressions:
[A] = [A]_0\frac{k_{-1} + k_1 e^{-(k_1 + k_{-1})t}}{k_1 + k_{-1}} + [P]_0\frac{k_{-1} - k_{-1}e^{-(k_1 + k_{-1})t}}{k_1 + k_{-1}}
[P] = [A]_0\frac{k_1 - k_1 e^{-(k_1 + k_{-1})t}}{k_1 + k_{-1}} + [P]_0\frac{k_1 + k_{-1}e^{-(k_1 + k_{-1})t}}{k_1 + k_{-1}}
When the equilibrium constant is close to unity and the reaction rates are very fast, for instance in conformational analysis of molecules, other methods are required for the determination of rate constants, for example by complete lineshape analysis in NMR spectroscopy.
Consecutive reactions
If the rate constants for the following reaction are k_1 and k_2; A -> B -> C, then the rate equation is:
For reactant A: \frac{d[A]}{dt} = -k_1[A]
For reactant B: \frac{d[B]}{dt} = k_1[A] - k_2[B]
For product C: \frac{d[C]}{dt} = k_2[B]
With the individual concentrations scaled by the total population of reactants to become probabilities, linear systems of differential equations such as these can be formulated as a master equation. The differential equations can be solved analytically; for a system that initially contains only A (that is, [B]_0 = [C]_0 = 0 and k_1 ≠ k_2), the integrated rate equations are
[A] = [A]_0 e^{-k_1 t}
[B] = [A]_0\frac{k_1}{k_2 - k_1}\left(e^{-k_1 t} - e^{-k_2 t}\right)
[C] = [A]_0\left(1 + \frac{k_1 e^{-k_2 t} - k_2 e^{-k_1 t}}{k_2 - k_1}\right)
The steady state approximation leads to very similar results in an easier way.
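A short numerical check of these expressions (illustrative rate constants, using SciPy): the scheme A -> B -> C is integrated directly and compared against the analytic result for [B].

# Consecutive first-order reactions A -> B -> C (k1, k2 illustrative).
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, A0 = 1.0, 0.4, 1.0

def rhs(t, y):
    A, B, C = y
    return [-k1 * A, k1 * A - k2 * B, k2 * B]

t_eval = np.linspace(0.0, 10.0, 6)
sol = solve_ivp(rhs, (0.0, 10.0), [A0, 0.0, 0.0], t_eval=t_eval, rtol=1e-8)

B_analytic = A0 * k1 / (k2 - k1) * (np.exp(-k1 * t_eval) - np.exp(-k2 * t_eval))
print(np.max(np.abs(sol.y[1] - B_analytic)))   # difference ~ solver tolerance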
Parallel or competitive reactions
When a substance reacts simultaneously to give two different products, a parallel or competitive reaction is said to take place.
Two first order reactions
A -> B and A -> C, with constants k_1 and k_2 and rate equations
-\frac{d[A]}{dt} = (k_1 + k_2)[A];
\frac{d[B]}{dt} = k_1[A] and \frac{d[C]}{dt} = k_2[A]
The integrated rate equations are then (assuming [B]_0 = [C]_0 = 0)
[A] = [A]_0 e^{-(k_1 + k_2)t};
[B] = \frac{k_1}{k_1 + k_2}[A]_0\left(1 - e^{-(k_1 + k_2)t}\right) and [C] = \frac{k_2}{k_1 + k_2}[A]_0\left(1 - e^{-(k_1 + k_2)t}\right).
One important relationship in this case is [B]/[C] = k1/k2: the ratio of the two products is fixed by the ratio of the two rate constants.
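This branching ratio can be verified with a brief sketch (the rate constants are arbitrary example values, not taken from the text):

import numpy as np

k1, k2 = 2.0, 0.5            # hypothetical rate constants for A -> B and A -> C
A0 = 1.0
t = np.linspace(0.1, 4.0, 50)

B = k1 * A0 / (k1 + k2) * (1 - np.exp(-(k1 + k2) * t))
C = k2 * A0 / (k1 + k2) * (1 - np.exp(-(k1 + k2) * t))

print(np.allclose(B / C, k1 / k2))   # True: [B]/[C] = k1/k2 at every time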
One first order and one second order reaction
This can be the case when studying a bimolecular reaction while a simultaneous hydrolysis (which can be treated as pseudo first order) takes place: the hydrolysis complicates the study of the reaction kinetics, because some reactant is being "spent" in a parallel reaction. For example, A reacts with R to give the product C, but meanwhile the hydrolysis reaction takes away an amount of A to give B, a byproduct: A + H2O -> B and A + R -> C . The rate equations are d[B]/dt = k1[A] and d[C]/dt = k2[A][R], where k1 is the pseudo first order constant (the true hydrolysis rate constant multiplied by the effectively constant water concentration).
The integrated rate equation for the main product [C] is
[C] = [R]0 (1 − e^(−(k2[A]0/k1)(1 − e^(−k1t)))),
which is equivalent to
ln([R]0/([R]0 − [C])) = (k2[A]0/k1)(1 − e^(−k1t)).
The concentration of B is related to that of C through
[B] = −(k1/k2) ln(1 − [C]/[R]0).
The integrated equations were obtained analytically, but during the derivation it was assumed that [A] ≈ [A]0 e^(−k1t), i.e. that the consumption of A by the main reaction is negligible compared with its hydrolysis. Therefore, the previous equation for [C] can only be used for low concentrations of [C] compared to [A]0.
Stoichiometric reaction networks
The most general description of a chemical reaction network considers a number N of distinct chemical species reacting via R reactions.
The chemical equation of the j-th reaction can then be written in the generic form
s1j X1 + s2j X2 + ... + sNj XN -> r1j X1 + r2j X2 + ... + rNj XN,
which is often written in the equivalent form
Σi sij Xi -> Σi rij Xi.
Here
j is the reaction index, running from 1 to R,
Xi denotes the i-th chemical species,
kj is the rate constant of the j-th reaction and
sij and rij are the stoichiometric coefficients of reactants and products, respectively.
The rate of such a reaction can be inferred by the law of mass action
fj([X]) = kj Πi [Xi]^sij,
which denotes the flux of molecules per unit time and unit volume. Here [X] = ([X1], [X2], ..., [XN]) is the vector of concentrations. This definition includes the elementary reactions:
zero order reactions,
for which sij = 0 for all i,
first order reactions,
for which sij = 1 for a single i,
second order reactions,
for which sij = 1 for exactly two i, that is, a bimolecular reaction, or sij = 2 for a single i, that is, a dimerization reaction.
Each of these is discussed in detail below. One can define the stoichiometric matrix
Sij = rij − sij,
denoting the net extent of molecules of Xi in reaction j. The reaction rate equations can then be written in the general form
d[Xi]/dt = Σj Sij fj([X]).
This is the product of the stoichiometric matrix and the vector of reaction rate functions.
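A minimal sketch of this formalism for a hypothetical two-reaction network (the species, stoichiometry and rate constants below are illustrative assumptions, not taken from the text):

import numpy as np
from scipy.integrate import solve_ivp

# Toy network: X1 -> X2 (k = 1.0) and X2 + X2 -> X3 (k = 0.3)
s = np.array([[1, 0],    # reactant stoichiometric coefficients s_ij
              [0, 2],
              [0, 0]])
r = np.array([[0, 0],    # product stoichiometric coefficients r_ij
              [1, 0],
              [0, 1]])
S = r - s                # stoichiometric matrix S_ij = r_ij - s_ij
k = np.array([1.0, 0.3])

def flux(x):
    # law of mass action: f_j = k_j * prod_i x_i^(s_ij)
    return k * np.prod(x[:, None] ** s, axis=0)

def rhs(t, x):
    return S @ flux(x)   # d[X]/dt = S · f([X])

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0])
print(sol.y[:, -1])      # long-time concentrations of X1, X2, X3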
Particular simple solutions exist in equilibrium, d[Xi]/dt = 0 for all i, for systems composed of merely reversible reactions. In this case, the rates of the forward and backward reactions are equal, a principle called detailed balance. Detailed balance is a property of the stoichiometric matrix Sij alone and does not depend on the particular form of the rate functions fj. All other cases where detailed balance is violated are commonly studied by flux balance analysis, which has been developed to understand metabolic pathways.
General dynamics of unimolecular conversion
For a general unimolecular reaction involving the interconversion of N different species, whose concentrations at time t are denoted by x1(t) through xN(t), an analytic form for the time-evolution of the species can be found. Let the rate constant of conversion from species i to species j be denoted as kij, and construct a rate-constant matrix K whose entries are the kij.
Also, let x(t) = (x1(t), ..., xN(t)) be the vector of concentrations as a function of time.
Let J be the vector of ones.
Let I be the identity matrix.
Let diag be the function that takes a vector and constructs a diagonal matrix whose on-diagonal entries are those of the vector.
Let L−1 be the inverse Laplace transform from s to t.
Then the time-evolved state is given by
x(t) = L−1[(sI + diag(KJ) − K^T)^(−1)] x(0),
thus providing the relation between the initial conditions of the system and its state at time t.
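The inverse Laplace expression above is equivalent to a matrix exponential, which is straightforward to evaluate numerically. The sketch below uses an assumed three-species network with arbitrary rate constants; it is illustrative only:

import numpy as np
from scipy.linalg import expm

# Hypothetical 3-species unimolecular network; K[i, j] is the rate constant for i -> j
K = np.array([[0.0, 1.0, 0.0],
              [0.2, 0.0, 0.5],
              [0.0, 0.1, 0.0]])

# dx/dt = (K^T - diag(K @ ones)) x, so x(t) = expm((K^T - diag(K @ ones)) t) @ x(0),
# which is the matrix-exponential form of the inverse-Laplace expression above.
A = K.T - np.diag(K @ np.ones(3))

x0 = np.array([1.0, 0.0, 0.0])
for t in (0.1, 1.0, 10.0):
    print(t, expm(A * t) @ x0)   # concentrations at time t; the total is conserved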
See also
Michaelis–Menten kinetics
Molecularity
Petersen matrix
Reaction–diffusion system
Reactions on surfaces: rate equations for reactions where at least one of the reactants adsorbs onto a surface
Reaction progress kinetic analysis
Reaction rate
Reaction rate constant
Steady state approximation
Gillespie algorithm
Balance equation
Belousov–Zhabotinsky reaction
Lotka–Volterra equations
Chemical kinetics
References
Books cited
External links
Chemical kinetics, reaction rate, and order (needs flash player)
Reaction kinetics, examples of important rate laws (lecture with audio).
Rates of Reaction
Chemical kinetics
Chemical reaction engineering
Holism in science
Holism in science, holistic science, or methodological holism is an approach to research that emphasizes the study of complex systems. Systems are approached as coherent wholes whose component parts are best understood in context and in relation to both each other and to the whole. Holism typically stands in contrast with reductionism, which describes systems by dividing them into smaller components in order to understand them through their elemental properties.
The holism–individualism dichotomy is especially evident in conflicting interpretations of experimental findings across the social sciences, and reflects whether behavioural analysis begins at the systemic, macro-level (i.e. derived from social relations) or the component micro-level (i.e. derived from individual agents).
Overview
David Deutsch calls holism anti-reductionist and refers to the concept of thinking as the only legitimate way to think about science as a series of emergent, or higher-level, phenomena. He argues that neither approach is purely correct.
Two aspects of Holism are:
The way of doing science, sometimes called "whole to parts", which focuses on observation of the specimen within its ecosystem first before breaking down to study any part of the specimen.
The idea that the scientist is not a passive observer of an external universe but rather a participant in the system.
Proponents claim that Holistic science is naturally suited to subjects such as ecology, biology, physics and the social sciences, where complex, non-linear interactions are the norm. These are systems where emergent properties arise at the level of the whole that cannot be predicted by focusing on the parts alone, which may make mainstream, reductionist science ill-equipped to provide understanding beyond a certain level. This principle of emergence in complex systems is often captured in the phrase ′the whole is greater than the sum of its parts′. Living organisms are an example: no knowledge of all the chemical and physical properties of matter can explain or predict the functioning of living organisms. The same happens in complex social human systems, where detailed understanding of individual behaviour cannot predict the behaviour of the group, which emerges at the level of the collective. The phenomenon of emergence may impose a theoretical limit on knowledge available through reductionist methodology, arguably making complex systems natural subjects for holistic approaches.
Science journalist John Horgan has expressed this view in the book The End of Science. He wrote that a certain pervasive model within holistic science, self-organized criticality, for example, "is not really a theory at all. Like punctuated equilibrium, self-organized criticality is merely a description, one of many, of the random fluctuations, the noise, permeating nature." By the theorists' own admissions, he said, such a model "can generate neither specific predictions about nature nor meaningful insights. What good is it, then?"
One of the reasons that holistic science attracts supporters is that it seems to offer a progressive, 'socio-ecological' view of the world, but Alan Marshall's book The Unity of Nature offers evidence to the contrary, suggesting that holism in science is not 'ecological' or 'socially responsive' at all, but regressive and repressive.
Examples in various fields of science
Physical science
Agriculture
Permaculture takes a systems level approach to agriculture and land management by attempting to copy what happens in the natural world. Holistic management integrates ecology and social sciences with food production. It was originally designed as a way to reverse desertification. Organic farming is sometimes considered a holistic approach.
In physics
Richard Healey offered a modal interpretation and used it to present a model account of the puzzling correlations which portrays them as resulting from the operation of a process that violates both spatial and spatiotemporal separability. He argued that, on this interpretation, the nonseparability of the process is a consequence of physical property holism; and that the resulting account yields genuine understanding of how the correlations come about without any violation of relativity theory or Local Action. Subsequent work by Clifton, Dickson and Myrvold cast doubt on whether the account can be squared with relativity theory’s requirement of Lorentz invariance but leaves no doubt of a spatially entangled holism in the theory. Paul Davies and John Gribbin further observe that Wheeler's delayed choice experiment shows how the quantum world displays a sort of holism in time as well as space.
In the holistic approach of David Bohm, any collection of quantum objects constitutes an indivisible whole within an implicate and explicate order. Bohm said there is no scientific evidence to support the dominant view that the universe consists of a huge, finite number of minute particles, and offered instead a view of undivided wholeness: "ultimately, the entire universe (with all its 'particles', including those constituting human beings, their laboratories, observing instruments, etc.) has to be understood as a single undivided whole, in which analysis into separately and independently existent parts has no fundamental status".
Chaos and complexity
Scientific holism holds that the behavior of a system cannot be perfectly predicted, no matter how much data is available. Natural systems can produce surprisingly unexpected behavior, and it is suspected that behavior of such systems might be computationally irreducible, which means it would not be possible to even approximate the system state without a full simulation of all the events occurring in the system. Key properties of the higher level behavior of certain classes of systems may be mediated by rare "surprises" in the behavior of their elements due to the principle of interconnectivity, thus evading predictions except by brute force simulation.
Ecology
Holistic thinking can be applied to ecology, combining biological, chemical, physical, economic, ethical, and political insights. The complexity grows with the size of the area considered, so that it is necessary to narrow the scope of the analysis in other ways, for example to a specific duration of time.
Medicine
In primary care the term "holistic," has been used to describe approaches that take into account social considerations and other intuitive judgements. The term holism, and so-called approaches, appear in psychosomatic medicine in the 1970s, when they were considered one possible way to conceptualize psychosomatic phenomena. Instead of charting one-way causal links from psyche to soma, or vice versa, it aimed at a systemic model, where multiple biological, psychological and social factors were seen as interlinked.
Other, alternative approaches in the 1970s were psychosomatic and somatopsychic approaches, which concentrated on causal links only from psyche to soma, or from soma to psyche, respectively. At present it is commonplace in psychosomatic medicine to state that psyche and soma cannot really be separated for practical or theoretical purposes.
The term systems medicine first appeared in 1992 and takes an integrative approach to all of the body and environment.
Social science
Economics
Some economists use a causal holism theory in their work. That is, they view the discipline in the manner of Ludwig Wittgenstein and claim that it cannot be defined by necessary and sufficient conditions.
Education reform
The Taxonomy of Educational Objectives identifies many levels of cognitive functioning, which it is claimed may be used to create a more holistic education. In authentic assessment, rather than using computers to score multiple choice tests, a standards based assessment uses trained scorers to score open-response items using holistic scoring methods. In projects such as the North Carolina Writing Project, scorers are instructed not to count errors, or count numbers of points or supporting statements. The scorer is instead instructed to judge holistically whether "as a whole" is it more a "2" or a "3". Critics question whether such a process can be as objective as computer scoring, and the degree to which such scoring methods can result in different scores from different scorers.
Anthropology
Anthropology is holistic in two senses. First, it is concerned with all human beings across times and places, and with all dimensions of humanity (evolutionary, biophysical, sociopolitical, economic, cultural, psychological, etc.) Further, many academic programs following this approach take a "four-field" approach to anthropology that encompasses physical anthropology, archeology, linguistics, and cultural anthropology or social anthropology.
Some anthropologists disagree, and consider holism to be an artifact from 19th century social evolutionary thought that inappropriately imposes scientific positivism upon cultural anthropology.
The term "holism" is additionally used within social and cultural anthropology to refer to a methodological analysis of a society as a whole, in which component parts are treated as functionally relative to each other. One definition says: "as a methodological ideal, holism implies ... that one does not permit oneself to believe that our own established institutional boundaries (e.g. between politics, sexuality, religion, economics) necessarily may be found also in foreign societies."
Psychology of perception
A major holist movement in the early twentieth century was gestalt psychology. The claim was that perception is not an aggregation of atomic sense data but a field, in which there is a figure and a ground. Background has holistic effects on the perceived figure. Gestalt psychologists included Wolfgang Koehler, Max Wertheimer and Kurt Koffka. Koehler claimed the perceptual fields corresponded to electrical fields in the brain. Karl Lashley did experiments with gold foil pieces inserted in monkey brains purporting to show that such fields did not exist. However, many of the perceptual illusions and visual phenomena exhibited by the gestaltists were taken over (often without credit) by later perceptual psychologists. Gestalt psychology had influence on Fritz Perls' gestalt therapy, although some old-line gestaltists opposed the association with counter-cultural and New Age trends later associated with gestalt therapy. Gestalt theory was also influential on phenomenology. Aron Gurwitsch wrote on the role of the field of consciousness in gestalt theory in relation to phenomenology. Maurice Merleau-Ponty made much use of the work of holistic psychologists such as Kurt Goldstein in his "Phenomenology of Perception."
Teleological psychology
Alfred Adler believed that the individual (an integrated whole expressed through a self-consistent unity of thinking, feeling, and action, moving toward an unconscious, fictional final goal), must be understood within the larger wholes of society, from the groups to which he belongs (starting with his face-to-face relationships), to the larger whole of mankind. The recognition of our social embeddedness and the need for developing an interest in the welfare of others, as well as a respect for nature, is at the heart of Adler's philosophy of living and principles of psychotherapy.
Edgar Morin, the French philosopher and sociologist, can be considered a holist based on the transdisciplinary nature of his work.
Skeptical reception
According to skeptics, the phrase "holistic science" is often misused by pseudosciences. In the book Science and Pseudoscience in Clinical Psychology it's noted that "Proponents of pseudoscientific claims, especially in organic medicine, and mental health, often resort to the "mantra of holism" to explain away negative findings. When invoking the mantra, they typically maintain that scientific claims can be evaluated only within the context of broader claims and therefore cannot be evaluated in isolation." This is an invocation of Karl Popper's demarcation problem and in a posting to Ask a Philosopher Massimo Pigliucci clarifies Popper by positing, "Instead of thinking of science as making progress by inductive generalization (which doesn’t work because no matter how many times a given theory may have been confirmed thus far, it is always possible that new, contrary, data will emerge tomorrow), we should say that science makes progress by conclusively disconfirming theories that are, in fact, wrong."
Victor J. Stenger states that "holistic healing is associated with the rejection of classical, Newtonian physics. Yet, holistic healing retains many ideas from eighteenth and nineteenth century physics. Its proponents are blissfully unaware that these ideas, especially superluminal holism, have been rejected by modern physics as well".
Some quantum mystics interpret the wave function of quantum mechanics as a vibration in a holistic ether that pervades the universe and wave function collapse as the result of some cosmic consciousness. This is a misinterpretation of the effects of quantum entanglement as a violation of relativistic causality and quantum field theory.
See also
Antireductionism
Emergence
Holarchy
Holism
Holism in ecological anthropology
Holistic management
Holistic health
Holon (philosophy)
Interdisciplinarity
Organicism
Scientific reductionism
Systems thinking
References
Further reading
Article "Patterns of Wholeness: Introducing Holistic Science" by Brian Goodwin, from the journal Resurgence
Article "From Control to Participation" by Brian Goodwin, from the journal Resurgence
Complex systems theory
Holism
Systems theory
Van der Waals force
In molecular physics and chemistry, the van der Waals force (sometimes van der Waals' force) is a distance-dependent interaction between atoms or molecules. Unlike ionic or covalent bonds, these attractions do not result from a chemical electronic bond; they are comparatively weak and therefore more susceptible to disturbance. The van der Waals force quickly vanishes at longer distances between interacting molecules.
Named after Dutch physicist Johannes Diderik van der Waals, the van der Waals force plays a fundamental role in fields as diverse as supramolecular chemistry, structural biology, polymer science, nanotechnology, surface science, and condensed matter physics. It also underlies many properties of organic compounds and molecular solids, including their solubility in polar and non-polar media.
If no other force is present, the distance between atoms at which the force becomes repulsive rather than attractive as the atoms approach one another is called the van der Waals contact distance; this phenomenon results from the mutual repulsion between the atoms' electron clouds.
The van der Waals forces are usually described as a combination of the London dispersion forces between "instantaneously induced dipoles", Debye forces between permanent dipoles and induced dipoles, and the Keesom force between permanent molecular dipoles whose rotational orientations are dynamically averaged over time.
Definition
Van der Waals forces include attraction and repulsions between atoms, molecules, as well as other intermolecular forces. They differ from covalent and ionic bonding in that they are caused by correlations in the fluctuating polarizations of nearby particles (a consequence of quantum dynamics).
The force results from a transient shift in electron density. Specifically, the electron density may temporarily shift to be greater on one side of the nucleus. This shift generates a transient charge which a nearby atom can be attracted to or repelled by. The force is repulsive at very short distances, reaches zero at an equilibrium distance characteristic for each atom, or molecule, and becomes attractive for distances larger than the equilibrium distance. For individual atoms, the equilibrium distance is between 0.3 nm and 0.5 nm, depending on the atomic-specific diameter. When the interatomic distance is greater than 1.0 nm the force is not strong enough to be easily observed as it decreases as a function of distance r approximately with the 7th power (~r−7).
Van der Waals forces are often among the weakest chemical forces. For example, the pairwise attractive van der Waals interaction energy between H (hydrogen) atoms in different H2 molecules equals 0.06 kJ/mol (0.6 meV) and the pairwise attractive interaction energy between O (oxygen) atoms in different O2 molecules equals 0.44 kJ/mol (4.6 meV). The corresponding vaporization energies of H2 and O2 molecular liquids, which result as a sum of all van der Waals interactions per molecule in the molecular liquids, amount to 0.90 kJ/mol (9.3 meV) and 6.82 kJ/mol (70.7 meV), respectively, and thus approximately 15 times the value of the individual pairwise interatomic interactions (excluding covalent bonds).
The strength of van der Waals bonds increases with higher polarizability of the participating atoms. For example, the pairwise van der Waals interaction energy for more polarizable atoms such as S (sulfur) atoms in H2S and sulfides exceeds 1 kJ/mol (10 meV), and the pairwise interaction energy between even larger, more polarizable Xe (xenon) atoms is 2.35 kJ/mol (24.3 meV). These van der Waals interactions are up to 40 times stronger than in H2, which has only one valence electron, and they are still not strong enough to achieve an aggregate state other than gas for Xe under standard conditions. The interactions between atoms in metals can also be effectively described as van der Waals interactions and account for the observed solid aggregate state with bonding strengths comparable to covalent and ionic interactions. The strength of pairwise van der Waals type interactions is on the order of 12 kJ/mol (120 meV) for low-melting Pb (lead) and on the order of 32 kJ/mol (330 meV) for high-melting Pt (platinum), which is about one order of magnitude stronger than in Xe due to the presence of a highly polarizable free electron gas. Accordingly, van der Waals forces can range from weak to strong interactions, and support integral structural loads when multitudes of such interactions are present.
Force contributions
More broadly, intermolecular forces have several possible contributions. They are ordered from strongest to weakest:
A repulsive component resulting from the Pauli exclusion principle that prevents close contact of atoms, or the collapse of molecules.
Attractive or repulsive electrostatic interactions between permanent charges (in the case of molecular ions), dipoles (in the case of molecules without inversion centre), quadrupoles (all molecules with symmetry lower than cubic), and in general between permanent multipoles. These interactions also include hydrogen bonds, cation-pi, and pi-stacking interactions. Orientation-averaged contributions from electrostatic interactions are sometimes called the Keesom interaction or Keesom force after Willem Hendrik Keesom.
Induction (also known as polarization), which is the attractive interaction between a permanent multipole on one molecule and an induced multipole on another. This interaction is sometimes called Debye force after Peter J. W. Debye. The interactions (2) and (3) are labelled polar interactions.
Dispersion (usually named London dispersion interactions after Fritz London), which is the attractive interaction between any pair of molecules, including non-polar atoms, arising from the interactions of instantaneous multipoles.
When to apply the term "van der Waals" force depends on the text. The broadest definitions include all intermolecular forces which are electrostatic in origin, namely (2), (3) and (4). Some authors, whether or not they consider other forces to be of van der Waals type, focus on (3) and (4) as these are the components which act over the longest range.
All intermolecular/van der Waals forces are anisotropic (except those between two noble gas atoms), which means that they depend on the relative orientation of the molecules. The induction and dispersion interactions are always attractive, irrespective of orientation, but the electrostatic interaction changes sign upon rotation of the molecules. That is, the electrostatic force can be attractive or repulsive, depending on the mutual orientation of the molecules. When molecules are in thermal motion, as they are in the gas and liquid phase, the electrostatic force is averaged out to a large extent because the molecules thermally rotate and thus probe both repulsive and attractive parts of the electrostatic force. Random thermal motion can disrupt or overcome the electrostatic component of the van der Waals force but the averaging effect is much less pronounced for the attractive induction and dispersion forces.
The Lennard-Jones potential is often used as an approximate model for the isotropic part of a total (repulsion plus attraction) van der Waals force as a function of distance.
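A minimal numerical sketch of a Lennard-Jones 12-6 potential is given below; the well depth and size parameter are illustrative assumptions (sigma is roughly 0.34 nm for argon), not values from the text:

import numpy as np

epsilon = 1.0   # depth of the potential well (arbitrary energy units)
sigma = 0.34    # distance at which the potential crosses zero (nm)

def lj(r):
    # V(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6): repulsive at short range, attractive at long range
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

r_min = 2.0 ** (1.0 / 6.0) * sigma      # equilibrium separation, where the force vanishes
print(r_min, lj(r_min))                 # V(r_min) = -epsilon, the bottom of the well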
Van der Waals forces are responsible for certain cases of pressure broadening (van der Waals broadening) of spectral lines and the formation of van der Waals molecules. The London–van der Waals forces are related to the Casimir effect for dielectric media, the former being the microscopic description of the latter bulk property. The first detailed calculations of this were done in 1955 by E. M. Lifshitz. A more general theory of van der Waals forces has also been developed.
The main characteristics of van der Waals forces are:
They are weaker than normal covalent and ionic bonds.
The van der Waals forces are additive in nature, consisting of several individual interactions, and cannot be saturated.
They have no directional characteristic.
They are all short-range forces and hence only interactions between the nearest particles need to be considered (instead of all the particles). Van der Waals attraction is greater if the molecules are closer.
Van der Waals forces are independent of temperature except for dipole-dipole interactions.
In low molecular weight alcohols, the hydrogen-bonding properties of their polar hydroxyl group dominate other weaker van der Waals interactions. In higher molecular weight alcohols, the properties of the nonpolar hydrocarbon chain(s) dominate and determine their solubility.
Van der Waals forces are also responsible for the weak hydrogen bond interactions between unpolarized dipoles particularly in acid-base aqueous solution and between biological molecules.
London dispersion force
London dispersion forces, named after the German-American physicist Fritz London, are weak intermolecular forces that arise from the interactive forces between instantaneous multipoles in molecules without permanent multipole moments. In and between organic molecules the multitude of contacts can lead to a larger contribution of dispersive attraction, particularly in the presence of heteroatoms. London dispersion forces are also known as 'dispersion forces', 'London forces', or 'instantaneous dipole–induced dipole forces'. The strength of London dispersion forces is proportional to the polarizability of the molecule, which in turn depends on the total number of electrons and the area over which they are spread. Hydrocarbons display small dispersive contributions, while the presence of heteroatoms leads to increased LD forces as a function of their polarizability, e.g. in the sequence RI>RBr>RCl>RF. In the absence of solvents, weakly polarizable hydrocarbons form crystals due to dispersive forces; their sublimation heat is a measure of the dispersive interaction.
Van der Waals forces between macroscopic objects
For macroscopic bodies with known volumes and numbers of atoms or molecules per unit volume, the total van der Waals force is often computed based on the "microscopic theory" as the sum over all interacting pairs. It is necessary to integrate over the total volume of the object, which makes the calculation dependent on the objects' shapes. For example, the van der Waals interaction energy between spherical bodies of radii R1 and R2 and with smooth surfaces was approximated in 1937 by Hamaker (using London's famous 1937 equation for the dispersion interaction energy between atoms/molecules as the starting point) by:
E(z) = −(A/6)(2R1R2/(z^2 − (R1 + R2)^2) + 2R1R2/(z^2 − (R1 − R2)^2) + ln((z^2 − (R1 + R2)^2)/(z^2 − (R1 − R2)^2)))     (1)
where A is the Hamaker coefficient, which is a constant (~10^−19 − 10^−20 J) that depends on the material properties (it can be positive or negative in sign depending on the intervening medium), and z is the center-to-center distance; i.e., the sum of R1, R2, and r (the distance between the surfaces): z = R1 + R2 + r.
The van der Waals force between two spheres of constant radii (R1 and R2 are treated as parameters) is then a function of separation, since the force on an object is the negative of the derivative of the potential energy function. This yields the van der Waals force as the derivative of equation (1) with respect to z.
In the limit of close-approach, the spheres are sufficiently large compared to the distance between them; i.e., r << R1 or R2, so that equation (1) for the potential energy function simplifies to:
E(r) = −AR1R2/(6(R1 + R2)r)
with the force:
F(r) = −dE/dr = −AR1R2/(6(R1 + R2)r^2).
The van der Waals forces between objects with other geometries using the Hamaker model have been published in the literature.
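As a rough numerical illustration of the close-approach expressions above (all values are order-of-magnitude assumptions, not taken from the text):

# Illustrative evaluation of the close-approach Hamaker expressions; assumed values:
# A ~ 1e-19 J, micrometre-sized spheres, surface separation r = 1 nm.
A = 1.0e-19          # Hamaker coefficient in joules
R1 = R2 = 1.0e-6     # sphere radii in metres
r = 1.0e-9           # surface-to-surface separation in metres

E = -A * R1 * R2 / (6.0 * (R1 + R2) * r)        # potential energy in the close-approach limit
F = -A * R1 * R2 / (6.0 * (R1 + R2) * r ** 2)   # force; the negative sign indicates attraction

print(E, F)   # roughly -8e-18 J and -8e-9 N for these example values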
From the expression above, it is seen that the van der Waals force decreases with decreasing size of bodies (R). Nevertheless, the strength of inertial forces, such as gravity and drag/lift, decreases to a greater extent. Consequently, the van der Waals forces become dominant for collections of very small particles such as very fine-grained dry powders (where there are no capillary forces present) even though the force of attraction is smaller in magnitude than it is for larger particles of the same substance. Such powders are said to be cohesive, meaning they are not as easily fluidized or pneumatically conveyed as their more coarse-grained counterparts. Generally, free-flow occurs with particles greater than about 250 μm.
The van der Waals force of adhesion is also dependent on the surface topography. If there are surface asperities, or protuberances, that result in a greater total area of contact between two particles or between a particle and a wall, this increases the van der Waals force of attraction as well as the tendency for mechanical interlocking.
The microscopic theory assumes pairwise additivity. It neglects many-body interactions and retardation. A more rigorous approach accounting for these effects, called the "macroscopic theory", was developed by Lifshitz in 1956. Langbein derived a much more cumbersome "exact" expression in 1970 for spherical bodies within the framework of the Lifshitz theory while a simpler macroscopic model approximation had been made by Derjaguin as early as 1934. Expressions for the van der Waals forces for many different geometries using the Lifshitz theory have likewise been published.
Use by geckos and arthropods
The ability of geckos – which can hang on a glass surface using only one toe – to climb on sheer surfaces has been for many years mainly attributed to the van der Waals forces between these surfaces and the spatulae, or microscopic projections, which cover the hair-like setae found on their footpads.
There were efforts in 2008 to create a dry glue that exploits the effect, and success was achieved in 2011 to create an adhesive tape on similar grounds (i.e. based on van der Waals forces). In 2011, a paper was published relating the effect to both velcro-like hairs and the presence of lipids in gecko footprints.
A later study suggested that capillary adhesion might play a role, but that hypothesis has been rejected by more recent studies.
A 2014 study has shown that gecko adhesion to smooth Teflon and polydimethylsiloxane surfaces is mainly determined by electrostatic interaction (caused by contact electrification), not van der Waals or capillary forces.
Among the arthropods, some spiders have similar setae on their scopulae or scopula pads, enabling them to climb or hang upside-down from extremely smooth surfaces such as glass or porcelain.
See also
Arthropod adhesion
Cold welding
Dispersion (chemistry)
Gecko feet
Lennard-Jones potential
Noncovalent interactions
Synthetic setae
Van der Waals molecule
Van der Waals radius
Van der Waals strain
Van der Waals surface
Wringing of gauge blocks
References
Further reading
External links
An introductory description of the van der Waals force (as a sum of attractive components only)
TED Talk on biomimicry, including applications of van der Waals force.
Intermolecular forces
Force
Energy profile (chemistry)
In theoretical chemistry, an energy profile is a theoretical representation of a chemical reaction or process as a single energetic pathway as the reactants are transformed into products. This pathway runs along the reaction coordinate, which is a parametric curve that follows the pathway of the reaction and indicates its progress; thus, energy profiles are also called reaction coordinate diagrams. They are derived from the corresponding potential energy surface (PES), which is used in computational chemistry to model chemical reactions by relating the energy of a molecule(s) to its structure (within the Born–Oppenheimer approximation).
Qualitatively, the reaction coordinate diagrams (one-dimensional energy surfaces) have numerous applications. Chemists use reaction coordinate diagrams as both an analytical and pedagogical aid for rationalizing and illustrating kinetic and thermodynamic events. The purpose of energy profiles and surfaces is to provide a qualitative representation of how potential energy varies with molecular motion for a given reaction or process.
Potential energy surfaces
In simplest terms, a potential energy surface or PES is a mathematical or graphical representation of the relation between energy of a molecule and its geometry. The methods for describing the potential energy are broken down into a classical mechanics interpretation (molecular mechanics) and a quantum mechanical interpretation. In the quantum mechanical interpretation an exact expression for energy can be obtained for any molecule derived from quantum principles (although an infinite basis set may be required) but ab initio calculations/methods will often use approximations to reduce computational cost. Molecular mechanics is empirically based and potential energy is described as a function of component terms that correspond to individual potential functions such as torsion, stretches, bends, Van der Waals energies, electrostatics and cross terms. Each component potential function is fit to experimental data or properties predicted by ab initio calculations. Molecular mechanics is useful in predicting equilibrium geometries and transition states as well as relative conformational stability. As a reaction occurs the atoms of the molecules involved will generally undergo some change in spatial orientation through internal motion as well as in their electronic environment. Distortions in the geometric parameters result in a deviation from the equilibrium geometry (local energy minima). These changes in geometry of a molecule or interactions between molecules are dynamic processes which call for understanding all the forces operating within the system. Since these forces can be mathematically derived as the first derivative of the potential energy with respect to a displacement, it makes sense to map the potential energy of the system as a function of geometric parameters q1, q2, and so on. The potential energy at given values of the geometric parameters is represented as a hyper-surface (when there are more than two parameters) or a surface (when there are two parameters). Mathematically, it can be written as
E = f(q1, q2, ..., qn)
For the quantum mechanical interpretation, a PES is typically defined within the Born–Oppenheimer approximation (in order to distinguish between nuclear and electronic motion and energy) which states that the nuclei are stationary relative to the electrons. In other words, the approximation allows the kinetic energy of the nuclei (or movement of the nuclei) to be neglected and therefore the nuclei repulsion is a constant value (as static point charges) and is only considered when calculating the total energy of the system. The electronic energy is then taken to depend parametrically on the nuclear coordinates, meaning a new electronic energy must be calculated for each corresponding atomic configuration.
PES is an important concept in computational chemistry and greatly aids in geometry and transition state optimization.
Degrees of freedom
An N-atom system is defined by 3N coordinates: x, y and z for each atom. These 3N degrees of freedom can be broken down to include 3 overall translational and 3 (or 2) overall rotational degrees of freedom for a non-linear system (for a linear system). However, overall translational or rotational degrees do not affect the potential energy of the system, which only depends on its internal coordinates. Thus an N-atom system will be defined by 3N − 6 (non-linear) or 3N − 5 (linear) internal coordinates. These internal coordinates may be represented by simple stretch, bend, torsion coordinates, or symmetry-adapted linear combinations, or redundant coordinates, or normal modes coordinates, etc. For a system described by M internal coordinates, a separate potential energy function can be written with respect to each of these coordinates by holding the other parameters at a constant value, allowing the potential energy contribution from a particular molecular motion (or interaction) to be monitored while the other parameters are defined.
Consider a diatomic molecule AB, which can macroscopically be visualized as two balls (which depict the two atoms A and B) connected through a spring which depicts the bond. As this spring (or bond) is stretched or compressed, the potential energy of the ball-spring system (AB molecule) changes and this can be mapped on a 2-dimensional plot as a function of distance between A and B, i.e. bond length.
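The one-dimensional potential energy curve of such a diatomic can be sketched numerically; the Morse form and the parameter values below are illustrative assumptions, not taken from the text:

import numpy as np

D_e = 4.5    # well depth (e.g. in eV); arbitrary example value
a = 2.0      # controls the width of the well (1/angstrom); arbitrary example value
r_e = 0.9    # equilibrium bond length (angstrom); arbitrary example value

def potential(r):
    # energy as a function of the single internal coordinate, the A-B distance
    return D_e * (1.0 - np.exp(-a * (r - r_e))) ** 2

r = np.linspace(0.5, 3.0, 200)
E = potential(r)
print(r[np.argmin(E)])   # the minimum of this 1-D "surface" lies at the equilibrium bond length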
The concept can be expanded to a tri-atomic molecule such as water where we have two bonds and bond angle as variables on which the potential energy of a water molecule will depend. We can safely assume the two bonds to be equal. Thus, a PES can be drawn mapping the potential energy E of a water molecule as a function of two geometric parameters, bond length and bond angle. The lowest point on such a PES will define the equilibrium structure of a water molecule.
The same concept is applied to organic compounds like ethane, butane etc. to define their lowest energy and most stable conformations.
Characterizing a PES
The most important points on a PES are the stationary points where the surface is flat, i.e. parallel to a horizontal line corresponding to one geometric parameter, a plane corresponding to two such parameters or even a hyper-plane corresponding to more than two geometric parameters. The energy values corresponding to the transition states and the ground state of the reactants and products can be found using the potential energy function by calculating the function's critical points or the stationary points. Stationary points occur when the 1st partial derivative of the energy with respect to each geometric parameter is equal to zero.
Using analytical derivatives of the derived expression for energy, one can find and characterize a stationary point as minimum, maximum or a saddle point. The ground states are represented by local energy minima and the transition states by saddle points.
Minima represent stable or quasi-stable species, i.e. reactants and products with finite lifetime. Mathematically, a minimum point is one where the first partial derivative of the energy with respect to every internal coordinate vanishes, ∂E/∂qi = 0, while all of the second derivatives (the eigenvalues of the Hessian matrix) are positive.
A point may be a local minimum when it is lower in energy compared to its surroundings only, or a global minimum, which is the lowest energy point on the entire potential energy surface.
A saddle point represents a maximum along only one direction (that of the reaction coordinate) and is a minimum along all other directions. In other words, a saddle point represents a transition state along the reaction coordinate. Mathematically, a (first-order) saddle point occurs when the first derivatives vanish and the curvature
∂^2E/∂q^2 > 0
for all coordinates q except along the reaction coordinate, and
∂^2E/∂q^2 < 0
along the reaction coordinate.
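These conditions can be checked numerically on a model surface. The two-parameter function below is a hypothetical example (a double well plus a harmonic term, not from the text); stationary points are classified by the signs of the Hessian eigenvalues:

import numpy as np

def energy(q):
    q1, q2 = q
    return (q1 ** 2 - 1.0) ** 2 + q2 ** 2   # two minima at q1 = +/-1 and a saddle at the origin

def hessian(f, q, h=1e-4):
    # numerical second derivatives by central finite differences
    n = len(q)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            qpp = np.array(q, float); qpp[i] += h; qpp[j] += h
            qpm = np.array(q, float); qpm[i] += h; qpm[j] -= h
            qmp = np.array(q, float); qmp[i] -= h; qmp[j] += h
            qmm = np.array(q, float); qmm[i] -= h; qmm[j] -= h
            H[i, j] = (f(qpp) - f(qpm) - f(qmp) + f(qmm)) / (4 * h * h)
    return H

for point in ([1.0, 0.0], [-1.0, 0.0], [0.0, 0.0]):   # stationary points of the model surface
    eig = np.linalg.eigvalsh(hessian(energy, point))
    kind = "minimum" if np.all(eig > 0) else "saddle point (transition state)"
    print(point, kind)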
Reaction coordinate diagrams
The intrinsic reaction coordinate (IRC), derived from the potential energy surface, is a parametric curve that connects two energy minima in the direction that traverses the minimum energy barrier (or shallowest ascent) passing through one or more saddle point(s). However, in reality, if the reacting species attains enough energy it may deviate from the IRC to some extent. The energy values (points on the hyper-surface) along the reaction coordinate result in a 1-D energy surface (a line) and, when plotted against the reaction coordinate (energy vs reaction coordinate), give what is called a reaction coordinate diagram (or energy profile). Another way of visualizing an energy profile is as a cross section of the hyper-surface, or surface, along the reaction coordinate. Figure 5 shows an example of a cross section, represented by the plane, taken along the reaction coordinate and the potential energy is represented as a function or composite of two geometric variables to form a 2-D energy surface. In principle, the potential energy function can depend on N variables but since an accurate visual representation of a function of 3 or more variables cannot be produced (excluding level hypersurfaces) a 2-D surface has been shown. The points on the surface that intersect the plane are then projected onto the reaction coordinate diagram (shown on the right) to produce a 1-D slice of the surface along the IRC. The reaction coordinate is described by its parameters, which are frequently given as a composite of several geometric parameters, and can change direction as the reaction progresses so long as the smallest energy barrier (or activation energy (Ea)) is traversed. The saddle point represents the highest energy point lying on the reaction coordinate connecting the reactant and product; this is known as the transition state. A reaction coordinate diagram may also have one or more transient intermediates which are shown by high energy wells connected via a transition state peak. Any chemical structure that lasts longer than the time for typical bond vibrations (10^−13–10^−14 s) can be considered as an intermediate.
A reaction involving more than one elementary step has one or more intermediates being formed which, in turn, means there is more than one energy barrier to overcome. In other words, there is more than one transition state lying on the reaction pathway. As it is intuitive that pushing over an energy barrier or passing through a transition state peak would entail the highest energy, it becomes clear that it would be the slowest step in a reaction pathway. However, when more than one such barrier is to be crossed, it becomes important to recognize the highest barrier which will determine the rate of the reaction. This step of the reaction whose rate determines the overall rate of reaction is known as rate determining step or rate limiting step. The height of energy barrier is always measured relative to the energy of the reactant or starting material. Different possibilities have been shown in figure 6.
Reaction coordinate diagrams also give information about the equilibrium between a reactant or a product and an intermediate. If the barrier energy for going from intermediate to product is much higher than the one for reactant to intermediate transition, it can be safely concluded that a complete equilibrium is established between the reactant and intermediate. However, if the two energy barriers for reactant-to-intermediate and intermediate-to-product transformation are nearly equal, then no complete equilibrium is established and steady state approximation is invoked to derive the kinetic rate expressions for such a reaction.
Drawing a reaction coordinate diagram
Although a reaction coordinate diagram is essentially derived from a potential energy surface, it is not always feasible to draw one from a PES. A chemist draws a reaction coordinate diagram for a reaction based on the knowledge of the free energy or enthalpy change associated with the transformation, which helps them to place the reactant and product into perspective and to decide whether any intermediate is formed or not. One guideline for drawing diagrams for complex reactions is the principle of least motion, which says that a favored reaction proceeding from a reactant to an intermediate, or from one intermediate to another or to a product, is one which has the least change in nuclear position or electronic configuration. Thus, it can be said that reactions involving dramatic changes in the position of nuclei actually occur through a series of simple chemical reactions. The Hammond postulate is another tool which assists in drawing the energy of a transition state relative to a reactant, an intermediate or a product. It states that the transition state resembles the reactant, intermediate or product that it is closest in energy to, as long as the energy difference between the transition state and the adjacent structure is not too large. This postulate helps to accurately predict the shape of a reaction coordinate diagram and also gives an insight into the molecular structure at the transition state.
Kinetic and thermodynamic considerations
A chemical reaction can be defined by two important parameters- the Gibbs free energy associated with a chemical transformation and the rate of such a transformation. These parameters are independent of each other. While free energy change describes the stability of products relative to reactants, the rate of any reaction is defined by the energy of the transition state relative to the starting material. Depending on these parameters, a reaction can be favorable or unfavorable, fast or slow and reversible or irreversible, as shown in figure 8.
A favorable reaction is one in which the change in free energy ∆G° is negative (exergonic) or in other words, the free energy of product, G°product, is less than the free energy of the starting materials, G°reactant. ∆G°> 0 (endergonic) corresponds to an unfavorable reaction. The ∆G° can be written as a function of change in enthalpy (∆H°) and change in entropy (∆S°) as ∆G°= ∆H° – T∆S°. Practically, enthalpies, not free energy, are used to determine whether a reaction is favorable or unfavorable, because ∆H° is easier to measure and T∆S° is usually too small to be of any significance (for T < 100 °C). A reaction with ∆H°<0 is called exothermic reaction while one with ∆H°>0 is endothermic.
The relative stability of reactant and product does not define the feasibility of any reaction all by itself. For any reaction to proceed, the starting material must have enough energy to cross over an energy barrier. This energy barrier is known as the activation energy (∆G≠) and the rate of reaction is dependent on the height of this barrier. A low energy barrier corresponds to a fast reaction and a high energy barrier corresponds to a slow reaction.
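The quantitative link between barrier height and rate can be sketched with the Eyring equation from transition state theory; the calculation below is illustrative, with assumed barrier heights and an assumed overall free energy change, none of which come from the text:

import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34     # Planck constant, J*s
R = 8.314462           # gas constant, J/(mol*K)
T = 298.15             # temperature, K

# Eyring equation k = (kB*T/h) * exp(-dG_act/(R*T)): the rate constant drops steeply
# as the activation free energy grows (barrier values below are assumed examples)
for dG_act in (50e3, 75e3, 100e3):    # activation free energies in J/mol
    k = (kB * T / h) * np.exp(-dG_act / (R * T))
    print(dG_act / 1e3, "kJ/mol ->", k, "1/s")

# The equilibrium constant, by contrast, depends only on the overall free energy change
dG0 = -20e3                            # assumed standard free energy change, J/mol
print(np.exp(-dG0 / (R * T)))          # K = exp(-dG0/(R*T)), roughly 3.2e3 here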
A reaction is in equilibrium when the rate of forward reaction is equal to the rate of reverse reaction. Such a reaction is said to be reversible. If the starting material and product(s) are in equilibrium then their relative abundance is decided by the difference in free energy between them. In principle, all elementary steps are reversible, but in many cases the equilibrium lies so much towards the product side that the starting material is effectively no longer observable or present in sufficient concentration to have an effect on reactivity. Practically speaking, the reaction is considered to be irreversible.
While most reversible processes will have a reasonably small K of 10^3 or less, this is not a hard and fast rule, and a number of chemical processes require reversibility of even very favorable reactions. For instance, the reaction of a carboxylic acid with amines to form a salt takes place with K of 10^5–10^6, and at ordinary temperatures, this process is regarded as irreversible. Yet, with sufficient heating, the reverse reaction takes place to allow formation of the tetrahedral intermediate and, ultimately, amide and water. (For an extreme example requiring reversibility of a step with K > 10^11, see demethylation.) A reaction can also be rendered irreversible if a subsequent, faster step takes place to consume the initial product(s), or a gas is evolved in an open system. Thus, there is no value of K that serves as a "dividing line" between reversible and irreversible processes. Instead, reversibility depends on timescale, temperature, the reaction conditions, and the overall energy landscape.
When a reactant can form two different products depending on the reaction conditions, it becomes important to choose the right conditions to favor the desired product. If a reaction is carried out at relatively lower temperature, then the product formed is one lying across the smaller energy barrier. This is called kinetic control and the ratio of the products formed depends on the relative energy barriers leading to the products. Relative stabilities of the products do not matter. However, at higher temperatures the molecules have enough energy to cross over both energy barriers leading to the products. In such a case, the product ratio is determined solely by the energies of the products and energies of the barrier do not matter. This is known as thermodynamic control and it can only be achieved when the products can inter-convert and equilibrate under the reaction condition. A reaction coordinate diagram can also be used to qualitatively illustrate kinetic and thermodynamic control in a reaction.
Applications
Following are few examples on how to interpret reaction coordinate diagrams and use them in analyzing reactions.
Solvent Effect: In general, if the transition state for the rate determining step corresponds to a more charged species relative to the starting material, then increasing the polarity of the solvent will increase the rate of the reaction, since a more polar solvent will be more effective at stabilizing the transition state (ΔG‡ would decrease). If the transition state structure corresponds to a less charged species, then increasing the solvent's polarity would decrease the reaction rate, since a more polar solvent would be more effective at stabilizing the starting material (ΔGo would decrease, which in turn increases ΔG‡).
SN1 vs SN2
The SN1 and SN2 mechanisms are used as an example to demonstrate how solvent effects can be indicated in reaction coordinate diagrams.
SN1: Figure 10 shows the rate determining step for an SN1 mechanism, formation of the carbocation intermediate, and the corresponding reaction coordinate diagram. For an SN1 mechanism the transition state structure shows a partial charge density relative to the neutral ground state structure. Therefore, increasing the solvent polarity, for example from hexanes (shown as blue) to ether (shown in red), would increase the rate of the reaction. As shown in figure 9, the starting material has approximately the same stability in both solvents (therefore ΔΔGo=ΔGopolar – ΔGonon polar is small) and the transition state is stabilized more in ether, meaning ΔΔG≠ = ΔG≠polar – ΔG≠non-polar is large.
SN2: For an SN2 mechanism a strongly basic nucleophile (i.e. a charged nucleophile) is favorable. In figure 11 below the rate determining step for Williamson ether synthesis is shown. The starting material is methyl chloride and an ethoxide ion which has a localized negative charge meaning it is more stable in polar solvents. The figure shows a transition state structure as the methyl chloride undergoes nucleophilic attack. In the transition state structure the charge is distributed between the Cl and the O atoms and the more polar solvent is less effective at stabilizing the transition state structure relative to the starting materials. In other words, the energy difference between the polar and non-polar solvent is greater for the ground state (for the starting material) than in the transition state.
Catalysts: There are two types of catalysts, positive and negative. Positive catalysts increase the reaction rate and negative catalysts (or inhibitors) slow down a reaction and possibly cause the reaction not occur at all. The purpose of a catalyst is to alter the activation energy. Figure 12 illustrates the purpose of a catalyst in that only the activation energy is changed and not the relative thermodynamic stabilities, shown in the figure as ΔH, of the products and reactants. This means that a catalyst will not alter the equilibrium concentrations of the products and reactants but will only allow the reaction to reach equilibrium faster. Figure 13 shows the catalyzed pathway occurring in multiple steps which is a more realistic depiction of a catalyzed process. The new catalyzed pathway can occur through the same mechanism as the uncatalyzed reaction or through an alternate mechanism. An enzyme is a biological catalyst that increases the rate for many vital biochemical reactions. Figure 13 shows a common way to illustrate the effect of an enzyme on a given biochemical reaction.
See also
Gibbs free energy
Enthalpy
Entropy
Computational chemistry
Molecular mechanics
Born–Oppenheimer approximation
References
Computational chemistry
Petrology
Petrology is the branch of geology that studies rocks, their mineralogy, composition, texture, structure and the conditions under which they form. Petrology has three subdivisions: igneous, metamorphic, and sedimentary petrology. Igneous and metamorphic petrology are commonly taught together because both make heavy use of chemistry, chemical methods, and phase diagrams. Sedimentary petrology is commonly taught together with stratigraphy because it deals with the processes that form sedimentary rock. Modern sedimentary petrology is making increasing use of chemistry.
Background
Lithology was once approximately synonymous with petrography, but in current usage, lithology focuses on macroscopic hand-sample or outcrop-scale description of rocks while petrography is the speciality that deals with microscopic details.
In the petroleum industry, lithology, or more specifically mud logging, is the graphic representation of geological formations being drilled through and drawn on a log called a mud log. As the cuttings are circulated out of the borehole, they are sampled, examined (typically under a 10× microscope) and tested chemically when needed.
Methodology
Petrology utilizes the fields of mineralogy, petrography, optical mineralogy, and chemical analysis to describe the composition and texture of rocks. Petrologists also include the principles of geochemistry and geophysics through the study of geochemical trends and cycles and the use of thermodynamic data and experiments in order to better understand the origins of rocks.
Branches
There are three branches of petrology, corresponding to the three types of rocks: igneous, metamorphic, and sedimentary, and another dealing with experimental techniques:
Igneous petrology focuses on the composition and texture of igneous rocks (rocks such as granite or basalt which have crystallized from molten rock or magma). Igneous rocks include volcanic and plutonic rocks.
Sedimentary petrology focuses on the composition and texture of sedimentary rocks (rocks such as sandstone, shale, or limestone which consist of pieces or particles derived from other rocks or biological or chemical deposits, and are usually bound together in a matrix of finer material).
Metamorphic petrology focuses on the composition and texture of metamorphic rocks (rocks such as slate, marble, gneiss, or schist) which have undergone chemical, mineralogical or textural changes due to the effects of pressure, temperature, or both). The original rock, prior to change (called the protolith), may be of any sort.
Experimental petrology employs high-pressure, high-temperature apparatus to investigate the geochemistry and phase relations of natural or synthetic materials at elevated pressures and temperatures. Experiments are particularly useful for investigating rocks of the lower crust and upper mantle that rarely survive the journey to the surface in pristine condition. They are also one of the prime sources of information about completely inaccessible rocks, such as those in the Earth's lower mantle and in the mantles of the other terrestrial planets and the Moon. The work of experimental petrologists has laid a foundation on which modern understanding of igneous and metamorphic processes has been built.
See also
Ore
Pedology
References
Citations
Sources
Best, Myron G. (2002), Igneous and Metamorphic Petrology (Blackwell Publishing).
Blatt, Harvey; Tracy, Robert J.; Owens, Brent (2005), Petrology: Igneous, Sedimentary, and Metamorphic (W. H. Freeman).
Dietrich, Richard Vincent; Skinner, Brian J. (2009), Gems, Granites, and Gravels: Knowing and Using Rocks and Minerals (Cambridge University Press).
Fei, Yingwei; Bertka, Constance M.; Mysen, Bjorn O. (eds.) (1999), Mantle Petrology: Field Observations and High-Pressure Experimentation (Houston TX: Geochemical Society).
Philpotts, Anthony; Ague, Jay (2009), Principles of Igneous and Metamorphic Petrology (Cambridge University Press).
Robb, L. (2005), Introduction to Ore-Forming Processes (Blackwell Science).
External links
Atlas of Igneous and metamorphic rocks, minerals, and textures – Geology Department, University of North Carolina
Metamorphic Petrology Database (MetPetDB) – Department of Earth and Environmental Sciences, Rensselaer Polytechnic Institute
Petrological Database of the Ocean Floor (PetDB) - Center for International Earth Science Information Network, Columbia University
Petroleum geology
Oilfield terminology | 0.785631 | 0.987066 | 0.77547 |
Process theory | A process theory is a system of ideas that explains how an entity changes and develops. Process theories are often contrasted with variance theories, that is, systems of ideas that explain the variance in a dependent variable based on one or more independent variables. While process theories focus on how something happens, variance theories focus on why something happens. Examples of process theories include evolution by natural selection, continental drift and the nitrogen cycle.
Process theory archetypes
Process theories come in four common archetypes. Evolutionary process theories explain change in a population through variation, selection and retention—much like biological evolution. In a dialectic process theory, “stability and change are explained by reference to the balance of power between opposing entities” (p. 517). In a teleological process theory, an agent “constructs an envisioned end state, takes action to reach it and monitors the progress” (p. 518). In a lifecycle process theory, “the trajectory to the final end state is prefigured and requires a particular historical sequence of events” (p. 515); that is, change always conforms to the same series of activities, stages, phases, like a caterpillar transforming into a butterfly.
Applications and examples
Process theories are important in management and software engineering. Process theories are used to explain how decisions are made, how software is designed, and how software processes are improved.
Motivation theories can be classified broadly into two different perspectives: Content and Process theories.
Content theories deal with “what” motivates people and it is concerned with individual needs and goals. Maslow, Alderfer, Herzberg and McClelland studied motivation from a “content” perspective.
Process theories deal with the “process” of motivation and are concerned with “how” motivation occurs. Vroom, Porter & Lawler, Adams and Locke studied motivation from a “process” perspective.
Process theories are also used in education, psychology, geology and many other fields; however, they are not always called "process theories".
See also
Interactions of actors theory
Process-oriented psychology
Process philosophy
Process architecture
Notes
References
A Brief Introduction to Motivation Theory
Management science | 0.803682 | 0.964869 | 0.775448 |
Quantitative research | Quantitative research is a research strategy that focuses on quantifying the collection and analysis of data. It is formed from a deductive approach where emphasis is placed on the testing of theory, shaped by empiricist and positivist philosophies.
Associated with the natural, applied, formal, and social sciences, this research strategy promotes the objective empirical investigation of observable phenomena to test and understand relationships. This is done through a range of quantifying methods and techniques, reflecting its broad utilization as a research strategy across differing academic disciplines.
There are several situations where quantitative research may not be the most appropriate or effective method to use:
1. When exploring in-depth or complex topics.
2. When studying subjective experiences and personal opinions.
3. When conducting exploratory research.
4. When studying sensitive or controversial topics.
The objective of quantitative research is to develop and employ mathematical models, theories, and hypotheses pertaining to phenomena. The process of measurement is central to quantitative research because it provides the fundamental connection between empirical observation and mathematical expression of quantitative relationships.
Quantitative data is any data that is in numerical form such as statistics, percentages, etc. The researcher analyses the data with the help of statistics and hopes the numbers will yield an unbiased result that can be generalized to some larger population. Qualitative research, on the other hand, inquires deeply into specific experiences, with the intention of describing and exploring meaning through text, narrative, or visual-based data, by developing themes exclusive to that set of participants.
Quantitative research is widely used in psychology, economics, demography, sociology, marketing, community health, health & human development, gender studies, and political science; and less frequently in anthropology and history. Research in mathematical sciences, such as physics, is also "quantitative" by definition, though this use of the term differs in context. In the social sciences, the term relates to empirical methods originating in both philosophical positivism and the history of statistics, in contrast with qualitative research methods.
Qualitative research produces information only on the particular cases studied, and any more general conclusions are only hypotheses. Quantitative methods can be used to verify which of such hypotheses are true. A comprehensive analysis of 1274 articles published in the top two American sociology journals between 1935 and 2005 found that roughly two-thirds of these articles used quantitative methods.
Overview
Quantitative research is generally closely affiliated with ideas from 'the scientific method', which can include:
The generation of models, theories and hypotheses
The development of instruments and methods for measurement
Experimental control and manipulation of variables
Collection of empirical data
Modeling and analysis of data
Quantitative research is often contrasted with qualitative research, which purports to be focused more on discovering underlying meanings and patterns of relationships, including classifications of types of phenomena and entities, in a manner that does not involve mathematical models. Approaches to quantitative psychology were first modeled on quantitative approaches in the physical sciences by Gustav Fechner in his work on psychophysics, which built on the work of Ernst Heinrich Weber. Although a distinction is commonly drawn between qualitative and quantitative aspects of scientific investigation, it has been argued that the two go hand in hand. For example, based on analysis of the history of science, Kuhn concludes that "large amounts of qualitative work have usually been prerequisite to fruitful quantification in the physical sciences". Qualitative research is often used to gain a general sense of phenomena and to form theories that can be tested using further quantitative research. For instance, in the social sciences qualitative research methods are often used to gain better understanding of such things as intentionality (from the speech response of the researchee) and meaning (why did this person/group say something and what did it mean to them?) (Kieron Yeoman).
Although quantitative investigation of the world has existed since people first began to record events or objects that had been counted, the modern idea of quantitative processes has its roots in Auguste Comte's positivist framework. Positivism emphasized the use of the scientific method through observation to empirically test hypotheses explaining and predicting what, where, why, how, and when phenomena occurred. Positivist scholars like Comte believed that only scientific methods, rather than earlier spiritual explanations of human behavior, could advance knowledge.
Quantitative methods are an integral component of the five angles of analysis fostered by the data percolation methodology, which also includes qualitative methods, reviews of the literature (including scholarly), interviews with experts and computer simulation, and which forms an extension of data triangulation.
Quantitative methods have limitations. These studies do not provide reasoning behind participants' responses, they often do not reach underrepresented populations, and they may span long periods in order to collect the data.
Use of statistics
Statistics is the most widely used branch of mathematics in quantitative research outside of the physical sciences, and also finds applications within the physical sciences, such as in statistical mechanics. Statistical methods are used extensively within fields such as economics, social sciences and biology. Quantitative research using statistical methods starts with the collection of data, based on the hypothesis or theory. Usually a big sample of data is collected – this would require verification, validation and recording before the analysis can take place. Software packages such as SPSS and R are typically used for this purpose. Causal relationships are studied by manipulating factors thought to influence the phenomena of interest while controlling other variables relevant to the experimental outcomes. In the field of health, for example, researchers might measure and study the relationship between dietary intake and measurable physiological effects such as weight loss, controlling for other key variables such as exercise. Quantitatively based opinion surveys are widely used in the media, with statistics such as the proportion of respondents in favor of a position commonly reported. In opinion surveys, respondents are asked a set of structured questions and their responses are tabulated. In the field of climate science, researchers compile and compare statistics such as temperature or atmospheric concentrations of carbon dioxide.
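As an illustration of this workflow, the following Python sketch fits a multiple linear regression to simulated data, estimating the effect of dietary intake on weight loss while controlling for exercise, as in the health example above. All variable names and numerical values are invented for illustration and are not taken from any study cited here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
exercise = rng.normal(5, 2, n)           # weekly exercise hours (simulated)
diet = rng.normal(2000, 300, n)          # daily calorie intake (simulated)
# Simulated "truth": weight loss depends on both variables, plus noise.
weight_loss = 0.8 * exercise - 0.004 * diet + rng.normal(0, 1, n)

# Ordinary least squares for the model weight_loss ~ diet + exercise,
# i.e. the effect of diet is estimated while controlling for exercise.
X = np.column_stack([np.ones(n), diet, exercise])
coef, *_ = np.linalg.lstsq(X, weight_loss, rcond=None)
print("intercept, diet coefficient, exercise coefficient:", coef)
```

In practice such models would typically be fitted and checked in a statistical package such as R or SPSS, as noted above; the sketch only shows the underlying estimation step.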
Empirical relationships and associations are also frequently studied by using some form of general linear model, non-linear model, or by using factor analysis. A fundamental principle in quantitative research is that correlation does not imply causation, although some such as Clive Granger suggest that a series of correlations can imply a degree of causality. This principle follows from the fact that it is always possible a spurious relationship exists for variables between which covariance is found in some degree. Associations may be examined between any combination of continuous and categorical variables using methods of statistics. Other data analytical approaches for studying causal relations can be performed with Necessary Condition Analysis (NCA), which outlines must-have conditions for the studied outcome variable.
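A minimal simulated example of why correlation does not imply causation: two variables that share a hidden common cause can be strongly correlated even though neither influences the other. The data and variable names below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
confounder = rng.normal(size=n)                      # hidden common cause
x = confounder + rng.normal(scale=0.5, size=n)       # neither x nor y causes the other
y = confounder + rng.normal(scale=0.5, size=n)

print("raw correlation:", round(np.corrcoef(x, y)[0, 1], 2))

# Removing the confounder's influence (via residuals of simple regressions)
# makes most of the apparent association disappear.
x_resid = x - np.polyval(np.polyfit(confounder, x, 1), confounder)
y_resid = y - np.polyval(np.polyfit(confounder, y, 1), confounder)
print("partial correlation:", round(np.corrcoef(x_resid, y_resid)[0, 1], 2))
```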
Measurement
Views regarding the role of measurement in quantitative research are somewhat divergent. Measurement is often regarded as being only a means by which observations are expressed numerically in order to investigate causal relations or associations. However, it has been argued that measurement often plays a more important role in quantitative research. For example, Kuhn argued that, within quantitative research, the results obtained can prove to be anomalous with respect to accepted theory, and that such anomalies are especially interesting when they arise during the process of obtaining data, as seen below:
When measurement departs from theory, it is likely to yield mere numbers, and their very neutrality makes them particularly sterile as a source of remedial suggestions. But numbers register the departure from theory with an authority and finesse that no qualitative technique can duplicate, and that departure is often enough to start a search (Kuhn, 1961, p. 180).
In classical physics, the theory and definitions which underpin measurement are generally deterministic in nature. In contrast, probabilistic measurement models known as the Rasch model and Item response theory models are generally employed in the social sciences. Psychometrics is the field of study concerned with the theory and technique for measuring social and psychological attributes and phenomena. This field is central to much quantitative research that is undertaken within the social sciences.
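For concreteness, the dichotomous Rasch model mentioned above expresses the probability of a correct response as a logistic function of the difference between a person's ability and an item's difficulty. The short sketch below simply evaluates that expression for illustrative parameter values.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Probability of a correct response for person ability theta and item difficulty b."""
    return math.exp(theta - b) / (1.0 + math.exp(theta - b))

print(round(rasch_probability(0.0, 0.0), 2))   # ability equals difficulty -> 0.5
print(round(rasch_probability(1.0, 0.0), 2))   # higher ability -> about 0.73
```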
Quantitative research may involve the use of proxies as stand-ins for other quantities that cannot be directly measured. Tree-ring width, for example, is considered a reliable proxy of ambient environmental conditions such as the warmth of growing seasons or amount of rainfall. Although scientists cannot directly measure the temperature of past years, tree-ring width and other climate proxies have been used to provide a semi-quantitative record of average temperature in the Northern Hemisphere back to 1000 A.D. When used in this way, the proxy record (tree ring width, say) only reconstructs a certain amount of the variance of the original record. The proxy may be calibrated (for example, during the period of the instrumental record) to determine how much variation is captured, including whether both short and long term variation is revealed. In the case of tree-ring width, different species in different places may show more or less sensitivity to, say, rainfall or temperature: when reconstructing a temperature record there is considerable skill in selecting proxies that are well correlated with the desired variable.
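The following sketch illustrates the calibration idea with simulated data: a synthetic tree-ring record is regressed against a synthetic instrumental temperature series over an overlap period, and the fitted relation is then used to reconstruct the earlier part of the record. All series and numbers are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1000, 2001)
true_temp = 0.0005 * (years - 1000) + 0.3 * np.sin(years / 30.0) + rng.normal(0, 0.1, years.size)
ring_width = 1.0 + 0.5 * true_temp + rng.normal(0, 0.05, years.size)   # proxy tracks temperature

overlap = years >= 1900                          # "instrumental" period used for calibration
slope, intercept = np.polyfit(ring_width[overlap], true_temp[overlap], 1)
reconstructed = slope * ring_width + intercept   # apply the calibration to the whole record

r2 = np.corrcoef(reconstructed[~overlap], true_temp[~overlap])[0, 1] ** 2
print("share of pre-1900 temperature variance captured by the proxy:", round(r2, 2))
```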
Relationship with qualitative methods
In most physical and biological sciences, the use of either quantitative or qualitative methods is uncontroversial, and each is used when appropriate. In the social sciences, particularly in sociology, social anthropology and psychology, the use of one or other type of method can be a matter of controversy and even ideology, with particular schools of thought within each discipline favouring one type of method and pouring scorn on to the other. The majority tendency throughout the history of social science, however, is to use eclectic approaches by combining both methods. Qualitative methods might be used to understand the meaning of the conclusions produced by quantitative methods. Using quantitative methods, it is possible to give precise and testable expression to qualitative ideas. This combination of quantitative and qualitative data gathering is often referred to as mixed-methods research.
Examples
Research that consists of the percentage amounts of all the elements that make up Earth's atmosphere.
Survey that concludes that the average patient has to wait two hours in the waiting room of a certain doctor before being selected.
An experiment in which group x was given two tablets of aspirin a day and group y was given two tablets of a placebo a day where each participant is randomly assigned to one or other of the groups. The numerical factors such as two tablets, percent of elements and the time of waiting make the situations and results quantitative.
In economics, quantitative research is used to analyze business enterprises and the factors contributing to the diversity of organizational structures and the relationships of firms with labour, capital and product markets.
See also
Antipositivism
Case study research
Econometrics
Falsifiability
Market research
Positivism
Qualitative research
Quantitative marketing research
Quantitative psychology
Quantification (science)
Observational study
Sociological positivism
Statistical survey
Statistics
References | 0.776896 | 0.998116 | 0.775432 |
Osteology | Osteology is the scientific study of bones, practised by osteologists. A subdiscipline of anatomy, anthropology, and paleontology, osteology is the detailed study of the structure of bones, skeletal elements, teeth, microbone morphology, function, disease, pathology, the process of ossification from cartilaginous molds, and the resistance and hardness of bones (biophysics).
Osteologists frequently work in the public and private sector as consultants for museums, scientists for research laboratories, scientists for medical investigations and/or for companies producing osteological reproductions in an academic context.
Osteology and osteologists should not be confused with the pseudoscientific practice of osteopathy and its practitioners, osteopaths.
Methods
A typical analysis will include:
an inventory of the skeletal elements present
a dental inventory
aging data, based upon epiphyseal fusion and dental eruption (for subadults) and deterioration of the pubic symphysis or sternal end of ribs (for adults)
stature and other metric data
ancestry
non-metric traits
pathology and/or cultural modifications
Applications
Osteological approaches are frequently applied to investigations in disciplines such as vertebrate paleontology, zoology, forensic science, physical anthropology, and archaeology. It has been shown that osteological characters have greater consistency with molecular phylogenies than non-osteological (soft tissue) characters, implying that they may be more reliable in reconstructing evolutionary history. Osteology has a place in research on topics including:
Ancient warfare
Activity patterns
Criminal investigations
Demography
Developmental biology
Diet
Disease
Genetics of early populations
Fossil assemblages
Health
Human migration
Identification of unknown remains
Physique
Social inequality
War crimes
Human osteology and forensic anthropology
Examination of human osteology is often used in forensic anthropology, which usually seeks to identify the age at death, sex, growth, and development of human remains, and can be used in a biocultural context.
There are four factors leading to variation in skeletal anatomy: ontogeny (or growth), sexual dimorphism, geographic variation and individual, or idiosyncratic, variation.
Osteology can also be used to assess an individual's ancestry, race or ethnicity. Historically, humans were typically grouped into three now-outdated race groups: caucasoids, mongoloids and negroids. However, this classification system is growing less reliable as interancestral marriages increase and the distinguishing skeletal markers become less defined. Determination of ancestry is controversial, but it can give an understandable label to define the ancestry of an unidentified body or skeleton.
Crossrail Project
One example of osteology and its various applications is illustrated by the Crossrail Project. An endeavor by the city of London to expand their railway system inadvertently uncovered 25 human skeletons at Charterhouse Square in 2013. Although archaeological excavation of the skeletons temporarily halted further advances in the railway system, they have given way to new, possibly revolutionary, discoveries in the field of archaeology.
These 25 skeletal remains, along with many more that were found in further searches, are believed to be from the mass graves dug to bury the millions of victims of the Black Death in the 14th century. Archaeologists and forensic scientists have used osteology to examine the condition of the skeletal remains, to help piece together the reason why the Black Death had such a detrimental effect on the European population. It was discovered that most of the population was in generally poor health to begin with. Through extensive analysis of the bones, it was discovered that many of the inhabitants of Great Britain were plagued with rickets, anemia, and malnutrition. There has also been frequent evidence that much of the population had traces of broken bones from frequent fighting and hard labor.
This archaeological project has been named the Crossrail Project. Archaeologists will continue to excavate and search for remains to help uncover missing pieces of history. These advances in our understanding of the past will be improved by the study of other skeletons buried in the same area.
See also
Osteometric points
Museum of Osteology
Forensic anthropology
Paleontology
Bone Clones
Notes
References
Bass, W. M. 2005. Human Osteology: A Laboratory and Field Manual. 5th Edition. Columbia: Missouri Archaeological Society.
Buikstra, J. E. and Ubelaker, D. H. (eds.) 1994. Standards for Data Collection from Human Skeletal Remains. Arkansas Archeological Survey Research Series No. 44.
Cox, M and Mays, S. (eds.) 2000. Human Osteology in Archaeology and Forensic Science. London: Greenwich Medical Media.
Ijpma, F.F.A.; ten Duis, H.J.; van Guilik, T.M. (2012). "A cornerstone of orthopaedic education". Bones & Joint 360, Vol. 1, No. 6.
Ekezie, Jervas. "Bone, the Frame of Human Classification: The Core of Anthropology". Department of Anatomy, School of Basic Medical Sciences, Federal University of Technology, PMB 1526, Owerri, Nigeria. Volume 2, Issue 1.
External links
British Association for Biological Anthropology and Osteoarchaeology
Museum of Osteology
Biological anthropology | 0.784674 | 0.98816 | 0.775384 |
Chemical engineering | Chemical engineering is an engineering field which deals with the study of the operation and design of chemical plants as well as methods of improving production. Chemical engineers develop economical commercial processes to convert raw materials into useful products. Chemical engineering uses principles of chemistry, physics, mathematics, biology, and economics to efficiently use, produce, design, transport and transform energy and materials. The work of chemical engineers can range from the utilization of nanotechnology and nanomaterials in the laboratory to large-scale industrial processes that convert chemicals, raw materials, living cells, microorganisms, and energy into useful forms and products. Chemical engineers are involved in many aspects of plant design and operation, including safety and hazard assessments, process design and analysis, modeling, control engineering, chemical reaction engineering, nuclear engineering, biological engineering, construction specification, and operating instructions.
Chemical engineers typically hold a degree in Chemical Engineering or Process Engineering. Practicing engineers may have professional certification and be accredited members of a professional body. Such bodies include the Institution of Chemical Engineers (IChemE) or the American Institute of Chemical Engineers (AIChE). A degree in chemical engineering is directly linked with all of the other engineering disciplines, to various extents.
Etymology
A 1996 article cites James F. Donnelly for mentioning an 1839 reference to chemical engineering in relation to the production of sulfuric acid. In the same paper, however, George E. Davis, an English consultant, was credited with having coined the term. Davis also tried to found a Society of Chemical Engineering, but instead, it was named the Society of Chemical Industry (1881), with Davis as its first secretary. The History of Science in United States: An Encyclopedia puts the use of the term around 1890. "Chemical engineering", describing the use of mechanical equipment in the chemical industry, became common vocabulary in England after 1850. By 1910, the profession, "chemical engineer," was already in common use in Britain and the United States.
History
New concepts and innovations
In the 1940s, it became clear that unit operations alone were insufficient for developing chemical reactors. While the predominance of unit operations in chemical engineering courses in Britain and the United States continued until the 1960s, transport phenomena started to receive greater focus. Along with other novel concepts, such as process systems engineering (PSE), a "second paradigm" was defined. Transport phenomena gave an analytical approach to chemical engineering while PSE focused on its synthetic elements, such as those of a control system and process design. Developments in chemical engineering before and after World War II were mainly incited by the petrochemical industry; however, advances in other fields were made as well. Advancements in biochemical engineering in the 1940s, for example, found application in the pharmaceutical industry, and allowed for the mass production of various antibiotics, including penicillin and streptomycin. Meanwhile, progress in polymer science in the 1950s paved the way for the "age of plastics".
Safety and hazard developments
Concerns regarding large-scale chemical manufacturing facilities' safety and environmental impact were also raised during this period. Silent Spring, published in 1962, alerted its readers to the harmful effects of DDT, a potent insecticide. The 1974 Flixborough disaster in the United Kingdom resulted in 28 deaths, as well as damage to a chemical plant and three nearby villages. The 1984 Bhopal disaster in India resulted in almost 4,000 deaths. These and other incidents affected the reputation of the trade, as industrial safety and environmental protection were given more focus. In response, the IChemE required safety to be part of every degree course that it accredited after 1982. By the 1970s, legislation and monitoring agencies were instituted in various countries, such as France, Germany, and the United States. In time, the systematic application of safety principles to chemical and other process plants began to be considered a specific discipline, known as process safety.
Recent progress
Advancements in computer science found applications for designing and managing plants, simplifying calculations and drawings that previously had to be done manually. The completion of the Human Genome Project is also seen as a major development, not only advancing chemical engineering but genetic engineering and genomics as well. Chemical engineering principles were used to produce DNA sequences in large quantities.
Concepts
Chemical engineering involves the application of several principles. Key concepts are presented below.
Plant design and construction
Chemical engineering design concerns the creation of plans, specifications, and economic analyses for pilot plants, new plants, or plant modifications. Design engineers often work in a consulting role, designing plants to meet clients' needs. Design is limited by several factors, including funding, government regulations, and safety standards. These constraints dictate a plant's choice of process, materials, and equipment.
Plant construction is coordinated by project engineers and project managers, depending on the size of the investment. A chemical engineer may do the job of project engineer full-time or part of the time, which requires additional training and job skills, or may act as a consultant to the project group. In the USA, baccalaureate chemical engineering programs accredited by ABET do not usually stress project engineering education, which can be obtained through specialized training, electives, or graduate programs. Project engineering jobs are among the largest employers of chemical engineers.
Process design and analysis
A unit operation is a physical step in an individual chemical engineering process. Unit operations (such as crystallization, filtration, drying and evaporation) are used to prepare reactants, purify and separate products, recycle unspent reactants, and control energy transfer in reactors. On the other hand, a unit process is the chemical equivalent of a unit operation. Along with unit operations, unit processes constitute a process operation. Unit processes (such as nitration, hydrogenation, and oxidation) involve the conversion of materials by biochemical, thermochemical and other means. Chemical engineers responsible for these are called process engineers.
Process design requires the definition of equipment types and sizes as well as how they are connected and the materials of construction. Details are often printed on a Process Flow Diagram which is used to control the capacity and reliability of a new or existing chemical factory.
Education for chemical engineers in the first college degree (typically 3 or 4 years of study) stresses the principles and practices of process design. The same skills are used in existing chemical plants to evaluate the efficiency and make recommendations for improvements.
Transport phenomena
Modeling and analysis of transport phenomena is essential for many industrial applications. Transport phenomena involve fluid dynamics, heat transfer and mass transfer, which are governed mainly by momentum transfer, energy transfer and transport of chemical species, respectively. Models often involve separate considerations for macroscopic, microscopic and molecular level phenomena. Modeling of transport phenomena, therefore, requires an understanding of applied mathematics.
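As a minimal example of the kind of model referred to above, the sketch below solves one-dimensional transient heat conduction with an explicit finite-difference scheme. The geometry, material property and boundary values are arbitrary illustrative choices, not values from any particular process.

```python
import numpy as np

alpha = 1e-5                      # thermal diffusivity, m^2/s (assumed constant)
length, nx = 0.1, 51              # rod length in metres and number of grid points
dx = length / (nx - 1)
dt = 0.4 * dx * dx / alpha        # time step within the explicit-scheme stability limit

T = np.full(nx, 300.0)            # initial temperature field, K
T[0], T[-1] = 400.0, 300.0        # fixed boundary temperatures

for _ in range(2000):             # forward-time, centred-space update of dT/dt = alpha * d2T/dx2
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print("midpoint temperature after 2000 steps:", round(T[nx // 2], 1), "K")
```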
Applications and practice
Chemical engineers develop economic ways of using materials and energy. Chemical engineers use chemistry and engineering to turn raw materials into usable products, such as medicine, petrochemicals, and plastics on a large-scale, industrial setting. They are also involved in waste management and research. Both applied and research facets could make extensive use of computers.
Chemical engineers may be involved in industry or university research where they are tasked with designing and performing experiments, by scaling up theoretical chemical reactions, to create better and safer methods for production, pollution control, and resource conservation. They may be involved in designing and constructing plants as a project engineer. Chemical engineers serving as project engineers use their knowledge in selecting optimal production methods and plant equipment to minimize costs and maximize safety and profitability. After plant construction, chemical engineering project managers may be involved in equipment upgrades, troubleshooting, and daily operations in either full-time or consulting roles.
See also
Related topics
Education for Chemical Engineers
English Engineering units
List of chemical engineering societies
List of chemical engineers
List of chemical process simulators
Outline of chemical engineering
Related fields and concepts
Biochemical engineering
Bioinformatics
Biological engineering
Biomedical engineering
Biomolecular engineering
Bioprocess engineering
Biotechnology
Biotechnology engineering
Catalysts
Ceramics
Chemical process modeling
Chemical reactor
Chemical technologist
Chemical weapons
Cheminformatics
Computational fluid dynamics
Corrosion engineering
Cost estimation
Earthquake engineering
Electrochemistry
Electrochemical engineering
Environmental engineering
Fischer Tropsch synthesis
Fluid dynamics
Food engineering
Fuel cell
Gasification
Heat transfer
Industrial catalysts
Industrial chemistry
Industrial gas
Mass transfer
Materials science
Metallurgy
Microfluidics
Mineral processing
Molecular engineering
Nanotechnology
Natural environment
Natural gas processing
Nuclear reprocessing
Oil exploration
Oil refinery
Paper engineering
Petroleum engineering
Pharmaceutical engineering
Plastics engineering
Polymers
Process control
Process design
Process development
Process engineering
Process miniaturization
Process safety
Semiconductor device fabrication
Separation processes (see also: separation of mixture)
Crystallization processes
Distillation processes
Membrane processes
Syngas production
Textile engineering
Thermodynamics
Transport phenomena
Unit operations
Water technology
Associations
American Institute of Chemical Engineers
Chemical Institute of Canada
European Federation of Chemical Engineering
Indian Institute of Chemical Engineers
Institution of Chemical Engineers
National Organization for the Professional Advancement of Black Chemists and Chemical Engineers
References
Bibliography
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Process engineering
Engineering disciplines | 0.77677 | 0.998211 | 0.77538 |
Protein quaternary structure | Protein quaternary structure is the fourth (and highest) classification level of protein structure. Protein quaternary structure refers to the structure of proteins which are themselves composed of two or more smaller protein chains (also referred to as subunits). Protein quaternary structure describes the number and arrangement of multiple folded protein subunits in a multi-subunit complex. It includes organizations from simple dimers to large homooligomers and complexes with defined or variable numbers of subunits. In contrast to the first three levels of protein structure, not all proteins will have a quaternary structure since some proteins function as single units. Protein quaternary structure can also refer to biomolecular complexes of proteins with nucleic acids and other cofactors.
Description and examples
Many proteins are actually assemblies of multiple polypeptide chains. The quaternary structure refers to the number and arrangement of the protein subunits with respect to one another. Examples of proteins with quaternary structure include hemoglobin, DNA polymerase, ribosomes, antibodies, and ion channels.
Enzymes composed of subunits with diverse functions are sometimes called holoenzymes, in which some parts may be known as regulatory subunits and the functional core is known as the catalytic subunit. Other assemblies referred to instead as multiprotein complexes also possess quaternary structure. Examples include nucleosomes and microtubules. Changes in quaternary structure can occur through conformational changes within individual subunits or through reorientation of the subunits relative to each other. It is through such changes, which underlie cooperativity and allostery in "multimeric" enzymes, that many proteins undergo regulation and perform their physiological function.
The above definition follows a classical approach to biochemistry, established at times when the distinction between a protein and a functional, proteinaceous unit was difficult to elucidate. More recently, people refer to protein–protein interaction when discussing quaternary structure of proteins and consider all assemblies of proteins as protein complexes.
Nomenclature
The number of subunits in an oligomeric complex is described using names that end in -mer (Greek for "part, subunit"). Formal and Greco-Latinate names are generally used for the first ten types and can be used for up to twenty subunits, whereas higher order complexes are usually described by the number of subunits, followed by -meric.
The smallest unit forming a homo-oligomer, i.e. one protein chain or subunit, is designated as a monomer, subunit or protomer. The latter term was originally devised to specify the smallest unit of hetero-oligomeric proteins, but is also applied to homo-oligomeric proteins in current literature. The subunits usually arrange in cyclic symmetry to form closed point group symmetries.
Although complexes higher than octamers are rarely observed for most proteins, there are some important exceptions. Viral capsids are often composed of multiples of 60 proteins. Several molecular machines are also found in the cell, such as the proteasome (four heptameric rings = 28 subunits), the transcription complex and the spliceosome. The ribosome is probably the largest molecular machine, and is composed of many RNA and protein molecules.
In some cases, proteins form complexes that then assemble into even larger complexes. In such cases, one uses the nomenclature, e.g., "dimer of dimers" or "trimer of dimers". This may suggest that the complex might dissociate into smaller sub-complexes before dissociating into monomers. This usually implies that the complex consists of different oligomerisation interfaces. For example, a tetrameric protein may have one four-fold rotation axis, i.e. point group symmetry 4 or C4. In this case the four interfaces between the subunits are identical. It may also have point group symmetry 222 or D2. This tetramer has different interfaces and the tetramer can dissociate into two identical homodimers. Tetramers of 222 symmetry are "dimer of dimers". Hexamers of 32 point group symmetry are "trimer of dimers" or "dimer of trimers". Thus, the nomenclature "dimer of dimers" is used to specify the point group symmetry or arrangement of the oligomer, independent of information relating to its dissociation properties.
Another distinction often made when referring to oligomers is whether they are homomeric or heteromeric, referring to whether the smaller protein subunits that come together to make the protein complex are the same (homomeric) or different (heteromeric) from each other. For example, two identical protein monomers would come together to form a homo-dimer, whereas two different protein monomers would create a hetero-dimer.
Structure Determination
Protein quaternary structure can be determined using a variety of experimental techniques that require a sample of protein in a variety of experimental conditions. The experiments often provide an estimate of the mass of the native protein and, together with knowledge of the masses and/or stoichiometry of the subunits, allow the quaternary structure to be predicted with a given accuracy. It is not always possible to obtain a precise determination of the subunit composition for a variety of reasons.
The number of subunits in a protein complex can often be determined by measuring the hydrodynamic molecular volume or mass of the intact complex, which requires native solution conditions. For folded proteins, the mass can be inferred from its volume using the partial specific volume of 0.73 ml/g. However, volume measurements are less certain than mass measurements, since unfolded proteins appear to have a much larger volume than folded proteins; additional experiments are required to determine whether a protein is unfolded or has formed an oligomer.
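A worked example of the volume-to-mass conversion described above, using the quoted partial specific volume of 0.73 ml/g; the measured hydrodynamic volume used here is an arbitrary illustrative value.

```python
AVOGADRO = 6.022e23              # molecules per mole
V_BAR = 0.73                     # partial specific volume, cm^3 per gram (folded proteins)

volume_nm3 = 61.0                # assumed hydrodynamic volume of a single molecule, nm^3
volume_cm3 = volume_nm3 * 1e-21  # 1 nm^3 = 1e-21 cm^3
molar_mass = volume_cm3 / V_BAR * AVOGADRO   # grams per mole
print(f"estimated molar mass: {molar_mass / 1000:.0f} kDa")   # about 50 kDa
```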
Common techniques used to study protein quaternary structure
Ultracentrifugation
Surface-induced dissociation mass spectrometry
Coimmunoprecipitation
FRET
Nuclear Magnetic Resonance (NMR)
Direct mass measurement of intact complexes
Sedimentation-equilibrium analytical ultracentrifugation
Electrospray mass spectrometry
Mass Spectrometric Immunoassay MSIA
Direct size measurement of intact complexes
Static light scattering
Size exclusion chromatography (requires calibration)
Dual polarisation interferometry
Indirect size measurement of intact complexes
Sedimentation-velocity analytical ultracentrifugation (measures the translational diffusion constant)
Dynamic light scattering (measures the translational diffusion constant)
Pulsed-gradient protein nuclear magnetic resonance (measures the translational diffusion constant)
Fluorescence polarization (measures the rotational diffusion constant)
Dielectric relaxation (measures the rotational diffusion constant)
Dual polarisation interferometry (measures the size and the density of the complex)
Methods that measure the mass or volume under unfolding conditions (such as MALDI-TOF mass spectrometry and SDS-PAGE) are generally not useful, since non-native conditions usually cause the complex to dissociate into monomers. However, these may sometimes be applicable; for example, the experimenter may apply SDS-PAGE after first treating the intact complex with chemical cross-link reagents.
Structure Prediction
Some bioinformatics methods have been developed for predicting the quaternary structural attributes of proteins based on their sequence information by using various modes of pseudo amino acid composition.
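As a simplified illustration, the sketch below computes the plain amino-acid-composition vector, which corresponds to the first 20 components of a pseudo amino acid composition descriptor; the additional sequence-order correlation terms that define full pseudo amino acid composition are omitted, and the example sequence is arbitrary.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(sequence):
    """Fraction of each standard amino acid in the sequence (20 features)."""
    sequence = sequence.upper()
    return [sequence.count(aa) / len(sequence) for aa in AMINO_ACIDS]

features = composition("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")   # arbitrary example sequence
print(len(features), round(sum(features), 2))                  # 20 features summing to ~1.0
```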
Protein folding prediction programs used to predict protein tertiary structure have also been expanding to better predict protein quaternary structure. One such development is AlphaFold-Multimer built upon the AlphaFold model for predicting protein tertiary structure.
Role in Cell Signaling
Protein quaternary structure also plays an important role in certain cell signaling pathways. The G-protein coupled receptor pathway involves a heterotrimeric protein known as a G-protein. G-proteins contain three distinct subunits known as the G-alpha, G-beta, and G-gamma subunits. When the G-protein is activated, it binds to the G-protein coupled receptor protein and the cell signaling pathway is initiated. Another example is the receptor tyrosine kinase (RTK) pathway, which is initiated by the dimerization of two receptor tyrosine kinase monomers. When the dimer is formed, the two kinases can phosphorylate each other and initiate a cell signaling pathway.
Protein–protein interactions
Proteins are capable of forming very tight but also only transient complexes. For example, ribonuclease inhibitor binds to ribonuclease A with a roughly 20 fM dissociation constant. Other proteins have evolved to bind specifically to unusual moieties on another protein, e.g., biotin groups (avidin), phosphorylated tyrosines (SH2 domains) or proline-rich segments (SH3 domains). Protein–protein interactions can be engineered to favor certain oligomerization states.
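To put the quoted femtomolar affinity in thermodynamic terms, the standard relation ΔG° = RT ln Kd converts a dissociation constant into a binding free energy; the short calculation below assumes 298 K and a 1 M standard state.

```python
import math

R = 8.314         # gas constant, J/(mol*K)
T = 298.15        # temperature, K (assumed)
Kd = 20e-15       # dissociation constant, mol/L

dG = R * T * math.log(Kd)          # standard binding free energy relative to a 1 M standard state
print(f"{dG / 1000:.0f} kJ/mol")   # roughly -78 kJ/mol
```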
Intragenic complementation
When multiple copies of a polypeptide encoded by a gene form a quaternary complex, this protein structure is referred to as a multimer. When a multimer is formed from polypeptides produced by two different mutant alleles of a particular gene, the mixed multimer may exhibit greater functional activity than the unmixed multimers formed by each of the mutants alone. In such a case, the phenomenon is referred to as intragenic complementation (also called inter-allelic complementation). Intragenic complementation appears to be common and has been studied in many different genes in a variety of organisms, including the fungi Neurospora crassa, Saccharomyces cerevisiae and Schizosaccharomyces pombe; the bacterium Salmonella typhimurium; the virus bacteriophage T4; an RNA virus; and humans. The intermolecular forces likely responsible for self-recognition and multimer formation were discussed by Jehle.
Assembly
Direct interaction of two nascent proteins emerging from nearby ribosomes appears to be a general mechanism for oligomer formation. Hundreds of protein oligomers were identified that assemble in human cells by such an interaction. The most prevalent form of interaction was between the N-terminal regions of the interacting proteins. Dimer formation appears to be able to occur independently of dedicated assembly machines.
See also
Structural biology
Nucleic acid quaternary structure
Multiprotein complex
Biomolecular complex
Oligomers
Notes
References
External links
The Macromolecular Structure Database (MSD) at the European Bioinformatics Institute (EBI) – Serves a list of the Probable Quaternary Structure (PQS) for every protein in the Protein Data Bank (PDB).
PQS server – PQS has not been updated since August 2009
PISA – The Protein Interfaces, Surfaces and Assemblies server at the MSD.
EPPIC – Evolutionary Protein–Protein Interface Classification: evolutionary assessment of interfaces in crystal structures
3D complex – Structural classification of protein complexes
Proteopedia – Proteopedia Home Page The collaborative, 3D encyclopedia of proteins and other molecules.
PDBWiki – PDBWiki Home Page – a website for community annotation of PDB structures.
ProtCID – ProtCID—a database of similar protein–protein interfaces in crystal structures of homologous proteins.
Protein structure 4
Stereochemistry | 0.78143 | 0.992245 | 0.77537 |
Bioinorganic chemistry | Bioinorganic chemistry is a field that examines the role of metals in biology. Bioinorganic chemistry includes the study of both natural phenomena such as the behavior of metalloproteins as well as artificially introduced metals, including those that are non-essential, in medicine and toxicology. Many biological processes such as respiration depend upon molecules that fall within the realm of inorganic chemistry. The discipline also includes the study of inorganic models or mimics that imitate the behaviour of metalloproteins.
As a mix of biochemistry and inorganic chemistry, bioinorganic chemistry is important in elucidating the implications of electron-transfer proteins, substrate binding and activation, and atom- and group-transfer chemistry, as well as metal properties in biological chemistry. The successful development of truly interdisciplinary work is necessary to advance bioinorganic chemistry.
Composition of living organisms
About 99% of a mammal's mass consists of the elements carbon, nitrogen, calcium, sodium, chlorine, potassium, hydrogen, phosphorus, oxygen and sulfur. The organic compounds (proteins, lipids and carbohydrates) contain the majority of the carbon and nitrogen, and most of the oxygen and hydrogen is present as water. The entire collection of metal-containing biomolecules in a cell is called the metallome.
History
Paul Ehrlich used organoarsenic compounds (“arsenicals”) for the treatment of syphilis, demonstrating the relevance of metals, or at least metalloids, to medicine; this theme blossomed with Rosenberg's discovery of the anti-cancer activity of cisplatin (cis-PtCl2(NH3)2). The first protein ever crystallized (see James B. Sumner) was urease, later shown to contain nickel at its active site. Vitamin B12, the cure for pernicious anemia, was shown crystallographically by Dorothy Crowfoot Hodgkin to contain cobalt within a corrin macrocycle.
Themes in bioinorganic chemistry
Several distinct systems are identifiable in bioinorganic chemistry. Major areas include:
Metal ion transport and storage
A diverse collection of transporters (e.g. the ion pump NaKATPase), vacuoles, storage proteins (e.g. ferritin), and small molecules (e.g. siderophores) are employed to control the concentration and bio-availability of metal ions in living organisms. Crucially, many essential metals are not readily accessible to downstream proteins owing to low solubility in aqueous solutions or scarcity in the cellular environment. Organisms have developed a number of strategies for collecting and transporting such elements while limiting their cytotoxicity.
Enzymology
Many reactions in life sciences involve water and metal ions are often at the catalytic centers (active sites) for these enzymes, i.e. these are metalloproteins. Often the reacting water is a ligand (see metal aquo complex). Examples of hydrolase enzymes are carbonic anhydrase, metallophosphatases, and metalloproteinases. Bioinorganic chemists seek to understand and replicate the function of these metalloproteins.
Metal-containing electron transfer proteins are also common. They can be organized into three major classes: iron–sulfur proteins (such as rubredoxins, ferredoxins, and Rieske proteins), blue copper proteins, and cytochromes. These electron transport proteins are complementary to the non-metal electron transporters nicotinamide adenine dinucleotide (NAD) and flavin adenine dinucleotide (FAD). The nitrogen cycle makes extensive use of metals for the redox interconversions.
Toxicity
Several metal ions are toxic to humans and other animals. The bioinorganic chemistry of lead in the context of its toxicity has been reviewed.
Oxygen transport and activation proteins
Aerobic life makes extensive use of metals such as iron, copper, and manganese. Heme is utilized by red blood cells in the form of hemoglobin for oxygen transport and is perhaps the most recognized metal system in biology. Other oxygen transport systems include myoglobin, hemocyanin, and hemerythrin. Oxidases and oxygenases are metal systems found throughout nature that take advantage of oxygen to carry out important reactions such as energy generation in cytochrome c oxidase or small molecule oxidation in cytochrome P450 oxidases or methane monooxygenase. Some metalloproteins are designed to protect a biological system from the potentially harmful effects of oxygen and other reactive oxygen-containing molecules such as hydrogen peroxide. These systems include peroxidases, catalases, and superoxide dismutases. A complementary metalloprotein to those that react with oxygen is the oxygen evolving complex present in plants. This system is part of the complex protein machinery that produces oxygen as plants perform photosynthesis.
Bioorganometallic chemistry
Bioorganometallic systems feature metal-carbon bonds as structural elements or as intermediates. Bioorganometallic enzymes and proteins include the hydrogenases, FeMoco in nitrogenase, and methylcobalamin; these are naturally occurring organometallic compounds. This area is more focused on the utilization of metals by unicellular organisms. Bioorganometallic compounds are significant in environmental chemistry.
Metals in medicine
A number of drugs contain metals. This theme relies on the study of the design and mechanism of action of metal-containing pharmaceuticals, and of compounds that interact with endogenous metal ions in enzyme active sites. The most widely used anti-cancer drug is cisplatin. MRI contrast agents commonly contain gadolinium. Lithium carbonate has been used to treat the manic phase of bipolar disorder. Gold antiarthritic drugs, e.g. auranofin, have been commercialized. Carbon monoxide-releasing molecules are metal complexes that have been developed to suppress inflammation by releasing small amounts of carbon monoxide. The cardiovascular and neuronal importance of nitric oxide has been examined, including the enzyme nitric oxide synthase. (See also: nitrogen assimilation.) In addition, transition metal complexes based on triazolopyrimidines have been tested against several parasite strains.
Environmental chemistry
Environmental chemistry traditionally emphasizes the interaction of heavy metals with organisms. Methylmercury caused the major poisoning disaster known as Minamata disease. Arsenic poisoning is a widespread problem owing largely to arsenic contamination of groundwater, which affects many millions of people in developing countries. The metabolism of mercury- and arsenic-containing compounds involves cobalamin-based enzymes.
Biomineralization
Biomineralization is the process by which living organisms produce minerals, often to harden or stiffen existing tissues. Such tissues are called mineralized tissues. Examples include silicates in algae and diatoms, carbonates in invertebrates, and calcium phosphates and carbonates in vertebrates. Other examples include copper, iron and gold deposits involving bacteria. Biologically-formed minerals often have special uses such as magnetic sensors in magnetotactic bacteria (Fe3O4), gravity sensing devices (CaCO3, CaSO4, BaSO4) and iron storage and mobilization (Fe2O3•H2O in the protein ferritin). Because extracellular iron is strongly involved in inducing calcification, its control is essential in developing shells; the protein ferritin plays an important role in controlling the distribution of iron.
Types of inorganic substances in biology
Alkali and alkaline earth metals
The abundant inorganic elements act as ionic electrolytes. The most important ions are sodium, potassium, calcium, magnesium, chloride, phosphate, and bicarbonate. The maintenance of precise gradients across cell membranes maintains osmotic pressure and pH. Ions are also critical for nerves and muscles, as action potentials in these tissues are produced by the exchange of electrolytes between the extracellular fluid and the cytosol. Electrolytes enter and leave cells through proteins in the cell membrane called ion channels. For example, muscle contraction depends upon the movement of calcium, sodium and potassium through ion channels in the cell membrane and T-tubules.
Transition metals
The transition metals are usually present as trace elements in organisms, with zinc and iron being most abundant. These metals are used as protein cofactors and signalling molecules. Many are essential for the activity of enzymes such as catalase and oxygen-carrier proteins such as hemoglobin. These cofactors are tightly bound to a specific protein; although enzyme cofactors can be modified during catalysis, cofactors always return to their original state after catalysis has taken place. The metal micronutrients are taken up into organisms by specific transporters and bound to storage proteins such as ferritin or metallothionein when not being used. Cobalt is essential for the functioning of vitamin B12.
Main group compounds
Many other elements aside from metals are bio-active. Sulfur and phosphorus are required for all life. Phosphorus almost exclusively exists as phosphate and its various esters. Sulfur exists in a variety of oxidation states, ranging from sulfate (SO42−) down to sulfide (S2−). Selenium is a trace element involved in proteins that are antioxidants. Cadmium is important because of its toxicity.
See also
Physiology
Cofactor
Iron metabolism
References
Literature
Heinz-Bernhard Kraatz (editor), Nils Metzler-Nolte (editor), Concepts and Models in Bioinorganic Chemistry, John Wiley and Sons, 2006,
Ivano Bertini, Harry B. Gray, Edward I. Stiefel, Joan Selverstone Valentine, Biological Inorganic Chemistry, University Science Books, 2007,
Wolfgang Kaim, Brigitte Schwederski "Bioinorganic Chemistry: Inorganic Elements in the Chemistry of Life." John Wiley and Sons, 1994,
Rosette M. Roat-Malone, Bioinorganic Chemistry : A Short Course, Wiley-Interscience, 2002,
J.J.R. Fraústo da Silva and R.J.P. Williams, The biological chemistry of the elements: The inorganic chemistry of life, 2nd Edition, Oxford University Press, 2001,
Lawrence Que, Jr., ed., Physical Methods in Bioinorganic Chemistry, University Science Books, 2000,
External links
The Society of Biological Inorganic Chemistry (SBIC)'s home page
The French Bioinorganic Chemistry Society
Glossary of Terms in Bioinorganic Chemistry
Metal Coordination Groups in Proteins by Marjorie Harding
European Bioinformatics Institute
MetalPDB: A database of metal sites in biomolecular structures
Biochemistry
Inorganic chemistry
Medicinal inorganic chemistry | 0.796692 | 0.97317 | 0.775317 |
Assembly theory | Assembly theory is a framework developed to quantify the complexity of molecules and objects by assessing the minimal number of steps required to assemble them from fundamental building blocks. Proposed by chemist Lee Cronin and his team, the theory assigns an assembly index to molecules, which serves as a measurable indicator of their structural complexity. This approach allows for experimental verification and has applications in understanding selection processes, evolution, and the identification of biosignatures in astrobiology.
Background
The hypothesis was proposed by chemist Leroy Cronin in 2017 and developed by the team he leads at the University of Glasgow, then extended in collaboration with a team at Arizona State University led by astrobiologist Sara Imari Walker, in a paper released in 2021.
Assembly theory conceptualizes objects not as point particles, but as entities defined by their possible formation histories. This allows objects to show evidence of selection, within well-defined boundaries of individuals or selected units. Combinatorial objects are important in chemistry, biology and technology, in which most objects of interest (if not all) are hierarchical modular structures. For any object an 'assembly space' can be defined as all recursively assembled pathways that produce this object. The 'assembly index' is the number of steps on a shortest path producing the object. For such shortest path, the assembly space captures the minimal memory, in terms of the minimal number of operations necessary to construct an object based on objects that could have existed in its past.
The assembly is defined as "the total amount of selection necessary to produce an ensemble of observed objects"; for an ensemble containing $N_T$ objects in total, of which $N$ are unique, the assembly $A$ is defined to be

$A = \sum_{i=1}^{N} e^{a_i} \left( \frac{n_i - 1}{N_T} \right),$

where $n_i$ denotes the 'copy number', the number of occurrences of objects of type $i$ having assembly index $a_i$.
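A minimal Python sketch of the ensemble assembly quantity as reconstructed above; the example ensemble of (assembly index, copy number) pairs is invented for illustration.

```python
import math

def ensemble_assembly(objects):
    """objects: list of (assembly_index, copy_number) pairs, one per unique object type."""
    total_count = sum(n for _, n in objects)                        # N_T: all copies counted
    return sum(math.exp(a) * (n - 1) / total_count for a, n in objects)

example = [(7, 5), (3, 20), (10, 1)]   # invented (a_i, n_i) values for three object types
print(round(ensemble_assembly(example), 2))
```

Note that an object present in only one copy contributes nothing, reflecting the idea that a single occurrence provides no evidence of selection.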
For example, the word 'abracadabra' contains 5 unique letters (a, b, c, d and r) and is 11 symbols long. It can be assembled from its constituents as a + b --> ab + r --> abr + a --> abra + c --> abrac + a --> abraca + d --> abracad + abra --> abracadabra, because 'abra' was already constructed at an earlier stage. Because this requires at least 7 steps, the assembly index is 7. The word ‘abracadrbaa’, of the same length, for example, has no repeats so has an assembly index of 10.
Take two binary strings as another example, the first being 01010101. Both have the same length of 8 bits and the same Hamming weight of 4. However, the assembly index of the first string is 3 ("01" is assembled, joined with itself into "0101", and joined again with "0101" taken from the assembly pool), while the assembly index of the second string is larger, since in this case only "01" can be taken from the assembly pool.
In general, for an object O made of K subunits, the assembly index a is bounded by log2(K) ≤ a ≤ K − 1.
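The construction described above can be checked mechanically. The sketch below replays the 'abracadabra' pathway from the text, confirming that each join uses only pieces already in the pool and that the 7 steps respect the stated bounds; it verifies a given pathway rather than searching for a true shortest one.

```python
import math

def count_joins(pathway, basis):
    """Replay an assembly pathway; every join must use strings that are
    already available (basis symbols or previously built objects)."""
    pool = set(basis)
    for left, right in pathway:
        assert left in pool and right in pool, (left, right)
        pool.add(left + right)
    return len(pathway)

steps = [("a", "b"), ("ab", "r"), ("abr", "a"), ("abra", "c"),
         ("abrac", "a"), ("abraca", "d"), ("abracad", "abra")]
k = len("abracadabra")                 # 11 subunits
a = count_joins(steps, set("abcdr"))   # 7 joining operations
assert math.log2(k) <= a <= k - 1      # bound quoted in the text
print(a)
```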
Once a pathway to assemble an object is discovered, the object can be reproduced. The rate of discovery of new objects defines an expansion rate and, with it, a discovery timescale.
To include copy number in the dynamics of assembly theory, a production timescale is also defined in terms of the production rate of a specific object.
Defining these two distinct timescales, one for the initial discovery of an object and one for making copies of existing objects, makes it possible to determine the regimes in which selection is possible.
While other approaches can provide a measure of complexity, the researchers claim that assembly theory's molecular assembly number is the first to be measurable experimentally. Molecules with a high assembly index are very unlikely to form abiotically, and the probability of abiotic formation goes down as the value of the assembly index increases. The assembly index of a molecule can be obtained directly via spectroscopic methods. This method could be implemented in a fragmentation tandem mass spectrometry instrument to search for biosignatures.
The theory was extended to map chemical space with molecular assembly trees, demonstrating the application of this approach in drug discovery, in particular in research of new opiate-like molecules by connecting the "assembly pool elements through the same pattern in which they were disconnected from their parent compound(s)".
It is difficult to identify chemical signatures that are unique to life. For example, the Viking lander biological experiments detected molecules that could be explained by either living or natural non-living processes.
It appears that only living samples can produce assembly index measurements above ~15. However, in 2021, Cronin first explained how polyoxometalates could, in theory, have large assembly indices (>15) due to autocatalysis.
Critical views
Chemist Steven A. Benner has publicly criticized various aspects of assembly theory. Benner argues that it is transparently false that non-living systems, absent any intervention by life, cannot contain complex molecules, and that people would be misled into thinking that, because the work was published in Nature journals after peer review, these papers must be right.
A paper published in the Journal of Molecular Evolution refers to Hector Zenil's blog post "that identifies no less than eight fallacies of assembly theory". The paper also refers to the video essay by the same author, stating that it "summarizes these fallacies, and highlights conceptual/methodological limitations, and the pervasive failure by the proponents of assembly theory to acknowledge relevant previous work in the field of complexity science". The paper concludes that "the hype around Assembly Theory reflects rather unfavorably both on the authors and the scientific publication system in general". The author concludes that what "assembly theory really does is to detect and quantify bias caused by higher-level constraints in some well-defined rule-based worlds"; one "can use assembly theory to check whether something unexpected is going on in a very broad range of computational model worlds or universes".
The group led by Hector Zenil, a former senior researcher and faculty member at Oxford and Cambridge and currently an associate professor in biomedical engineering at King's College London, is cited as having reproduced the results of Assembly Theory with traditional statistical algorithms.
Another paper, authored by a group of chemists and planetary scientists including an author affiliated with NASA and published in the Journal of the Royal Society Interface, demonstrated that abiotic chemical processes have the potential to form crystal structures of great complexity, with values exceeding the proposed abiotic/biotic divide of MA index = 15. They conclude that "while the proposal of a biosignature based on a molecular assembly index of 15 is an intriguing and testable concept, the contention that only life can generate molecular structures with MA index ≥ 15 is in error".
The paper also cites the papers and posts of Hector Zenil as questioning whether a single scalar value like the assembly index can be employed to adequately discriminate between living and nonliving systems, and pointing out the noticeable similarities of the Assembly Theory approach to uncited prior efforts to distinguish biotic from abiotic molecular compounds.
In particular, the paper mentions that Zenil and colleagues "may also have anticipated key conclusions of Assembly Theory by exploring connections among causal memory, selection, and evolution".
See also
List of interstellar and circumstellar molecules
Word problem for groups
References
Further reading
Extraterrestrial life
Molecular biology techniques
Theories
Title 21 of the Code of Federal Regulations
Title 21 is the portion of the Code of Federal Regulations that governs food and drugs within the United States for the Food and Drug Administration (FDA), the Drug Enforcement Administration (DEA), and the Office of National Drug Control Policy (ONDCP).
It is divided into three chapters:
Chapter I — Food and Drug Administration
Chapter II — Drug Enforcement Administration
Chapter III — Office of National Drug Control Policy
Chapter I
Most of the Chapter I regulations are based on the Federal Food, Drug, and Cosmetic Act.
Notable sections:
11 — electronic records and electronic signature related
50 Protection of human subjects in clinical trials
54 Financial disclosure by clinical investigators
56 Institutional review boards that oversee clinical trials
58 Good laboratory practices (GLP) for nonclinical studies
The 100 series are regulations pertaining to food:
101, especially 101.9 — Nutrition facts label related
(c)(2)(ii) — Requirement to include trans fat values
(c)(8)(iv) — Vitamin and mineral values
106-107 requirements for infant formula
110 et seq. cGMPs for food products
111 et seq. cGMPs for dietary supplements
170 food additives
190 dietary supplements
The 200 and 300 series are regulations pertaining to pharmaceuticals:
202-203 Drug advertising and marketing
210 et seq. cGMPs for pharmaceuticals
310 et seq. Requirements for new drugs
328 et seq. Specific requirements for over-the-counter (OTC) drugs.
The 500 series are regulations for animal feeds and animal medications:
510 et seq. New animal drugs
556 Tolerances for residues of drugs in food animals
The 600 series covers biological products (e.g. vaccines, blood):
601 Licensing under section 351 of the Public Health Service Act
606 et seq. cGMPs for human blood and blood products
The 700 series includes the limited regulations on cosmetics:
701 Labeling requirements
The 800 series are for medical devices:
803 Medical device reporting
814 Premarket approval of medical devices
820 et seq. Quality system regulations (analogous to cGMP, but structured like ISO)
860 et seq. Listing of specific approved devices and how they are classified
The 900 series covers mammography quality requirements enforced by CDRH.
The 1000 series covers radiation-emitting devices (e.g. cell phones, lasers, x-ray generators); requirements are enforced by the Center for Devices and Radiological Health. It also covers the FDA citizen petition.
The 1100 series includes updated rules deeming items that statutorily come under the definition of "tobacco product" to be subject to the Federal Food, Drug, and Cosmetic Act as amended by the Tobacco Control Act. The items affected include E-cigarettes, Hookah tobacco, and pipe tobacco.
The 1200 series consists of rules primarily based in laws other than the Food, Drug, and Cosmetic Act:
1240 Rules promulgated under 361 of the Public Health Service Act on interstate control of communicable disease, such as:
Requirements for pasteurization of milk
Interstate shipment of turtles as pets.
Interstate shipment of African rodents that may carry monkeypox.
Sanitation on interstate conveyances (i.e. airplanes and ships)
1271 Requirements for human cells, tissues, and cellular and tissue-based products (i.e. the cGTPs).
Chapter II
Notable sections:
1308 — Schedules of controlled substances
1308.03(a) — Administrative Controlled Substances Code Number
1308.11 — List of Schedule I drugs
1308.12 — List of Schedule II drugs
1308.13 — List of Schedule III drugs
1308.14 — List of Schedule IV drugs
1308.15 — List of Schedule V drugs
See also
Title 21 of the United States Code - Food and Drugs
EudraLex (medicinal products in the European Union)
References
External links
Title 21 of the Code of Federal Regulations (current "Electronic CFR")
21
Drug control law in the United States
Food law
Regulation of medical devices
Glycation
Glycation (non-enzymatic glycosylation) is the covalent attachment of a sugar to a protein, lipid or nucleic acid molecule. Typical sugars that participate in glycation are glucose, fructose, and their derivatives. Glycation is the non-enzymatic process responsible for many (e.g. micro and macrovascular) complications in diabetes mellitus and is implicated in some diseases and in aging. Glycation end products are believed to play a causative role in the vascular complications of diabetes mellitus.
In contrast with glycation, glycosylation is the enzyme-mediated ATP-dependent attachment of sugars to a protein or lipid. Glycosylation occurs at defined sites on the target molecule. It is a common form of post-translational modification of proteins and is required for the functioning of the mature protein.
Biochemistry
Glycations occur mainly in the bloodstream to a small proportion of the absorbed simple sugars: glucose, fructose, and galactose. It appears that fructose has approximately ten times the glycation activity of glucose, the primary body fuel. Glycation can occur through Amadori reactions, Schiff base reactions, and Maillard reactions, which lead to advanced glycation end products (AGEs).
Biomedical implications
Red blood cells have a consistent lifespan of 120 days and are accessible for measurement of glycated hemoglobin. Measurement of HbA1c—the predominant form of glycated hemoglobin—enables medium-term blood sugar control to be monitored in diabetes.
Some glycation products are implicated in many age-related chronic diseases, including cardiovascular diseases (the endothelium, fibrinogen, and collagen are damaged) and Alzheimer's disease (amyloid proteins are side-products of the reactions progressing to AGEs).
Long-lived cells (such as nerves and different types of brain cell), long-lasting proteins (such as crystallins of the lens and cornea), and DNA can sustain substantial glycation over time. Damage by glycation results in stiffening of the collagen in the blood vessel walls, leading to high blood pressure, especially in diabetes. Glycations also cause weakening of the collagen in the blood vessel walls, which may lead to micro- or macro-aneurysm; this may cause strokes if in the brain.
DNA glycation
The term DNA glycation applies to DNA damage induced by reactive carbonyls (principally methylglyoxal and glyoxal) that are present in cells as by-products of sugar metabolism. Glycation of DNA can cause mutation, breaks in DNA and cytotoxicity. Guanine in DNA is the base most susceptible to glycation. Glycated DNA, as a form of damage, appears to be as frequent as the more well studied oxidative DNA damage. A protein, designated DJ-1 (also known as PARK7), is employed in the repair of glycated DNA bases in humans, and homologs of this protein have also been identified in bacteria.
See also
Advanced glycation end-product
Alagebrium
Fructose
Galactose
Glucose
Glycosylation
Glycated hemoglobin
List of aging processes
Additional reading
References
Ageing processes
Carbohydrates
Post-translational modification
Protein metabolism
Plant physiology
Plant physiology is a subdiscipline of botany concerned with the functioning, or physiology, of plants.
Plant physiologists study fundamental processes of plants, such as photosynthesis, respiration, plant nutrition, plant hormone functions, tropisms, nastic movements, photoperiodism, photomorphogenesis, circadian rhythms, environmental stress physiology, seed germination, dormancy and stomata function and transpiration. Plant physiology interacts with the fields of plant morphology (structure of plants), plant ecology (interactions with the environment), phytochemistry (biochemistry of plants), cell biology, genetics, biophysics and molecular biology.
Aims
The field of plant physiology includes the study of all the internal activities of plants—those chemical and physical processes associated with life as they occur in plants. This includes study at many levels of scale of size and time. At the smallest scale are molecular interactions of photosynthesis and internal diffusion of water, minerals, and nutrients. At the largest scale are the processes of plant development, seasonality, dormancy, and reproductive control. Major subdisciplines of plant physiology include phytochemistry (the study of the biochemistry of plants) and phytopathology (the study of disease in plants). The scope of plant physiology as a discipline may be divided into several major areas of research.
First, the study of phytochemistry (plant chemistry) is included within the domain of plant physiology. To function and survive, plants produce a wide array of chemical compounds not found in other organisms. Photosynthesis requires a large array of pigments, enzymes, and other compounds to function. Because they cannot move, plants must also defend themselves chemically from herbivores, pathogens and competition from other plants. They do this by producing toxins and foul-tasting or smelling chemicals. Other compounds defend plants against disease, permit survival during drought, and prepare plants for dormancy, while other compounds are used to attract pollinators or herbivores to spread ripe seeds.
Secondly, plant physiology includes the study of biological and chemical processes of individual plant cells. Plant cells have a number of features that distinguish them from cells of animals, and which lead to major differences in the way that plant life behaves and responds differently from animal life. For example, plant cells have a cell wall which maintains the shape of plant cells. Plant cells also contain chlorophyll, a chemical compound that interacts with light in a way that enables plants to manufacture their own nutrients rather than consuming other living things as animals do.
Thirdly, plant physiology deals with interactions between cells, tissues, and organs within a plant. Different cells and tissues are physically and chemically specialized to perform different functions. Roots and rhizoids function to anchor the plant and acquire minerals in the soil. Leaves catch light in order to manufacture nutrients. For both of these organs to remain living, minerals that the roots acquire must be transported to the leaves, and the nutrients manufactured in the leaves must be transported to the roots. Plants have developed a number of ways to achieve this transport, such as vascular tissue, and the functioning of the various modes of transport is studied by plant physiologists.
Fourthly, plant physiologists study the ways that plants control or regulate internal functions. Like animals, plants produce chemicals called hormones which are produced in one part of the plant to signal cells in another part of the plant to respond. Many flowering plants bloom at the appropriate time because of light-sensitive compounds that respond to the length of the night, a phenomenon known as photoperiodism. The ripening of fruit and loss of leaves in the winter are controlled in part by the production of the gas ethylene by the plant.
Finally, plant physiology includes the study of plant response to environmental conditions and their variation, a field known as environmental physiology. Stress from water loss, changes in air chemistry, or crowding by other plants can lead to changes in the way a plant functions. These changes may be affected by genetic, chemical, and physical factors.
Biochemistry of plants
The chemical elements of which plants are constructed—principally carbon, oxygen, hydrogen, nitrogen, phosphorus, sulfur, etc.—are the same as for all other life forms: animals, fungi, bacteria and even viruses. Only the details of their individual molecular structures vary.
Despite this underlying similarity, plants produce a vast array of chemical compounds with unique properties which they use to cope with their environment. Pigments are used by plants to absorb or detect light, and are extracted by humans for use in dyes. Other plant products may be used for the manufacture of commercially important rubber or biofuel. Perhaps the most celebrated compounds from plants are those with pharmacological activity, such as salicylic acid from which aspirin is made, morphine, and digoxin. Drug companies spend billions of dollars each year researching plant compounds for potential medicinal benefits.
Constituent elements
Plants require some nutrients, such as carbon and nitrogen, in large quantities to survive. Some nutrients are termed macronutrients, where the prefix macro- (large) refers to the quantity needed, not the size of the nutrient particles themselves. Other nutrients, called micronutrients, are required only in trace amounts for plants to remain healthy. Such micronutrients are usually absorbed as ions dissolved in water taken from the soil, though carnivorous plants acquire some of their micronutrients from captured prey.
The following tables list element nutrients essential to plants. Uses within plants are generalized.
Pigments
Among the most important molecules for plant function are the pigments. Plant pigments include a variety of different kinds of molecules, including porphyrins, carotenoids, and anthocyanins. All biological pigments selectively absorb certain wavelengths of light while reflecting others. The light that is absorbed may be used by the plant to power chemical reactions, while the reflected wavelengths of light determine the color the pigment appears to the eye.
Chlorophyll is the primary pigment in plants; it is a porphyrin that absorbs red and blue wavelengths of light while reflecting green. It is the presence and relative abundance of chlorophyll that gives plants their green color. All land plants and green algae possess two forms of this pigment: chlorophyll a and chlorophyll b. Kelps, diatoms, and other photosynthetic heterokonts contain chlorophyll c instead of b, while red algae possess only chlorophyll a. All chlorophylls serve as the primary means plants use to intercept light to fuel photosynthesis.
Carotenoids are red, orange, or yellow tetraterpenoids. They function as accessory pigments in plants, helping to fuel photosynthesis by gathering wavelengths of light not readily absorbed by chlorophyll. The most familiar carotenoids are carotene (an orange pigment found in carrots), lutein (a yellow pigment found in fruits and vegetables), and lycopene (the red pigment responsible for the color of tomatoes). Carotenoids have been shown to act as antioxidants and to promote healthy eyesight in humans.
Anthocyanins (literally "flower blue") are water-soluble flavonoid pigments that appear red to blue, according to pH. They occur in all tissues of higher plants, providing color in leaves, stems, roots, flowers, and fruits, though not always in sufficient quantities to be noticeable. Anthocyanins are most visible in the petals of flowers, where they may make up as much as 30% of the dry weight of the tissue. They are also responsible for the purple color seen on the underside of tropical shade plants such as Tradescantia zebrina. In these plants, the anthocyanin catches light that has passed through the leaf and reflects it back towards regions bearing chlorophyll, in order to maximize the use of available light.
Betalains are red or yellow pigments. Like anthocyanins they are water-soluble, but unlike anthocyanins they are indole-derived compounds synthesized from tyrosine. This class of pigments is found only in the Caryophyllales (including cactus and amaranth), and never co-occur in plants with anthocyanins. Betalains are responsible for the deep red color of beets, and are used commercially as food-coloring agents. Plant physiologists are uncertain of the function that betalains have in plants which possess them, but there is some preliminary evidence that they may have fungicidal properties.
Signals and regulators
Plants produce hormones and other growth regulators which act to signal a physiological response in their tissues. They also produce compounds such as phytochrome that are sensitive to light and which serve to trigger growth or development in response to environmental signals.
Plant hormones
Plant hormones, known as plant growth regulators (PGRs) or phytohormones, are chemicals that regulate a plant's growth. According to a standard animal definition, hormones are signal molecules produced at specific locations, occur in very low concentrations, and cause altered processes in target cells at other locations. Unlike animals, plants lack specific hormone-producing tissues or organs. Plant hormones are often not transported to other parts of the plant, and production is not limited to specific locations.
Plant hormones are chemicals that in small amounts promote and influence the growth, development and differentiation of cells and tissues. Hormones are vital to plant growth; affecting processes in plants from flowering to seed development, dormancy, and germination. They regulate which tissues grow upwards and which grow downwards, leaf formation and stem growth, fruit development and ripening, as well as leaf abscission and even plant death.
The most important plant hormones are abscisic acid (ABA), auxins, ethylene, gibberellins, and cytokinins, though there are many other substances that serve to regulate plant physiology.
Photomorphogenesis
While most people know that light is important for photosynthesis in plants, few realize that plant sensitivity to light plays a role in the control of plant structural development (morphogenesis). The use of light to control structural development is called photomorphogenesis, and is dependent upon the presence of specialized photoreceptors, which are chemical pigments capable of absorbing specific wavelengths of light.
Plants use four kinds of photoreceptors: phytochrome, cryptochrome, a UV-B photoreceptor, and protochlorophyllide a. The first two of these, phytochrome and cryptochrome, are photoreceptor proteins, complex molecular structures formed by joining a protein with a light-sensitive pigment. Cryptochrome is also known as the UV-A photoreceptor, because it absorbs ultraviolet light in the long wave "A" region. The UV-B receptor is one or more compounds not yet identified with certainty, though some evidence suggests carotene or riboflavin as candidates. Protochlorophyllide a, as its name suggests, is a chemical precursor of chlorophyll.
The most studied of the photoreceptors in plants is phytochrome. It is sensitive to light in the red and far-red region of the visible spectrum. Many flowering plants use it to regulate the time of flowering based on the length of day and night (photoperiodism) and to set circadian rhythms. It also regulates other responses including the germination of seeds, elongation of seedlings, the size, shape and number of leaves, the synthesis of chlorophyll, and the straightening of the epicotyl or hypocotyl hook of dicot seedlings.
Photoperiodism
Many flowering plants use the pigment phytochrome to sense seasonal changes in day length, which they take as signals to flower. This sensitivity to day length is termed photoperiodism. Broadly speaking, flowering plants can be classified as long day plants, short day plants, or day neutral plants, depending on their particular response to changes in day length. Long day plants require a certain minimum length of daylight to start flowering, so these plants flower in the spring or summer. Conversely, short day plants flower when the length of daylight falls below a certain critical level. Day neutral plants do not initiate flowering based on photoperiodism, though some may use temperature sensitivity (vernalization) instead.
Although a short day plant cannot flower during the long days of summer, it is not actually the period of light exposure that limits flowering. Rather, a short day plant requires a minimal length of uninterrupted darkness in each 24-hour period (a short daylength) before floral development can begin. It has been determined experimentally that a short day plant (long night) does not flower if a flash of phytochrome activating light is used on the plant during the night.
Plants make use of the phytochrome system to sense day length or photoperiod. This fact is utilized by florists and greenhouse gardeners to control and even induce flowering out of season, such as the poinsettia (Euphorbia pulcherrima).
Environmental physiology
Paradoxically, the subdiscipline of environmental physiology is on the one hand a recent field of study in plant ecology and on the other hand one of the oldest. Environmental physiology is the preferred name of the subdiscipline among plant physiologists, but it goes by a number of other names in the applied sciences. It is roughly synonymous with ecophysiology, crop ecology, horticulture and agronomy. The particular name applied to the subdiscipline is specific to the viewpoint and goals of research. Whatever name is applied, it deals with the ways in which plants respond to their environment and so overlaps with the field of ecology.
Environmental physiologists examine plant response to physical factors such as radiation (including light and ultraviolet radiation), temperature, fire, and wind. Of particular importance are water relations (which can be measured with the Pressure bomb) and the stress of drought or inundation, exchange of gases with the atmosphere, as well as the cycling of nutrients such as nitrogen and carbon.
Environmental physiologists also examine plant response to biological factors. This includes not only negative interactions, such as competition, herbivory, disease and parasitism, but also positive interactions, such as mutualism and pollination.
While plants, as living beings, can perceive and communicate physical stimuli and damage, they do not feel pain as members of the animal kingdom do, simply because they lack any pain receptors, nerves, or a brain, and, by extension, consciousness. Many plants are known to perceive and respond to mechanical stimuli at a cellular level, and some plants such as the venus flytrap or touch-me-not are known for their "obvious sensory abilities". Nevertheless, the plant kingdom as a whole does not feel pain, notwithstanding plants' abilities to respond to sunlight, gravity, wind, and external stimuli such as insect bites, since they lack any nervous system. The primary reason for this is that, unlike the members of the animal kingdom whose evolutionary successes and failures are shaped by suffering, the evolution of plants is simply shaped by life and death.
Tropisms and nastic movements
Plants may respond both to directional and non-directional stimuli. A response to a directional stimulus, such as gravity or sun light, is called a tropism. A response to a nondirectional stimulus, such as temperature or humidity, is a nastic movement.
Tropisms in plants are the result of differential cell growth, in which the cells on one side of the plant elongate more than those on the other side, causing the part to bend toward the side with less growth. Among the common tropisms seen in plants is phototropism, the bending of the plant toward a source of light. Phototropism allows the plant to maximize light exposure in plants which require additional light for photosynthesis, or to minimize it in plants subjected to intense light and heat. Geotropism allows the roots of a plant to determine the direction of gravity and grow downwards. Tropisms generally result from an interaction between the environment and production of one or more plant hormones.
Nastic movements result from differential cell growth (e.g. epinasty and hyponasty), or from changes in turgor pressure within plant tissues (e.g., nyctinasty), which may occur rapidly. A familiar example is thigmonasty (response to touch) in the Venus fly trap, a carnivorous plant. The traps consist of modified leaf blades which bear sensitive trigger hairs. When the hairs are touched by an insect or other animal, the leaf folds shut. This mechanism allows the plant to trap and digest small insects for additional nutrients. Although the trap is rapidly shut by changes in internal cell pressures, the leaf must grow slowly to reset for a second opportunity to trap insects.
Plant disease
Economically, one of the most important areas of research in environmental physiology is that of phytopathology, the study of diseases in plants and the manner in which plants resist or cope with infection. Plants are susceptible to the same kinds of disease organisms as animals, including viruses, bacteria, and fungi, as well as physical invasion by insects and roundworms.
Because the biology of plants differs from that of animals, their symptoms and responses are quite different. In some cases, a plant can simply shed infected leaves or flowers to prevent the spread of disease, in a process called abscission. Most animals do not have this option as a means of controlling disease. Plant disease organisms themselves also differ from those causing disease in animals because plants cannot usually spread infection through casual physical contact. Plant pathogens tend to spread via spores or are carried by animal vectors.
One of the most important advances in the control of plant disease was the discovery of Bordeaux mixture in the nineteenth century. The mixture is the first known fungicide and is a combination of copper sulfate and lime. Application of the mixture served to inhibit the growth of downy mildew that threatened to seriously damage the French wine industry.
History
Early history
Francis Bacon published one of the first plant physiology experiments in 1627 in the book, Sylva Sylvarum. Bacon grew several terrestrial plants, including a rose, in water and concluded that soil was only needed to keep the plant upright. Jan Baptist van Helmont published what is considered the first quantitative experiment in plant physiology in 1648. He grew a willow tree for five years in a pot containing 200 pounds of oven-dry soil. The soil lost just two ounces of dry weight and van Helmont concluded that plants get all their weight from water, not soil. In 1699, John Woodward published experiments on growth of spearmint in different sources of water. He found that plants grew much better in water with soil added than in distilled water.
Stephen Hales is considered the Father of Plant Physiology for the many experiments in the 1727 book, Vegetable Staticks; though Julius von Sachs unified the pieces of plant physiology and put them together as a discipline. His Lehrbuch der Botanik was the plant physiology bible of its time.
Researchers discovered in the 1800s that plants absorb essential mineral nutrients as inorganic ions in water. In natural conditions, soil acts as a mineral nutrient reservoir, but the soil itself is not essential to plant growth. When the mineral nutrients in the soil are dissolved in water, plant roots absorb nutrients readily, and soil is no longer required for the plant to thrive. This observation is the basis for hydroponics, the growing of plants in a water solution rather than soil, which has become a standard technique in biological research, teaching lab exercises, crop production and as a hobby.
Economic applications
Food production
In horticulture and agriculture along with food science, plant physiology is an important topic relating to fruits, vegetables, and other consumable parts of plants. Topics studied include: climatic requirements, fruit drop, nutrition, ripening, fruit set. The production of food crops also hinges on the study of plant physiology covering such topics as optimal planting and harvesting times and post harvest storage of plant products for human consumption and the production of secondary products like drugs and cosmetics.
Crop physiology steps back and looks at a field of plants as a whole, rather than looking at each plant individually. Crop physiology looks at how plants respond to each other and how to maximize results like food production through determining things like optimal planting density.
See also
Biomechanics
Hyperaccumulator
Phytochemistry
Plant anatomy
Plant morphology
Plant secondary metabolism
Branches of botany
References
Further reading
Lincoln Taiz, Eduardo Zeiger, Ian Max Møller, Angus Murphy: Fundamentals of Plant Physiology. Sinauer, 2018.
Branches of botany
Diagenesis
Diagenesis is the process that describes physical and chemical changes in sediments first caused by water-rock interactions, microbial activity, and compaction after their deposition. Increased pressure and temperature only start to play a role as sediments become buried much deeper in the Earth's crust. In the early stages, the transformation of poorly consolidated sediments into sedimentary rock (lithification) is simply accompanied by a reduction in porosity and water expulsion (clay sediments), while their main mineralogical assemblages remain unaltered. As the rock is carried deeper by further deposition above, its organic content is progressively transformed into kerogens and bitumens.
The process of diagenesis excludes surface alteration (weathering) and deep metamorphism. There is no sharp boundary between diagenesis and metamorphism, but the latter occurs at higher temperatures and pressures. Hydrothermal solutions, meteoric groundwater, rock porosity, permeability, dissolution/precipitation reactions, and time are all influential factors.
After deposition, sediments are compacted as they are buried beneath successive layers of sediment and cemented by minerals that precipitate from solution. Grains of sediment, rock fragments and fossils can be replaced by other minerals (e.g. calcite, siderite, pyrite or marcasite) during diagenesis. Porosity usually decreases during diagenesis, except in rare cases such as dissolution of minerals and dolomitization.
The study of diagenesis in rocks is used to understand the geologic history they have undergone and the nature and type of fluids that have circulated through them. From a commercial standpoint, such studies aid in assessing the likelihood of finding various economically viable mineral and hydrocarbon deposits.
The process of diagenesis is also important in the decomposition of bone tissue.
Role in anthropology and paleontology
The term diagenesis, literally meaning "across generation", is extensively used in geology. However, this term has filtered into the field of anthropology, archaeology and paleontology to describe the changes and alterations that take place on skeletal (biological) material. Specifically, diagenesis "is the cumulative physical, chemical, and biological environment; these processes will modify an organic object's original chemical and/or structural properties and will govern its ultimate fate, in terms of preservation or destruction". In order to assess the potential impact of diagenesis on archaeological or fossil bones, many factors need to be assessed, beginning with elemental and mineralogical composition of bone and enveloping soil, as well as the local burial environment (geology, climatology, groundwater).
The composite nature of bone, comprising one-third organic (mainly protein collagen) and two thirds mineral (calcium phosphate mostly in the form of hydroxyapatite) renders its diagenesis more complex. Alteration occurs at all scales from molecular loss and substitution, through crystallite reorganization, porosity, and microstructural changes, and in many cases, to the disintegration of the complete unit. Three general pathways of the diagenesis of bone have been identified:
Chemical deterioration of the organic phase.
Chemical deterioration of the mineral phase.
(Micro) biological attack of the composite.
These pathways proceed as follows:
The dissolution of collagen depends on time, temperature, and environmental pH. At high temperatures, the rate of collagen loss will be accelerated, and extreme pH can cause collagen swelling and accelerated hydrolysis. Due to the increase in porosity of bones through collagen loss, the bone becomes susceptible to hydrolytic infiltration where the hydroxyapatite, with its affinity for amino acids, permits charged species of endogenous and exogenous origin to take up residence.
The hydrolytic activity plays a key role in the mineral phase transformations that expose the collagen to accelerated chemical- and bio-degradation. Chemical changes affect crystallinity. Mechanisms of chemical change, such as the uptake of exogenous ions (for example F−), may cause recrystallization, where hydroxyapatite is dissolved and re-precipitated, allowing for the incorporation or substitution of exogenous material.
Once an individual has been interred, microbial attack, the most common mechanism of bone deterioration, occurs rapidly. During this phase, most bone collagen is lost and porosity is increased. The dissolution of the mineral phase caused by low pH permits access to the collagen by extracellular microbial enzymes, and thus microbial attack.
Role in hydrocarbon generation
When animal or plant matter is buried during sedimentation, the constituent organic molecules (lipids, proteins, carbohydrates and lignin-humic compounds) break down due to the increase in temperature and pressure. This transformation occurs in the first few hundred meters of burial and results in the creation of two primary products: kerogens and bitumens.
It is generally accepted that hydrocarbons are formed by the thermal alteration of these kerogens (the biogenic theory). In this way, given certain conditions (which are largely temperature-dependent) kerogens will break down to form hydrocarbons through a chemical process known as cracking, or catagenesis.
A kinetic model based on experimental data can capture most of the essential transformations in diagenesis, and a mathematical model in a compacting porous medium can be used to describe the dissolution-precipitation mechanism. These models have been intensively studied and applied in real geological applications.
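As an illustration of the kind of kinetic model mentioned here, the sketch below integrates a first-order, Arrhenius-type rate law for the conversion of reactive kerogen. The activation energy, pre-exponential factor and thermal history are placeholder values chosen only to show the strong temperature dependence of thermal cracking; they are not parameters taken from the studies referred to in this article.

```python
import math

# First-order conversion of reactive kerogen, dX/dt = -k(T) * X, with an
# Arrhenius rate constant k(T) = A * exp(-Ea / (R * T)).
R = 8.314        # J/(mol*K), gas constant
A = 1.0e14       # 1/s, assumed pre-exponential factor
Ea = 2.2e5       # J/mol, assumed activation energy
MYR = 3.15e13    # seconds in one million years

def remaining_fraction(temp_celsius, duration_seconds):
    k = A * math.exp(-Ea / (R * (temp_celsius + 273.15)))
    return math.exp(-k * duration_seconds)

for t_c in (80, 120, 160):   # burial temperatures held for 10 million years
    print(t_c, remaining_fraction(t_c, 10 * MYR))
```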
Diagenesis has been divided, based on hydrocarbon and coal genesis into: eodiagenesis (early), mesodiagenesis (middle) and telodiagenesis (late). During the early or eodiagenesis stage shales lose pore water, little to no hydrocarbons are formed and coal varies between lignite and sub-bituminous. During mesodiagenesis, dehydration of clay minerals occurs, the main development of oil genesis occurs and high to low volatile bituminous coals are formed. During telodiagenesis, organic matter undergoes cracking and dry gas is produced; semi-anthracite coals develop.
Early diagenesis in newly formed aquatic sediments is mediated by microorganisms using different electron acceptors as part of their metabolism. Organic matter is mineralized, liberating gaseous carbon dioxide (CO2) in the porewater, which, depending on the conditions, can diffuse into the water column. The various processes of mineralization in this phase are nitrification and denitrification, manganese oxide reduction, iron hydroxide reduction, sulfate reduction, and fermentation.
Role in bone decomposition
Diagenesis alters the proportions of organic collagen and inorganic components (hydroxyapatite, calcium, magnesium) of bone exposed to environmental conditions, especially moisture. This is accomplished by the exchange of natural bone constituents, deposition in voids or defects, adsorption onto the bone surface and leaching from the bone.
See also
References
Geological processes
Fossil fuels
Sedimentology
Dimensional analysis
In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric current) and units of measurement (such as metres and grams) and tracking these dimensions as calculations or comparisons are performed. The term dimensional analysis is also used to refer to conversion of units from one dimensional unit to another, which can be used to evaluate scientific formulae.
Commensurable physical quantities are of the same kind and have the same dimension, and can be directly compared to each other, even if they are expressed in differing units of measurement; e.g., metres and feet, grams and pounds, seconds and years. Incommensurable physical quantities are of different kinds and have different dimensions, and can not be directly compared to each other, no matter what units they are expressed in, e.g. metres and grams, seconds and grams, metres and seconds. For example, asking whether a gram is larger than an hour is meaningless.
Any physically meaningful equation, or inequality, must have the same dimensions on its left and right sides, a property known as dimensional homogeneity. Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation.
The concept of physical dimension or quantity dimension, and of dimensional analysis, was introduced by Joseph Fourier in 1822.
Formulation
The Buckingham π theorem describes how every physically meaningful equation involving n variables can be equivalently rewritten as an equation of n − m dimensionless parameters, where m is the rank of the dimensional matrix. Furthermore, and most importantly, it provides a method for computing these dimensionless parameters from the given variables.
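A small sketch of the counting part of the theorem: the number of independent dimensionless groups equals the number of variables minus the rank of their dimensional matrix. The pipe-flow variable set below (pressure drop, density, velocity, length, viscosity) is an illustrative choice, not an example taken from this article.

```python
import numpy as np

# Columns: pressure drop Δp, density ρ, velocity v, length L, viscosity μ.
# Rows: exponents of M, L and T in each variable's dimension.
dimensional_matrix = np.array([
    #  Δp   ρ    v    L    μ
    [  1,   1,   0,   0,   1],   # M
    [ -1,  -3,   1,   1,  -1],   # L
    [ -2,   0,  -1,   0,  -1],   # T
])

n = dimensional_matrix.shape[1]                 # number of variables: 5
m = np.linalg.matrix_rank(dimensional_matrix)   # rank of the matrix: 3
print(n - m)                                    # 2 dimensionless groups
```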
A dimensional equation can have the dimensions reduced or eliminated through nondimensionalization, which begins with dimensional analysis, and involves scaling quantities by characteristic units of a system or physical constants of nature. This may give insight into the fundamental properties of the system, as illustrated in the examples below.
The dimension of a physical quantity can be expressed as a product of the base physical dimensions such as length, mass and time, each raised to an integer (and occasionally rational) power. The dimension of a physical quantity is more fundamental than some scale or unit used to express the amount of that physical quantity. For example, mass is a dimension, while the kilogram is a particular reference quantity chosen to express a quantity of mass. The choice of unit is arbitrary, and its choice is often based on historical precedent. Natural units, being based on only universal constants, may be thought of as being "less arbitrary".
There are many possible choices of base physical dimensions. The SI standard selects the following dimensions and corresponding dimension symbols:
time (T), length (L), mass (M), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J).
The symbols are by convention usually written in roman sans serif typeface. Mathematically, the dimension of the quantity Q is given by
dim Q = T^a L^b M^c I^d Θ^e N^f J^g,
where a, b, c, d, e, f, g are the dimensional exponents. Other physical quantities could be defined as the base quantities, as long as they form a basis – for instance, one could replace the dimension (I) of electric current of the SI basis with a dimension (Q) of electric charge, since Q = TI.
A quantity that involves only the dimension L (with all other exponents zero) is known as a geometric quantity. A quantity that involves only L and T is known as a kinematic quantity. A quantity that involves only L, T and M is known as a dynamic quantity.
A quantity that has all exponents null is said to have dimension one.
The unit chosen to express a physical quantity and its dimension are related, but not identical concepts. The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of metres, feet, inches, miles or micrometres; but any length always has a dimension of L, no matter what units of length are chosen to express it. Two different units of the same physical quantity have conversion factors that relate them. For example, 1 in = 2.54 cm; in this case 2.54 cm/in is the conversion factor, which is itself dimensionless. Therefore, multiplying by that conversion factor does not change the dimensions of a physical quantity.
There are also physicists who have cast doubt on the very existence of incompatible fundamental dimensions of physical quantity, although this does not invalidate the usefulness of dimensional analysis.
Simple cases
As examples, the dimension of the physical quantity speed v is
dim v = length/time = T−1L
The dimension of the physical quantity acceleration a is
dim a = speed/time = T−2L
The dimension of the physical quantity force F is
dim F = mass × acceleration = T−2LM
The dimension of the physical quantity pressure p is
dim p = force/area = T−2L−1M
The dimension of the physical quantity energy E is
dim E = force × displacement = T−2L2M
The dimension of the physical quantity power P is
dim P = energy/time = T−3L2M
The dimension of the physical quantity electric charge Q is
dim Q = current × time = TI
The dimension of the physical quantity voltage V is
dim V = power/current = T−3L2MI−1
The dimension of the physical quantity capacitance C is
dim C = charge/voltage = T4L−2M−1I2
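The exponent bookkeeping behind the cases above can be automated. A minimal sketch, with each dimension stored as a dictionary from base symbol to integer exponent; the function name is illustrative.

```python
def dim_mul(a, b, sign=1):
    """Multiply two dimensions (sign=+1) or divide them (sign=-1),
    where a dimension is a dict {base_symbol: integer_exponent}."""
    out = dict(a)
    for base, exp in b.items():
        out[base] = out.get(base, 0) + sign * exp
        if out[base] == 0:
            del out[base]
    return out

T, L, M = {"T": 1}, {"L": 1}, {"M": 1}

velocity = dim_mul(L, T, sign=-1)                       # T-1L
acceleration = dim_mul(velocity, T, sign=-1)            # T-2L
force = dim_mul(M, acceleration)                        # T-2LM
pressure = dim_mul(force, dim_mul(L, L), sign=-1)       # T-2L-1M
energy = dim_mul(force, L)                              # T-2L2M
print(force, pressure, energy)
```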
Rayleigh's method
In dimensional analysis, Rayleigh's method is a conceptual tool used in physics, chemistry, and engineering. It expresses a functional relationship of some variables in the form of an exponential equation. It was named after Lord Rayleigh.
The method involves the following steps:
Gather all the independent variables that are likely to influence the dependent variable.
If X is a variable that depends upon independent variables X1, X2, X3, ..., Xn, then the functional equation can be written as X = F(X1, X2, X3, ..., Xn).
Write the above equation in the form X = C X1^a X2^b X3^c ... Xn^m, where C is a dimensionless constant and a, b, c, ..., m are arbitrary exponents.
Express each of the quantities in the equation in some base units in which the solution is required.
By using dimensional homogeneity, obtain a set of simultaneous equations involving the exponents a, b, c, ..., m.
Solve these equations to obtain the values of the exponents a, b, c, ..., m.
Substitute the values of the exponents in the main equation, and form the non-dimensional parameters by grouping the variables with like exponents.
As a drawback, Rayleigh's method does not provide any information regarding the number of dimensionless groups to be obtained as a result of dimensional analysis.
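A worked illustration of Rayleigh's method, assuming the period t of a simple pendulum depends only on its length l, the gravitational acceleration g and the bob mass m; the exponents follow from equating the powers of M, L and T on both sides.

```python
import numpy as np

# Assume t = C * l**a * g**b * m**c, with dimensions
# t ~ T, l ~ L, g ~ L T^-2, m ~ M.
#            a(l)  b(g)  c(m)
A = np.array([
    [0.0,  0.0,  1.0],   # M:  c       = 0
    [1.0,  1.0,  0.0],   # L:  a + b   = 0
    [0.0, -2.0,  0.0],   # T:     -2b  = 1
])
rhs = np.array([0.0, 0.0, 1.0])   # t has dimensions M^0 L^0 T^1

a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)   # 0.5 -0.5 0.0, i.e. t = C * sqrt(l / g)
```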
Concrete numbers and base units
Many parameters and measurements in the physical sciences and engineering are expressed as a concrete number—a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time, e.g. 60 kilometres per hour or 1.4 kilometres per second. Compound relations with "per" are expressed with division, e.g. 60 km/h. Other relations can involve multiplication (often shown with a centered dot or juxtaposition), powers (like m2 for square metres), or combinations thereof.
A set of base units for a system of measurement is a conventionally chosen set of units, none of which can be expressed as a combination of the others and in terms of which all the remaining units of the system can be expressed. For example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length (m3), thus they are considered derived or compound units.
Sometimes the names of units obscure the fact that they are derived units. For example, a newton (N) is a unit of force, which may be expressed as the product of mass (with unit kg) and acceleration (with unit m⋅s−2). The newton is defined as 1 N = 1 kg⋅m⋅s−2.
Percentages, derivatives and integrals
Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions. In other words, the % sign can be read as "hundredths", since 1% = 1/100.
Taking a derivative with respect to a quantity divides the dimension by the dimension of the variable that is differentiated with respect to. Thus:
position has the dimension L (length);
derivative of position with respect to time (dx/dt, velocity) has dimension T−1L: length from position, time due to the gradient;
the second derivative (d2x/dt2, acceleration) has dimension T−2L.
Likewise, taking an integral adds the dimension of the variable one is integrating with respect to, but in the numerator.
force has the dimension T−2LM (mass multiplied by acceleration);
the integral of force with respect to the distance the object has travelled (∫ F dx, work) has dimension T−2L2M.
In economics, one distinguishes between stocks and flows: a stock has a unit (say, widgets or dollars), while a flow is a derivative of a stock, and has a unit of the form of this unit divided by one of time (say, dollars/year).
In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions. For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding (dimension of currency) divided by annual GDP (dimension of currency)—but one may argue that, in comparing a stock to a flow, annual GDP should have dimensions of currency/time (dollars/year, for instance) and thus debt-to-GDP should have the unit year, which indicates that debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged.
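A trivial numerical illustration of the stock/flow reasoning above, with made-up figures:

```python
# Dividing a stock (currency) by a flow (currency per year) leaves a time.
debt_dollars = 30e12             # stock: dollars
gdp_dollars_per_year = 25e12     # flow: dollars per year

years_of_gdp = debt_dollars / gdp_dollars_per_year
print(years_of_gdp)              # 1.2 years of GDP to repay the debt
```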
Dimensional homogeneity (commensurability)
The most basic rule of dimensional analysis is that of dimensional homogeneity: only commensurable quantities (physical quantities having the same dimension) may be compared, equated, added, or subtracted.
However, the dimensions form an abelian group under multiplication, so one may take ratios of incommensurable quantities (quantities with different dimensions) and multiply or divide them.
For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometre, as these have different dimensions, nor to add 1 hour to 1 kilometre. However, it makes sense to ask whether 1 mile is more, the same, or less than 1 kilometre, being the same dimension of physical quantity even though the units are different. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h.
The rule implies that in a physically meaningful expression only quantities of the same dimension can be added, subtracted, or compared. For example, if m_man, m_rat and L_man denote, respectively, the mass of some man, the mass of a rat and the length of that man, the dimensionally homogeneous expression m_man + m_rat is meaningful, but the heterogeneous expression m_man + L_man is meaningless. However, m_man/L_man^2 is fine. Thus, dimensional analysis may be used as a sanity check of physical equations: the two sides of any equation must be commensurable or have the same dimensions.
Even when two physical quantities have identical dimensions, it may nevertheless be meaningless to compare or add them. For example, although torque and energy share the dimension T−2L2M, they are fundamentally different physical quantities.
To compare, add, or subtract quantities with the same dimensions but expressed in different units, the standard procedure is first to convert them all to the same unit. For example, to compare 32 metres with 35 yards, use 1 yd = 0.9144 m to convert 35 yards to 32.004 m.
A related principle is that any physical law that accurately describes the real world must be independent of the units used to measure the physical variables. For example, Newton's laws of motion must hold true whether distance is measured in miles or kilometres. This principle gives rise to the form that a conversion factor between two units that measure the same dimension must take multiplication by a simple constant. It also ensures equivalence; for example, if two buildings are the same height in feet, then they must be the same height in metres.
Conversion factor
In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a conversion factor. For example, kPa and bar are both units of pressure, and 100 kPa = 1 bar. The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to 100 kPa / 1 bar = 1. Since any quantity can be multiplied by 1 without changing it, the expression "100 kPa / 1 bar" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including the unit. For example, 5 bar × 100 kPa / 1 bar = 500 kPa because 5 × 100 / 1 = 500, and bar/bar cancels out, so 5 bar = 500 kPa.
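The same bar-to-kPa and yard-to-metre conversions expressed as code; the constants are the conversion factors quoted above.

```python
# A conversion factor is a ratio equal to one; multiplying by it changes
# the unit but not the quantity.
KPA_PER_BAR = 100.0      # 100 kPa / 1 bar
M_PER_YARD = 0.9144      # 0.9144 m / 1 yd

pressure_kpa = 5.0 * KPA_PER_BAR    # 5 bar * (kPa/bar)  -> 500.0 kPa
length_m = 35.0 * M_PER_YARD        # 35 yd * (m/yd)     -> 32.004 m
print(pressure_kpa, length_m)
```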
Applications
Dimensional analysis is most often used in physics and chemistry – and in the mathematics thereof – but finds some applications outside of those fields as well.
Mathematics
A simple application of dimensional analysis to mathematics is in computing the form of the volume of an n-ball (the solid ball in n dimensions), or the area of its surface, the (n − 1)-sphere: being an n-dimensional figure, the volume scales as x^n, while the surface area, being (n − 1)-dimensional, scales as x^(n−1). Thus the volume of the n-ball in terms of the radius r is C r^n, for some constant C. Determining the constant takes more involved mathematics, but the form can be deduced and checked by dimensional analysis alone.
Finance, economics, and accounting
In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios.
For example, the P/E ratio has dimensions of time (unit: year), and can be interpreted as "years of earnings to earn the price paid".
In economics, debt-to-GDP ratio also has the unit year (debt has a unit of currency, GDP has a unit of currency/year).
Velocity of money has a unit of 1/years (GDP/money supply has a unit of currency/year over currency): how often a unit of currency circulates per year.
Annual continuously compounded interest rates and simple interest rates are often expressed as a percentage (adimensional quantity) while time is expressed as an adimensional quantity consisting of the number of years. However, if the time includes year as the unit of measure, the dimension of the rate is 1/year. Of course, there is nothing special (apart from the usual convention) about using year as a unit of time: any other time unit can be used. Furthermore, if rate and time include their units of measure, the use of different units for each is not problematic. In contrast, rate and time need to refer to a common period if they are adimensional. (Note that effective interest rates can only be defined as adimensional quantities.)
In financial analysis, bond duration can be defined as D = −(1/V)(dV/dr), where V is the value of a bond (or portfolio), r is the continuously compounded interest rate and dV/dr is a derivative. From the previous point, the dimension of r is 1/time. Therefore, the dimension of duration is time (usually expressed in years) because r is in the "denominator" of the derivative.
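A numerical check that this definition of duration carries the dimension of time, using a zero-coupon bond V(r) = F·exp(−rT) under continuous compounding as an assumed toy example (for which the duration equals the maturity):

```python
import math

def price(face, rate, maturity_years):
    """Zero-coupon bond value under continuous compounding."""
    return face * math.exp(-rate * maturity_years)

def duration(face, rate, maturity_years, h=1e-6):
    """D = -(1/V) dV/dr, estimated with a central finite difference."""
    v = price(face, rate, maturity_years)
    dv_dr = (price(face, rate + h, maturity_years)
             - price(face, rate - h, maturity_years)) / (2 * h)
    return -dv_dr / v

# Since rate carries the unit 1/year, dV/dr carries currency * year,
# and dividing by V leaves years: here the duration equals the maturity.
print(duration(100.0, 0.03, 10.0))   # ~10.0 (years)
```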
Fluid mechanics
In fluid mechanics, dimensional analysis is performed to obtain dimensionless pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable pi terms or groups, it is possible to develop a similar set of pi terms for a model that has the same dimensional relationships. In other words, pi terms provide a shortcut to developing a model representing a certain prototype. Common dimensionless groups in fluid mechanics include:
Reynolds number, generally important in all types of fluid problems: Re = ρvL/μ
Froude number, modeling flow with a free surface: Fr = v/√(gL)
Euler number, used in problems in which pressure is of interest: Eu = Δp/(ρv^2)
Mach number, important in high speed flows where the velocity approaches or exceeds the local speed of sound: Ma = v/c, where c is the local speed of sound.
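A short sketch evaluating these four groups for a water-flow scenario; all property values are rough illustrative numbers, not data from this article.

```python
import math

# Rough illustrative values for water moving at 2 m/s through a 0.05 m pipe.
rho = 1000.0   # kg/m^3, density
mu = 1.0e-3    # Pa*s, dynamic viscosity
v = 2.0        # m/s, characteristic velocity
L = 0.05       # m, characteristic length
dp = 2.0e4     # Pa, pressure difference of interest
g = 9.81       # m/s^2, gravitational acceleration
c = 1481.0     # m/s, speed of sound in water

reynolds = rho * v * L / mu       # inertial vs. viscous forces
froude = v / math.sqrt(g * L)     # inertial vs. gravitational forces
euler = dp / (rho * v ** 2)       # pressure vs. inertial forces
mach = v / c                      # flow speed vs. local speed of sound
print(reynolds, froude, euler, mach)
```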
History
The origins of dimensional analysis have been disputed by historians. The first written application of dimensional analysis has been credited to François Daviet, a student of Lagrange, in a 1799 article at the Turin Academy of Science.
This led to the conclusion that meaningful laws must be homogeneous equations in their various units of measurement, a result which was eventually formalized in the Buckingham π theorem.
Siméon Poisson also treated the same problem of the parallelogram law as Daviet, in his treatise of 1811 and 1833 (vol. I, p. 39). In the second edition of 1833, Poisson explicitly introduced the term dimension in place of Daviet's homogeneity.
In 1822, the important Napoleonic scientist Joseph Fourier made the first credited important contributions based on the idea that physical laws like F = ma should be independent of the units employed to measure the physical variables.
James Clerk Maxwell played a major role in establishing modern use of dimensional analysis by distinguishing mass, length, and time as fundamental units, while referring to other units as derived. Although Maxwell defined length, time and mass to be "the three fundamental units", he also noted that gravitational mass can be derived from length and time by assuming a form of Newton's law of universal gravitation in which the gravitational constant G is taken as unity, thereby defining M = T−2L3. By assuming a form of Coulomb's law in which the Coulomb constant ke is taken as unity, Maxwell then determined that the dimensions of an electrostatic unit of charge were Q = T−1L3/2M1/2, which, after substituting his M = T−2L3 equation for mass, results in charge having the same dimensions as mass, viz. T−2L3.
Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue. Rayleigh first published the technique in his 1877 book The Theory of Sound.
The original meaning of the word dimension, in Fourier's Theorie de la Chaleur, was the numerical value of the exponents of the base units. For example, acceleration was considered to have the dimension 1 with respect to the unit of length, and the dimension −2 with respect to the unit of time. This was slightly changed by Maxwell, who said the dimensions of acceleration are T−2L, instead of just the exponents.
Examples
A simple example: period of a harmonic oscillator
What is the period of oscillation T of a mass m attached to an ideal linear spring with spring constant k suspended in gravity of strength g? That period is the solution for T of some dimensionless equation in the variables T, m, k, and g.
The four quantities have the following dimensions: T [T]; m [M]; k [M/T2]; and g [L/T2]. From these we can form only one dimensionless product of powers of our chosen variables, G1 = T2k/m, and putting G1 = C for some dimensionless constant C gives the dimensionless equation sought. The dimensionless product of powers of variables is sometimes referred to as a dimensionless group of variables; here the term "group" means "collection" rather than mathematical group. They are often called dimensionless numbers as well.
The variable g does not occur in the group. It is easy to see that it is impossible to form a dimensionless product of powers that combines g with T, m, and k, because g is the only quantity that involves the dimension L. This implies that in this problem g is irrelevant. Dimensional analysis can sometimes yield strong statements about the irrelevance of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of g: it is the same on the earth or the moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way: T = κ√(m/k), for some dimensionless constant κ (equal to √C from the original dimensionless equation).
When faced with a case where dimensional analysis rejects a variable (g, here) that one intuitively expects to belong in a physical description of the situation, another possibility is that the rejected variable is in fact relevant, but that some other relevant variable has been omitted, which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here.
When dimensional analysis yields only one dimensionless group, as here, there are no unknown functions, and the solution is said to be "complete" – although it still may involve unknown dimensionless constants, such as κ.
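The exponent-counting argument can be automated by writing the dimensions of t, m, k and g as columns of a matrix and computing its nullspace. A minimal sketch using the sympy library (an assumed tool choice; any linear-algebra package would do):

```python
from sympy import Matrix

# Columns: t, m, k, g; rows: exponents of T, M, L in each quantity's dimension.
#   t : T      -> ( 1, 0, 0)
#   m : M      -> ( 0, 1, 0)
#   k : M/T^2  -> (-2, 1, 0)
#   g : L/T^2  -> (-2, 0, 1)
D = Matrix([
    [1, 0, -2, -2],   # exponent of T
    [0, 1,  1,  0],   # exponent of M
    [0, 0,  0,  1],   # exponent of L
])

# Each nullspace vector (a1, a2, a3, a4) gives a dimensionless product
# t**a1 * m**a2 * k**a3 * g**a4.
for v in D.nullspace():
    print(v.T)
```

The single basis vector (2, −1, 1, 0) corresponds to the dimensionless group t2k/m, with the exponent of g forced to zero, in agreement with the argument above.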
A more complex example: energy of a vibrating wire
Consider the case of a vibrating wire of length ℓ (L) vibrating with an amplitude A (L). The wire has a linear density ρ (M/L) and is under tension s (LM/T2), and we want to know the energy E (L2M/T2) in the wire. Let π1 and π2 be two dimensionless products of powers of the variables chosen, given by π1 = E/(As) and π2 = ℓ/A.
The linear density of the wire is not involved. The two groups found can be combined into an equivalent form as an equation F(π1, π2) = 0,
where F is some unknown function, or, equivalently as E = As f(ℓ/A),
where f is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analytical analysis, we might proceed to experiments to discover the form for the unknown function f. But our experiments are simpler than in the absence of dimensional analysis: we would perform none to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to ℓ, and so infer that E = ℓs. The power of dimensional analysis as an aid to experiment and forming hypotheses becomes evident.
The power of dimensional analysis really becomes apparent when it is applied to situations more complicated than those given above, in which the set of variables involved is not apparent and the underlying equations are hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually raise the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood. In such cases, the answer may depend on a dimensionless number such as the Reynolds number, which may be interpreted by dimensional analysis.
A third example: demand versus capacity for a rotating disc
Consider the case of a thin, solid, parallel-sided rotating disc of axial thickness t (L) and radius R (L). The disc has a density ρ (M/L3), rotates at an angular velocity ω (T−1) and this leads to a stress σ (T−2L−1M) in the material. There is a theoretical linear elastic solution, given by Lamé, to this problem when the disc is thin relative to its radius, the faces of the disc are free to move axially, and the plane stress constitutive relations can be assumed to be valid. As the disc becomes thicker relative to the radius then the plane stress solution breaks down. If the disc is restrained axially on its free faces then a state of plane strain will occur. However, if this is not the case then the state of stress may only be determined through consideration of three-dimensional elasticity and there is no known theoretical solution for this case. An engineer might, therefore, be interested in establishing a relationship between the five variables. Dimensional analysis for this case leads to the following non-dimensional groups:
demand/capacity = ρR2ω2/σ
thickness/radius or aspect ratio = t/R
Through the use of numerical experiments using, for example, the finite element method, the nature of the relationship between the two non-dimensional groups can be obtained as shown in the figure. As this problem only involves two non-dimensional groups, the complete picture is provided in a single plot and this can be used as a design/assessment chart for rotating discs.
Properties
Mathematical properties
The dimensions that can be formed from a given collection of basic physical dimensions, such as T, L, and M, form an abelian group: The identity is written as 1; L0 = 1, and the inverse of L is 1/L or L−1. L raised to any integer power p is a member of the group, having an inverse of L−p or 1/Lp. The operation of the group is multiplication, having the usual rules for handling exponents. Physically, 1/L can be interpreted as reciprocal length, and 1/T as reciprocal time (see reciprocal second).
An abelian group is equivalent to a module over the integers, with the dimensional symbol TiLjMk corresponding to the tuple (i, j, k). When physical measured quantities (be they like-dimensioned or unlike-dimensioned) are multiplied or divided by one another, their dimensional units are likewise multiplied or divided; this corresponds to addition or subtraction in the module. When measurable quantities are raised to an integer power, the same is done to the dimensional symbols attached to those quantities; this corresponds to scalar multiplication in the module.
A basis for such a module of dimensional symbols is called a set of base quantities, and all other vectors are called derived units. As in any module, one may choose different bases, which yields different systems of units (e.g., choosing whether the unit for charge is derived from the unit for current, or vice versa).
The group identity, the dimension of dimensionless quantities, corresponds to the origin in this module, (0, 0, 0).
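A minimal sketch of this module structure, representing a dimension as a tuple of integer exponents over the basis (T, L, M); the class name and basis choice are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dim:
    """A dimension as a tuple of integer exponents over the basis (T, L, M)."""
    T: int = 0
    L: int = 0
    M: int = 0

    def __mul__(self, other):        # multiplying quantities adds exponent tuples
        return Dim(self.T + other.T, self.L + other.L, self.M + other.M)

    def __truediv__(self, other):    # dividing quantities subtracts exponent tuples
        return Dim(self.T - other.T, self.L - other.L, self.M - other.M)

    def __pow__(self, n: int):       # integer powers scale the exponent tuple
        return Dim(self.T * n, self.L * n, self.M * n)

DIMENSIONLESS = Dim(0, 0, 0)         # the group identity / origin of the module
TIME, LENGTH, MASS = Dim(T=1), Dim(L=1), Dim(M=1)

velocity = LENGTH / TIME
force = MASS * LENGTH / TIME ** 2
energy = force * LENGTH
print(energy)                        # Dim(T=-2, L=2, M=1)
# v^2 / (a * x) is dimensionless, so it maps to the origin of the module:
print(velocity ** 2 / (LENGTH / TIME ** 2) / LENGTH == DIMENSIONLESS)  # True
```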
In certain cases, one can define fractional dimensions, specifically by formally defining fractional powers of one-dimensional vector spaces, like . However, it is not possible to take arbitrary fractional powers of units, due to representation-theoretic obstructions.
One can work with vector spaces with given dimensions without needing to use units (corresponding to coordinate systems of the vector spaces). For example, given dimensions M and L, one has the vector spaces VM and VL, and can define VML := VM ⊗ VL as the tensor product. Similarly, the dual space can be interpreted as having "negative" dimensions. This corresponds to the fact that under the natural pairing between a vector space and its dual, the dimensions cancel, leaving a dimensionless scalar.
The set of units of the physical quantities involved in a problem correspond to a set of vectors (or a matrix). The nullity describes some number (e.g., m) of ways in which these vectors can be combined to produce a zero vector. These correspond to producing (from the measurements) a number of dimensionless quantities, π1, ..., πm. (In fact these ways completely span the null subspace of another different space, of powers of the measurements.) Every possible way of multiplying (and exponentiating) together the measured quantities to produce something with the same unit as some derived quantity X can be expressed in the general form X = X0 · π1^k1 π2^k2 ⋯ πm^km, where X0 is any one such product of powers of the measurements and the ki are integer exponents.
Consequently, every possible commensurate equation for the physics of the system can be rewritten in the form f(π1, π2, ..., πm) = 0.
Knowing this restriction can be a powerful tool for obtaining new insight into the system.
Mechanics
The dimension of physical quantities of interest in mechanics can be expressed in terms of base dimensions T, L, and M – these form a 3-dimensional vector space. This is not the only valid choice of base dimensions, but it is the one most commonly used. For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M; this corresponds to a different basis, and one may convert between these representations by a change of basis. The choice of the base set of dimensions is thus a convention, with the benefit of increased utility and familiarity. The choice of base dimensions is not entirely arbitrary, because they must form a basis: they must span the space, and be linearly independent.
For example, F, L, M form a set of fundamental dimensions because they form a basis that is equivalent to T, L, M: the former can be expressed as [F = LM/T2], L, M, while the latter can be expressed as [T = (LM/F)1/2], L, M.
On the other hand, length, velocity and time (T, L, V) do not form a set of base dimensions for mechanics, for two reasons:
There is no way to obtain mass – or anything derived from it, such as force – without introducing another base dimension (thus, they do not span the space).
Velocity, being expressible in terms of length and time, is redundant (the set is not linearly independent).
Other fields of physics and chemistry
Depending on the field of physics, it may be advantageous to choose one or another extended set of dimensional symbols. In electromagnetism, for example, it may be useful to use dimensions of T, L, M and Q, where Q represents the dimension of electric charge. In thermodynamics, the base set of dimensions is often extended to include a dimension for temperature, Θ. In chemistry, the amount of substance (the number of molecules divided by the Avogadro constant, approximately 6.02214076×1023 mol−1) is also defined as a base dimension, N.
In the interaction of relativistic plasma with strong laser pulses, a dimensionless relativistic similarity parameter, connected with the symmetry properties of the collisionless Vlasov equation, is constructed from the plasma-, electron- and critical-densities in addition to the electromagnetic vector potential. The choice of the dimensions or even the number of dimensions to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communications are common and necessary features.
Polynomials and transcendental functions
Bridgman's theorem restricts the type of function that can be used to define a physical quantity from general (dimensionally compounded) quantities to only products of powers of the quantities, unless some of the independent quantities are algebraically combined to yield dimensionless groups, whose functions are grouped together in the dimensionless numeric multiplying factor. This excludes polynomials of more than one term or transcendental functions not of that form.
Scalar arguments to transcendental functions such as exponential, trigonometric and logarithmic functions, or to inhomogeneous polynomials, must be dimensionless quantities. (Note: this requirement is somewhat relaxed in Siano's orientational analysis described below, in which the squares of certain dimensioned quantities are dimensionless.)
While most mathematical identities about dimensionless numbers translate in a straightforward manner to dimensional quantities, care must be taken with logarithms of ratios: the identity log(a/b) = log a − log b, where the logarithm is taken in any base, holds for dimensionless numbers a and b, but it does not hold if a and b are dimensional, because in this case the left-hand side is well-defined but the right-hand side is not.
Similarly, while one can evaluate monomials of dimensional quantities, one cannot evaluate polynomials of mixed degree with dimensionless coefficients on dimensional quantities: for x2, the expression (3 m)2 = 9 m2 makes sense (as an area), while for x2 + x, the expression (3 m)2 + 3 m = 9 m2 + 3 m does not make sense.
However, polynomials of mixed degree can make sense if the coefficients are suitably chosen physical quantities that are not dimensionless. For example,
y = −(9.8 m/s2)/2 · t2 + (500 m/s) · t. This is the height to which an object rises in time t if the acceleration of gravity is 9.8 metres per second per second and the initial upward speed is 500 metres per second. It is not necessary for t to be in seconds. For example, suppose t = 0.01 minutes. Then the first term would be −(9.8 m/s2)/2 × (0.01 min)2 = −4.9 × 10−4 m·(min/s)2 = −4.9 × 10−4 × 3600 m ≈ −1.76 m.
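A small numerical check of the point just made, assuming the height polynomial reconstructed above: the value of the quantity equation does not depend on whether t is carried in seconds or minutes, provided the coefficients' units are converted consistently.

```python
# Height y(t) = v0 * t - (g/2) * t**2, with unit-carrying coefficients as in the text.
# Assumed numbers from the text: g = 9.8 m/s^2, v0 = 500 m/s, t = 0.01 min.
g = 9.8              # m/s^2
v0 = 500.0           # m/s
t_min = 0.01         # minutes
t_s = t_min * 60.0   # the same instant expressed in seconds

# Evaluate with t in seconds (coefficients already in SI units):
y_si = v0 * t_s - 0.5 * g * t_s ** 2

# Evaluate with t kept in minutes by converting the coefficients instead:
v0_per_min = v0 * 60.0      # m/min
g_per_min2 = g * 3600.0     # m/min^2
y_min = v0_per_min * t_min - 0.5 * g_per_min2 * t_min ** 2

print(y_si, y_min)   # both ≈ 298.236 m — the quantity equation is unit-independent
```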
Combining units and numerical values
The value of a dimensional physical quantity Z is written as the product of a unit [Z] within the dimension and a dimensionless numerical value or numerical factor, n: Z = n × [Z].
When like-dimensioned quantities are added or subtracted or compared, it is convenient to express them in the same unit so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 metre added to 1 foot is a length, but one cannot derive that length by simply adding 1 and 1. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed:
1 ft = 0.3048 m is identical to 1 = 0.3048 m / 1 ft.
The factor 0.3048 m/ft is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing. Then when adding two quantities of like dimension, but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to the same unit so that their numerical values can be added or subtracted.
Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units.
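A minimal sketch of the conversion-factor bookkeeping just described:

```python
# Adding like-dimensioned quantities expressed in different units needs a
# conversion factor equal to the dimensionless 1 (here 0.3048 m / 1 ft).
M_PER_FT = 0.3048

length_a_m = 1.0      # 1 metre
length_b_ft = 1.0     # 1 foot

total_m = length_a_m + length_b_ft * M_PER_FT
total_ft = length_a_m / M_PER_FT + length_b_ft

print(f"{total_m:.4f} m")    # 1.3048 m
print(f"{total_ft:.4f} ft")  # 4.2808 ft — the same length, different numerical value
```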
Quantity equations
A quantity equation, also sometimes called a complete equation, is an equation that remains valid independently of the unit of measurement used when expressing the physical quantities.
In contrast, in a numerical-value equation, just the numerical values of the quantities occur, without units. Therefore, it is only valid when each numerical value is referenced to a specific unit.
For example, a quantity equation for displacement d as speed v multiplied by time difference Δt would be: d = v Δt,
for v = 5 m/s, where Δt and d may be expressed in any units, converted if necessary.
In contrast, a corresponding numerical-value equation would be: D = 5 T,
where T is the numeric value of Δt when expressed in seconds and D is the numeric value of d when expressed in metres.
Generally, the use of numerical-value equations is discouraged.
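A small sketch contrasting the two kinds of equation (the helper names are illustrative, and the quantity equation mirrors the d = v Δt example reconstructed above):

```python
# Quantity equation d = v * Δt: carry the units, so any consistent input works.
def displacement_m(v_m_per_s, t_value, t_unit="s"):
    seconds_per_unit = {"s": 1.0, "min": 60.0, "h": 3600.0}
    return v_m_per_s * t_value * seconds_per_unit[t_unit]

# Numerical-value equation D = 5 T: valid ONLY for v = 5 m/s, T in seconds, D in metres.
def displacement_numeric(T_seconds):
    return 5.0 * T_seconds

print(displacement_m(5.0, 2.0, "min"))   # 600.0 m — the unit is handled explicitly
print(displacement_numeric(120.0))       # 600.0 — the caller must know the implied units
```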
Dimensionless concepts
Constants
The dimensionless constants that arise in the results obtained, such as the C in the Poiseuille's Law problem and the κ in the spring problems discussed above, come from a more detailed analysis of the underlying physics and often arise from integrating some differential equation. Dimensional analysis itself has little to say about these constants, but it is useful to know that they very often have a magnitude of order unity. This observation can allow one to sometimes make "back of the envelope" calculations about the phenomenon of interest, and therefore be able to more efficiently design experiments to measure it, or to judge whether it is important, etc.
Formalisms
Paradoxically, dimensional analysis can be a useful tool even if all the parameters in the underlying theory are dimensionless, e.g., lattice models such as the Ising model can be used to study phase transitions and critical phenomena. Such models can be formulated in a purely dimensionless way. As we approach the critical point closer and closer, the distance over which the variables in the lattice model are correlated (the so-called correlation length, ξ) becomes larger and larger. Now, the correlation length is the relevant length scale related to critical phenomena, so one can, e.g., surmise on "dimensional grounds" that the non-analytical part of the free energy per lattice site should be ~ ξ−d, where d is the dimension of the lattice.
It has been argued by some physicists, e.g., Michael J. Duff, that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to Length, Time and Mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics, there was no way to relate mass, length, and time to each other. The three independent dimensionful constants c, ħ, and G in the fundamental equations of physics must then be seen as mere conversion factors to convert Mass, Time and Length into each other.
Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants ħ, c, and G (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit c → ∞, ħ → 0 and G → 0. In problems involving a gravitational field the latter limit should be taken such that the field stays finite.
Dimensional equivalences
Following are tables of commonly occurring expressions in physics, related to the dimensions of energy, momentum, and force.
SI units
Programming languages
Dimensional correctness as part of type checking has been studied since 1977.
Implementations for Ada and C++ were described in 1985 and 1988.
Kennedy's 1996 thesis describes an implementation in Standard ML, and later in F#. There are implementations for Haskell, OCaml, Rust, and Python, and a code checker for Fortran.
Griffioen's 2019 thesis extended Kennedy's Hindley–Milner type system to support Hart's matrices.
McBride and Nordvall-Forsberg show how to use dependent types to extend type systems for units of measure.
Mathematica 13.2 has a function for transformations with quantities named NondimensionalizationTransform that applies a nondimensionalization transform to an equation. Mathematica also has a function to find the dimensions of a unit, such as 1 J, named UnitDimensions. Mathematica also has a function that will find dimensionally equivalent combinations of a subset of physical quantities, named DimensionalCombinations. Mathematica can also factor out certain dimensions by specifying an argument to the function UnityDimensions; for example, UnityDimensions can be used to factor out angles. In addition to UnitDimensions, Mathematica can find the dimensions of a QuantityVariable with the function QuantityVariableDimensions.
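As a concrete illustration of such unit-aware programming, here is a minimal sketch using the third-party Python library pint (one possible choice among the Python implementations mentioned above, not singled out by the sources; installing it is assumed):

```python
# pip install pint
import pint

ureg = pint.UnitRegistry()

distance = 3 * ureg.meter + 4 * ureg.centimeter   # like dimensions: OK, converted automatically
print(distance.to(ureg.foot))                     # ≈ 9.97 foot

speed = distance / (2 * ureg.second)
print(speed.dimensionality)                       # [length] / [time]

try:
    distance + 2 * ureg.second                    # length + time: dimensionally invalid
except pint.DimensionalityError as err:
    print("rejected:", err)
```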
Geometry: position vs. displacement
Affine quantities
Some discussions of dimensional analysis implicitly describe all quantities as mathematical vectors. In mathematics scalars are considered a special case of vectors; vectors can be added to or subtracted from other vectors, and, inter alia, multiplied or divided by scalars. If a vector is used to define a position, this assumes an implicit point of reference: an origin. While this is useful and often perfectly adequate, allowing many important errors to be caught, it can fail to model certain aspects of physics. A more rigorous approach requires distinguishing between position and displacement (or moment in time versus duration, or absolute temperature versus temperature change).
Consider points on a line, each with a position with respect to a given origin, and distances among them. Positions and displacements all have units of length, but their meaning is not interchangeable:
adding two displacements should yield a new displacement (walking ten paces then twenty paces gets you thirty paces forward),
adding a displacement to a position should yield a new position (walking one block down the street from an intersection gets you to the next intersection),
subtracting two positions should yield a displacement,
but one may not add two positions.
This illustrates the subtle distinction between affine quantities (ones modeled by an affine space, such as position) and vector quantities (ones modeled by a vector space, such as displacement).
Vector quantities may be added to each other, yielding a new vector quantity, and a vector quantity may be added to a suitable affine quantity (a vector space acts on an affine space), yielding a new affine quantity.
Affine quantities cannot be added, but may be subtracted, yielding relative quantities which are vectors, and these relative differences may then be added to each other or to an affine quantity.
Properly then, positions have dimension of affine length, while displacements have dimension of vector length. To assign a number to an affine unit, one must not only choose a unit of measurement, but also a point of reference, while to assign a number to a vector unit only requires a unit of measurement.
Thus some physical quantities are better modeled by vectorial quantities while others tend to require affine representation, and the distinction is reflected in their dimensional analysis.
This distinction is particularly important in the case of temperature, for which the numeric value of absolute zero is not the origin 0 in some scales. For absolute zero,
−273.15 °C ≘ 0 K = 0 °R ≘ −459.67 °F,
where the symbol ≘ means corresponds to, since although these values on the respective temperature scales correspond, they represent distinct quantities in the same way that the distances from distinct starting points to the same end point are distinct quantities, and cannot in general be equated.
For temperature differences,
1 K = 1 °C ≠ 1 °F = 1 °R.
(Here °R refers to the Rankine scale, not the Réaumur scale).
Unit conversion for temperature differences is simply a matter of multiplying by, e.g., 1 °F / 1 K (although the ratio is not a constant value). But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to −272.15 °C, or the temperature difference equal to 1 °C.
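A minimal sketch of the distinction for temperature, using the standard Celsius/Fahrenheit conversion formulas:

```python
# Absolute temperatures (affine) vs temperature differences (vector-like).
def c_to_f_absolute(temp_c):
    """Convert an absolute temperature: a scale factor AND an offset are needed."""
    return temp_c * 9.0 / 5.0 + 32.0

def c_to_f_difference(delta_c):
    """Convert a temperature difference: only the scale factor applies."""
    return delta_c * 9.0 / 5.0

print(c_to_f_absolute(-273.15))   # -459.67 °F (absolute zero)
print(c_to_f_difference(1.0))     # 1.8 °F — a 1 °C change is a 1.8 °F change, not 33.8 °F
```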
Orientation and frame of reference
Similar to the issue of a point of reference is the issue of orientation: a displacement in 2 or 3 dimensions is not just a length, but is a length together with a direction. (In 1 dimension, this issue is equivalent to the distinction between positive and negative.) Thus, to compare or combine two dimensional quantities in multi-dimensional Euclidean space, one also needs a bearing: they need to be compared to a frame of reference.
This leads to the extensions discussed below, namely Huntley's directed dimensions and Siano's orientational analysis.
Huntley's extensions
Huntley has pointed out that a dimensional analysis can become more powerful by discovering new independent dimensions in the quantities under consideration, thus increasing the rank of the dimensional matrix.
He introduced two approaches:
The magnitudes of the components of a vector are to be considered dimensionally independent. For example, rather than an undifferentiated length dimension L, we may have Lx represent dimension in the x-direction, and so forth. This requirement stems ultimately from the requirement that each component of a physically meaningful equation (scalar, vector, or tensor) must be dimensionally consistent.
Mass as a measure of the quantity of matter is to be considered dimensionally independent from mass as a measure of inertia.
Directed dimensions
As an example of the usefulness of the first approach, suppose we wish to calculate the distance a cannonball travels when fired with a vertical velocity component vy and a horizontal velocity component vx, assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then R, the distance travelled, with dimension L, vx and vy, both dimensioned as T−1L, and g, the downward acceleration of gravity, with dimension T−2L.
With these four quantities, we may conclude that the equation for the range R may be written: R ∝ vx^a vy^b g^c.
Or dimensionally: L = (T−1L)^(a+b) (T−2L)^c,
from which we may deduce that a + b + c = 1 and a + b + 2c = 0, which leaves one exponent undetermined. This is to be expected since we have two fundamental dimensions T and L, and four parameters, with one equation.
However, if we use directed length dimensions, then vx will be dimensioned as T−1Lx, vy as T−1Ly, R as Lx and g as T−2Ly. The dimensional equation becomes: Lx = (T−1Lx)^a (T−1Ly)^b (T−2Ly)^c,
and we may solve completely as a = 1, b = 1 and c = −1. The increase in deductive power gained by the use of directed length dimensions is apparent.
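The directed-dimension bookkeeping can be checked mechanically: requiring vx^a vy^b g^c to carry the directed dimension Lx of the range gives one linear equation per base symbol. A sketch using sympy (an assumed tool choice):

```python
from sympy import symbols, linsolve

a, b, c = symbols("a b c")

# Require vx**a * vy**b * g**c to have the directed dimension of the range R (= Lx).
# Exponent bookkeeping per directed base dimension:
#   Lx:  a             = 1
#   Ly:  b + c         = 0
#   T : -a - b - 2*c   = 0
solution = linsolve([a - 1, b + c, -a - b - 2 * c], (a, b, c))
print(solution)   # {(1, 1, -1)}  ->  R ∝ vx * vy / g
```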
Huntley's concept of directed length dimensions however has some serious limitations:
It does not deal well with vector equations involving the cross product,
nor does it handle well the use of angles as physical variables.
It also is often quite difficult to assign the L, Lx, Ly, Lz symbols to the physical variables involved in the problem of interest. He invokes a procedure that involves the "symmetry" of the physical problem. This is often very difficult to apply reliably: it is unclear to which parts of the problem the notion of "symmetry" is being applied. Is it the symmetry of the physical body that forces are acting upon, or of the points, lines or areas at which forces are being applied? What if more than one body is involved with different symmetries?
Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? Are they the same or different? These difficulties are responsible for the limited application of Huntley's directed length dimensions to real problems.
Quantity of matter
In Huntley's second approach, he holds that it is sometimes useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia (inertial mass), and mass as a measure of the quantity of matter. Quantity of matter is defined by Huntley as a quantity proportional to inertial mass, while not implicating inertial properties. No further restrictions are added to its definition.
For example, consider the derivation of Poiseuille's Law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe. Without drawing distinctions between inertial and substantial mass, we may choose as the relevant variables:
There are three fundamental variables, so the above five equations will yield two independent dimensionless variables:
If we distinguish between inertial mass with dimension and quantity of matter with dimension , then mass flow rate and density will use quantity of matter as the mass parameter, while the pressure gradient and coefficient of viscosity will use inertial mass. We now have four fundamental parameters, and one dimensionless constant, so that the dimensional equation may be written:
where now only C is an undetermined constant (found to be equal to π/8 by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yield Poiseuille's law.
Huntley's recognition of quantity of matter as an independent quantity dimension is evidently successful in the problems where it is applicable, but his definition of quantity of matter is open to interpretation, as it lacks specificity beyond the two requirements he postulated for it. For a given substance, the SI dimension amount of substance, with unit mole, does satisfy Huntley's two requirements as a measure of quantity of matter, and could be used as a quantity of matter in any problem of dimensional analysis where Huntley's concept is applicable.
Siano's extension: orientational analysis
Angles are, by convention, considered to be dimensionless quantities (although the wisdom of this is contested). As an example, consider again the projectile problem in which a point mass is launched from the origin at a speed v and angle θ above the x-axis, with the force of gravity directed along the negative y-axis. It is desired to find the range R, at which point the mass returns to the x-axis. Conventional analysis will yield the dimensionless variable π = Rg/v2, but offers no insight into the relationship between R and θ.
Siano has suggested that the directed dimensions of Huntley be replaced by using orientational symbols 1x, 1y, 1z to denote vector directions, and an orientationless symbol 10. Thus, Huntley's Lx becomes L1x, with L specifying the dimension of length and 1x specifying the orientation. Siano further shows that the orientational symbols have an algebra of their own. Along with the requirement that 1i−1 = 1i, the following multiplication table for the orientation symbols results:
The orientational symbols form a group (the Klein four-group or "Viergruppe"). In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem". Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation of 1z. For angles, consider an angle θ that lies in the z-plane. Form a right triangle in the z-plane with θ being one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation 1x and the side opposite has an orientation 1y. Since (using ~ to indicate orientational equivalence) tan(θ) = θ + ⋯ ~ 1y/1x, we conclude that an angle in the xy-plane must have an orientation 1y/1x = 1z, which is not unreasonable. Analogous reasoning forces the conclusion that sin(θ) has orientation 1z while cos(θ) has orientation 10. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the form a cos(θ) + b sin(θ), where a and b are real scalars. An expression such as sin(θ1 + θ2) is not dimensionally inconsistent since it is a special case of the sum of angles formula and should properly be written: sin(θ1 1z + θ2 1z) = 1z [sin(θ1) cos(θ2) + cos(θ1) sin(θ2)],
which for θ1 = θ and θ2 = θ yields 1z sin(2θ) = 1z [2 sin(θ) cos(θ)]. Siano distinguishes between geometric angles, which have an orientation in 3-dimensional space, and phase angles associated with time-based oscillations, which have no spatial orientation, i.e. the orientation of a phase angle is 10.
The assignment of orientational symbols to physical quantities and the requirement that physical equations be orientationally homogeneous can actually be used in a way that is similar to dimensional analysis to derive more information about acceptable solutions of physical problems. In this approach, one solves the dimensional equation as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution are raised to a power such that all powers are integral, putting the solution into normal form. The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols. The solution is then more complete than the one that dimensional analysis alone gives. Often, the added information is that one of the powers of a certain variable is even or odd.
As an example, for the projectile problem, using orientational symbols, θ, being in the xy-plane, will thus have dimension 1z, and the range of the projectile R will be of the form: R = g^a v^b θ^c, which means L1x ~ (L1y/T2)^a (L/T)^b (1z)^c.
Dimensional homogeneity will now correctly yield a = −1 and b = 2, and orientational homogeneity requires that 1z^c = 1z. In other words, c must be an odd integer. In fact, the required function of theta will be sin(θ)cos(θ), which is a series consisting of odd powers of θ.
It is seen that the Taylor series of sin(θ) and cos(θ) are orientationally homogeneous using the above multiplication table, while expressions like cos(θ) + sin(θ) and exp(θ) are not, and are (correctly) deemed unphysical.
Siano's orientational analysis is compatible with the conventional conception of angular quantities as being dimensionless, and within orientational analysis, the radian may still be considered a dimensionless unit. The orientational analysis of a quantity equation is carried out separately from the ordinary dimensional analysis, yielding information that supplements the dimensional analysis.
See also
Buckingham π theorem
Dimensionless numbers in fluid mechanics
Fermi estimate – used to teach dimensional analysis
Numerical-value equation
Rayleigh's method of dimensional analysis
Similitude – an application of dimensional analysis
System of measurement
Related areas of mathematics
Covariance and contravariance of vectors
Exterior algebra
Geometric algebra
Quantity calculus
Notes
References
As postscript
Wilson, Edwin B. (1920) "Theory of Dimensions", chapter XI of Aeronautics, via Internet Archive
Further reading
External links
List of dimensions for variety of physical quantities
Unicalc Live web calculator doing units conversion by dimensional analysis
A C++ implementation of compile-time dimensional analysis in the Boost open-source libraries
Buckingham's pi-theorem
Quantity System calculator for units conversion based on dimensional approach
Units, quantities, and fundamental constants project dimensional analysis maps
Measurement
Conversion of units of measurement
Chemical engineering
Mechanical engineering
Environmental engineering
Evolutionary biology
Evolutionary biology is the subfield of biology that studies the evolutionary processes (natural selection, common descent, speciation) that produced the diversity of life on Earth. It is also defined as the study of the history of life forms on Earth. Evolution holds that all species are related and gradually change over generations. In a population, genetic variations affect the phenotypes (physical characteristics) of organisms. Some of these phenotypic changes give an advantage to certain organisms, and the underlying variants are then passed on to their offspring. Some examples of evolution in species over many generations are the peppered moth and flightless birds. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology, systematics, and paleontology.
The investigational range of current research has widened to encompass the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, such as sexual selection, genetic drift, and biogeography. Moreover, the newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis.
Subfields
Evolution is the central unifying concept in biology. Biology can be divided up in various ways. One way is by the level of biological organization, from molecular to cell, organism to population. Another way is by perceived taxonomic group, with fields such as zoology, botany, and microbiology, reflecting what were once seen as the major divisions of life. A third way is by approach, such as field biology, theoretical biology, experimental evolution, and paleontology. These alternative ways of dividing up the subject have been combined with evolutionary biology to create subfields like evolutionary ecology and evolutionary developmental biology.
More recently, the merging of biological science and applied sciences has given birth to new fields that are extensions of evolutionary biology, including evolutionary robotics, engineering, algorithms, economics, and architecture. The basic mechanisms of evolution are applied directly or indirectly to come up with novel designs or solve problems that are difficult to solve otherwise. The research generated in these applied fields contributes towards progress, especially from work on evolution in computer science and engineering fields such as mechanical engineering.
Different types of evolution
Adaptive evolution
Adaptive evolution refers to evolutionary changes that happen in response to changes in the environment, making the organism suited to its habitat. This change increases the chances of survival and reproduction of the organism (this can be referred to as an organism's fitness). For example, Darwin's finches on the Galápagos Islands developed differently shaped beaks suited to different food sources, improving their chances of survival. Adaptive evolution can also be convergent evolution if two distantly related species live in similar environments facing similar pressures.
Convergent evolution
Convergent evolution is the process in which unrelated or distantly related organisms independently evolve similar characteristics. This type of evolution creates analogous structures, which have a similar function, structure, or form between the two species. For example, sharks and dolphins look alike but they are not closely related. Likewise, birds, flying insects, and bats all have the ability to fly, but they are not related to each other. These similar traits tend to evolve from similar environmental pressures.
Divergent evolution
Divergent evolution is the process of speciation. This can happen in several ways:
Allopatric speciation is when species are separated by a physical barrier that separates the population into two groups. Evolutionary mechanisms such as genetic drift and natural selection can then act independently on each population.
Peripatric speciation is a type of allopatric speciation that occurs when one of the new populations is considerably smaller than the other initial population. This leads to the founder effect, and the population can have different allele frequencies and phenotypes than the original population. These small populations are also more likely to see effects from genetic drift.
Parapatric speciation is similar to allopatric speciation, but occurs when the species diverge without a physical barrier separating the population. This tends to occur when a population of a species is incredibly large and occupies a vast environment.
Sympatric speciation is when a new species or subspecies sprouts from the original population while still occupying the same small environment, and without any physical barriers separating them from members of their original population. There is scientific debate as to whether sympatric speciation actually exists.
Artificial speciation is when scientists purposefully cause new species to emerge to use in laboratory procedures.
Coevolution
The reciprocal evolutionary influence between two closely associated species is known as coevolution. When two or more species evolve in company with each other, one species adapts to changes in the other. This type of evolution often happens in species that have symbiotic relationships. Predator-prey coevolution is the most common type: the predator must evolve to become a more effective hunter because there is selective pressure on the prey to steer clear of capture, and the prey in turn need to develop better survival strategies. The Red Queen hypothesis is an example of predator-prey interactions. The relationships between pollinating insects like bees and flowering plants, and between herbivores and plants, are also common examples of diffuse or guild coevolution.
Mechanism: The process of evolution
The mechanisms of evolution focus mainly on mutation, genetic drift, gene flow, non-random mating, and natural selection.
Mutation: Mutation is a change in the DNA sequence of a gene or a chromosome of an organism. Most mutations are deleterious or neutral, i.e. they neither harm nor benefit the organism, but some can be beneficial.
Genetic drift: Genetic drift is a variational process that results from sampling error from one generation to the next, whereby random events change allele frequencies within a population. It has a much stronger effect on small populations than large ones.
Gene flow: Gene flow is the transfer of genetic material from the gene pool of one population to another. Migration of individuals from one population to another results in changes of allele frequency.
Natural selection: The survival and reproductive rate of organisms depends on how well they are adapted to their environment. This process is called natural selection. Individuals with certain traits in a population have higher survival and reproductive rates than others (fitness), and they pass on these genetic features to their offspring.
Evolutionary developmental biology
In evolutionary developmental biology, scientists look at how the different processes in development play a role in how a specific organism reaches its current body plan. The genetic regulation of ontogeny and the phylogenetic process is what allows for this kind of understanding of biology to be possible. By looking at different processes during development, and going through the evolutionary tree, one can determine at which point a specific structure came about. For example, the three germ layers can be observed to be absent in cnidarians and ctenophores, whereas they are present in worms, being more or less developed depending on the kind of worm itself. Other structures, like the development of Hox genes and sensory organs such as eyes, can also be traced with this practice.
Phylogenetic Trees
Phylogenetic trees are representations of genetic lineage. They are figures that show how related species are to one another. They are formed by analyzing physical traits as well as similarities in the DNA between species. Then, by using a molecular clock, scientists can estimate when the species diverged. An example of a phylogeny is the tree of life.
Homologs
Genes that have shared ancestry are homologs. If a speciation event occurs and one gene ends up in two different species, the genes are now orthologous. If a gene is duplicated within a single species, then it is a paralog. A molecular clock can be used to estimate when these events occurred.
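A minimal molecular-clock sketch (the rate and divergence figures are invented for illustration, and the model ignores complications such as multiple substitutions at the same site):

```python
# If substitutions accumulate at a roughly constant rate r per site per year,
# two lineages that differ at a fraction d of sites diverged about d / (2*r)
# years ago (each lineage accumulates half of the observed difference).
def divergence_time_years(fraction_sites_differing, rate_per_site_per_year):
    return fraction_sites_differing / (2.0 * rate_per_site_per_year)

d = 0.02       # 2% of aligned sites differ between the two species (illustrative)
r = 1.0e-9     # assumed substitution rate per site per year (illustrative)
print(f"estimated divergence ≈ {divergence_time_years(d, r):,.0f} years ago")  # ~10,000,000
```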
History
The idea of evolution by natural selection was proposed by Charles Darwin in 1859, but evolutionary biology, as an academic discipline in its own right, emerged during the period of the modern synthesis in the 1930s and 1940s. It was not until the 1980s that many universities had departments of evolutionary biology. In the United States, many universities have created departments of molecular and cell biology or ecology and evolutionary biology, in place of the older departments of botany and zoology. Palaeontology is often grouped with earth science.
Microbiology too is becoming an evolutionary discipline now that microbial physiology and genomics are better understood. The quick generation time of bacteria and viruses such as bacteriophages makes it possible to explore evolutionary questions.
Many biologists have contributed to shaping the modern discipline of evolutionary biology. Theodosius Dobzhansky and E. B. Ford established an empirical research programme. Ronald Fisher, Sewall Wright, and J. B. S. Haldane created a sound theoretical framework. Ernst Mayr in systematics, George Gaylord Simpson in paleontology and G. Ledyard Stebbins in botany helped to form the modern synthesis. James Crow, Richard Lewontin, Dan Hartl, Marcus Feldman, and Brian Charlesworth trained a generation of evolutionary biologists.
Current research topics
Current research in evolutionary biology covers diverse topics and incorporates ideas from diverse areas, such as molecular genetics and computer science.
First, some fields of evolutionary research try to explain phenomena that were poorly accounted for in the modern evolutionary synthesis. These include speciation, the evolution of sexual reproduction, the evolution of cooperation, the evolution of ageing, and evolvability.
Second, some evolutionary biologists ask the most straightforward evolutionary question: "what happened and when?". This includes fields such as paleobiology, where paleobiologists and evolutionary biologists, including Thomas Halliday and Anjali Goswami, studied the evolution of early mammals going far back in time during the Mesozoic and Cenozoic eras (between 299 million to 12,000 years ago). Other fields related to generic exploration of evolution ("what happened and when?" ) include systematics and phylogenetics.
Third, the modern evolutionary synthesis was devised at a time when nobody understood the molecular basis of genes. Today, evolutionary biologists try to determine the genetic architecture of interesting evolutionary phenomena such as adaptation and speciation. They seek answers to questions such as how many genes are involved, how large are the effects of each gene, how interdependent are the effects of different genes, what do the genes do, and what changes happen to them (e.g., point mutations vs. gene duplication or even genome duplication). They try to reconcile the high heritability seen in twin studies with the difficulty in finding which genes are responsible for this heritability using genome-wide association studies.
One challenge in studying genetic architecture is that the classical population genetics that catalysed the modern evolutionary synthesis must be updated to take into account modern molecular knowledge. This requires a great deal of mathematical development to relate DNA sequence data to evolutionary theory as part of a theory of molecular evolution. For example, biologists try to infer which genes have been under strong selection by detecting selective sweeps.
Fourth, the modern evolutionary synthesis involved agreement about which forces contribute to evolution, but not about their relative importance. Current research seeks to determine this. Evolutionary forces include natural selection, sexual selection, genetic drift, genetic draft, developmental constraints, mutation bias and biogeography.
This evolutionary approach is key to much current research in organismal biology and ecology, such as life history theory. Annotation of genes and their function relies heavily on comparative approaches. The field of evolutionary developmental biology ("evo-devo") investigates how developmental processes work, and compares them in different organisms to determine how they evolved.
Many physicians do not have enough background in evolutionary biology, making it difficult to use it in modern medicine. However, there are efforts to gain a deeper understanding of disease through evolutionary medicine and to develop evolutionary therapies.
Drug resistance today
Evolution plays a role in resistance to drugs; for example, in how HIV becomes resistant to medications and to the body's immune system. HIV's resistance arises from natural selection of the survivors and their offspring: the few virus particles that survive the immune system reproduce and have offspring that are also resistant to the immune system. Drug resistance also causes many problems for patients, such as a worsening sickness, or the sickness can mutate into something that can no longer be cured with medication. Without the proper medicine, a sickness can be the death of a patient. If their body has resistance to a certain number of drugs, then the right medicine will be harder and harder to find. Not completing the prescribed full course of antibiotics is also an example of resistance that will cause the bacteria against which the antibiotic is being taken to evolve and continue to spread in the body. When the full dosage of the medication does not enter the body and perform its proper job, the bacteria that survive the initial dosage will continue to reproduce. This can make for another bout of sickness later on that will be more difficult to cure because the bacteria involved will be resistant to the first medication used. Taking the full course of medicine that is prescribed is a vital step in avoiding antibiotic resistance.
Individuals with chronic illnesses, especially those that can recur throughout a lifetime, are at greater risk of antibiotic resistance than others. This is because overuse of a drug or too high a dosage can cause a patient's immune system to weaken while the illness evolves and grows stronger. For example, cancer patients will need a stronger and stronger dosage of medication because of their low-functioning immune system.
Journals
Some scientific journals specialise exclusively in evolutionary biology as a whole, including the journals Evolution, Journal of Evolutionary Biology, and BMC Evolutionary Biology. Some journals cover sub-specialties within evolutionary biology, such as the journals Systematic Biology, Molecular Biology and Evolution and its sister journal Genome Biology and Evolution, and Cladistics.
Other journals combine aspects of evolutionary biology with other related fields. For example, Molecular Ecology, Proceedings of the Royal Society of London Series B, The American Naturalist and Theoretical Population Biology have overlap with ecology and other aspects of organismal biology. Overlap with ecology is also prominent in the review journals Trends in Ecology and Evolution and Annual Review of Ecology, Evolution, and Systematics. The journals Genetics and PLoS Genetics overlap with molecular genetics questions that are not obviously evolutionary in nature.
See also
Comparative anatomy
Computational phylogenetics
Evolutionary computation
Evolutionary dynamics
Evolutionary neuroscience
Evolutionary physiology
On the Origin of Species
Macroevolution
Phylogenetic comparative methods
Quantitative genetics
Selective breeding
Taxonomy (biology)
Speculative evolution
References
External links
Evolution and Paleobotany at the Encyclopædia Britannica
Philosophy of biology
Replication crisis
The replication crisis is an ongoing methodological crisis in which the results of many scientific studies are difficult or impossible to reproduce. Because the reproducibility of empirical results is an essential part of the scientific method, such failures undermine the credibility of theories building on them and potentially call into question substantial parts of scientific knowledge.
The replication crisis is frequently discussed in relation to psychology and medicine, where considerable efforts have been undertaken to reinvestigate classic results, to determine whether they are reliable, and if they turn out not to be, the reasons for the failure. Data strongly indicate that other natural and social sciences are affected as well.
The phrase replication crisis was coined in the early 2010s as part of a growing awareness of the problem. Considerations of causes and remedies have given rise to a new scientific discipline, metascience, which uses methods of empirical research to examine empirical research practice.
Considerations about reproducibility can be placed into two categories. Reproducibility in the narrow sense refers to re-examining and validating the analysis of a given set of data. Replication refers to repeating the experiment or study to obtain new, independent data with the goal of reaching the same or similar conclusions.
Background
Replication
Replication has been called "the cornerstone of science". Environmental health scientist Stefan Schmidt began a 2009 review with this description of replication:
But there is limited consensus on how to define replication and potentially related concepts. A number of types of replication have been identified:
Direct or exact replication, where an experimental procedure is repeated as closely as possible.
Systematic replication, where an experimental procedure is largely repeated, with some intentional changes.
Conceptual replication, where a finding or hypothesis is tested using a different procedure. Conceptual replication allows testing for generalizability and veracity of a result or hypothesis.
Reproducibility can also be distinguished from replication, as referring to reproducing the same results using the same data set. Reproducibility of this type is why many researchers make their data available to others for testing.
The replication crisis does not necessarily mean these fields are unscientific. Rather, this process is part of the scientific process in which old ideas or those that cannot withstand careful scrutiny are pruned, although this pruning process is not always effective.
A hypothesis is generally considered to be supported when the results match the predicted pattern and that pattern of results is found to be statistically significant. Results are considered significant whenever the relative frequency of the observed pattern falls below an arbitrarily chosen value (i.e. the significance level) when assuming the null hypothesis is true. This generally answers the question of how unlikely results would be if no difference existed at the level of the statistical population. If the probability associated with the test statistic exceeds the chosen critical value, the results are considered statistically significant. The corresponding probability of exceeding the critical value is depicted as p < 0.05, where p (typically referred to as the "p-value") is the probability level. This should result in 5% of hypotheses that are supported being false positives (an incorrect hypothesis being erroneously found correct), assuming the studies meet all of the statistical assumptions. Some fields use smaller p-values, such as p < 0.01 (1% chance of a false positive) or p < 0.001 (0.1% chance of a false positive). But a smaller chance of a false positive often requires greater sample sizes or a greater chance of a false negative (a correct hypothesis being erroneously found incorrect). Although p-value testing is the most commonly used method, it is not the only method.
Statistics
Certain terms commonly used in discussions of the replication crisis have technically precise meanings, which are presented here.
In the most common case, null hypothesis testing, there are two hypotheses, a null hypothesis H0 and an alternative hypothesis H1. The null hypothesis is typically of the form "X and Y are statistically independent". For example, the null hypothesis might be "taking drug X does not change 1-year recovery rate from disease Y", and the alternative hypothesis is that it does change.
As testing for full statistical independence is difficult, the full null hypothesis is often reduced to a simplified null hypothesis "the effect size is 0", where "effect size" is a real number that is 0 if the full null hypothesis is true, and the larger the effect size is, the more the null hypothesis is false. For example, if X is binary, then the effect size might be defined as the change in the expectation of Y upon a change of X: δ = E[Y | X = 1] − E[Y | X = 0]. Note that the effect size as defined above might be zero even if X and Y are not independent, such as when X changes the variance of Y but not its expectation. Since different definitions of "effect size" capture different ways for X and Y to be dependent, there are many different definitions of effect size.
In practice, effect sizes cannot be directly observed, but must be measured by statistical estimators. For example, the above definition of effect size is often measured by Cohen's d estimator. The same effect size might have multiple estimators, as they have tradeoffs between efficiency, bias, variance, etc. This further increases the number of possible statistical quantities that can be computed on a single dataset. When an estimator for an effect size is used for statistical testing, it is called a test statistic.
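For instance, the effect size defined above is often estimated with Cohen's d; a minimal sketch of such an estimator, applied to simulated data, might look as follows:

```python
# Sketch of one common effect-size estimator (Cohen's d with a pooled
# standard deviation); other estimators of the same effect size exist.
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference between two samples."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
d = cohens_d(rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40))
print(f"Estimated Cohen's d: {d:.2f}")
```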
A null hypothesis test is a decision procedure which takes in some data and outputs either $H_0$ or $H_1$. If it outputs $H_1$, this is usually stated as "there is a statistically significant effect" or "the null hypothesis is rejected".
Often, the statistical test is a (one-sided) threshold test, which is structured as follows:
Gather data $D$.
Compute a test statistic $t(D)$ for the data.
Compare the test statistic against a critical value (threshold) $t_c$. If $t(D) > t_c$, output $H_1$; otherwise, output $H_0$.
A two-sided threshold test is similar, but with two thresholds, such that it outputs $H_1$ if either $t(D) > t_{c,+}$ or $t(D) < t_{c,-}$.
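A minimal sketch of one-sided and two-sided threshold tests; the t-like statistic, the thresholds, and the data are illustrative assumptions rather than a prescription:

```python
# Sketch of the one-sided and two-sided threshold tests described above;
# the statistic, thresholds, and data are illustrative assumptions.
import numpy as np

def one_sided_test(data, threshold):
    """Return 'H1' if the test statistic exceeds the threshold, else 'H0'."""
    t = np.mean(data) / (np.std(data, ddof=1) / np.sqrt(len(data)))  # a t-like statistic
    return "H1" if t > threshold else "H0"

def two_sided_test(data, lower, upper):
    """Return 'H1' if the statistic falls outside [lower, upper]."""
    t = np.mean(data) / (np.std(data, ddof=1) / np.sqrt(len(data)))
    return "H1" if (t > upper or t < lower) else "H0"

rng = np.random.default_rng(2)
sample = rng.normal(0.4, 1.0, size=25)
print(one_sided_test(sample, threshold=1.711))            # ~95th percentile of t(24)
print(two_sided_test(sample, lower=-2.064, upper=2.064))  # ~2.5th/97.5th percentiles
```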
There are four possible outcomes of a null hypothesis test: false negative, true negative, false positive, and true positive. A false negative means that $H_1$ is true but the test outputs $H_0$; a true negative means that $H_0$ is true and the test outputs $H_0$; and so on.
The significance level, false positive rate, or alpha level, denoted $\alpha$, is the probability of finding the alternative to be true when the null hypothesis is true: $\alpha = \Pr(\text{test outputs } H_1 \mid H_0 \text{ is true})$. For example, when the test is a one-sided threshold test, $\alpha = \Pr_{D \sim H_0}(t(D) > t_c)$, where $D \sim H_0$ means "the data is sampled from $H_0$".
Statistical power, or the true positive rate $1 - \beta$, is the probability of finding the alternative to be true when the alternative hypothesis is true: $1 - \beta = \Pr(\text{test outputs } H_1 \mid H_1 \text{ is true})$, where $\beta$ is also called the false negative rate. For example, when the test is a one-sided threshold test, $1 - \beta = \Pr_{D \sim H_1}(t(D) > t_c)$.
Given a statistical test and a data set $D$, the corresponding p-value is the probability that the test statistic is at least as extreme as the one observed, conditional on $H_0$. For example, for a one-sided threshold test, $p = \Pr_{D' \sim H_0}(t(D') \geq t(D))$. If the null hypothesis is true, then the p-value is distributed uniformly on $[0, 1]$. Otherwise, it is typically peaked at $0$ and roughly exponentially distributed, though the precise shape of the p-value distribution depends on the alternative hypothesis.
Since the p-value is distributed uniformly on $[0, 1]$ conditional on the null hypothesis, one may construct a statistical test with any significance level $\alpha$ by computing the p-value and outputting $H_1$ if $p < \alpha$. This is usually stated as "the null hypothesis is rejected at significance level $\alpha$", or "$p < \alpha$", such as "smoking is correlated with cancer (p < 0.001)".
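The uniformity of p-values under the null hypothesis can be checked by simulation; the sketch below (with assumed normal data and arbitrary sample sizes) shows that roughly 5% of p-values fall below 0.05 when there is no true effect:

```python
# Simulation sketch: under a true null hypothesis, p-values are roughly
# uniform on [0, 1], so about 5% fall below 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
p_values = []
for _ in range(10_000):
    a = rng.normal(size=20)   # both groups drawn from the same distribution
    b = rng.normal(size=20)
    p_values.append(stats.ttest_ind(a, b).pvalue)

p_values = np.array(p_values)
print(f"Fraction of p-values below 0.05: {np.mean(p_values < 0.05):.3f}")  # ~0.05
```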
History
The beginning of the replication crisis can be traced to a number of events in the early 2010s. Philosopher of science and social epistemologist Felipe Romero identified four events that can be considered precursors to the ongoing crisis:
Controversies around social priming research: In the early 2010s, the well-known "elderly-walking" study by social psychologist John Bargh and colleagues failed to replicate in two direct replications. This experiment was part of a series of three studies that had been widely cited throughout the years, was regularly taught in university courses, and had inspired a large number of conceptual replications. Failures to replicate the study led to much controversy and a heated debate involving the original authors. Notably, many of the conceptual replications of the original studies also failed to replicate in subsequent direct replications.
Controversies around experiments on extrasensory perception: Social psychologist Daryl Bem conducted a series of experiments supposedly providing evidence for the controversial phenomenon of extrasensory perception. Bem was highly criticized for his study's methodology and upon reanalysis of the data, no evidence was found for the existence of extrasensory perception. The experiment also failed to replicate in subsequent direct replications. According to Romero, what the community found particularly upsetting was that many of the flawed procedures and statistical tools used in Bem's studies were part of common research practice in psychology.
Amgen and Bayer reports on lack of replicability in biomedical research: Scientists from biotech companies Amgen and Bayer Healthcare reported alarmingly low replication rates (11–20%) of landmark findings in preclinical oncological research.
Publication of studies on p-hacking and questionable research practices: Since the late 2000s, a number of studies in metascience showed how commonly adopted practices in many scientific fields, such as exploiting the flexibility of the process of data collection and reporting, could greatly increase the probability of false positive results. These studies suggested how a significant proportion of published literature in several scientific fields could be nonreplicable research.
This series of events generated a great deal of skepticism about the validity of existing research in light of widespread methodological flaws and failures to replicate findings. This led prominent scholars to declare a "crisis of confidence" in psychology and other fields, and the ensuing situation came to be known as the "replication crisis".
Although the beginning of the replication crisis can be traced to the early 2010s, some authors point out that concerns about replicability and research practices in the social sciences had been expressed much earlier. Romero notes that authors voiced concerns about the lack of direct replications in psychological research in the late 1960s and early 1970s. He also writes that certain studies in the 1990s were already reporting that journal editors and reviewers are generally biased against publishing replication studies.
In the social sciences, the blog Data Colada (whose three authors coined the term "p-hacking" in a 2014 paper) has been credited with contributing to the start of the replication crisis.
University of Virginia professor and cognitive psychologist Barbara A. Spellman has written that many criticisms of research practices and concerns about replicability of research are not new. She reports that between the late 1950s and the 1990s, scholars were already expressing concerns about a possible crisis of replication, a suspiciously high rate of positive findings, questionable research practices (QRPs), the effects of publication bias, issues with statistical power, and bad standards of reporting.
Spellman also identifies reasons that the reiteration of these criticisms and concerns in recent years led to a full-blown crisis and challenges to the status quo. First, technological improvements facilitated conducting and disseminating replication studies, and analyzing large swaths of literature for systemic problems. Second, the research community's increasing size and diversity made the work of established members more easily scrutinized by other community members unfamiliar with them. According to Spellman, these factors, coupled with increasingly limited resources and misaligned incentives for doing scientific work, led to a crisis in psychology and other fields.
According to Andrew Gelman, the works of Paul Meehl, Jacob Cohen, and Tversky and Kahneman in the 1960s and 1970s were early warnings of the replication crisis. In discussing the origins of the problem, Kahneman himself noted historical precedents in the replication failures of subliminal perception and dissonance reduction research.
It has been repeatedly pointed out since 1962 that most psychological studies have low power (true positive rate), yet low power persisted for 50 years, indicating a structural and persistent problem in psychological research.
Prevalence
In psychology
Several factors have combined to put psychology at the center of the conversation. Some areas of psychology once considered solid, such as social priming and ego depletion, have come under increased scrutiny due to failed replications. Much of the focus has been on social psychology, although other areas of psychology such as clinical psychology, developmental psychology, and educational research have also been implicated.
In August 2015, the first open empirical study of reproducibility in psychology was published, called The Reproducibility Project: Psychology. Coordinated by psychologist Brian Nosek, researchers redid 100 studies in psychological science from three high-ranking psychology journals (Journal of Personality and Social Psychology, Journal of Experimental Psychology: Learning, Memory, and Cognition, and Psychological Science). 97 of the original studies had significant effects, but of those 97, only 36% of the replications yielded significant findings (p value below 0.05). The mean effect size in the replications was approximately half the magnitude of the effects reported in the original studies. The same paper examined the reproducibility rates and effect sizes by journal and discipline. Study replication rates were 23% for the Journal of Personality and Social Psychology, 48% for Journal of Experimental Psychology: Learning, Memory, and Cognition, and 38% for Psychological Science. Studies in the field of cognitive psychology had a higher replication rate (50%) than studies in the field of social psychology (25%).
Of the 64% of non-replications, only 25% disproved the original result (at statistical significance). The other 49% were inconclusive, neither supporting nor contradicting the original result. This is because many replications were underpowered, with a sample 2.5 times smaller than the original.
A study published in 2018 in Nature Human Behaviour replicated 21 social and behavioral science papers from Nature and Science, finding that only about 62% could successfully reproduce original results.
Similarly, in a study conducted under the auspices of the Center for Open Science, a team of 186 researchers from 60 different laboratories (representing 36 different nationalities from six different continents) conducted replications of 28 classic and contemporary findings in psychology. The study's focus was not only whether the original papers' findings replicated but also the extent to which findings varied as a function of variations in samples and contexts. Overall, 50% of the 28 findings failed to replicate despite massive sample sizes. But if a finding replicated, then it replicated in most samples. If a finding was not replicated, then it failed to replicate with little variation across samples and contexts. This evidence is inconsistent with a proposed explanation that failures to replicate in psychology are likely due to changes in the sample between the original and replication study.
Results of a 2022 study suggest that many earlier brain–phenotype studies ("brain-wide association studies" (BWAS)) produced invalid conclusions as the replication of such studies requires samples from thousands of individuals due to small effect sizes.
In medicine
Of 49 medical studies from 1990 to 2003 with more than 1000 citations, 92% found that the studied therapies were effective. Of these studies, 16% were contradicted by subsequent studies, 16% had found stronger effects than did subsequent studies, 44% were replicated, and 24% remained largely unchallenged. A 2011 analysis by researchers with pharmaceutical company Bayer found that, at most, a quarter of Bayer's in-house findings replicated the original results. But the analysis of Bayer's results found that the results that did replicate could often be successfully used for clinical applications.
In a 2012 paper, C. Glenn Begley, a biotech consultant working at Amgen, and Lee Ellis, a medical researcher at the University of Texas, found that only 11% of 53 pre-clinical cancer studies had replications that could confirm conclusions from the original studies. In late 2021, The Reproducibility Project: Cancer Biology examined 53 top papers about cancer published between 2010 and 2012 and showed that among studies that provided sufficient information to be redone, the effect sizes were 85% smaller on average than the original findings. A survey of cancer researchers found that half of them had been unable to reproduce a published result. Another report estimated that almost half of randomized controlled trials contained flawed data (based on the analysis of anonymized individual participant data (IPD) from more than 150 trials).
In other disciplines
In economics
Economics has lagged behind other social sciences and psychology in its attempts to assess replication rates and increase the number of studies that attempt replication. A 2016 study in the journal Science replicated 18 experimental studies published in two leading economics journals, The American Economic Review and the Quarterly Journal of Economics, between 2011 and 2014. It found that about 39% failed to reproduce the original results. About 20% of studies published in The American Economic Review are contradicted by other studies despite relying on the same or similar data sets. A study of empirical findings in the Strategic Management Journal found that about 30% of 27 retested articles showed statistically insignificant results for previously significant findings, whereas about 4% showed statistically significant results for previously insignificant findings.
In water resource management
A 2019 study in Scientific Data estimated with 95% confidence that of 1,989 articles on water resources and management published in 2017, study results might be reproduced for only 0.6% to 6.8%, largely because the articles did not provide sufficient information to allow for replication.
Across fields
A 2016 survey by Nature of 1,576 researchers who took a brief online questionnaire on reproducibility found that more than 70% of researchers had tried and failed to reproduce another scientist's experiment results (including 87% of chemists, 77% of biologists, 69% of physicists and engineers, 67% of medical researchers, 64% of earth and environmental scientists, and 62% of all others), and more than half had failed to reproduce their own experiments. But fewer than 20% had been contacted by another researcher unable to reproduce their work. The survey found that fewer than 31% of researchers believe that failure to reproduce results means that the original result is probably wrong, although 52% agree that a significant replication crisis exists. Most researchers said they still trust the published literature. Fanelli (2010) found that 91.5% of psychiatry/psychology studies confirmed the effects they were looking for, and concluded that the odds of this happening (a positive result) were around five times higher than in fields such as astronomy or geosciences. Fanelli argued that this is because researchers in "softer" sciences have fewer constraints on their conscious and unconscious biases.
Early analysis of result-blind peer review, which is less affected by publication bias, has estimated that 61% of result-blind studies in biomedicine and psychology have led to null results, in contrast to an estimated 5% to 20% in earlier research.
In 2021, a study conducted at the University of California, San Diego found that papers that cannot be replicated are more likely to be cited. Nonreplicable publications are often cited more even after a replication study is published.
Causes
There are many proposed causes for the replication crisis.
Historical and sociological causes
The replication crisis may be triggered by the "generation of new data and scientific publications at an unprecedented rate" that leads to "desperation to publish or perish" and failure to adhere to good scientific practice.
Predictions of an impending crisis in the quality-control mechanism of science can be traced back several decades. Derek de Solla Price—considered the father of scientometrics, the quantitative study of science—predicted in 1963 that science could reach "senility" as a result of its own exponential growth. Some present-day literature seems to vindicate this "overflow" prophecy, lamenting the decay in both attention and quality.
Historian Philip Mirowski argues that the decline of scientific quality can be connected to its commodification, especially spurred by major corporations' profit-driven decision to outsource their research to universities and contract research organizations.
Social systems theory, as expounded in the work of German sociologist Niklas Luhmann, inspires a similar diagnosis. This theory holds that each system, such as economy, science, religion, and media, communicates using its own code: true and false for science, profit and loss for the economy, news and no-news for the media, and so on. According to some sociologists, science's mediatization, commodification, and politicization, as a result of the structural coupling among systems, have led to a confusion of the original system codes.
Problems with the publication system in science
Publication bias
A major cause of low reproducibility is the publication bias stemming from the fact that statistically non-significant results and seemingly unoriginal replications are rarely published. Only a very small proportion of academic journals in psychology and neurosciences explicitly welcomed submissions of replication studies in their aims and scope or instructions to authors. This does not encourage reporting on, or even attempts to perform, replication studies. Among 1,576 researchers Nature surveyed in 2016, only a minority had ever attempted to publish a replication, and several respondents who had published failed replications noted that editors and reviewers demanded that they play down comparisons with the original studies. An analysis of 4,270 empirical studies in 18 business journals from 1970 to 1991 reported that less than 10% of accounting, economics, and finance articles and 5% of management and marketing articles were replication studies. Publication bias is augmented by the pressure to publish and the author's own confirmation bias, and is an inherent hazard in the field, requiring a certain degree of skepticism on the part of readers.
Publication bias leads to what psychologist Robert Rosenthal calls the "file drawer effect". The file drawer effect is the idea that as a consequence of the publication bias, a significant number of negative results are not published. According to philosopher of science Felipe Romero, this tends to produce "misleading literature and biased meta-analytic studies", and when publication bias is considered along with the fact that a majority of tested hypotheses might be false a priori, it is plausible that a considerable proportion of research findings might be false positives, as shown by metascientist John Ioannidis. In turn, a high proportion of false positives in the published literature can explain why many findings are nonreproducible.
Another publication bias is that studies that do not reject the null hypothesis are scrutinized asymmetrically. For example, they are likely to be rejected as being difficult to interpret or having a Type II error. Studies that do reject the null hypothesis are not likely to be rejected for those reasons.
In popular media, there is another element of publication bias: the desire to make research accessible to the public leads to oversimplification and exaggeration of findings, creating unrealistic expectations and amplifying the impact of non-replications. In contrast, null results and failures to replicate tend to go unreported. This explanation may apply to the replication crisis around power posing.
Mathematical errors
Even high-impact journals have a significant fraction of mathematical errors in their use of statistics. For example, 11% of statistical results published in Nature and BMJ in 2001 were "incongruent", meaning that the reported p-value was mathematically different from what it should have been if correctly calculated from the reported test statistic. These errors were likely caused by typesetting, rounding, and transcription mistakes.
Among 157 neuroscience papers published in five top-ranking journals that attempt to show that two experimental effects are different, 78 erroneously tested instead for whether one effect is significant while the other is not, and 79 correctly tested for whether their difference is significantly different from 0.
"Publish or perish" culture
The consequences for replicability of the publication bias are exacerbated by academia's "publish or perish" culture. As explained by metascientist Daniele Fanelli, "publish or perish" culture is a sociological aspect of academia whereby scientists work in an environment with very high pressure to have their work published in recognized journals. This is the consequence of the academic work environment being hypercompetitive and of bibliometric parameters (e.g., number of publications) being increasingly used to evaluate scientific careers. According to Fanelli, this pushes scientists to employ a number of strategies aimed at making results "publishable". In the context of publication bias, this can mean adopting behaviors aimed at making results positive or statistically significant, often at the expense of their validity (see the section on questionable research practices below).
According to Center for Open Science founder Brian Nosek and his colleagues, "publish or perish" culture created a situation whereby the goals and values of single scientists (e.g., publishability) are not aligned with the general goals of science (e.g., pursuing scientific truth). This is detrimental to the validity of published findings.
Philosopher Brian D. Earp and psychologist Jim A. C. Everett argue that, although replication is in the best interests of academics and researchers as a group, features of academic psychological culture discourage replication by individual researchers. They argue that performing replications can be time-consuming, and take away resources from projects that reflect the researcher's original thinking. They are harder to publish, largely because they are unoriginal, and even when they can be published they are unlikely to be viewed as major contributions to the field. Replications "bring less recognition and reward, including grant money, to their authors".
In his 1971 book Scientific Knowledge and Its Social Problems, philosopher and historian of science Jerome R. Ravetz predicted that science—in its progression from "little" science composed of isolated communities of researchers to "big" science or "techno-science"—would suffer major problems in its internal system of quality control. He recognized that the incentive structure for modern scientists could become dysfunctional, creating perverse incentives to publish any findings, however dubious. According to Ravetz, quality in science is maintained only when there is a community of scholars, linked by a set of shared norms and standards, who are willing and able to hold each other accountable.
Standards of reporting
Certain publishing practices also make it difficult to conduct replications and to monitor the severity of the reproducibility crisis, for articles often come with insufficient descriptions for other scholars to reproduce the study. The Reproducibility Project: Cancer Biology showed that of 193 experiments from 53 top papers about cancer published between 2010 and 2012, only 50 experiments from 23 papers have authors who provided enough information for researchers to redo the studies, sometimes with modifications. None of the 193 papers examined had its experimental protocols fully described, and replicating 70% of experiments required asking for key reagents. The aforementioned study of empirical findings in the Strategic Management Journal found that 70% of 88 articles could not be replicated due to a lack of sufficient information for data or procedures. In water resources and management, most of 1,987 articles published in 2017 were not replicable because of a lack of available information shared online. In studies of event-related potentials, only two-thirds of the information needed to replicate a study was reported in a sample of 150 studies, highlighting that there are substantial gaps in reporting.
Procedural bias
By the Duhem-Quine thesis, scientific results are interpreted by both a substantive theory and a theory of instruments. For example, astronomical observations depend both on the theory of astronomical objects and the theory of telescopes. A large amount of non-replicable research might accumulate if there is a bias of the following kind: faced with a null result, a scientist prefers to treat the data as saying the instrument is insufficient; faced with a non-null result, a scientist prefers to accept the instrument as good, and treat the data as saying something about the substantive theory.
Cultural evolution
Smaldino and McElreath proposed a simple model for the cultural evolution of scientific practice. Each lab randomly decides to produce novel research or replication research, at different fixed levels of false positive rate, true positive rate, replication rate, and productivity (its "traits"). A lab might use more "effort", making the ROC curve more convex but decreasing productivity. A lab accumulates a score over its lifetime that increases with publications and decreases when another lab fails to replicate its results. At regular intervals, a random lab "dies" and another "reproduces" a child lab with traits similar to its parent's. Labs with higher scores are more likely to reproduce. Under certain parameter settings, the population of labs converges to maximum productivity even at the price of very high false positive rates.
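A highly simplified simulation loosely inspired by this kind of model is sketched below; the scoring rule, parameter values, and selection scheme are illustrative assumptions and not those of the original paper:

```python
# A highly simplified sketch loosely inspired by the lab-evolution model
# described above; parameters and the selection rule are illustrative
# assumptions, not those used by Smaldino and McElreath.
import random

random.seed(4)
N_LABS, GENERATIONS = 100, 2000
labs = [{"effort": random.uniform(0.1, 1.0), "score": 0.0} for _ in range(N_LABS)]

def step(lab):
    # Lower effort -> more "publishable" positives per unit time, but more of
    # them are false positives; higher effort -> fewer, more reliable results.
    productivity = 1.0 - 0.5 * lab["effort"]
    false_positive_rate = 0.5 * (1.0 - lab["effort"])
    if random.random() < productivity:                 # the lab publishes
        lab["score"] += 1.0
        if random.random() < false_positive_rate:      # result later fails to replicate
            lab["score"] -= 0.5

for _ in range(GENERATIONS):
    for lab in labs:
        step(lab)
    # Selection: a random lab "dies" and is replaced by a mutated copy of a high scorer.
    dead = random.randrange(N_LABS)
    parent = max(random.sample(labs, 10), key=lambda l: l["score"])
    labs[dead] = {"effort": min(1.0, max(0.0, parent["effort"] + random.gauss(0, 0.02))),
                  "score": 0.0}

print(f"Mean effort after selection: {sum(l['effort'] for l in labs) / N_LABS:.2f}")
```

Under these assumed payoffs, low-effort labs publish more and are penalized too little for failed replications, so average effort drifts downward, mirroring the qualitative conclusion of the model described above.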
Questionable research practices and fraud
Questionable research practices (QRPs) are intentional behaviors that capitalize on the gray area of acceptable scientific behavior or exploit the researcher degrees of freedom (researcher DF), which can contribute to the irreproducibility of results by increasing the probability of false positive results. Researcher DF are seen in hypothesis formulation, design of experiments, data collection and analysis, and reporting of research. Some examples of QRPs are data dredging, selective reporting, and HARKing (hypothesising after results are known). In medicine, irreproducible studies have six features in common. These include investigators not being blinded to the experimental versus the control arms, a failure to repeat experiments, a lack of positive and negative controls, failing to report all the data, inappropriate use of statistical tests, and use of reagents that were not appropriately validated.
QRPs do not include more explicit violations of scientific integrity, such as data falsification. Fraudulent research does occur, as in the case of scientific fraud by social psychologist Diederik Stapel, cognitive psychologist Marc Hauser and social psychologist Lawrence Sanna, but it appears to be uncommon.
Prevalence
According to IU professor Ernest O’Boyle and psychologist Martin Götz, around 50% of researchers surveyed across various studies admitted engaging in HARKing. In a survey of 2,000 psychologists by behavioral scientist Leslie K. John and colleagues, around 94% of psychologists admitted having employed at least one QRP. More specifically, 63% admitted failing to report all of a study's dependent measures, 28% failing to report all of a study's conditions, and 46% selectively reporting studies that produced the desired pattern of results. In addition, 56% admitted having collected more data after having inspected data already collected, and 16% having stopped data collection because the desired result was already visible. According to biotechnology researcher J. Leslie Glick's estimate in 1992, 10% to 20% of research and development studies involved either QRPs or outright fraud. The methodology used to estimate QRPs has been contested, and more recent studies have suggested lower prevalence rates on average.
A 2009 meta-analysis found that 2% of scientists across fields admitted falsifying studies at least once and 14% admitted knowing someone who did. Such misconduct was, according to one study, reported more frequently by medical researchers than by others.
Statistical issues
Low statistical power
According to Deakin University professor Tom Stanley and colleagues, one plausible reason studies fail to replicate is low statistical power. This happens for three reasons. First, a replication study with low power is unlikely to succeed since, by definition, it has a low probability to detect a true effect. Second, if the original study has low power, it will yield biased effect size estimates. When conducting a priori power analysis for the replication study, this will result in underestimation of the required sample size. Third, if the original study has low power, the post-study odds of a statistically significant finding reflecting a true effect are quite low. It is therefore likely that a replication attempt of the original study would fail.
Mathematically, the probability of replicating a previous publication that rejected a null hypothesis $H_0$ in favor of an alternative $H_1$ is $\alpha$ if $H_0$ is in fact true and $1 - \beta$ (the power of the replication) if $H_1$ is true, so it is at most the power, assuming the significance level $\alpha$ is less than the power. Thus, low power implies a low probability of replication, regardless of how the previous publication was designed, and regardless of which hypothesis is really true.
Stanley and colleagues estimated the average statistical power of the psychological literature by analyzing data from 200 meta-analyses. They found that, on average, psychology studies have between 33.1% and 36.4% statistical power. These values are quite low compared to the 80% considered adequate statistical power for an experiment. Across the 200 meta-analyses, the median proportion of studies with adequate statistical power was between 7.7% and 9.1%, implying that a positive result would replicate with probability less than 10%, regardless of whether the positive result was a true positive or a false positive.
The statistical power of neuroscience studies is quite low. The estimated statistical power of fMRI research is between .08 and .31, and that of studies of event-related potentials was estimated as .72‒.98 for large effect sizes, .35‒.73 for medium effects, and .10‒.18 for small effects.
In a study published in Nature, psychologist Katherine Button and colleagues conducted a similar study with 49 meta-analyses in neuroscience, estimating a median statistical power of 21%. Metascientist John Ioannidis and colleagues computed an estimate of average power for empirical economic research, finding a median power of 18% based on literature drawing upon 6,700 studies. In light of these results, it is plausible that a major reason for widespread failures to replicate in several scientific fields is very low average statistical power.
The same statistical test with the same significance level will have lower statistical power if the effect size is small under the alternative hypothesis. Complex inheritable traits are typically correlated with a large number of genes, each of small effect size, so high power requires a large sample size. In particular, many results from the candidate gene literature suffered from small effect sizes and small sample sizes and would not replicate. More data from genome-wide association studies (GWAS) come close to solving this problem. As a numeric example, most genes associated with schizophrenia risk have low effect size (genotypic relative risk, GRR). A statistical study with 1000 cases and 1000 controls has 0.03% power for a gene with GRR = 1.15, which is already large for schizophrenia. In contrast, the largest GWAS to date has ~100% power for it.
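As an illustration of how power depends on effect size and sample size, the sketch below uses a standard two-sample t-test power calculation (illustrative numbers only, not the genetic power calculation referenced above):

```python
# Sketch: how statistical power depends on effect size and sample size
# for a two-sample t-test (illustrative numbers, not the GWAS calculation
# referenced above).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.2, 0.5, 0.8):          # small, medium, large (Cohen's d)
    for n_per_group in (20, 100, 1000):
        power = analysis.solve_power(effect_size=effect_size,
                                     nobs1=n_per_group, alpha=0.05)
        print(f"d={effect_size}, n={n_per_group}: power={power:.2f}")
```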
Positive effect size bias
Even when a study replicates, the replication typically has a smaller effect size. Underpowered studies have a large effect-size bias.
In studies that statistically estimate a regression factor, such as the $\beta$ in $y = \beta x + \varepsilon$, noise in a large dataset tends to cause the regression factor to be underestimated, whereas in a small dataset it tends to cause the regression factor to be overestimated.
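One mechanism behind the positive bias in small studies is selection for significance; the simulation sketch below (assuming a simple two-group design with a true effect of 0.2 standard deviations) shows that effect estimates conditioned on reaching p < 0.05 are strongly inflated when samples are small:

```python
# Sketch: conditioning on statistical significance inflates estimated effect
# sizes, and the inflation is much worse for small samples (assumed simple
# two-group design with a true effect of 0.2 SD).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
true_effect = 0.2

for n in (20, 200):
    significant_estimates = []
    for _ in range(20_000):
        a = rng.normal(true_effect, 1, n)
        b = rng.normal(0.0, 1, n)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            significant_estimates.append(np.mean(a) - np.mean(b))
    print(f"n={n}: mean estimate among significant results "
          f"= {np.mean(significant_estimates):.2f} (true effect {true_effect})")
```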
Problems of meta-analysis
Meta-analyses have their own methodological problems and disputes, which leads to rejection of the meta-analytic method by researchers whose theory is challenged by meta-analysis.
Rosenthal proposed the "fail-safe number" (FSN) to avoid the publication bias against null results. It is defined as follows: Suppose the null hypothesis is true; how many publications would be required to make the current result indistinguishable from the null hypothesis?
Rosenthal's point is that certain effect sizes are large enough that, even under a total publication bias against null results (the "file drawer problem"), the number of unpublished null results needed to swamp out the effect would be implausibly large. Thus, the effect size would remain statistically significant even after accounting for unpublished null results.
One objection to the FSN is that it is calculated as if unpublished results were unbiased samples from the null hypothesis. But if the file drawer problem is real, then unpublished results would have effect sizes concentrated around 0. Thus fewer unpublished null results would be needed to swamp out the effect size, and so the FSN is an overestimate.
Another problem with meta-analysis is that bad studies are "infectious" in the sense that one bad study might cause the entire meta-analysis to overestimate statistical significance.
P-hacking
Various statistical methods can be applied to make the p-value appear smaller than it really is. This need not be malicious, as moderately flexible data analysis, routine in research, can increase the false-positive rate to above 60%.
For example, if one collects some data, applies several different significance tests to it, and publishes only the one that happens to have a p-value less than 0.05, then the total p-value for "at least one significance test reaches p < 0.05" can be much larger than 0.05, because even if the null hypothesis were true, the probability that one out of many significance tests is extreme is not itself extreme.
Typically, a statistical study has multiple steps, with several choices at each step, such as during data collection, outlier rejection, choice of test statistic, choice of one-tailed or two-tailed test, etc. These choices in the "garden of forking paths" multiply, creating many "researcher degrees of freedom". The effect is similar to the file-drawer problem, as the paths not taken are not published.
Consider a simple illustration. Suppose the null hypothesis is true, and we have 20 possible significance tests to apply to the dataset. Also suppose the outcomes of the significance tests are independent. By definition of "significance", each test has probability 0.05 of passing at significance level 0.05. The probability that at least 1 out of 20 is significant is, by the assumption of independence, $1 - (1 - 0.05)^{20} \approx 0.64$.
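A direct rendering of this calculation:

```python
# Sketch of the calculation above: the chance that at least one of 20
# independent tests reaches p < 0.05 when every null hypothesis is true.
alpha, n_tests = 0.05, 20
family_wise_rate = 1 - (1 - alpha) ** n_tests
print(f"P(at least one 'significant' result) = {family_wise_rate:.2f}")  # ~0.64
```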
Another possibility is the multiple comparisons problem. In 2009, it was twice noted that fMRI studies had a suspicious number of positive results with large effect sizes, more than would be expected given the studies' low power (one example had only 13 subjects). These analyses pointed out that over half of the studies would test for correlation between a phenomenon and individual fMRI voxels, and report only on voxels exceeding chosen thresholds.
Optional stopping is a practice in which one collects data until some stopping criterion is reached. Though a valid procedure, it is easily misused. The problem is that the p-value of an optionally stopped statistical test is larger than it appears. Intuitively, this is because the p-value is supposed to be the sum of all events at least as rare as what is observed. With optional stopping, there are even rarer events that are difficult to account for, i.e. not triggering the optional stopping rule and collecting even more data before stopping. Neglecting these events leads to a p-value that is too low. In fact, if the null hypothesis is true, any significance level can be reached if one is allowed to keep collecting data and stop when the desired p-value (calculated as if one had always been planning to collect exactly this much data) is obtained. For a concrete example of testing for a fair coin, see p-value#optional stopping.
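The inflation of the false positive rate under optional stopping can be seen in a simulation; the sketch below (with assumed batch sizes and a true null) tests after every batch and stops at the first p < 0.05:

```python
# Simulation sketch of optional stopping: keep adding observations and test
# after each batch, stopping as soon as p < 0.05. Even with a true null,
# the false positive rate greatly exceeds the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
false_positives = 0
n_sims, batch, max_batches = 2000, 10, 50

for _ in range(n_sims):
    data = []
    for _ in range(max_batches):
        data.extend(rng.normal(0.0, 1.0, batch))      # null is true: mean 0
        if stats.ttest_1samp(data, 0.0).pvalue < 0.05:
            false_positives += 1
            break

print(f"False positive rate with optional stopping: {false_positives / n_sims:.2f}")
```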
More succinctly, the proper calculation of p-value requires accounting for counterfactuals, that is, what the experimenter could have done in reaction to data that might have been. Accounting for what might have been is hard even for honest researchers. One benefit of preregistration is to account for all counterfactuals, allowing the p-value to be calculated correctly.
The problem of early stopping is not just limited to researcher misconduct. There is often pressure to stop early if the cost of collecting data is high. Some animal ethics boards even mandate early stopping if the study obtains a significant result midway.
Such practices are widespread in psychology. In a 2012 survey, 56% of psychologists admitted to early stopping, 46% to only reporting analyses that "worked", and 38% to post hoc exclusion, that is, removing some data after analysis was already performed on the data before reanalyzing the remaining data (often on the premise of "outlier removal").
Statistical heterogeneity
As also reported by Stanley and colleagues, a further reason studies might fail to replicate is high heterogeneity of the to-be-replicated effects. In meta-analysis, "heterogeneity" refers to the variance in research findings that results from there being no single true effect size. Instead, findings in such cases are better seen as a distribution of true effects. Statistical heterogeneity is calculated using the I-squared statistic, defined as "the proportion (or percentage) of observed variation among reported effect sizes that cannot be explained by the calculated standard errors associated with these reported effect sizes". This variation can be due to differences in experimental methods, populations, cohorts, and statistical methods between replication studies. Heterogeneity poses a challenge to studies attempting to replicate previously found effect sizes. When heterogeneity is high, subsequent replications have a high probability of finding an effect size radically different than that of the original study.
Importantly, significant levels of heterogeneity are also found in direct/exact replications of a study. Stanley and colleagues discuss this while reporting a study by quantitative behavioral scientist Richard Klein and colleagues, in which the authors attempted to replicate 15 psychological effects across 36 different sites in Europe and the U.S. In the study, Klein and colleagues found significant amounts of heterogeneity in 8 out of 16 effects (I-squared = 23% to 91%). Importantly, while the replication sites intentionally differed on a variety of characteristics, such differences could account for very little of the heterogeneity. According to Stanley and colleagues, this suggested that heterogeneity could be a genuine characteristic of the phenomena being investigated. For instance, phenomena might be influenced by so-called "hidden moderators" – relevant factors that were previously not understood to be important in the production of a certain effect.
In their analysis of 200 meta-analyses of psychological effects, Stanley and colleagues found a median heterogeneity of I-squared = 74%. According to the authors, this level of heterogeneity can be considered "huge". It is three times larger than the random sampling variance of effect sizes measured in their study. Considered alongside sampling error, heterogeneity yields a standard deviation from one study to the next even larger than the median effect size of the 200 meta-analyses they investigated. The authors conclude that if replication is defined as a subsequent study finding an effect size sufficiently similar to the original, replication success is not likely even if replications have very large sample sizes. Importantly, this occurs even if replications are direct or exact, since heterogeneity nonetheless remains relatively high in these cases.
Others
Within economics, the replication crisis may be also exacerbated because econometric results are fragile: using different but plausible estimation procedures or data preprocessing techniques can lead to conflicting results.
Context sensitivity
New York University professor Jay Van Bavel and colleagues argue that a further reason findings are difficult to replicate is the sensitivity to context of certain psychological effects. On this view, failures to replicate might be explained by contextual differences between the original experiment and the replication, often called "hidden moderators". Van Bavel and colleagues tested the influence of context sensitivity by reanalyzing the data of the widely cited Reproducibility Project carried out by the Open Science Collaboration. They re-coded effects according to their sensitivity to contextual factors and then tested the relationship between context sensitivity and replication success in various regression models.
Context sensitivity was found to negatively correlate with replication success, such that higher ratings of context sensitivity were associated with lower probabilities of replicating an effect. Importantly, context sensitivity significantly correlated with replication success even when adjusting for other factors considered important for reproducing results (e.g., effect size and sample size of original, statistical power of the replication, methodological similarity between original and replication). In light of the results, the authors concluded that attempting a replication in a different time, place or with a different sample can significantly alter an experiment's results. Context sensitivity thus may be a reason certain effects fail to replicate in psychology.
Bayesian explanation
In the framework of Bayesian probability, by Bayes' theorem, rejecting the null hypothesis at significance level 5% does not mean that the posterior probability of the alternative hypothesis is 95%, and the posterior probability is also different from the probability of replication. Consider a simplified case where there are only two hypotheses. Let the prior probability of the null hypothesis be $\pi_0$, and that of the alternative be $\pi_1 = 1 - \pi_0$. For a given statistical study, let its false positive rate (significance level) be $\alpha$, and its true positive rate (power) be $1 - \beta$. For illustrative purposes, let the significance level be 0.05 and the power be 0.45 (underpowered).
Now, by Bayes' theorem, conditional on the study finding $H_1$ to be true (a positive result), the posterior probability of $H_1$ actually being true is not $1 - \alpha$, but
$$\Pr(H_1 \mid \text{positive}) = \frac{(1 - \beta)\,\pi_1}{(1 - \beta)\,\pi_1 + \alpha\,\pi_0},$$
and the probability that a replication of the study again finds a positive result is
$$\Pr(\text{replication} \mid \text{positive}) = (1 - \beta)\Pr(H_1 \mid \text{positive}) + \alpha\Pr(H_0 \mid \text{positive}),$$
which is also different from $1 - \alpha$. In particular, for a fixed level of significance, the probability of replication increases with the power and with the prior probability of $H_1$. If the prior probability of $H_1$ is small, then one would require high power for replication.
For example, if the prior probability of the null hypothesis is $\pi_0 = 0.9$, and the study found a positive result, then the posterior probability of $H_1$ is $\frac{0.45 \times 0.1}{0.45 \times 0.1 + 0.05 \times 0.9} = 0.5$, and the replication probability is $0.45 \times 0.5 + 0.05 \times 0.5 = 0.25$.
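The same worked example in code (the 0.9 prior, like the other numbers, is purely illustrative):

```python
# Worked sketch of the Bayesian calculation above, with the illustrative
# numbers alpha = 0.05, power = 0.45, prior probability of the null = 0.9.
alpha, power = 0.05, 0.45
prior_null = 0.9
prior_alt = 1 - prior_null

posterior_alt = (power * prior_alt) / (power * prior_alt + alpha * prior_null)
posterior_null = 1 - posterior_alt
replication_prob = power * posterior_alt + alpha * posterior_null

print(f"P(H1 | positive result) = {posterior_alt:.2f}")              # 0.50
print(f"P(replication | positive result) = {replication_prob:.2f}")  # 0.25
```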
Problem with null hypothesis testing
Some argue that null hypothesis testing is itself inappropriate, especially in "soft sciences" like social psychology.
As repeatedly observed by statisticians, in complex systems, such as social psychology, "the null hypothesis is always false", or "everything is correlated". If so, then if the null hypothesis is not rejected, that does not show that the null hypothesis is true, but merely that it was a false negative, typically due to low power. Low power is especially prevalent in subject areas where effect sizes are small and data is expensive to acquire, such as social psychology.
Furthermore, when the null hypothesis is rejected, it might not be evidence for the substantial alternative hypothesis. In soft sciences, many hypotheses can predict a correlation between two variables. Thus, evidence against the null hypothesis "there is no correlation" is no evidence for one of the many alternative hypotheses that equally well predict "there is a correlation". Fisher developed the NHST for agronomy, where rejecting the null hypothesis is usually good proof of the alternative hypothesis, since there are not many of them. Rejecting the hypothesis "fertilizer does not help" is evidence for "fertilizer helps". But in psychology, there are many alternative hypotheses for every null hypothesis.
In particular, when statistical studies on extrasensory perception reject the null hypothesis at extremely low p-value (as in the case of Daryl Bem), it does not imply the alternative hypothesis "ESP exists". Far more likely is that there was a small (non-ESP) signal in the experiment setup that has been measured precisely.
Paul Meehl noted that statistical hypothesis testing is used differently in "soft" psychology (personality, social, etc.) from physics. In physics, a theory makes a quantitative prediction and is tested by checking whether the prediction falls within the statistically measured interval. In soft psychology, a theory makes a directional prediction and is tested by checking whether the null hypothesis is rejected in the right direction. Consequently, improved experimental technique makes theories more likely to be falsified in physics but less likely to be falsified in soft psychology, as the null hypothesis is always false since any two variables are correlated by a "crud factor" of about 0.30. The net effect is an accumulation of theories that remain unfalsified, but with no empirical evidence for preferring one over the others.
Base rate fallacy
According to philosopher Alexander Bird, a possible reason for the low rates of replicability in certain scientific fields is that a majority of tested hypotheses are false a priori. On this view, low rates of replicability could be consistent with quality science. Relatedly, the expectation that most findings should replicate would be misguided and, according to Bird, a form of base rate fallacy. Bird's argument works as follows. Assuming an ideal situation of a test of significance, whereby the probability of incorrectly rejecting the null hypothesis is 5% (i.e. a Type I error) and the probability of correctly rejecting the null hypothesis is 80% (i.e. the power), in a context where a high proportion of tested hypotheses are false, it is conceivable that the number of false positives would be high compared to the number of true positives. For example, in a situation where only 10% of tested hypotheses are actually true, one can calculate that as many as 36% of statistically significant results will be false positives.
The claim that the falsity of most tested hypotheses can explain low rates of replicability is even more relevant when considering that the average power of statistical tests in certain fields might be much lower than 80%. For example, the proportion of false positives increases to a value between 55.2% and 57.6% when calculated with the estimates of an average power between 33.1% and 36.4% for psychology studies, as provided by Stanley and colleagues in their analysis of 200 meta-analyses in the field. A high proportion of false positives would then result in many research findings being non-replicable.
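A short sketch reproducing this base-rate calculation (the power values are taken from the estimates quoted above):

```python
# Sketch of the base-rate calculation above: with 10% of tested hypotheses
# true and alpha = 0.05, a power of 0.8 gives ~36% false positives among
# significant results; power in the low-to-mid 30s pushes this above 55%.
def false_positive_share(prior_true, alpha, power):
    false_pos = alpha * (1 - prior_true)
    true_pos = power * prior_true
    return false_pos / (false_pos + true_pos)

print(f"{false_positive_share(0.10, 0.05, 0.80):.1%}")   # ~36%
print(f"{false_positive_share(0.10, 0.05, 0.331):.1%}")  # ~58%
print(f"{false_positive_share(0.10, 0.05, 0.364):.1%}")  # ~55%
```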
Bird notes that the claim that a majority of tested hypotheses are false a priori in certain scientific fields might be plausible given factors such as the complexity of the phenomena under investigation, the fact that theories are seldom undisputed, the "inferential distance" between theories and hypotheses, and the ease with which hypotheses can be generated. In this respect, the fields Bird takes as examples are clinical medicine, genetic and molecular epidemiology, and social psychology. This situation is radically different in fields where theories have outstanding empirical basis and hypotheses can be easily derived from theories (e.g., experimental physics).
Consequences
When effects are wrongly stated as relevant in the literature, failure to detect this by replication will lead to the canonization of such false facts.
A 2021 study found that papers in leading general interest, psychology and economics journals with findings that could not be replicated tend to be cited more over time than reproducible research papers, likely because these results are surprising or interesting. The trend is not affected by publication of failed reproductions, after which only 12% of papers that cite the original research will mention the failed replication. Further, experts are able to predict which studies will be replicable, leading the authors of the 2021 study, Marta Serra-Garcia and Uri Gneezy, to conclude that experts apply lower standards to interesting results when deciding whether to publish them.
Public awareness and perceptions
Concerns have been expressed within the scientific community that the general public may consider science less credible due to failed replications. Research supporting this concern is sparse, but a nationally representative survey in Germany showed that more than 75% of Germans have not heard of replication failures in science. The study also found that most Germans have positive perceptions of replication efforts: only 18% think that non-replicability shows that science cannot be trusted, while 65% think that replication research shows that science applies quality control, and 80% agree that errors and corrections are part of science.
Response in academia
With the replication crisis of psychology earning attention, Princeton University psychologist Susan Fiske drew controversy for speaking against critics of psychology for what she called bullying and undermining the science. She called these unidentified "adversaries" names such as "methodological terrorist" and "self-appointed data police", saying that criticism of psychology should be expressed only in private or by contacting the journals. Columbia University statistician and political scientist Andrew Gelman responded to Fiske, saying that she had found herself willing to tolerate the "dead paradigm" of faulty statistics and had refused to retract publications even when errors were pointed out. He added that her tenure as editor had been abysmal and that a number of published papers she edited were found to be based on extremely weak statistics; one of Fiske's own published papers had a major statistical error and "impossible" conclusions.
Credibility revolution
Some researchers in psychology indicate that the replication crisis is a foundation for a "credibility revolution", in which changes in the standards by which psychological science is evaluated may include emphasizing transparency and openness, preregistering research projects, and replicating research with higher standards for evidence to improve the strength of scientific claims. Such changes may diminish the productivity of individual researchers, but this effect could be avoided by data sharing and greater collaboration. A credibility revolution could be good for the research environment.
Remedies
Focus on the replication crisis has led to renewed efforts in psychology to retest important findings. A 2013 special edition of the journal Social Psychology focused on replication studies.
Standardization, as well as required transparency, of the statistical and experimental methods used has been proposed. Careful documentation of the experimental set-up is considered crucial for the replicability of experiments, yet relevant variables, such as animals' diets in animal studies, are often neither documented nor standardized.
A 2016 article by John Ioannidis elaborated on "Why Most Clinical Research Is Not Useful". Ioannidis describes what he views as some of the problems and calls for reform, outlining criteria for medical research to be useful again; one example he gives is the need for medicine to be patient-centered (e.g. in the form of the Patient-Centered Outcomes Research Institute) rather than the current practice of mainly serving "the needs of physicians, investigators, or sponsors".
Reform in scientific publishing
Metascience
Metascience is the use of scientific methodology to study science itself. It seeks to increase the quality of scientific research while reducing waste. It is also known as "research on research" and "the science of science", as it uses research methods to study how research is done and where improvements can be made. Metascience is concerned with all fields of research and has been called "a bird's eye view of science." In Ioannidis's words, "Science is the best thing that has happened to human beings ... but we can do it better."
Meta-research continues to be conducted to identify the roots of the crisis and to address them. Methods of addressing the crisis include pre-registration of scientific studies and clinical trials as well as the founding of organizations such as CONSORT and the EQUATOR Network that issue guidelines for methodology and reporting. Efforts continue to reform the system of academic incentives, improve the peer review process, reduce the misuse of statistics, combat bias in scientific literature, and increase the overall quality and efficiency of the scientific process.
Presentation of methodology
Some authors have argued that the insufficient communication of experimental methods is a major contributor to the reproducibility crisis and that better reporting of experimental design and statistical analyses would improve the situation. These authors tend to plead for both a broad cultural change in the scientific community of how statistics are considered and a more coercive push from scientific journals and funding bodies. But concerns have been raised about the potential for standards for transparency and replication to be misapplied to qualitative as well as quantitative studies.
Business and management journals that have introduced editorial policies on data accessibility, replication, and transparency include the Strategic Management Journal, the Journal of International Business Studies, and the Management and Organization Review.
Result-blind peer review
In response to concerns in psychology about publication bias and data dredging, more than 140 psychology journals have adopted result-blind peer review. In this approach, studies are accepted not on the basis of their findings after the studies are completed, but before the studies are conducted, on the basis of the methodological rigor of their experimental designs and the theoretical justification for their planned statistical analyses. Early analysis of this procedure has estimated that 61% of result-blind studies have led to null results, in contrast to an estimated 5% to 20% in earlier research. In addition, large-scale collaborations between researchers working in multiple labs in different countries that regularly make their data openly available for different researchers to assess have become much more common in psychology.
Pre-registration of studies
Scientific publishing has begun using registered reports to address the replication crisis. The registered report format requires authors to submit a description of the study methods and analyses prior to data collection. Once the method and analysis plan are vetted through peer review, publication of the findings is provisionally guaranteed, based on whether the authors follow the proposed protocol. One goal of registered reports is to circumvent the publication bias toward significant findings that can lead to the implementation of questionable research practices. Another is to encourage the publication of studies with rigorous methods.
The journal Psychological Science has encouraged the preregistration of studies and the reporting of effect sizes and confidence intervals. The editor in chief also noted that the editorial staff will be asking for replication of studies with surprising findings from examinations using small sample sizes before allowing the manuscripts to be published.
Metadata and digital tools for tracking replications
It has been suggested that "a simple way to check how often studies have been repeated, and whether or not the original findings are confirmed" is needed. Categorizations and ratings of reproducibility at the study or results level, as well as addition of links to and rating of third-party confirmations, could be conducted by the peer-reviewers, the scientific journal, or by readers in combination with novel digital platforms or tools.
Statistical reform
Requiring smaller p-values
Many publications require a p-value of p < 0.05 to claim statistical significance. The paper "Redefine statistical significance", signed by a large number of scientists and mathematicians, proposes that in "fields where the threshold for defining statistical significance for new discoveries is p < 0.05, we propose a change to p < 0.005. This simple step would immediately improve the reproducibility of scientific research in many fields." Their rationale is that "a leading cause of non-reproducibility (is that the) statistical standards of evidence for claiming new discoveries in many fields of science are simply too low. Associating 'statistically significant' findings with p < 0.05 results in a high rate of false positives even in the absence of other experimental, procedural and reporting problems."
This call was subsequently criticised by another large group, who argued that "redefining" the threshold would not fix current problems, would lead to some new ones, and that in the end, all thresholds needed to be justified case-by-case instead of following general conventions.
Addressing misinterpretation of p-values
Although statisticians are unanimous that the use of "p < 0.05" as a standard for significance provides weaker evidence than is generally appreciated, there is a lack of unanimity about what should be done about it. Some have advocated that Bayesian methods should replace p-values. This has not happened on a wide scale, partly because it is complicated and partly because many users distrust the specification of prior distributions in the absence of hard data. A simplified version of the Bayesian argument, based on testing a point null hypothesis, was suggested by pharmacologist David Colquhoun. The logical problems of inductive inference were discussed in "The Problem with p-values" (2016).
The hazards of reliance on p-values arise partly because even an observation of p = 0.001 is not necessarily strong evidence against the null hypothesis. Although the likelihood ratio in favor of the alternative hypothesis over the null is then close to 100, if the hypothesis was implausible, with a prior probability of a real effect of 0.1, even the observation of p = 0.001 would carry a false positive risk of about 8 percent, which is still above the commonly sought 5 percent level.
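The arithmetic behind the 8 percent figure is simple odds bookkeeping. The sketch below reproduces it; the likelihood ratio of 100 and the prior of 0.1 are the illustrative values quoted above, and the helper function is only for illustration.

```python
def false_positive_risk(prior_prob_real: float, likelihood_ratio: float) -> float:
    """Probability that the null hypothesis is true despite a 'significant' result."""
    prior_odds = prior_prob_real / (1.0 - prior_prob_real)  # prior of 0.1 -> odds of 1:9
    posterior_odds = likelihood_ratio * prior_odds          # odds in favour of a real effect
    return 1.0 / (1.0 + posterior_odds)                     # remaining probability of the null

print(false_positive_risk(prior_prob_real=0.1, likelihood_ratio=100))  # ~0.083, i.e. about 8 percent
```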
It was recommended that the terms "significant" and "non-significant" should not be used. p-values and confidence intervals should still be specified, but they should be accompanied by an indication of the false-positive risk. It was suggested that the best way to do this is to calculate the prior probability one would need to believe in order to achieve a false positive risk of a certain level, such as 5%. The calculations can be done with various computer software. This reverse Bayesian approach, which physicist Robert Matthews suggested in 2001, is one way to avoid the problem that the prior probability is rarely known.
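Run in reverse, the same odds bookkeeping gives the quantity described above: the prior probability of a real effect that would be needed to keep the false positive risk at a chosen level. In the sketch below the likelihood ratio of 100 simply reuses the illustrative value from the previous paragraph and is not part of the method itself.

```python
def prior_needed(target_fpr: float, likelihood_ratio: float) -> float:
    """Prior probability of a real effect required to achieve a given false positive risk."""
    posterior_odds = (1.0 - target_fpr) / target_fpr  # odds of a real effect implied by the target
    prior_odds = posterior_odds / likelihood_ratio    # Bayes' rule, rearranged
    return prior_odds / (1.0 + prior_odds)

print(prior_needed(target_fpr=0.05, likelihood_ratio=100))  # ~0.16: a 16 percent prior would be required
```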
Encouraging larger sample sizes
To improve the quality of replications, larger sample sizes than those used in the original study are often needed. Larger sample sizes are needed because estimates of effect sizes in published work are often exaggerated due to publication bias and the large sampling variability associated with small sample sizes in an original study. Further, using significance thresholds usually leads to inflated effects, because, particularly with small sample sizes, only the largest effects become significant.
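Because the required sample size scales with the inverse square of the true effect size, even modest inflation of a published effect translates into a large shortfall in a same-sized replication. A rough sketch using the standard normal approximation for a two-sided two-sample comparison; the effect sizes 0.5 and 0.25 are purely illustrative and not taken from any particular study.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided two-sample comparison."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

print(n_per_group(0.50))  # ~63 per group if the published effect size were accurate
print(n_per_group(0.25))  # ~252 per group if the true effect is only half as large
```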
Cross-validation
One common statistical problem is overfitting, that is, when researchers fit a regression model over a large number of variables but a small number of data points. For example, a typical fMRI study of emotion, personality, and social cognition has fewer than 100 subjects, but each subject has 10,000 voxels. The study would fit a sparse linear regression model that uses the voxels to predict a variable of interest, such as self-reported stress. But the study would then report on the p-value of the model on the same data it was fitted to. The standard approach in statistics, where data is split into a training and a validation set, is resisted because test subjects are expensive to acquire.
One possible solution is cross-validation, which allows model validation while also allowing the whole dataset to be used for model-fitting.
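A minimal sketch of the scenario described above, using scikit-learn with a Lasso model standing in for the sparse linear regression: many more predictors than observations, an outcome that is pure noise, and the contrast between the flattering in-sample fit and the honest cross-validated estimate. All numbers are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10_000))   # 100 "subjects", 10,000 "voxels"
y = rng.standard_normal(100)             # outcome unrelated to any predictor

model = Lasso(alpha=0.1, max_iter=10_000)            # a sparse linear regression model
in_sample_r2 = model.fit(X, y).score(X, y)           # evaluated on the data used for fitting
cv_r2 = cross_val_score(model, X, y, cv=5).mean()    # evaluated on held-out folds
print(in_sample_r2, cv_r2)  # the in-sample fit is flattering; the cross-validated estimate is not
```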
Replication efforts
Funding
In July 2016, the Netherlands Organisation for Scientific Research made €3 million available for replication studies. The funding is for replication based on reanalysis of existing data and replication by collecting and analysing new data. Funding is available in the areas of social sciences, health research and healthcare innovation.
In 2013, the Laura and John Arnold Foundation funded the launch of The Center for Open Science with a $5.25 million grant. By 2017, it had provided an additional $10 million in funding. It also funded the launch of the Meta-Research Innovation Center at Stanford University, run by Ioannidis and medical scientist Steven Goodman, to study ways to improve scientific research. It also provided funding for the AllTrials initiative led in part by medical scientist Ben Goldacre.
Emphasis in post-secondary education
Based on coursework in experimental methods at MIT, Stanford, and the University of Washington, it has been suggested that methods courses in psychology and other fields should emphasize replication attempts rather than original studies. Such an approach would help students learn scientific methodology and provide numerous independent replications of meaningful scientific findings that would test the replicability of scientific findings. Some have recommended that graduate students should be required to publish a high-quality replication attempt on a topic related to their doctoral research prior to graduation.
Replication database
Concerns have been raised that replication attempts are growing in number without being systematically tracked, which may lead to research waste. In response, several databases have been created to catalogue replication attempts. One such Replication Database covers psychology, speech-language therapy, and other disciplines, with the aim of promoting theory-driven research, optimizing the use of academic and institutional resources, and promoting trust in science.
Final year thesis
Some institutions require undergraduate students to submit a final year thesis that consists of an original piece of research. Daniel Quintana, a psychologist at the University of Oslo in Norway, has recommended that students should be encouraged to perform replication studies in thesis projects, as well as being taught about open science.
Semi-automated
Researchers demonstrated a way of semi-automated testing for reproducibility: statements about experimental results were extracted from gene expression cancer research papers (which, as of 2022, are not semantically annotated) and subsequently reproduced via the robot scientist "Eve". Problems of this approach include that it may not be feasible for many areas of research and that sufficient experimental data may not get extracted from some or many papers even if available.
Involving original authors
Psychologist Daniel Kahneman argued that, in psychology, the original authors should be involved in the replication effort because the published methods are often too vague. Others, such as psychologist Andrew Wilson, disagree, arguing that the original authors should instead write down their methods in detail. An investigation of replication rates in psychology in 2012 indicated higher success rates when there was author overlap with the original authors of a study (91.7% successful replications in studies with author overlap compared to 64.6% without author overlap).
Big team science
The replication crisis has led to the formation and development of various large-scale and collaborative communities to pool their resources to address a single question across cultures, countries and disciplines. The focus is on replication, to ensure that the effect generalizes beyond a specific culture and investigate whether the effect is replicable and genuine. This allows interdisciplinary internal reviews, multiple perspectives, uniform protocols across labs, and recruiting larger and more diverse samples. Researchers can collaborate by coordinating data collection or fund data collection by researchers who may not have access to the funds, allowing larger sample sizes and increasing the robustness of the conclusions.
Broader changes to scientific approach
Emphasize triangulation, not just replication
Psychologist Marcus R. Munafò and epidemiologist George Davey Smith argue, in a piece published by Nature, that research should emphasize triangulation, not just replication, to protect against flawed ideas.
Complex systems paradigm
The dominant scientific and statistical model of causation is the linear model. The linear model assumes that mental variables are stable properties which are independent of each other. In other words, these variables are not expected to influence each other. Instead, the model assumes that the variables will have an independent, linear effect on observable outcomes.
Social scientists Sebastian Wallot and Damian Kelty-Stephen argue that the linear model is not always appropriate. An alternative is the complex system model which assumes that mental variables are interdependent. These variables are not assumed to be stable, rather they will interact and adapt to each specific context. They argue that the complex system model is often more appropriate in psychology, and that the use of the linear model when the complex system model is more appropriate will result in failed replications.
Replication should seek to revise theories
Replication is fundamental for scientific progress as a means of confirming original findings. However, replication alone is not sufficient to resolve the replication crisis. Replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. This approach therefore involves pruning existing theories, comparing all the alternative theories, and making replication efforts more generative and engaged in theory-building. Beyond replication itself, it is also important to assess the extent to which results generalise across geographical, historical, and social contexts; such generalisability matters for several scientific fields, and especially for practitioners and policy makers who rely on these analyses to guide important strategic decisions. Reproducibility and replicability of findings have been found to be the best predictors of generalisability beyond historical and geographical contexts, indicating that, in the social sciences, results from a certain time period and place can meaningfully indicate what is universally present in individuals.
Open science
Open data, open source software and open source hardware all are critical to enabling reproducibility in the sense of validation of the original data analysis. The use of proprietary software, the lack of the publication of analysis software and the lack of open data prevents the replication of studies. Unless software used in research is open source, reproducing results with different software and hardware configurations is impossible. CERN has both Open Data and CERN Analysis Preservation projects for storing data, all relevant information, and all software and tools needed to preserve an analysis at the large experiments of the LHC. Aside from all software and data, preserved analysis assets include metadata that enable understanding of the analysis workflow, related software, systematic uncertainties, statistics procedures and meaningful ways to search for the analysis, as well as references to publications and to backup material. CERN software is open source and available for use outside of particle physics and there is some guidance provided to other fields on the broad approaches and strategies used for open science in contemporary particle physics.
Online repositories where data, protocols, and findings can be stored and evaluated by the public seek to improve the integrity and reproducibility of research. Examples of such repositories include the Open Science Framework, Registry of Research Data Repositories, and Psychfiledrawer.org. Sites like Open Science Framework offer badges for using open science practices in an effort to incentivize scientists. However, there have been concerns that those who are most likely to provide their data and code for analyses are the researchers that are likely the most sophisticated. Ioannidis suggested that "the paradox may arise that the most meticulous and sophisticated and method-savvy and careful researchers may become more susceptible to criticism and reputation attacks by reanalyzers who hunt for errors, no matter how negligible these errors are".
See also
Base rate fallacy
Black swan theory
Correlation does not imply causation
Data dredging
Decline effect
Estimation statistics
Exploratory data analysis
Extension neglect
Falsifiability
Invalid science
Misuse of statistics
Naturalism
Observer bias
p-value
Problem of induction
Sampling bias
Selection bias
Statistical hypothesis testing
Uniformitarianism
Notes
References
Further reading
Bonett, D.G. (2021). Design and analysis of replication studies. Organizational Research Methods, 24, 513–529. https://doi.org/10.1177/1094428120911088
Book Review (November 2020, The American Conservative)
review of
Scientific method
Criticism of science
Ethics and statistics
Metascience
Statistical reliability
Amino acid
Amino acids are organic compounds that contain both amino and carboxylic acid functional groups. Although over 500 amino acids exist in nature, by far the most important are the 22 α-amino acids incorporated into proteins. Only these 22 appear in the genetic code of life.
Amino acids can be classified according to the locations of the core structural functional groups (alpha- (α-), beta- (β-), gamma- (γ-) amino acids, etc.); other categories relate to polarity, ionization, and side-chain group type (aliphatic, acyclic, aromatic, polar, etc.). In the form of proteins, amino-acid residues form the second-largest component (water being the largest) of human muscles and other tissues. Beyond their role as residues in proteins, amino acids participate in a number of processes such as neurotransmitter transport and biosynthesis. It is thought that they played a key role in enabling life on Earth and its emergence.
Amino acids are formally named by the IUPAC-IUBMB Joint Commission on Biochemical Nomenclature in terms of a fictitious "neutral" structure. For example, the systematic name of alanine is 2-aminopropanoic acid, based on the formula CH3−CH(NH2)−COOH. The Commission justified this approach as follows:
The systematic names and formulas given refer to hypothetical forms in which amino groups are unprotonated and carboxyl groups are undissociated. This convention is useful to avoid various nomenclatural problems but should not be taken to imply that these structures represent an appreciable fraction of the amino-acid molecules.
History
The first few amino acids were discovered in the early 1800s. In 1806, French chemists Louis-Nicolas Vauquelin and Pierre Jean Robiquet isolated a compound from asparagus that was subsequently named asparagine, the first amino acid to be discovered. Cystine was discovered in 1810, although its monomer, cysteine, remained undiscovered until 1884. Glycine and leucine were discovered in 1820. The last of the 20 common amino acids to be discovered was threonine in 1935 by William Cumming Rose, who also determined the essential amino acids and established the minimum daily requirements of all amino acids for optimal growth.
The unity of the chemical category was recognized by Wurtz in 1865, but he gave no particular name to it. The first use of the term "amino acid" in the English language dates from 1898, while the German term, Aminosäure, was used earlier. Proteins were found to yield amino acids after enzymatic digestion or acid hydrolysis. In 1902, Emil Fischer and Franz Hofmeister independently proposed that proteins are formed from many amino acids, whereby bonds are formed between the amino group of one amino acid and the carboxyl group of another, resulting in a linear structure that Fischer termed "peptide".
General structure
2-, alpha-, or α-amino acids have the generic formula H2N−CHR−COOH in most cases, where R is an organic substituent known as a "side chain".
Of the many hundreds of described amino acids, 22 are proteinogenic ("protein-building"). It is these 22 compounds that combine to give a vast array of peptides and proteins assembled by ribosomes. Non-proteinogenic or modified amino acids may arise from post-translational modification or during nonribosomal peptide synthesis.
Chirality
The carbon atom next to the carboxyl group is called the α–carbon. In proteinogenic amino acids, it bears the amine and the R group or side chain specific to each amino acid. With four distinct substituents, the α–carbon is stereogenic in all α-amino acids except glycine. All chiral proteinogenic amino acids have the L configuration; they are "left-handed" enantiomers, which refers to the stereoisomers of the alpha carbon.
A few D-amino acids ("right-handed") have been found in nature, e.g., in bacterial envelopes, as a neuromodulator (D-serine), and in some antibiotics. Rarely, D-amino acid residues are found in proteins, and are converted from the L-amino acid as a post-translational modification.
Side chains
Polar charged side chains
Five amino acids possess a charge at neutral pH. Often these side chains appear at the surfaces on proteins to enable their solubility in water, and side chains with opposite charges form important electrostatic contacts called salt bridges that maintain structures within a single protein or between interfacing proteins. Many proteins bind metal into their structures specifically, and these interactions are commonly mediated by charged side chains such as aspartate, glutamate and histidine. Under certain conditions, each ion-forming group can be charged, forming double salts.
The two negatively charged amino acids at neutral pH are aspartate (Asp, D) and glutamate (Glu, E). The anionic carboxylate groups behave as Brønsted bases in most circumstances. Enzymes in very low pH environments, like the aspartic protease pepsin in mammalian stomachs, may have catalytic aspartate or glutamate residues that act as Brønsted acids.
There are three amino acids with side chains that are cations at neutral pH: arginine (Arg, R), lysine (Lys, K) and histidine (His, H). Arginine has a charged guanidino group and lysine a charged alkyl amino group, and both are fully protonated at pH 7. Histidine's imidazole group has a pKa of 6.0, and is only around 10% protonated at neutral pH. Because histidine is easily found in both its basic and conjugate acid forms, it often participates in catalytic proton transfers in enzyme reactions.
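The protonation figures quoted above follow directly from the Henderson–Hasselbalch relation. A small sketch; the pKa values used are approximate textbook values, which vary slightly between sources.

```python
def fraction_protonated(pH: float, pKa: float) -> float:
    """Fraction of a basic group present in its protonated (conjugate acid) form."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

print(fraction_protonated(pH=7.0, pKa=6.0))    # histidine imidazole: ~0.09, roughly 10% protonated
print(fraction_protonated(pH=7.0, pKa=10.5))   # lysine side-chain amine: ~1.0, essentially fully protonated
```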
Polar uncharged side chains
The polar, uncharged amino acids serine (Ser, S), threonine (Thr, T), asparagine (Asn, N) and glutamine (Gln, Q) readily form hydrogen bonds with water and other amino acids. They do not ionize in normal conditions, a prominent exception being the catalytic serine in serine proteases. This is an example of severe perturbation, and is not characteristic of serine residues in general. Threonine has two chiral centers, not only the L (2S) chiral center at the α-carbon shared by all amino acids apart from achiral glycine, but also (3R) at the β-carbon. The full stereochemical specification is (2S,3R)-L-threonine.
Hydrophobic side chains
Nonpolar amino acid interactions are the primary driving force behind the processes that fold proteins into their functional three-dimensional structures. None of these amino acids' side chains ionizes easily, so they lack meaningful pKa values, with the exception of tyrosine (Tyr, Y). The hydroxyl of tyrosine can deprotonate at high pH, forming the negatively charged phenolate. Because of this, one could place tyrosine into the polar, uncharged amino acid category, but its very low solubility in water matches the characteristics of hydrophobic amino acids well.
Special case side chains
Several side chains are not described well by the charged, polar and hydrophobic categories. Glycine (Gly, G) could be considered a polar amino acid since its small size means that its solubility is largely determined by the amino and carboxylate groups. However, the lack of any side chain provides glycine with a unique flexibility among amino acids with large ramifications to protein folding. Cysteine (Cys, C) can also form hydrogen bonds readily, which would place it in the polar amino acid category, though it can often be found in protein structures forming covalent bonds, called disulphide bonds, with other cysteines. These bonds influence the folding and stability of proteins, and are essential in the formation of antibodies. Proline (Pro, P) has an alkyl side chain and could be considered hydrophobic, but because the side chain joins back onto the alpha amino group it becomes particularly inflexible when incorporated into proteins. Similar to glycine this influences protein structure in a way unique among amino acids. Selenocysteine (Sec, U) is a rare amino acid not directly encoded by DNA, but is incorporated into proteins via the ribosome. Selenocysteine has a lower redox potential compared to the similar cysteine, and participates in several unique enzymatic reactions. Pyrrolysine (Pyl, O) is another amino acid not encoded in DNA, but synthesized into protein by ribosomes. It is found in archaeal species where it participates in the catalytic activity of several methyltransferases.
β- and γ-amino acids
Amino acids with the structure NH3+−CXY−CXY−CO2−, such as β-alanine, a component of carnosine and a few other peptides, are β-amino acids. Ones with the structure NH3+−CXY−CXY−CXY−CO2− are γ-amino acids, and so on, where X and Y are two substituents (one of which is normally H).
Zwitterions
The common natural forms of amino acids have a zwitterionic structure, with −NH3+ (−NH2+− in the case of proline) and −CO2− functional groups attached to the same C atom, and are thus α-amino acids, and are the only ones found in proteins during translation in the ribosome.
In aqueous solution at pH close to neutrality, amino acids exist as zwitterions, i.e. as dipolar ions with both −NH3+ and −CO2− in charged states, so the overall structure is NH3+−CHR−CO2−. At physiological pH the so-called "neutral forms" are not present to any measurable degree. Although the two charges in the zwitterion structure add up to zero, it is misleading to call a species with a net charge of zero "uncharged".
In strongly acidic conditions (pH below 3), the carboxylate group becomes protonated and the structure becomes an ammonio carboxylic acid, NH3+−CHR−CO2H. This is relevant for enzymes like pepsin that are active in acidic environments such as the mammalian stomach and lysosomes, but does not significantly apply to intracellular enzymes. In highly basic conditions (pH greater than 10, not normally seen in physiological conditions), the ammonio group is deprotonated to give NH2−CHR−CO2−.
Although various definitions of acids and bases are used in chemistry, the only one that is useful for chemistry in aqueous solution is that of Brønsted: an acid is a species that can donate a proton to another species, and a base is one that can accept a proton. This criterion is used to label the amino acid functional groups described above. The carboxylate side chains of aspartate and glutamate residues are the principal Brønsted bases in proteins. Likewise, lysine, tyrosine and cysteine will typically act as Brønsted acids. Histidine under these conditions can act both as a Brønsted acid and a base.
Isoelectric point
For amino acids with uncharged side-chains the zwitterion predominates at pH values between the two pKa values, but coexists in equilibrium with small amounts of net negative and net positive ions. At the midpoint between the two pKa values, the trace amount of net negative and trace of net positive ions balance, so that the average net charge of all forms present is zero. This pH is known as the isoelectric point pI, so pI = (pKa1 + pKa2)/2.
For amino acids with charged side chains, the pKa of the side chain is involved. Thus for aspartate or glutamate with negative side chains, the terminal amino group is essentially entirely in the charged form −NH3+, but this positive charge needs to be balanced by the state in which just one C-terminal carboxylate group is negatively charged. This occurs halfway between the two carboxylate pKa values: pI = (pKa1 + pKa(R))/2, where pKa(R) is the side chain pKa.
Similar considerations apply to other amino acids with ionizable side-chains, including not only glutamate (similar to aspartate), but also cysteine, histidine, lysine, tyrosine and arginine with positive side chains.
Amino acids have zero mobility in electrophoresis at their isoelectric point, although this behaviour is more usually exploited for peptides and proteins than single amino acids. Zwitterions have minimum solubility at their isoelectric point, and some amino acids (in particular, with nonpolar side chains) can be isolated by precipitation from water by adjusting the pH to the required isoelectric point.
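The averaging described above is easy to put into numbers. A small sketch using commonly quoted textbook pKa values, which differ slightly between sources.

```python
def isoelectric_point(pKa_low: float, pKa_high: float) -> float:
    """pI as the average of the two pKa values that bracket the zwitterionic form."""
    return (pKa_low + pKa_high) / 2

print(isoelectric_point(2.34, 9.60))  # glycine (alpha-COOH and alpha-NH3+): pI ~ 5.97
print(isoelectric_point(1.88, 3.65))  # aspartate (alpha-COOH and side-chain COOH): pI ~ 2.77
```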
Physicochemical properties
The 20 canonical amino acids can be classified according to their properties. Important factors are charge, hydrophilicity or hydrophobicity, size, and functional groups. These properties influence protein structure and protein–protein interactions. The water-soluble proteins tend to have their hydrophobic residues (Leu, Ile, Val, Phe, and Trp) buried in the middle of the protein, whereas hydrophilic side chains are exposed to the aqueous solvent. (In biochemistry, a residue refers to a specific monomer within the polymeric chain of a polysaccharide, protein or nucleic acid.) The integral membrane proteins tend to have outer rings of exposed hydrophobic amino acids that anchor them in the lipid bilayer. Some peripheral membrane proteins have a patch of hydrophobic amino acids on their surface that sticks to the membrane. In a similar fashion, proteins that have to bind to positively charged molecules have surfaces rich in negatively charged amino acids such as glutamate and aspartate, while proteins binding to negatively charged molecules have surfaces rich in positively charged amino acids like lysine and arginine. For example, lysine and arginine are present in large amounts in the low-complexity regions of nucleic-acid binding proteins. There are various hydrophobicity scales of amino acid residues.
Some amino acids have special properties. Cysteine can form covalent disulfide bonds to other cysteine residues. Proline forms a cycle to the polypeptide backbone, and glycine is more flexible than other amino acids.
Glycine and proline are strongly present within low complexity regions of both eukaryotic and prokaryotic proteins, whereas the opposite is the case with cysteine, phenylalanine, tryptophan, methionine, valine, leucine, isoleucine, which are highly reactive, or complex, or hydrophobic.
Many proteins undergo a range of posttranslational modifications, whereby additional chemical groups are attached to the amino acid residue side chains sometimes producing lipoproteins (that are hydrophobic), or glycoproteins (that are hydrophilic) allowing the protein to attach temporarily to a membrane. For example, a signaling protein can attach and then detach from a cell membrane, because it contains cysteine residues that can have the fatty acid palmitic acid added to them and subsequently removed.
Table of standard amino acid abbreviations and properties
Although one-letter symbols are included in the table, IUPAC–IUBMB recommend that "Use of the one-letter symbols should be restricted to the comparison of long sequences".
The one-letter notation was chosen by IUPAC-IUB based on the following rules (the resulting mapping is collected in the sketch after this list):
Initial letters are used where there is no ambiguity: C cysteine, H histidine, I isoleucine, M methionine, S serine, V valine,
Where arbitrary assignment is needed, the structurally simpler amino acids are given precedence: A Alanine, G glycine, L leucine, P proline, T threonine,
F PHenylalanine and R aRginine are assigned by being phonetically suggestive,
W tryptophan is assigned based on the double ring being visually suggestive to the bulky letter W,
K lysine and Y tyrosine are assigned as alphabetically nearest to their initials L and T (note that U was avoided for its similarity with V, while X was reserved for undetermined or atypical amino acids); for tyrosine the mnemonic tYrosine was also proposed,
D aspartate was assigned arbitrarily, with the proposed mnemonic asparDic acid; E glutamate was assigned in alphabetical sequence being larger by merely one methylene –CH2– group,
N asparagine was assigned arbitrarily, with the proposed mnemonic asparagiNe; Q glutamine was assigned in alphabetical sequence of those still available (note again that O was avoided due to similarity with D), with the proposed mnemonic Qlutamine.
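Taken together, the rules above yield the following one-letter code for the 20 standard amino acids, plus U and O for the two nonstandard proteinogenic ones. A minimal lookup table of this kind is the practical upshot of the scheme; the helper function is only for illustration.

```python
ONE_LETTER = {
    "Ala": "A", "Arg": "R", "Asn": "N", "Asp": "D", "Cys": "C",
    "Gln": "Q", "Glu": "E", "Gly": "G", "His": "H", "Ile": "I",
    "Leu": "L", "Lys": "K", "Met": "M", "Phe": "F", "Pro": "P",
    "Ser": "S", "Thr": "T", "Trp": "W", "Tyr": "Y", "Val": "V",
    "Sec": "U", "Pyl": "O",
}

def to_one_letter(residues):
    """Convert a sequence of three-letter codes into a one-letter string."""
    return "".join(ONE_LETTER[res] for res in residues)

print(to_one_letter(["Met", "Gly", "Trp", "Tyr"]))  # "MGWY"
```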
Two additional amino acids, selenocysteine and pyrrolysine, are in some species coded for by codons that are usually interpreted as stop codons.
In addition to the specific amino acid codes, placeholders are used in cases where chemical or crystallographic analysis of a peptide or protein cannot conclusively determine the identity of a residue. They are also used to summarize conserved protein sequence motifs. The use of single letters to indicate sets of similar residues is similar to the use of abbreviation codes for degenerate bases.
Unk is sometimes used instead of Xaa, but is less standard.
Ter or * (from termination) is used in notation for mutations in proteins when a stop codon occurs. It corresponds to no amino acid at all.
In addition, many nonstandard amino acids have a specific code. For example, several peptide drugs, such as Bortezomib and MG132, are artificially synthesized and retain their protecting groups, which have specific codes. Bortezomib is Pyz–Phe–boroLeu, and MG132 is Z–Leu–Leu–Leu–al. To aid in the analysis of protein structure, photo-reactive amino acid analogs are available. These include photoleucine (pLeu) and photomethionine (pMet).
Occurrence and functions in biochemistry
Proteinogenic amino acids
Amino acids are the precursors to proteins. They join by condensation reactions to form short polymer chains called peptides or longer chains called either polypeptides or proteins. These chains are linear and unbranched, with each amino acid residue within the chain attached to two neighboring amino acids. In nature, the process of making proteins encoded by RNA genetic material is called translation and involves the step-by-step addition of amino acids to a growing protein chain by a ribozyme that is called a ribosome. The order in which the amino acids are added is read through the genetic code from an mRNA template, which is an RNA derived from one of the organism's genes.
Twenty-two amino acids are naturally incorporated into polypeptides and are called proteinogenic or natural amino acids. Of these, 20 are encoded by the universal genetic code. The remaining 2, selenocysteine and pyrrolysine, are incorporated into proteins by unique synthetic mechanisms. Selenocysteine is incorporated when the mRNA being translated includes a SECIS element, which causes the UGA codon to encode selenocysteine instead of a stop codon. Pyrrolysine is used by some methanogenic archaea in enzymes that they use to produce methane. It is coded for with the codon UAG, which is normally a stop codon in other organisms.
Several independent evolutionary studies have suggested that Gly, Ala, Asp, Val, Ser, Pro, Glu, Leu, Thr may belong to a group of amino acids that constituted the early genetic code, whereas Cys, Met, Tyr, Trp, His, Phe may belong to a group of amino acids that constituted later additions of the genetic code.
Standard vs nonstandard amino acids
The 20 amino acids that are encoded directly by the codons of the universal genetic code are called standard or canonical amino acids. A modified form of methionine (N-formylmethionine) is often incorporated in place of methionine as the initial amino acid of proteins in bacteria, mitochondria and plastids (including chloroplasts). Other amino acids are called nonstandard or non-canonical. Most of the nonstandard amino acids are also non-proteinogenic (i.e. they cannot be incorporated into proteins during translation), but two of them are proteinogenic, as they can be incorporated translationally into proteins by exploiting information not encoded in the universal genetic code.
The two nonstandard proteinogenic amino acids are selenocysteine (present in many non-eukaryotes as well as most eukaryotes, but not coded directly by DNA) and pyrrolysine (found only in some archaea and at least one bacterium). The incorporation of these nonstandard amino acids is rare. For example, 25 human proteins include selenocysteine in their primary structure, and the structurally characterized enzymes (selenoenzymes) employ selenocysteine as the catalytic moiety in their active sites. Pyrrolysine and selenocysteine are encoded via variant codons. For example, selenocysteine is encoded by a stop codon and a SECIS element.
N-formylmethionine (which is often the initial amino acid of proteins in bacteria, mitochondria, and chloroplasts) is generally considered as a form of methionine rather than as a separate proteinogenic amino acid. Codon–tRNA combinations not found in nature can also be used to "expand" the genetic code and form novel proteins known as alloproteins incorporating non-proteinogenic amino acids.
Non-proteinogenic amino acids
Aside from the 22 proteinogenic amino acids, many non-proteinogenic amino acids are known. Those either are not found in proteins (for example carnitine, GABA, levothyroxine) or are not produced directly and in isolation by standard cellular machinery. For example, hydroxyproline is synthesised from proline; another example is selenomethionine.
Non-proteinogenic amino acids that are found in proteins are formed by post-translational modification. Such modifications can also determine the localization of the protein, e.g., the addition of long hydrophobic groups can cause a protein to bind to a phospholipid membrane. Examples:
the carboxylation of glutamate allows for better binding of calcium cations,
Hydroxyproline, generated by hydroxylation of proline, is a major component of the connective tissue collagen.
Hypusine in the translation initiation factor EIF5A, contains a modification of lysine.
Some non-proteinogenic amino acids are not found in proteins. Examples include 2-aminoisobutyric acid and the neurotransmitter gamma-aminobutyric acid. Non-proteinogenic amino acids often occur as intermediates in the metabolic pathways for standard amino acids – for example, ornithine and citrulline occur in the urea cycle, part of amino acid catabolism (see below). A rare exception to the dominance of α-amino acids in biology is the β-amino acid beta alanine (3-aminopropanoic acid), which is used in plants and microorganisms in the synthesis of pantothenic acid (vitamin B5), a component of coenzyme A.
In mammalian nutrition
Amino acids are not a typical component of food: animals eat proteins. The protein is broken down into amino acids in the process of digestion. They are then used to synthesize new proteins, other biomolecules, or are oxidized to urea and carbon dioxide as a source of energy. The oxidation pathway starts with the removal of the amino group by a transaminase; the amino group is then fed into the urea cycle. The other product of transamination is a keto acid that enters the citric acid cycle. Glucogenic amino acids can also be converted into glucose, through gluconeogenesis.
Of the 20 standard amino acids, nine (His, Ile, Leu, Lys, Met, Phe, Thr, Trp and Val) are called essential amino acids because the human body cannot synthesize them from other compounds at the level needed for normal growth, so they must be obtained from food.
Semi-essential and conditionally essential amino acids, and juvenile requirements
In children, cysteine, tyrosine, and arginine are additionally considered semiessential amino acids, and taurine a semi-essential aminosulfonic acid, because the metabolic pathways that synthesize these monomers are not fully developed. Some amino acids are conditionally essential for certain ages or medical conditions. Essential amino acids may also vary from species to species.
Non-protein functions
Many proteinogenic and non-proteinogenic amino acids have biological functions beyond being precursors to proteins and peptides. In humans, amino acids also have important roles in diverse biosynthetic pathways. Defenses against herbivores in plants sometimes employ amino acids. Examples:
Standard amino acids
Tryptophan is a precursor of the neurotransmitter serotonin.
Tyrosine (and its precursor phenylalanine) are precursors of the catecholamine neurotransmitters dopamine, epinephrine and norepinephrine and various trace amines.
Phenylalanine is a precursor of phenethylamine and tyrosine in humans. In plants, it is a precursor of various phenylpropanoids, which are important in plant metabolism.
Glycine is a precursor of porphyrins such as heme.
Arginine is a precursor of nitric oxide.
Ornithine and S-adenosylmethionine are precursors of polyamines.
Aspartate, glycine, and glutamine are precursors of nucleotides. However, not all of the functions of other abundant nonstandard amino acids are known.
Roles for nonstandard amino acids
Carnitine is used in lipid transport.
gamma-aminobutyric acid is a neurotransmitter.
5-HTP (5-hydroxytryptophan) is used for experimental treatment of depression.
L-DOPA (L-dihydroxyphenylalanine) for Parkinson's treatment,
Eflornithine inhibits ornithine decarboxylase and is used in the treatment of sleeping sickness.
Canavanine, an analogue of arginine found in many legumes, is an antifeedant, protecting the plant from predators.
Mimosine, found in some legumes, is another possible antifeedant. This compound is an analogue of tyrosine and can poison animals that graze on these plants.
Uses in industry
Animal feed
Amino acids are sometimes added to animal feed because some of the components of these feeds, such as soybeans, have low levels of some of the essential amino acids, especially of lysine, methionine, threonine, and tryptophan. Likewise amino acids are used to chelate metal cations in order to improve the absorption of minerals from feed supplements.
Food
The food industry is a major consumer of amino acids, especially glutamic acid, which is used as a flavor enhancer, and aspartame (aspartylphenylalanine 1-methyl ester), which is used as an artificial sweetener. Amino acids are sometimes added to food by manufacturers to alleviate symptoms of mineral deficiencies, such as anemia, by improving mineral absorption and reducing negative side effects from inorganic mineral supplementation.
Chemical building blocks
Amino acids are low-cost feedstocks used in chiral pool synthesis as enantiomerically pure building blocks.
Amino acids are used in the synthesis of some cosmetics.
Aspirational uses
Fertilizer
The chelating ability of amino acids is sometimes used in fertilizers to facilitate the delivery of minerals to plants in order to correct mineral deficiencies, such as iron chlorosis. These fertilizers are also used to prevent deficiencies from occurring and to improve the overall health of the plants.
Biodegradable plastics
Amino acids have been considered as components of biodegradable polymers, which have applications as environmentally friendly packaging and in medicine in drug delivery and the construction of prosthetic implants. An interesting example of such materials is polyaspartate, a water-soluble biodegradable polymer that may have applications in disposable diapers and agriculture. Due to its solubility and ability to chelate metal ions, polyaspartate is also being used as a biodegradable antiscaling agent and a corrosion inhibitor.
Synthesis
Chemical synthesis
The commercial production of amino acids usually relies on mutant bacteria that overproduce individual amino acids using glucose as a carbon source. Some amino acids are produced by enzymatic conversions of synthetic intermediates. 2-Aminothiazoline-4-carboxylic acid is an intermediate in one industrial synthesis of L-cysteine for example. Aspartic acid is produced by the addition of ammonia to fumarate using a lyase.
Biosynthesis
In plants, nitrogen is first assimilated into organic compounds in the form of glutamate, formed from alpha-ketoglutarate and ammonia in the mitochondrion. For other amino acids, plants use transaminases to move the amino group from glutamate to another alpha-keto acid. For example, aspartate aminotransferase converts glutamate and oxaloacetate to alpha-ketoglutarate and aspartate. Other organisms use transaminases for amino acid synthesis, too.
Nonstandard amino acids are usually formed through modifications to standard amino acids. For example, homocysteine is formed through the transsulfuration pathway or by the demethylation of methionine via the intermediate metabolite S-adenosylmethionine, while hydroxyproline is made by a post translational modification of proline.
Microorganisms and plants synthesize many uncommon amino acids. For example, some microbes make 2-aminoisobutyric acid and lanthionine, which is a sulfide-bridged derivative of alanine. Both of these amino acids are found in peptidic lantibiotics such as alamethicin. In plants, 1-aminocyclopropane-1-carboxylic acid is a small disubstituted cyclic amino acid that is an intermediate in the production of the plant hormone ethylene.
Primordial synthesis
The formation of amino acids and peptides is assumed to precede and perhaps induce the emergence of life on earth. Amino acids can form from simple precursors under various conditions. Surface-based chemical metabolism of amino acids and very small compounds may have led to the build-up of amino acids, coenzymes and phosphate-based small carbon molecules. Amino acids and similar building blocks could have been elaborated into proto-peptides, with peptides being considered key players in the origin of life.
In the famous Miller–Urey experiment, the passage of an electric arc through a mixture of methane, hydrogen, and ammonia produces a large number of amino acids. Since then, scientists have discovered a range of ways and components by which the potentially prebiotic formation and chemical evolution of peptides may have occurred, such as condensing agents, the design of self-replicating peptides and a number of non-enzymatic mechanisms by which amino acids could have emerged and elaborated into peptides. Several hypotheses invoke the Strecker synthesis whereby hydrogen cyanide, simple aldehydes, ammonia, and water produce amino acids.
According to a review, amino acids, and even peptides, "turn up fairly regularly in the various experimental broths that have been allowed to be cooked from simple chemicals. This is because nucleotides are far more difficult to synthesize chemically than amino acids." For a chronological order, it suggests that there must have been a 'protein world' or at least a 'polypeptide world', possibly later followed by the 'RNA world' and the 'DNA world'. Codon–amino acids mappings may be the biological information system at the primordial origin of life on Earth. While amino acids and consequently simple peptides must have formed under different experimentally probed geochemical scenarios, the transition from an abiotic world to the first life forms is to a large extent still unresolved.
Reactions
Amino acids undergo the reactions expected of the constituent functional groups.
Peptide bond formation
As both the amine and carboxylic acid groups of amino acids can react to form amide bonds, one amino acid molecule can react with another and become joined through an amide linkage. This polymerization of amino acids is what creates proteins. This condensation reaction yields the newly formed peptide bond and a molecule of water. In cells, this reaction does not occur directly; instead, the amino acid is first activated by attachment to a transfer RNA molecule through an ester bond. This aminoacyl-tRNA is produced in an ATP-dependent reaction carried out by an aminoacyl tRNA synthetase. This aminoacyl-tRNA is then a substrate for the ribosome, which catalyzes the attack of the amino group of the elongating protein chain on the ester bond. As a result of this mechanism, all proteins made by ribosomes are synthesized starting at their N-terminus and moving toward their C-terminus.
However, not all peptide bonds are formed in this way. In a few cases, peptides are synthesized by specific enzymes. For example, the tripeptide glutathione is an essential part of the defenses of cells against oxidative stress. This peptide is synthesized in two steps from free amino acids. In the first step, gamma-glutamylcysteine synthetase condenses cysteine and glutamate through a peptide bond formed between the side chain carboxyl of the glutamate (the gamma carbon of this side chain) and the amino group of the cysteine. This dipeptide is then condensed with glycine by glutathione synthetase to form glutathione.
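Whichever route forms them, each peptide bond made by condensation releases one molecule of water, and this shows up directly in the mass bookkeeping. A small sketch; the molecular masses listed are approximate average textbook values for the free amino acids.

```python
WATER = 18.02  # g/mol, lost per peptide bond formed
FREE_AA_MASS = {"Gly": 75.07, "Ala": 89.09, "Ser": 105.09}  # approximate average masses, g/mol

def peptide_mass(residues):
    """Approximate peptide mass: sum of free amino acid masses minus one water per bond."""
    return sum(FREE_AA_MASS[r] for r in residues) - (len(residues) - 1) * WATER

print(round(peptide_mass(["Gly", "Ala"]), 2))         # glycylalanine, ~146.14 g/mol
print(round(peptide_mass(["Gly", "Ala", "Ser"]), 2))  # a tripeptide, ~233.21 g/mol
```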
In chemistry, peptides are synthesized by a variety of reactions. One of the reactions most used in solid-phase peptide synthesis employs aromatic oxime derivatives of amino acids as activated units. These are added in sequence onto the growing peptide chain, which is attached to a solid resin support. Libraries of peptides are used in drug discovery through high-throughput screening.
The combination of functional groups allow amino acids to be effective polydentate ligands for metal–amino acid chelates.
The multiple side chains of amino acids can also undergo chemical reactions.
Catabolism
Degradation of an amino acid often involves deamination by moving its amino group to α-ketoglutarate, forming glutamate. This process involves transaminases, often the same as those used in amination during synthesis. In many vertebrates, the amino group is then removed through the urea cycle and is excreted in the form of urea. However, amino acid degradation can produce uric acid or ammonia instead. For example, serine dehydratase converts serine to pyruvate and ammonia. After removal of one or more amino groups, the remainder of the molecule can sometimes be used to synthesize new amino acids, or it can be used for energy by entering glycolysis or the citric acid cycle.
Complexation
Amino acids are bidentate ligands, forming transition metal amino acid complexes.
Chemical analysis
The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available.
See also
Amino acid dating
Beta-peptide
Degron
Erepsin
Homochirality
Hyperaminoacidemia
Leucines
Miller–Urey experiment
Nucleic acid sequence
RNA codon table
Notes
References
Further reading
External links
Nitrogen cycle
Zwitterions
Coprecipitation
In chemistry, coprecipitation (CPT) or co-precipitation is the carrying down by a precipitate of substances normally soluble under the conditions employed. Analogously, in medicine, coprecipitation (referred to as immunoprecipitation) is specifically "an assay designed to purify a single antigen from a complex mixture using a specific antibody attached to a beaded support".
Coprecipitation is an important topic in chemical analysis, where it can be undesirable, but can also be usefully exploited. In gravimetric analysis, which consists of precipitating the analyte and measuring its mass to determine its concentration or purity, coprecipitation is a problem because undesired impurities often coprecipitate with the analyte, resulting in excess mass. This problem can often be mitigated by "digestion" (waiting for the precipitate to equilibrate and form larger and purer particles) or by redissolving the sample and precipitating it again.
On the other hand, in the analysis of trace elements, as is often the case in radiochemistry, coprecipitation is often the only way of separating an element. Since the trace element is too dilute (sometimes less than a part per trillion) to precipitate by conventional means, it is typically coprecipitated with a carrier, a substance that has a similar crystalline structure that can incorporate the desired element. An example is the separation of francium from other radioactive elements by coprecipitating it with caesium salts such as caesium perchlorate. Otto Hahn is credited for promoting the use of coprecipitation in radiochemistry.
There are three main mechanisms of coprecipitation: inclusion, occlusion, and adsorption. An inclusion (incorporation in the crystal lattice) occurs when the impurity occupies a lattice site in the crystal structure of the carrier, resulting in a crystallographic defect; this can happen when the ionic radius and charge of the impurity are similar to those of the carrier. An adsorbate is an impurity that is weakly, or strongly, bound (adsorbed) to the surface of the precipitate. An occlusion occurs when an adsorbed impurity gets physically trapped inside the crystal as it grows.
Besides its applications in chemical analysis and in radiochemistry, coprecipitation is also important to many environmental issues related to water resources, including acid mine drainage, radionuclide migration around waste repositories, toxic heavy metal transport at industrial and defense sites, metal concentrations in aquatic systems, and wastewater treatment technology.
Coprecipitation is also used as a method of magnetic nanoparticle synthesis.
Distribution between precipitate and solution
There are two models describing the distribution of the tracer compound between the two phases (the precipitate and the solution):
Doerner-Hoskins law (logarithmic): ln(a / (a − x)) = λ ln(b / (b − y))
Berthelot-Nernst law: x / (a − x) = D · y / (b − y)
where:
a and b are the initial concentrations of the tracer and carrier, respectively;
a − x and b − y are the concentrations of tracer and carrier after separation;
x and y are the amounts of the tracer and carrier on the precipitate;
D and λ are the distribution coefficients.
For D and λ greater than 1, the precipitate is enriched in the tracer.
Depending on the co-precipitation system and conditions either λ or D may be constant.
The derivation of the Doerner-Hoskins law assumes that there is no mass exchange between the interior of the precipitating crystals and the solution. When this assumption is fulfilled, then the content of the tracer in the crystal is non-uniform (the crystals are said to be heterogeneous). When the Berthelot-Nernst law applies, then the concentration of the tracer in the interior of the crystal is uniform (and the crystals are said to be homogeneous). This is the case when diffusion in the interior is possible (like in liquids) or when the initial small crystals are allowed to recrystallize. Kinetic effects (like speed of crystallization and presence of mixing) play a role.
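Given measured amounts from a single co-precipitation experiment, both coefficients follow directly from the expressions above. A short sketch using the same variable names; the numerical values are purely illustrative.

```python
from math import log

def berthelot_nernst_D(a, b, x, y):
    """Homogeneous-distribution coefficient D from initial amounts and amounts carried down."""
    return (x / (a - x)) / (y / (b - y))

def doerner_hoskins_lambda(a, b, x, y):
    """Logarithmic-distribution coefficient lambda for heterogeneous crystals."""
    return log(a / (a - x)) / log(b / (b - y))

a, b = 1.0e-9, 1.0e-3   # initial tracer (trace level) and carrier concentrations
x, y = 0.8e-9, 0.5e-3   # amounts of tracer and carrier found on the precipitate
print(berthelot_nernst_D(a, b, x, y))      # 4.0  (> 1: precipitate enriched in the tracer)
print(doerner_hoskins_lambda(a, b, x, y))  # ~2.3 (> 1 under these illustrative numbers)
```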
See also
Fajans–Paneth–Hahn Law
References
Chemical processes
Analytical chemistry
Radiochemistry
Thought
In their most common sense, the terms thought and thinking refer to cognitive processes that can happen independently of sensory stimulation. Their most paradigmatic forms are judging, reasoning, concept formation, problem solving, and deliberation. But other mental processes, like considering an idea, memory, or imagination, are also often included. These processes can happen internally independent of the sensory organs, unlike perception. But when understood in the widest sense, any mental event may be understood as a form of thinking, including perception and unconscious mental processes. In a slightly different sense, the term thought refers not to the mental processes themselves but to mental states or systems of ideas brought about by these processes.
Various theories of thinking have been proposed, some of which aim to capture the characteristic features of thought. Platonists hold that thinking consists in discerning and inspecting Platonic forms and their interrelations. It involves the ability to discriminate between the pure Platonic forms themselves and the mere imitations found in the sensory world. According to Aristotelianism, to think about something is to instantiate in one's mind the universal essence of the object of thought. These universals are abstracted from sense experience and are not understood as existing in a changeless intelligible world, in contrast to Platonism. Conceptualism is closely related to Aristotelianism: it identifies thinking with mentally evoking concepts instead of instantiating essences. Inner speech theories claim that thinking is a form of inner speech in which words are silently expressed in the thinker's mind. According to some accounts, this happens in a regular language, like English or French. The language of thought hypothesis, on the other hand, holds that this happens in the medium of a unique mental language called Mentalese. Central to this idea is that linguistic representational systems are built up from atomic and compound representations and that this structure is also found in thought. Associationists understand thinking as the succession of ideas or images. They are particularly interested in the laws of association that govern how the train of thought unfolds. Behaviorists, by contrast, identify thinking with behavioral dispositions to engage in public intelligent behavior as a reaction to particular external stimuli. Computationalism is the most recent of these theories. It sees thinking in analogy to how computers work in terms of the storage, transmission, and processing of information.
Various types of thinking are discussed in academic literature. A judgment is a mental operation in which a proposition is evoked and then either affirmed or denied. Reasoning, on the other hand, is the process of drawing conclusions from premises or evidence. Both judging and reasoning depend on the possession of the relevant concepts, which are acquired in the process of concept formation. In the case of problem solving, thinking aims at reaching a predefined goal by overcoming certain obstacles. Deliberation is an important form of practical thought that consists in formulating possible courses of action and assessing the reasons for and against them. This may lead to a decision by choosing the most favorable option. Both episodic memory and imagination present objects and situations internally, in an attempt to accurately reproduce what was previously experienced or as a free rearrangement, respectively. Unconscious thought is thought that happens without being directly experienced. It is sometimes posited to explain how difficult problems are solved in cases where no conscious thought was employed.
Thought is discussed in various academic disciplines. Phenomenology is interested in the experience of thinking. An important question in this field concerns the experiential character of thinking and to what extent this character can be explained in terms of sensory experience. Metaphysics is, among other things, interested in the relation between mind and matter. This concerns the question of how thinking can fit into the material world as described by the natural sciences. Cognitive psychology aims to understand thought as a form of information processing. Developmental psychology, on the other hand, investigates the development of thought from birth to maturity and asks which factors this development depends on. Psychoanalysis emphasizes the role of the unconscious in mental life. Other fields concerned with thought include linguistics, neuroscience, artificial intelligence, biology, and sociology. Various concepts and theories are closely related to the topic of thought. The term "law of thought" refers to three fundamental laws of logic: the law of contradiction, the law of excluded middle, and the principle of identity. Counterfactual thinking involves mental representations of non-actual situations and events in which the thinker tries to assess what would be the case if things had been different. Thought experiments often employ counterfactual thinking in order to illustrate theories or to test their plausibility. Critical thinking is a form of thinking that is reasonable, reflective, and focused on determining what to believe or how to act. Positive thinking involves focusing one's attention on the positive aspects of one's situation and is intimately related to optimism.
Definition
The terms "thought" and "thinking" refer to a wide variety of psychological activities. In their most common sense, they are understood as conscious processes that can happen independently of sensory stimulation. This includes various different mental processes, like considering an idea or proposition or judging it to be true. In this sense, memory and imagination are forms of thought but perception is not. In a more restricted sense, only the most paradigmatic cases are considered thought. These involve conscious processes that are conceptual or linguistic and sufficiently abstract, like judging, inferring, problem solving, and deliberating. Sometimes the terms "thought" and "thinking" are understood in a very wide sense as referring to any form of mental process, conscious or unconscious. In this sense, they may be used synonymously with the term "mind". This usage is encountered, for example, in the Cartesian tradition, where minds are understood as thinking things, and in the cognitive sciences. But this sense may include the restriction that such processes have to lead to intelligent behavior to be considered thought. A contrast sometimes found in the academic literature is that between thinking and feeling. In this context, thinking is associated with a sober, dispassionate, and rational approach to its topic while feeling involves a direct emotional engagement.
The terms "thought" and "thinking" can also be used to refer not to the mental processes themselves but to mental states or systems of ideas brought about by these processes. In this sense, they are often synonymous with the term "belief" and its cognates and may refer to the mental states which either belong to an individual or are common among a certain group of people. Discussions of thought in the academic literature often leave it implicit which sense of the term they have in mind.
The word thought comes from Old English þoht, or geþoht, from the stem of þencan "to conceive of in the mind, consider".
Theories of thinking
Various theories of thinking have been proposed. They aim to capture the characteristic features of thinking. The theories listed here are not exclusive: it may be possible to combine some without leading to a contradiction.
Platonism
According to Platonism, thinking is a spiritual activity in which Platonic forms and their interrelations are discerned and inspected. This activity is understood as a form of silent inner speech in which the soul talks to itself. Platonic forms are seen as universals that exist in a changeless realm different from the sensible world. Examples include the forms of goodness, beauty, unity, and sameness. On this view, the difficulty of thinking consists in being unable to grasp the Platonic forms and to distinguish them as the original from the mere imitations found in the sensory world. This means, for example, distinguishing beauty itself from derivative images of beauty. One problem for this view is to explain how humans can learn and think about Platonic forms belonging to a different realm. Plato himself tries to solve this problem through his theory of recollection, according to which the soul already was in contact with the Platonic forms before and is therefore able to remember what they are like. But this explanation depends on various assumptions usually not accepted in contemporary thought.
Aristotelianism and conceptualism
Aristotelians hold that the mind is able to think about something by instantiating the essence of the object of thought. So while thinking about trees, the mind instantiates tree-ness. This instantiation does not happen in matter, as is the case for actual trees, but in mind, though the universal essence instantiated in both cases is the same. In contrast to Platonism, these universals are not understood as Platonic forms existing in a changeless intelligible world. Instead, they only exist to the extent that they are instantiated. The mind learns to discriminate universals through abstraction from experience. This explanation avoids various of the objections raised against Platonism.
Conceptualism is closely related to Aristotelianism. It states that thinking consists in mentally evoking concepts. Some of these concepts may be innate, but most have to be learned through abstraction from sense experience before they can be used in thought.
It has been argued against these views that they have problems in accounting for the logical form of thought. For example, to think that it will either rain or snow, it is not sufficient to instantiate the essences of rain and snow or to evoke the corresponding concepts. The reason for this is that the disjunctive relation between the rain and the snow is not captured this way. Another problem shared by these positions is the difficulty of giving a satisfying account of how essences or concepts are learned by the mind through abstraction.
Inner speech theory
Inner speech theories claim that thinking is a form of inner speech. This view is sometimes termed psychological nominalism. It states that thinking involves silently evoking words and connecting them to form mental sentences. The knowledge a person has of their thoughts can be explained as a form of overhearing one's own silent monologue. Three central aspects are often ascribed to inner speech: it is in an important sense similar to hearing sounds, it involves the use of language and it constitutes a motor plan that could be used for actual speech. This connection to language is supported by the fact that thinking is often accompanied by muscle activity in the speech organs. This activity may facilitate thinking in certain cases but is not necessary for it in general. According to some accounts, thinking happens not in a regular language, like English or French, but has its own type of language with the corresponding symbols and syntax. This theory is known as the language of thought hypothesis.
Inner speech theory has a strong initial plausibility since introspection suggests that indeed many thoughts are accompanied by inner speech. But its opponents usually contend that this is not true for all types of thinking. It has been argued, for example, that forms of daydreaming constitute non-linguistic thought. This issue is relevant to the question of whether animals have the capacity to think. If thinking is necessarily tied to language then this would suggest that there is an important gap between humans and animals since only humans have a sufficiently complex language. But the existence of non-linguistic thoughts suggests that this gap may not be that big and that some animals do indeed think.
Language of thought hypothesis
There are various theories about the relation between language and thought. One prominent version in contemporary philosophy is called the language of thought hypothesis. It states that thinking happens in the medium of a mental language. This language, often referred to as Mentalese, is similar to regular languages in various respects: it is composed of words that are connected to each other in syntactic ways to form sentences. This claim does not merely rest on an intuitive analogy between language and thought. Instead, it provides a clear definition of the features a representational system has to embody in order to have a linguistic structure. On the level of syntax, the representational system has to possess two types of representations: atomic and compound representations. Atomic representations are basic whereas compound representations are constituted either by other compound representations or by atomic representations. On the level of semantics, the semantic content or the meaning of the compound representations should depend on the semantic contents of its constituents. A representational system is linguistically structured if it fulfills these two requirements.
The language of thought hypothesis states that the same is true for thinking in general. This would mean that thought is composed of certain atomic representational constituents that can be combined as described above. Apart from this abstract characterization, no further concrete claims are made about how human thought is implemented by the brain or which other similarities to natural language it has. The language of thought hypothesis was first introduced by Jerry Fodor. He argues in favor of this claim by holding that it constitutes the best explanation of the characteristic features of thinking. One of these features is productivity: a system of representations is productive if it can generate an infinite number of unique representations based on a low number of atomic representations. This applies to thought since human beings are capable of entertaining an infinite number of distinct thoughts even though their mental capacities are quite limited. Other characteristic features of thinking include systematicity and inferential coherence. Fodor argues that the language of thought hypothesis is true as it explains how thought can have these features and because there is no good alternative explanation. Some arguments against the language of thought hypothesis are based on neural networks, which are able to produce intelligent behavior without depending on representational systems. Other objections focus on the idea that some mental representations happen non-linguistically, for example, in the form of maps or images.
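The two requirements can be made concrete with a small sketch. The code below is only an illustrative model, not Fodor's formalism: the atomic symbols, their meanings, and the combination rules are all invented for the example. It shows how compound representations inherit their semantic content from their constituents, and how a handful of atoms already yields an open-ended stock of distinct compounds (productivity).

    # Minimal sketch of a linguistically structured representational system.
    # Atomic representations are strings with a fixed meaning; compound
    # representations are built from constituents, and their meaning is a
    # function of the meanings of those constituents (compositional semantics).

    ATOMIC_MEANINGS = {"rain": "it is raining", "snow": "it is snowing"}

    def meaning(rep):
        """Return the semantic content of an atomic or compound representation."""
        if isinstance(rep, str):              # atomic representation
            return ATOMIC_MEANINGS[rep]
        op, left, right = rep                 # compound representation
        if op == "or":
            return f"({meaning(left)}) or ({meaning(right)})"
        if op == "and":
            return f"({meaning(left)}) and ({meaning(right)})"
        raise ValueError(op)

    # Productivity: a small stock of atoms and combination rules already
    # yields an open-ended set of distinct compound representations.
    print(meaning(("or", "rain", "snow")))
    print(meaning(("and", "rain", ("or", "rain", "snow"))))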
Computationalists have been especially interested in the language of thought hypothesis since it provides ways to close the gap between thought in the human brain and computational processes implemented by computers. The reason for this is that processes over representations that respect syntax and semantics, like inferences according to the modus ponens, can be implemented by physical systems using causal relations. The same linguistic systems may be implemented through different material systems, like brains or computers. On this view, computers would, at least in principle, be able to think.
Associationism
An important view in the empiricist tradition has been associationism, the view that thinking consists in the succession of ideas or images. This succession is seen as being governed by laws of association, which determine how the train of thought unfolds. These laws are different from logical relations between the contents of thoughts, which are found in the case of drawing inferences by moving from the thought of the premises to the thought of the conclusion. Various laws of association have been suggested. According to the laws of similarity and contrast, ideas tend to evoke other ideas that are either very similar to them or their opposite. The law of contiguity, on the other hand, states that if two ideas were frequently experienced together, then the experience of one tends to cause the experience of the other. In this sense, the history of an organism's experience determines which thoughts the organism has and how these thoughts unfold. But such an association does not guarantee that the connection is meaningful or rational. For example, because of the association between the terms "cold" and "Idaho", the thought "this coffee shop is cold" might lead to the thought "Russia should annex Idaho".
One form of associationism is imagism. It states that thinking involves entertaining a sequence of images where earlier images conjure up later images based on the laws of association. One problem with this view is that we can think about things that we cannot imagine. This is especially relevant when the thought involves very complex objects or infinities, which is common, for example, in mathematical thought. One criticism directed at associationism in general is that its claim is too far-reaching. There is wide agreement that associative processes as studied by associationists play some role in how thought unfolds. But the claim that this mechanism is sufficient to understand all thought or all mental processes is usually not accepted.
Behaviorism
According to behaviorism, thinking consists in behavioral dispositions to engage in certain publicly observable behavior as a reaction to particular external stimuli. On this view, having a particular thought is the same as having a disposition to behave in a certain way. This view is often motivated by empirical considerations: it is very difficult to study thinking as a private mental process but it is much easier to study how organisms react to a certain situation with a given behavior. In this sense, the capacity to solve problems not through existing habits but through creative new approaches is particularly relevant. The term "behaviorism" is also sometimes used in a slightly different sense when applied to thinking to refer to a specific form of inner speech theory. This view focuses on the idea that the relevant inner speech is a derivative form of regular outward speech. This sense overlaps with how behaviorism is understood more commonly in philosophy of mind since these inner speech acts are not observed by the researcher but merely inferred from the subject's intelligent behavior. This remains true to the general behaviorist principle that behavioral evidence is required for any psychological hypothesis.
One problem for behaviorism is that the same entity often behaves differently despite being in the same situation as before. This problem consists in the fact that individual thoughts or mental states usually do not correspond to one particular behavior. So thinking that the pie is tasty does not automatically lead to eating the pie, since various other mental states may still inhibit this behavior, for example, the belief that it would be impolite to do so or that the pie is poisoned.
Computationalism
Computationalist theories of thinking, often found in the cognitive sciences, understand thinking as a form of information processing. These views developed with the rise of computers in the second part of the 20th century, when various theorists saw thinking in analogy to computer operations. On such views, the information may be encoded differently in the brain, but in principle, the same operations take place there as well, corresponding to the storage, transmission, and processing of information. But while this analogy has some intuitive attraction, theorists have struggled to give a more explicit explanation of what computation is. A further problem consists in explaining the sense in which thinking is a form of computing. The traditionally dominant view defines computation in terms of Turing machines, though contemporary accounts often focus on neural networks for their analogies. A Turing machine is capable of executing any algorithm based on a few very basic principles, such as reading a symbol from a cell, writing a symbol to a cell, and executing instructions based on the symbols read. This way it is possible to perform deductive reasoning following the inference rules of formal logic as well as simulating many other functions of the mind, such as language processing, decision making, and motor control. But computationalism does not only claim that thinking is in some sense similar to computation. Instead, it is claimed that thinking just is a form of computation or that the mind is a Turing machine.
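As a rough illustration of the basic operations just listed, the following sketch simulates a toy Turing machine. It is a minimal model with an invented transition table that merely appends a symbol to a unary string; it is not a claim about how the brain or the mind actually computes.

    # Minimal Turing machine: read a symbol, write a symbol, move the head,
    # switch state -- the basic operations mentioned above.
    # This particular table appends one '1' to a unary string (purely illustrative).

    table = {
        ("scan", "1"): ("1", +1, "scan"),   # move right over existing 1s
        ("scan", "_"): ("1", +1, "halt"),   # write a 1 on the first blank, then halt
    }

    def run(tape, state="scan", head=0):
        tape = dict(enumerate(tape))        # sparse tape, blank = "_"
        while state != "halt":
            symbol = tape.get(head, "_")
            write, move, state = table[(state, symbol)]
            tape[head] = write
            head += move
        return "".join(tape[i] for i in sorted(tape))

    print(run("111"))   # -> "1111"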
Computationalist theories of thought are sometimes divided into functionalist and representationalist approaches. Functionalist approaches define mental states through their causal roles but allow both external and internal events in their causal network. Thought may be seen as a form of program that can be executed in the same way by many different systems, including humans, animals, and even robots. According to one such view, whether something is a thought only depends on its role "in producing further internal states and verbal outputs". Representationalism, on the other hand, focuses on the representational features of mental states and defines thoughts as sequences of intentional mental states. In this sense, computationalism is often combined with the language of thought hypothesis by interpreting these sequences as symbols whose order is governed by syntactic rules.
Various arguments have been raised against computationalism. In one sense, it seems trivial since almost any physical system can be described as executing computations and therefore as thinking. For example, it has been argued that the molecular movements in a regular wall can be understood as computing an algorithm since they are "isomorphic to the formal structure of the program" in question under the right interpretation. This would lead to the implausible conclusion that the wall is thinking. Another objection focuses on the idea that computationalism captures only some aspects of thought but is unable to account for other crucial aspects of human cognition.
Types of thinking
A great variety of types of thinking are discussed in the academic literature. A common approach divides them into those forms that aim at the creation of theoretical knowledge and those that aim at producing actions or correct decisions, but there is no universally accepted taxonomy summarizing all these types.
Entertaining, judging, and reasoning
Thinking is often identified with the act of judging. A judgment is a mental operation in which a proposition is evoked and then either affirmed or denied. It involves deciding what to believe and aims at determining whether the judged proposition is true or false. Various theories of judgment have been proposed. The traditionally dominant approach is the combination theory. It states that judgments consist in the combination of concepts. On this view, to judge that "all men are mortal" is to combine the concepts "man" and "mortal". The same concepts can be combined in different ways, corresponding to different forms of judgment, for example, as "some men are mortal" or "no man is mortal".
Other theories of judgment focus more on the relation between the judged proposition and reality. According to Franz Brentano, a judgment is either a belief or a disbelief in the existence of some entity. In this sense, there are only two fundamental forms of judgment: "A exists" and "A does not exist". When applied to the sentence "all men are mortal", the entity in question is "immortal men", of whom it is said that they do not exist. Important for Brentano is the distinction between the mere representation of the content of the judgment and the affirmation or the denial of the content. The mere representation of a proposition is often referred to as "entertaining a proposition". This is the case, for example, when one considers a proposition but has not yet made up one's mind about whether it is true or false. The term "thinking" can refer both to judging and to mere entertaining. This difference is often explicit in the way the thought is expressed: "thinking that" usually involves a judgment whereas "thinking about" refers to the neutral representation of a proposition without an accompanying belief. In this case, the proposition is merely entertained but not yet judged. Some forms of thinking may involve the representation of objects without any propositions, as when someone is thinking about their grandmother.
Reasoning is one of the most paradigmatic forms of thinking. It is the process of drawing conclusions from premises or evidence. Types of reasoning can be divided into deductive and non-deductive reasoning. Deductive reasoning is governed by certain rules of inference, which guarantee the truth of the conclusion if the premises are true. For example, given the premises "all men are mortal" and "Socrates is a man", it follows deductively that "Socrates is mortal". Non-deductive reasoning, also referred to as defeasible reasoning or non-monotonic reasoning, is still rationally compelling but the truth of the conclusion is not ensured by the truth of the premises. Induction is one form of non-deductive reasoning, for example, when one concludes that "the sun will rise tomorrow" based on one's experiences of all the previous days. Other forms of non-deductive reasoning include the inference to the best explanation and analogical reasoning.
Fallacies are faulty forms of thinking that go against the norms of correct reasoning. Formal fallacies concern faulty inferences found in deductive reasoning. Denying the antecedent is one type of formal fallacy, for example, "If Othello is a bachelor, then he is male. Othello is not a bachelor. Therefore, Othello is not male". Informal fallacies, on the other hand, apply to all types of reasoning. The source of their flaw is to be found in the content or the context of the argument. This is often caused by ambiguous or vague expressions in natural language, as in "Feathers are light. What is light cannot be dark. Therefore, feathers cannot be dark". An important aspect of fallacies is that they seem to be rationally compelling on the first look and thereby seduce people into accepting and committing them. Whether an act of reasoning constitutes a fallacy does not depend on whether the premises are true or false but on their relation to the conclusion and, in some cases, on the context.
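The difference between the invalid form of denying the antecedent and the valid form of modus tollens can be checked mechanically with a truth table. The following sketch is only an illustration of that check, with the argument forms encoded as small Boolean functions:

    # Truth-table check of the formal fallacy mentioned above. "Denying the
    # antecedent" (from "if P then Q" and "not P", infer "not Q") has a
    # counterexample row, while modus tollens (from "if P then Q" and "not Q",
    # infer "not P") does not.
    from itertools import product

    def implies(p, q):
        return (not p) or q

    def valid(premises, conclusion):
        """An argument form is valid if no assignment makes all premises true
        and the conclusion false."""
        return all(conclusion(p, q)
                   for p, q in product([True, False], repeat=2)
                   if all(prem(p, q) for prem in premises))

    denying_antecedent = valid([implies, lambda p, q: not p], lambda p, q: not q)
    modus_tollens      = valid([implies, lambda p, q: not q], lambda p, q: not p)
    print(denying_antecedent, modus_tollens)   # -> False True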
Concept formation
Concepts are general notions that constitute the fundamental building blocks of thought. They are rules that govern how objects are sorted into different classes. A person can only think about a proposition if they possess the concepts involved in this proposition. For example, the proposition "wombats are animals" involves the concepts "wombat" and "animal". Someone who does not possess the concept "wombat" may still be able to read the sentence but cannot entertain the corresponding proposition. Concept formation is a form of thinking in which new concepts are acquired. It involves becoming familiar with the characteristic features shared by all instances of the corresponding type of entity and developing the ability to identify positive and negative cases. This process usually corresponds to learning the meaning of the word associated with the type in question. There are various theories concerning how concepts and concept possession are to be understood. The use of metaphor may aid in the processes of concept formation.
According to one popular view, concepts are to be understood in terms of abilities. On this view, two central aspects characterize concept possession: the ability to discriminate between positive and negative cases and the ability to draw inferences from this concept to related concepts. Concept formation corresponds to acquiring these abilities. It has been suggested that animals are also able to learn concepts to some extent, due to their ability to discriminate between different types of situations and to adjust their behavior accordingly.
Problem solving
In the case of problem solving, thinking aims at reaching a predefined goal by overcoming certain obstacles. This process often involves two different forms of thinking. On the one hand, divergent thinking aims at coming up with as many alternative solutions as possible. On the other hand, convergent thinking tries to narrow down the range of alternatives to the most promising candidates. Some researchers identify various steps in the process of problem solving. These steps include recognizing the problem, trying to understand its nature, identifying general criteria the solution should meet, deciding how these criteria should be prioritized, monitoring the progress, and evaluating the results.
An important distinction concerns the type of problem that is faced. For well-structured problems, it is easy to determine which steps need to be taken to solve them, but executing these steps may still be difficult. For ill-structured problems, on the other hand, it is not clear what steps need to be taken, i.e. there is no clear formula that would lead to success if followed correctly. In this case, the solution may sometimes come in a flash of insight in which the problem is suddenly seen in a new light. Another way to categorize different forms of problem solving is by distinguishing between algorithms and heuristics. An algorithm is a formal procedure in which each step is clearly defined. It guarantees success if applied correctly. The long multiplication usually taught in school is an example of an algorithm for solving the problem of multiplying big numbers. Heuristics, on the other hand, are informal procedures. They are rough rules-of-thumb that tend to bring the thinker closer to the solution but success is not guaranteed in every case even if followed correctly. Examples of heuristics are working forward and working backward. These approaches involve planning one step at a time, either starting at the beginning and moving forward or starting at the end and moving backward. So when planning a trip, one could plan the different stages of the trip from origin to destination in the chronological order of how the trip will be realized, or in the reverse order.
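The contrast between algorithms and heuristics can also be illustrated with a toy problem that is not taken from the text: making up an amount from given coin values. The exact procedure below is guaranteed to find the minimum number of coins, while the greedy rule of thumb is faster and often good enough but can miss the optimum.

    # Illustration of the algorithm/heuristic contrast with a toy problem
    # (making an amount from given coin values -- an invented example).

    def fewest_coins_exact(amount, coins):
        """Algorithm: dynamic programming, guaranteed to find the minimum."""
        best = [0] + [None] * amount
        for a in range(1, amount + 1):
            options = [best[a - c] for c in coins if c <= a and best[a - c] is not None]
            best[a] = min(options) + 1 if options else None
        return best[amount]

    def fewest_coins_greedy(amount, coins):
        """Heuristic: always take the largest coin that fits. Fast, often good,
        but not guaranteed to be optimal for every coin system."""
        count = 0
        for c in sorted(coins, reverse=True):
            count += amount // c
            amount %= c
        return count if amount == 0 else None

    print(fewest_coins_exact(6, [1, 3, 4]))   # -> 2   (3 + 3)
    print(fewest_coins_greedy(6, [1, 3, 4]))  # -> 3   (4 + 1 + 1)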
Obstacles to problem solving can arise from the thinker's failure to take certain possibilities into account by fixating on one specific course of action. There are important differences between how novices and experts solve problems. For example, experts tend to allocate more time for conceptualizing the problem and work with more complex representations whereas novices tend to devote more time to executing putative solutions.
Deliberation and decision
Deliberation is an important form of practical thinking. It aims at formulating possible courses of action and assessing their value by considering the reasons for and against them. This involves foresight to anticipate what might happen. Based on this foresight, different courses of action can be formulated in order to influence what will happen. Decisions are an important part of deliberation. They are about comparing alternative courses of action and choosing the most favorable one. Decision theory is a formal model of how ideal rational agents would make decisions. It is based on the idea that they should always choose the alternative with the highest expected value. Each alternative can lead to various possible outcomes, each of which has a different value. The expected value of an alternative consists in the sum of the values of each outcome associated with it multiplied by the probability that this outcome occurs. According to decision theory, a decision is rational if the agent chooses the alternative associated with the highest expected value, as assessed from the agent's own perspective.
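A minimal sketch of this decision rule, with invented alternatives, outcome values, and probabilities, might look as follows. The expected value of each alternative is the probability-weighted sum of its outcome values, and the rational choice is the alternative with the highest expected value.

    # Sketch of the decision rule described above: pick the alternative whose
    # expected value (sum of outcome values weighted by their probabilities)
    # is highest. The alternatives and numbers are invented for illustration.

    alternatives = {
        # alternative: list of (probability, value) pairs for its possible outcomes
        "take the umbrella":  [(0.3, 5), (0.7, 8)],      # rain / no rain
        "leave the umbrella": [(0.3, -10), (0.7, 10)],
    }

    def expected_value(outcomes):
        return sum(p * v for p, v in outcomes)

    best = max(alternatives, key=lambda a: expected_value(alternatives[a]))
    for name, outcomes in alternatives.items():
        print(name, expected_value(outcomes))
    print("rational choice:", best)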
Various theorists emphasize the practical nature of thought, i.e. that thinking is usually guided by some kind of task it aims to solve. In this sense, thinking has been compared to trial-and-error seen in animal behavior when faced with a new problem. On this view, the important difference is that this process happens inwardly as a form of simulation. This process is often much more efficient since once the solution is found in thought, only the behavior corresponding to the found solution has to be outwardly carried out and not all the others.
Episodic memory and imagination
When thinking is understood in a wide sense, it includes both episodic memory and imagination. In episodic memory, events one experienced in the past are relived. It is a form of mental time travel in which the past experience is re-experienced. But this does not constitute an exact copy of the original experience since the episodic memory involves additional aspects and information not present in the original experience. This includes both a feeling of familiarity and chronological information about the past event in relation to the present. Memory aims at representing how things actually were in the past, in contrast to imagination, which presents objects without aiming to show how things actually are or were. Because of this missing link to actuality, more freedom is involved in most forms of imagination: its contents can be freely varied, changed, and recombined to create new arrangements never experienced before. Episodic memory and imagination have in common with other forms of thought that they can arise internally without any stimulation of the sensory organs. But they are still closer to sensation than more abstract forms of thought since they present sensory contents that could, at least in principle, also be perceived.
Unconscious thought
Conscious thought is the paradigmatic form of thinking and is often the focus of the corresponding research. But it has been argued that some forms of thought also happen on the unconscious level. Unconscious thought is thought that happens in the background without being experienced. It is therefore not observed directly. Instead, its existence is usually inferred by other means. For example, when someone is faced with an important decision or a difficult problem, they may not be able to solve it straight away. But then, at a later time, the solution may suddenly flash before them even though no conscious steps of thinking were taken towards this solution in the meantime. In such cases, the cognitive labor needed to arrive at a solution is often explained in terms of unconscious thoughts. The central idea is that a cognitive transition happened and we need to posit unconscious thoughts to be able to explain how it happened.
It has been argued that conscious and unconscious thoughts differ not just concerning their relation to experience but also concerning their capacities. According to unconscious thought theorists, for example, conscious thought excels at simple problems with few variables but is outperformed by unconscious thought when complex problems with many variables are involved. This is sometimes explained through the claim that the number of items one can consciously think about at the same time is rather limited whereas unconscious thought lacks such limitations. But other researchers have rejected the claim that unconscious thought is often superior to conscious thought. Other suggestions for the difference between the two forms of thinking include that conscious thought tends to follow formal logical laws while unconscious thought relies more on associative processing and that only conscious thinking is conceptually articulated and happens through the medium of language.
In various disciplines
Phenomenology
Phenomenology is the science of the structure and contents of experience. The term "cognitive phenomenology" refers to the experiential character of thinking or what it feels like to think. Some theorists claim that there is no distinctive cognitive phenomenology. On such a view, the experience of thinking is just one form of sensory experience. According to one version, thinking just involves hearing a voice internally. According to another, there is no experience of thinking apart from the indirect effects thinking has on sensory experience. A weaker version of such an approach allows that thinking may have a distinct phenomenology but contends that thinking still depends on sensory experience because it cannot occur on its own. On this view, sensory contents constitute the foundation from which thinking may arise.
An often-cited thought experiment in favor of the existence of a distinctive cognitive phenomenology involves two persons listening to a radio broadcast in French, one who understands French and the other who does not. The idea behind this example is that both listeners hear the same sounds and therefore have the same non-cognitive experience. In order to explain the difference, a distinctive cognitive phenomenology has to be posited: only the experience of the first person has this additional cognitive character since it is accompanied by a thought that corresponds to the meaning of what is said. Other arguments for the experience of thinking focus on the direct introspective access to thinking or on the thinker's knowledge of their own thoughts.
Phenomenologists are also concerned with the characteristic features of the experience of thinking. Making a judgment is one of the prototypical forms of cognitive phenomenology. It involves epistemic agency, in which a proposition is entertained, evidence for and against it is considered, and, based on this reasoning, the proposition is either affirmed or rejected. It is sometimes argued that the experience of truth is central to thinking, i.e. that thinking aims at representing how the world is. It shares this feature with perception but differs from it in how it represents the world: without the use of sensory contents.
One of the characteristic features often ascribed to thinking and judging is that they are predicative experiences, in contrast to the pre-predicative experience found in immediate perception. On such a view, various aspects of perceptual experience resemble judgments without being judgments in the strict sense. For example, the perceptual experience of the front of a house brings with it various expectations about aspects of the house not directly seen, like the size and shape of its other sides. This process is sometimes referred to as apperception. These expectations resemble judgments and can be wrong. This would be the case when it turns out upon walking around the "house" that it is no house at all but only a front facade of a house with nothing behind it. In this case, the perceptual expectations are frustrated and the perceiver is surprised. There is disagreement as to whether these pre-predicative aspects of regular perception should be understood as a form of cognitive phenomenology involving thinking. This issue is also important for understanding the relation between thought and language. The reason for this is that the pre-predicative expectations do not depend on language, which is sometimes taken as an example for non-linguistic thought. Various theorists have argued that pre-predicative experience is more basic or fundamental since predicative experience is in some sense built on top of it and therefore depends on it.
Another way in which phenomenologists have tried to distinguish the experience of thinking from other types of experiences is in relation to empty intentions in contrast to intuitive intentions. In this context, "intention" means that some kind of object is experienced. In intuitive intentions, the object is presented through sensory contents. Empty intentions, on the other hand, present their object in a more abstract manner without the help of sensory contents. So when perceiving a sunset, it is presented through sensory contents. The same sunset can also be presented non-intuitively when merely thinking about it without the help of sensory contents. In these cases, the same properties are ascribed to objects. The difference between these modes of presentation concerns not what properties are ascribed to the presented object but how the object is presented. Because of this commonality, it is possible for representations belonging to different modes to overlap or to diverge. For example, when searching for one's glasses one may think to oneself that one left them on the kitchen table. This empty intention of the glasses lying on the kitchen table is then intuitively fulfilled when one sees them lying there upon arriving in the kitchen. This way, a perception can confirm or refute a thought depending on whether the empty intentions are later fulfilled or not.
Metaphysics
The mind–body problem concerns the explanation of the relationship that exists between minds, or mental processes, and bodily states or processes. The main aim of philosophers working in this area is to determine the nature of the mind and mental states/processes, and how—or even if—minds are affected by and can affect the body.
Human perceptual experiences depend on stimuli which arrive at one's various sensory organs from the external world and these stimuli cause changes in one's mental state, ultimately causing one to feel a sensation, which may be pleasant or unpleasant. Someone's desire for a slice of pizza, for example, will tend to cause that person to move his or her body in a specific manner and in a specific direction to obtain what he or she wants. The question, then, is how it can be possible for conscious experiences to arise out of a lump of gray matter endowed with nothing but electrochemical properties. A related problem is to explain how someone's propositional attitudes (e.g. beliefs and desires) can cause that individual's neurons to fire and his muscles to contract in exactly the correct manner. These comprise some of the puzzles that have confronted epistemologists and philosophers of mind from at least the time of René Descartes.
The above reflects a classical, functional description of how we work as cognitive, thinking systems. However the apparently irresolvable mind–body problem is said to be overcome, and bypassed, by the embodied cognition approach, with its roots in the work of Heidegger, Piaget, Vygotsky, Merleau-Ponty and the pragmatist John Dewey.
This approach states that the classical approach of separating the mind and analysing its processes is misguided: instead, we should see that the mind, actions of an embodied agent, and the environment it perceives and envisions, are all parts of a whole which determine each other. Therefore, functional analysis of the mind alone will always leave us with the mind–body problem which cannot be solved.
Psychology
Psychologists have concentrated on thinking as an intellectual exertion aimed at finding an answer to a question or the solution of a practical problem. Cognitive psychology is a branch of psychology that investigates internal mental processes such as problem solving, memory, and language; all of which are used in thinking. The school of thought arising from this approach is known as cognitivism, which is interested in how people mentally represent information processing. It had its foundations in the Gestalt psychology of Max Wertheimer, Wolfgang Köhler, and Kurt Koffka, and in the work of Jean Piaget, who provided a theory of stages/phases that describes children's cognitive development.
Cognitive psychologists use psychophysical and experimental approaches to understand, diagnose, and solve problems, concerning themselves with the mental processes which mediate between stimulus and response. They study various aspects of thinking, including the psychology of reasoning, and how people make decisions and choices, solve problems, as well as engage in creative discovery and imaginative thought. Cognitive theory contends that solutions to problems either take the form of algorithms: rules that are not necessarily understood but promise a solution, or of heuristics: rules that are understood but that do not always guarantee solutions. Cognitive science differs from cognitive psychology in that algorithms that are intended to simulate human behavior are implemented or implementable on a computer. In other instances, solutions may be found through insight, a sudden awareness of relationships.
In developmental psychology, Jean Piaget was a pioneer in the study of the development of thought from birth to maturity. In his theory of cognitive development, thought is based on actions on the environment. That is, Piaget suggests that the environment is understood through assimilations of objects in the available schemes of action and these accommodate to the objects to the extent that the available schemes fall short of the demands. As a result of this interplay between assimilation and accommodation, thought develops through a sequence of stages that differ qualitatively from each other in mode of representation and complexity of inference and understanding. That is, thought evolves from being based on perceptions and actions at the sensorimotor stage in the first two years of life to internal representations in early childhood. Subsequently, representations are gradually organized into logical structures which first operate on the concrete properties of the reality, in the stage of concrete operations, and then operate on abstract principles that organize concrete properties, in the stage of formal operations. In recent years, the Piagetian conception of thought was integrated with information processing conceptions. Thus, thought is considered as the result of mechanisms that are responsible for the representation and processing of information. In this conception, speed of processing, cognitive control, and working memory are the main functions underlying thought. In the neo-Piagetian theories of cognitive development, the development of thought is considered to come from increasing speed of processing, enhanced cognitive control, and increasing working memory.
Positive psychology emphasizes the positive aspects of human psychology as equally important as the focus on mood disorders and other negative symptoms. In Character Strengths and Virtues, Peterson and Seligman list a series of positive characteristics. One person is not expected to have every strength, nor are they meant to fully encapsulate that characteristic. The list encourages positive thought that builds on a person's strengths, rather than how to "fix" their "symptoms".
Psychoanalysis
The "id", "ego" and "super-ego" are the three parts of the "psychic apparatus" defined in Sigmund Freud's structural model of the psyche; they are the three theoretical constructs in terms of whose activity and interaction mental life is described. According to this model, the uncoordinated instinctual trends are encompassed by the "id", the organized realistic part of the psyche is the "ego", and the critical, moralizing function is the "super-ego".
For psychoanalysis, the unconscious does not include all that is not conscious, rather only what is actively repressed from conscious thought or what the person is averse to knowing consciously. In a sense, this view places the self in relation to their unconscious as an adversary, warring with itself to keep what is unconscious hidden. If a person feels pain, all they can think of is alleviating that pain, and their desires, to get rid of the pain or to enjoy something, direct the mind in what to do. For Freud, the unconscious was a repository for socially unacceptable ideas, wishes or desires, traumatic memories, and painful emotions put out of mind by the mechanism of psychological repression. However, the contents did not necessarily have to be solely negative. In the psychoanalytic view, the unconscious is a force that can only be recognized by its effects: it expresses itself in the symptom.
The collective unconscious, sometimes known as collective subconscious, is a term of analytical psychology, coined by Carl Jung. It is a part of the unconscious mind, shared by a society, a people, or all humanity, in an interconnected system that is the product of all common experiences and contains such concepts as science, religion, and morality. While Freud did not distinguish between "individual psychology" and "collective psychology", Jung distinguished the collective unconscious from the personal subconscious particular to each human being. The collective unconscious is also known as "a reservoir of the experiences of our species".
In the "Definitions" chapter of Jung's seminal work Psychological Types, under the definition of "collective" Jung references representations collectives, a term coined by Lucien Lévy-Bruhl in his 1910 book How Natives Think. Jung says this is what he describes as the collective unconscious. Freud, on the other hand, did not accept the idea of a collective unconscious.
Related concepts and theories
Laws of thought
Traditionally, the term "laws of thought" refers to three fundamental laws of logic: the law of contradiction, the law of excluded middle, and the principle of identity. These laws by themselves are not sufficient as axioms of logic but they can be seen as important precursors to the modern axiomatization of logic. The law of contradiction states that for any proposition, it is impossible that both it and its negation are true: ¬(p ∧ ¬p). According to the law of excluded middle, for any proposition, either it or its opposite is true: p ∨ ¬p. The principle of identity asserts that any object is identical to itself: a = a. There are different conceptions of how the laws of thought are to be understood. The interpretations most relevant to thinking are to understand them as prescriptive laws of how one should think or as formal laws of propositions that are true only because of their form and independent of their content or context. Metaphysical interpretations, on the other hand, see them as expressing the nature of "being as such".
While there is a very wide acceptance of these three laws among logicians, they are not universally accepted. Aristotle, for example, held that there are some cases in which the law of excluded middle is false. This concerns primarily uncertain future events. On his view, it is currently "not ... either true or false that there will be a naval battle tomorrow". Modern intuitionist logic also rejects the law of excluded middle. This rejection is based on the idea that mathematical truth depends on verification through a proof. The law fails for cases where no such proof is possible, which exist in every sufficiently strong formal system, according to Gödel's incompleteness theorems. Dialetheists, on the other hand, reject the law of contradiction by holding that some propositions are both true and false. One motivation of this position is to avoid certain paradoxes in classical logic and set theory, like the liar's paradox and Russell's paradox. One of its problems is to find a formulation that circumvents the principle of explosion, i.e. that anything follows from a contradiction.
Some formulations of the laws of thought include a fourth law: the principle of sufficient reason. It states that everything has a sufficient reason, ground, or cause. It is closely connected to the idea that everything is intelligible or can be explained in reference to its sufficient reason. According to this idea, there should always be a full explanation, at least in principle, to questions like why the sky is blue or why World War II happened. One problem for including this principle among the laws of thought is that it is a metaphysical principle, unlike the other three laws, which pertain primarily to logic.
Counterfactual thinking
Counterfactual thinking involves mental representations of non-actual situations and events, i.e. of what is "contrary to the facts". It is usually conditional: it aims at assessing what would be the case if a certain condition had obtained. In this sense, it tries to answer "What if"-questions. For example, thinking after an accident that one would be dead if one had not used the seatbelt is a form of counterfactual thinking: it assumes, contrary to the facts, that one had not used the seatbelt and tries to assess the result of this state of affairs. In this sense, counterfactual thinking is normally counterfactual only to a small degree since just a few facts are changed, like concerning the seatbelt, while most other facts are kept in place, like that one was driving, one's gender, the laws of physics, etc. When understood in the widest sense, there are forms of counterfactual thinking that do not involve anything contrary to the facts at all. This is the case, for example, when one tries to anticipate what might happen in the future if an uncertain event occurs and this event actually occurs later and brings with it the anticipated consequences. In this wider sense, the term "subjunctive conditional" is sometimes used instead of "counterfactual conditional". But the paradigmatic cases of counterfactual thinking involve alternatives to past events.
Counterfactual thinking plays an important role since we evaluate the world around us not only by what actually happened but also by what could have happened. Humans have a greater tendency to engage in counterfactual thinking after something bad happened because of some kind of action the agent performed. In this sense, many regrets are associated with counterfactual thinking in which the agent contemplates how a better outcome could have been obtained if only they had acted differently. These cases are known as upward counterfactuals, in contrast to downward counterfactuals, in which the counterfactual scenario is worse than actuality. Upward counterfactual thinking is usually experienced as unpleasant, since it presents the actual circumstances in a bad light. This contrasts with the positive emotions associated with downward counterfactual thinking. But both forms are important since it is possible to learn from them and to adjust one's behavior accordingly to get better results in the future.
Thought experiments
Thought experiments involve thinking about imaginary situations, often with the aim of investigating the possible consequences of a change to the actual sequence of events. It is a controversial issue to what extent thought experiments should be understood as actual experiments. They are experiments in the sense that a certain situation is set up and one tries to learn from this situation by understanding what follows from it. They differ from regular experiments in that imagination is used to set up the situation and counterfactual reasoning is employed to evaluate what follows from it, instead of setting it up physically and observing the consequences through perception. Counterfactual thinking, therefore, plays a central role in thought experiments.
The Chinese room argument is a famous thought experiment proposed by John Searle. It involves a person sitting inside a closed-off room, tasked with responding to messages written in Chinese. This person does not know Chinese but has a giant rule book that specifies exactly how to reply to any possible message, similar to how a computer would react to messages. The core idea of this thought experiment is that neither the person nor the computer understands Chinese. This way, Searle aims to show that computers lack a mind capable of deeper forms of understanding despite acting intelligently.
Thought experiments are employed for various purposes, for example, for entertainment, education, or as arguments for or against theories. Most discussions focus on their use as arguments. This use is found in fields like philosophy, the natural sciences, and history. It is controversial since there is a lot of disagreement concerning the epistemic status of thought experiments, i.e. how reliable they are as evidence supporting or refuting a theory. Central to the rejection of this usage is the fact that they pretend to be a source of knowledge without the need to leave one's armchair in search of any new empirical data. Defenders of thought experiments usually contend that the intuitions underlying and guiding the thought experiments are, at least in some cases, reliable. But thought experiments can also fail if they are not properly supported by intuitions or if they go beyond what the intuitions support. In the latter sense, sometimes counter thought experiments are proposed that modify the original scenario in slight ways in order to show that initial intuitions cannot survive this change. Various taxonomies of thought experiments have been suggested. They can be distinguished, for example, by whether they are successful or not, by the discipline that uses them, by their role in a theory, or by whether they accept or modify the actual laws of physics.
Critical thinking
Critical thinking is a form of thinking that is reasonable, reflective, and focused on determining what to believe or how to act. It holds itself to various standards, like clarity and rationality. In this sense, it involves not just cognitive processes trying to solve the issue at hand but at the same time meta-cognitive processes ensuring that it lives up to its own standards. This includes assessing both that the reasoning itself is sound and that the evidence it rests on is reliable. This means that logic plays an important role in critical thinking. It concerns not just formal logic, but also informal logic, specifically to avoid various informal fallacies due to vague or ambiguous expressions in natural language. No generally accepted standard definition of "critical thinking" exists but there is significant overlap between the proposed definitions in their characterization of critical thinking as careful and goal-directed. According to some versions, only the thinker's own observations and experiments are accepted as evidence in critical thinking. Some restrict it to the formation of judgments but exclude action as its goal.
A concrete everyday example of critical thinking, due to John Dewey, involves observing foam bubbles moving in a direction that is contrary to one's initial expectations. The critical thinker tries to come up with various possible explanations of this behavior and then slightly modifies the original situation in order to determine which one is the right explanation. But not all forms of cognitively valuable processes involve critical thinking. Arriving at the correct solution to a problem by blindly following the steps of an algorithm does not qualify as critical thinking. The same is true if the solution is presented to the thinker in a sudden flash of insight and accepted straight away.
Critical thinking plays an important role in education: fostering the student's ability to think critically is often seen as an important educational goal. In this sense, it is important to convey not just a set of true beliefs to the student but also the ability to draw one's own conclusions and to question pre-existing beliefs. The abilities and dispositions learned this way may profit not just the individual but also society at large. Critics of the emphasis on critical thinking in education have argued that there is no universal form of correct thinking. Instead, they contend that different subject matters rely on different standards and education should focus on imparting these subject-specific skills instead of trying to teach universal methods of thinking. Other objections are based on the idea that critical thinking and the attitude underlying it involve various unjustified biases, like egocentrism, distanced objectivity, indifference, and an overemphasis of the theoretical in contrast to the practical.
Positive thinking
Positive thinking is an important topic in positive psychology. It involves focusing one's attention on the positive aspects of one's situation and thereby withdrawing one's attention from its negative sides. This is usually seen as a global outlook that applies especially to thinking but includes other mental processes, like feeling, as well. In this sense, it is closely related to optimism. It includes expecting positive things to happen in the future. This positive outlook makes it more likely for people to seek to attain new goals. It also increases the probability of continuing to strive towards pre-existing goals that seem difficult to reach instead of just giving up.
The effects of positive thinking are not yet thoroughly researched, but some studies suggest that there is a correlation between positive thinking and well-being. For example, students and pregnant women with a positive outlook tend to be better at dealing with stressful situations. This is sometimes explained by pointing out that stress is not inherent in stressful situations but depends on the agent's interpretation of the situation. Reduced stress may therefore be found in positive thinkers because they tend to see such situations in a more positive light. But the effects also include the practical domain in that positive thinkers tend to employ healthier coping strategies when faced with difficult situations. This affects, for example, the time needed to fully recover from surgeries and the tendency to resume physical exercise afterward.
But it has been argued that whether positive thinking actually leads to positive outcomes depends on various other factors. Without these factors, it may lead to negative results. For example, the tendency of optimists to keep striving in difficult situations can backfire if the course of events is outside the agent's control. Another danger associated with positive thinking is that it may remain only on the level of unrealistic fantasies and thereby fail to make a positive practical contribution to the agent's life. Pessimism, on the other hand, may have positive effects since it can mitigate disappointments by anticipating failures.
Positive thinking is a recurrent topic in the self-help literature. Here, often the claim is made that one can significantly improve one's life by trying to think positively, even if this means fostering beliefs that are contrary to evidence. Such claims and the effectiveness of the suggested methods are controversial and have been criticized due to their lack of scientific evidence. In the New Thought movement, positive thinking figures in the law of attraction, the pseudoscientific claim that positive thoughts can directly influence the external world by attracting positive outcomes.
See also
Animal cognition
Freethought
Outline of human intelligence – topic tree presenting the traits, capacities, models, and research fields of human intelligence, and more
Outline of thought – topic tree that identifies many types of thoughts, types of thinking, aspects of thought, related fields, and more
Rethinking
References
Further reading
Bayne, Tim (21 September 2013), "Thoughts", New Scientist. 7-page feature article on the topic.
Fields, R. Douglas, "The Brain Learns in Unexpected Ways: Neuroscientists have discovered a set of unfamiliar cellular mechanisms for making fresh memories", Scientific American, vol. 322, no. 3 (March 2020), pp. 74–79. "Myelin, long considered inert insulation on axons, is now seen as making a contribution to learning by controlling the speed at which signals travel along neural wiring." (p. 79.)
Rajvanshi, Anil K. (2010), Nature of Human Thought.
Simon, Herbert, Models of Thought, Vol I, 1979; Vol II, 1989, Yale University Press.
External links
Concepts in epistemology
Concepts in metaphilosophy
Concepts in metaphysics
Concepts in the philosophy of mind
Mental content
Neuropsychological assessment
Psychological concepts
Sensory systems
Sources of knowledge
Unsolved problems in neuroscience | 0.776039 | 0.998893 | 0.775179 |
Thermodynamic free energy | In thermodynamics, the thermodynamic free energy is one of the state functions of a thermodynamic system (the others being internal energy, enthalpy, entropy, etc.). The change in the free energy is the maximum amount of work that the system can perform in a process at constant temperature, and its sign indicates whether the process is thermodynamically favorable or forbidden. Since free energy usually contains potential energy, it is not absolute but depends on the choice of a zero point. Therefore, only relative free energy values, or changes in free energy, are physically meaningful.
The free energy is the portion of any first-law energy that is available to perform thermodynamic work at constant temperature, i.e., work mediated by thermal energy. Free energy is subject to irreversible loss in the course of such work. Since first-law energy is always conserved, it is evident that free energy is an expendable, second-law kind of energy. Several free energy functions may be formulated based on system criteria. Free energy functions are Legendre transforms of the internal energy.
The Gibbs free energy is given by G = H − TS, where H is the enthalpy, T is the absolute temperature, and S is the entropy. The enthalpy is H = U + pV, where U is the internal energy, p is the pressure, and V is the volume. G is the most useful for processes involving a system at constant pressure p and temperature T, because, in addition to subsuming any entropy change due merely to heat, a change in G also excludes the p dV work needed to "make space for additional molecules" produced by various processes. Gibbs free energy change therefore equals work not associated with system expansion or compression, at constant temperature and pressure, hence its utility to solution-phase chemists, including biochemists.
The historically earlier Helmholtz free energy is defined in contrast as A = U − TS. Its change is equal to the amount of reversible work done on, or obtainable from, a system at constant T. Thus its appellation "work content", and the designation A (from the German Arbeit, "work"). Since it makes no reference to any quantities involved in work (such as p and V), the Helmholtz function is completely general: its decrease is the maximum amount of work which can be done by a system at constant temperature, and it can increase at most by the amount of work done on a system isothermally. The Helmholtz free energy has a special theoretical importance since it is proportional to the logarithm of the partition function for the canonical ensemble in statistical mechanics. (Hence its utility to physicists; and to gas-phase chemists and engineers, who do not want to ignore p dV work.)
Historically, the term 'free energy' has been used for either quantity. In physics, free energy most often refers to the Helmholtz free energy, denoted by A (or F), while in chemistry, free energy most often refers to the Gibbs free energy, denoted by G. The values of the two free energies are usually quite similar and the intended free energy function is often implicit in manuscripts and presentations.
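As a quick numerical sketch of the relations above, the snippet below evaluates H = U + pV, G = H − TS and A = U − TS for a single, made-up thermodynamic state. Every numerical value in it is an illustrative assumption rather than data from this article; the only point it makes is that G and A differ by exactly pV.

```python
# Illustrative sketch of the definitions above; all state values are
# arbitrary placeholders, not data for any particular substance.
U = 3700.0    # internal energy, J (assumed)
p = 101325.0  # pressure, Pa (1 atm)
V = 0.0248    # volume, m^3 (roughly 1 mol of ideal gas at 298 K)
T = 298.15    # absolute temperature, K
S = 150.0     # entropy, J/K (assumed)

H = U + p * V   # enthalpy
G = H - T * S   # Gibbs free energy
A = U - T * S   # Helmholtz free energy

print(f"H = {H:.0f} J, G = {G:.0f} J, A = {A:.0f} J")
print(f"G - A = {G - A:.0f} J, which equals pV = {p * V:.0f} J")
```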
Meaning of "free"
The basic definition of "energy" is a measure of a body's (in thermodynamics, the system's) ability to cause change. For example, when a person pushes a heavy box a few metres forward, that person exerts mechanical energy, also known as work, on the box over that distance. The mathematical definition of this form of energy is the product of the force exerted on the object and the distance by which the box moved. Because the person changed the stationary position of the box, that person exerted energy on that box. The work exerted can also be called "useful energy", because energy was converted from one form into the intended purpose, i.e. mechanical use. For the case of the person pushing the box, the energy in the form of internal (or potential) energy obtained through metabolism was converted into work to push the box. This energy conversion, however, was not straightforward: while some internal energy went into pushing the box, some was diverted away (lost) in the form of heat (transferred thermal energy).
For a reversible process, heat is the product of the absolute temperature T and the change in entropy ΔS of a body (entropy is a measure of disorder in a system). The difference between the change in internal energy, which is ΔU, and the energy lost in the form of heat is what is called the "useful energy" of the body, or the work of the body performed on an object. In thermodynamics, this is what is known as "free energy". In other words, free energy is a measure of work (useful energy) a system can perform at constant temperature.
Mathematically, free energy is expressed as

ΔA = ΔU − TΔS
This expression has commonly been interpreted to mean that work is extracted from the internal energy ΔU while TΔS represents energy not available to perform work. However, this is incorrect. For instance, in an isothermal expansion of an ideal gas, the internal energy change is ΔU = 0 and the expansion work w = −TΔS is derived exclusively from the TΔS term supposedly not available to perform work. But it is noteworthy that the derivative form of the free energy, dA = −p dV − S dT (for the Helmholtz free energy), does indeed indicate that a spontaneous change in a non-reactive system's free energy (NOT the internal energy) comprises the available energy to do work, −p dV (compression in this case), and the unavailable energy, −S dT. A similar expression can be written for the Gibbs free energy change.
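A minimal numerical sketch of the isothermal ideal-gas expansion just mentioned, assuming 1 mol of gas doubling its volume reversibly at 298 K; the volumes are arbitrary illustrative choices. It shows that ΔU is zero, that the work delivered by the gas equals TΔS, and that the decrease in the Helmholtz free energy matches that work.

```python
import math

# Reversible isothermal expansion of an ideal gas (illustrative numbers).
R = 8.314              # gas constant, J/(mol K)
n, T = 1.0, 298.15     # amount and temperature (assumed)
V1, V2 = 0.010, 0.020  # initial and final volume, m^3 (assumed)

w_by_gas = n * R * T * math.log(V2 / V1)  # work done by the gas on the surroundings
dU = 0.0                                  # ideal-gas internal energy depends only on T
q = w_by_gas                              # first law at constant T
dS = q / T                                # reversible heat over temperature
dA = dU - T * dS                          # Helmholtz free energy change

print(f"work done by the gas = {w_by_gas:.0f} J")
print(f"T*dS                 = {T * dS:.0f} J")
print(f"dA                   = {dA:.0f} J  (equal in size, opposite in sign)")
```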
In the 18th and 19th centuries, the theory of heat, i.e., that heat is a form of energy having relation to vibratory motion, was beginning to supplant both the caloric theory, i.e., that heat is a fluid, and the four element theory, in which heat was the lightest of the four elements. In a similar manner, during these years, heat was beginning to be distinguished into different classification categories, such as "free heat", "combined heat", "radiant heat", specific heat, heat capacity, "absolute heat", "latent caloric", "free" or "perceptible" caloric (calorique sensible), among others.
In 1780, for example, Laplace and Lavoisier stated: “In general, one can change the first hypothesis into the second by changing the words ‘free heat, combined heat, and heat released’ into ‘vis viva, loss of vis viva, and increase of vis viva.’" In this manner, the total mass of caloric in a body, called absolute heat, was regarded as a mixture of two components; the free or perceptible caloric could affect a thermometer, whereas the other component, the latent caloric, could not. The use of the words "latent heat" implied a similarity to latent heat in the more usual sense; it was regarded as chemically bound to the molecules of the body. In the adiabatic compression of a gas, the absolute heat remained constant but the observed rise in temperature implied that some latent caloric had become "free" or perceptible.
During the early 19th century, the concept of perceptible or free caloric began to be referred to as "free heat" or "heat set free". In 1824, for example, the French physicist Sadi Carnot, in his famous "Reflections on the Motive Power of Fire", speaks of quantities of heat 'absorbed or set free' in different transformations. In 1882, the German physicist and physiologist Hermann von Helmholtz coined the phrase 'free energy' for the expression E − TS, in which the change in A (or G) determines the amount of energy 'free' for work under the given conditions, specifically constant temperature.
Thus, in traditional use, the term "free" was attached to Gibbs free energy for systems at constant pressure and temperature, or to Helmholtz free energy for systems at constant temperature, to mean ‘available in the form of useful work.’ With reference to the Gibbs free energy, we need to add the qualification that it is the energy free for non-volume work and compositional changes.
An increasing number of books and journal articles do not include the attachment "free", referring to G as simply Gibbs energy (and likewise for the Helmholtz energy). This is the result of a 1988 IUPAC meeting to set unified terminologies for the international scientific community, in which the adjective ‘free’ was supposedly banished. This standard, however, has not yet been universally adopted, and many published articles and books still include the descriptive ‘free’.
Application
Just like the general concept of energy, free energy has a few definitions suitable for different conditions. In physics, chemistry, and biology, these conditions are thermodynamic parameters (temperature T, volume V, pressure p, etc.). Scientists have come up with several ways to define free energy. The mathematical expression of Helmholtz free energy is:

A = U − TS
This definition of free energy is useful for gas-phase reactions or in physics when modeling the behavior of isolated systems kept at a constant volume. For example, if a researcher wanted to perform a combustion reaction in a bomb calorimeter, the volume is kept constant throughout the course of a reaction. Therefore, the heat of the reaction is a direct measure of the internal energy change, q = ΔU. In solution chemistry, on the other hand, most chemical reactions are kept at constant pressure. Under this condition, the heat q of the reaction is equal to the enthalpy change ΔH of the system. Under constant pressure and temperature, the free energy in a reaction is known as Gibbs free energy G.
These functions have a minimum in chemical equilibrium, as long as certain variables (T, and V or p) are held constant. In addition, they also have theoretical importance in deriving Maxwell relations. Work other than pressure-volume (p dV) work may be added, e.g., electrical work for electrochemical cells, or elastic (f dx) work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic, as in adiabatic demagnetization used in the approach to absolute zero, and work due to electric polarization. These are described by tensors.
In most cases of interest there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which create entropy. Even for homogeneous "bulk" materials, the free energy functions depend on the (often suppressed) composition, as do all proper thermodynamic potentials (extensive functions), including the internal energy.
Ni is the number of molecules (alternatively, moles) of type i in the system. If these quantities do not appear, it is impossible to describe compositional changes. The differentials for processes at uniform pressure and temperature are (assuming only p dV work):

dA = −p dV − S dT + Σi μi dNi

dG = V dp − S dT + Σi μi dNi
where μi is the chemical potential for the i-th component in the system. The second relation is especially useful at constant T and p, conditions which are easy to achieve experimentally, and which approximately characterize living creatures. Under these conditions, it simplifies to

(dG)T,p = Σi μi dNi
Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings, or it may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and/or its surroundings.
An example is surface free energy, the increase in free energy per unit increase in surface area.
The path integral Monte Carlo method is a numerical approach for determining the values of free energies, based on quantum dynamical principles.
Work and free energy change
For a reversible isothermal process, ΔS = qrev/T and therefore the definition of A results in

ΔA = wrev (at constant temperature)
This tells us that the change in free energy equals the reversible or maximum work for a process performed at constant temperature. Under other conditions, free-energy change is not equal to work; for instance, for a reversible adiabatic expansion of an ideal gas, ΔA = wrev − SΔT. Importantly, for a heat engine, including the Carnot cycle, the free-energy change after a full cycle is zero, while the engine produces nonzero work. It is important to note that for heat engines and other thermal systems, the free energies do not offer convenient characterizations; internal energy and enthalpy are the preferred potentials for characterizing thermal systems.
Free energy change and spontaneous processes
According to the second law of thermodynamics, for any process that occurs in a closed system, the inequality of Clausius, ΔS > q/Tsurr, applies. For a process at constant temperature and pressure without non-PV work, this inequality transforms into ΔG < 0. Similarly, for a process at constant temperature and volume, ΔA < 0. Thus, a negative value of the change in free energy is a necessary condition for a process to be spontaneous; this is the most useful form of the second law of thermodynamics in chemistry. In chemical equilibrium at constant T and p without electrical work, dG = 0.
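The sign test described above can be made concrete with a small sketch. It evaluates ΔG = ΔH − TΔS for the melting of ice at three temperatures; the ΔH and ΔS values are rounded textbook-style figures used here only as assumptions.

```python
# Spontaneity check at constant T and p: dG = dH - T*dS (illustrative values).
dH = 6010.0  # J/mol, approximate enthalpy of fusion of ice
dS = 22.0    # J/(mol K), approximate entropy of fusion

for T in (263.15, 273.15, 283.15):  # -10 C, 0 C, +10 C
    dG = dH - T * dS
    if dG < -50:
        verdict = "spontaneous"
    elif dG > 50:
        verdict = "non-spontaneous"
    else:
        verdict = "close to equilibrium"
    print(f"T = {T:6.2f} K   dG = {dG:7.1f} J/mol   {verdict}")
```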
History
The quantity called "free energy" is a more advanced and accurate replacement for the outdated term affinity, which was used by chemists in previous years to describe the force that caused chemical reactions. The term affinity, as used in chemical relation, dates back to at least the time of Albertus Magnus.
From the 1998 textbook Modern Thermodynamics by Nobel Laureate and chemistry professor Ilya Prigogine we find: "As motion was explained by the Newtonian concept of force, chemists wanted a similar concept of ‘driving force’ for chemical change. Why do chemical reactions occur, and why do they stop at certain points? Chemists called the ‘force’ that caused chemical reactions affinity, but it lacked a clear definition."
During the entire 18th century, the dominant view with regard to heat and light was that put forth by Isaac Newton, called the Newtonian hypothesis, which states that light and heat are forms of matter attracted or repelled by other forms of matter, with forces analogous to gravitation or to chemical affinity.
In the 19th century, the French chemist Marcellin Berthelot and the Danish chemist Julius Thomsen had attempted to quantify affinity using heats of reaction. In 1875, after quantifying the heats of reaction for a large number of compounds, Berthelot proposed the principle of maximum work, in which all chemical changes occurring without intervention of outside energy tend toward the production of bodies or of a system of bodies which liberate heat.
In addition to this, in 1780 Antoine Lavoisier and Pierre-Simon Laplace laid the foundations of thermochemistry by showing that the heat given out in a reaction is equal to the heat absorbed in the reverse reaction. They also investigated the specific heat and latent heat of a number of substances, and amounts of heat given out in combustion. In a similar manner, in 1840 the Swiss-born Russian chemist Germain Hess formulated the principle that the evolution of heat in a reaction is the same whether the process is accomplished in one step or in a number of stages. This is known as Hess's law. With the advent of the mechanical theory of heat in the early 19th century, Hess's law came to be viewed as a consequence of the law of conservation of energy.
Based on these and other ideas, Berthelot and Thomsen, as well as others, considered the heat given out in the formation of a compound as a measure of the affinity, or the work done by the chemical forces. This view, however, was not entirely correct. In 1847, the English physicist James Joule showed that he could raise the temperature of water by turning a paddle wheel in it, thus showing that heat and mechanical work were equivalent or proportional to each other, i.e., that, approximately, W ∝ Q. This statement came to be known as the mechanical equivalent of heat and was a precursory form of the first law of thermodynamics.
By 1865, the German physicist Rudolf Clausius had shown that this equivalence principle needed amendment. That is, one can use the heat derived from a combustion reaction in a coal furnace to boil water, and use this heat to vaporize steam, and then use the enhanced high-pressure energy of the vaporized steam to push a piston. Thus, we might naively reason that one can entirely convert the initial combustion heat of the chemical reaction into the work of pushing the piston. Clausius showed, however, that we must take into account the work that the molecules of the working body, i.e., the water molecules in the cylinder, do on each other as they pass or transform from one step or state of the engine cycle to the next. Clausius originally called this the "transformation content" of the body, and then later changed the name to entropy. Thus, the heat used to transform the working body of molecules from one state to the next cannot be used to do external work, e.g., to push the piston. Clausius defined this transformation heat as ΔQ = TΔS.
In 1873, Willard Gibbs published A Method of Geometrical Representation of the Thermodynamic Properties of Substances by Means of Surfaces, in which he introduced the preliminary outline of the principles of his new equation able to predict or estimate the tendencies of various natural processes to ensue when bodies or systems are brought into contact. By studying the interactions of homogeneous substances in contact, i.e., bodies, being in composition part solid, part liquid, and part vapor, and by using a three-dimensional volume-entropy-internal energy graph, Gibbs was able to determine three states of equilibrium, i.e., "necessarily stable", "neutral", and "unstable", and whether or not changes will ensue. In 1876, Gibbs built on this framework by introducing the concept of chemical potential so as to take into account chemical reactions and states of bodies that are chemically different from each other. In his own words, to summarize his results in 1873, Gibbs states:
In this description, as used by Gibbs, ε refers to the internal energy of the body, η refers to the entropy of the body, and ν is the volume of the body.
Hence, in 1882, after the introduction of these arguments by Clausius and Gibbs, the German scientist Hermann von Helmholtz stated, in opposition to Berthelot and Thomsen's hypothesis that chemical affinity is a measure of the heat of reaction based on the principle of maximal work, that affinity is not the heat given out in the formation of a compound but rather it is the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (Gibbs free energy G at T = constant, P = constant or Helmholtz free energy A at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or A is the amount of energy "free" for work under the given conditions.
Up until this point, the general view had been such that: “all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish”. Over the next 60 years, the term affinity came to be replaced with the term free energy. According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Reactions by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world.
See also
Energy
Exergy
Merle Randall
Second law of thermodynamics
Superconductivity
References
Energy (physics)
State functions | 0.778716 | 0.995448 | 0.775171 |
Chemical decomposition | Chemical decomposition, or chemical breakdown, is the process or effect of simplifying a single chemical entity (normal molecule, reaction intermediate, etc.) into two or more fragments. Chemical decomposition is usually regarded and defined as the exact opposite of chemical synthesis. In short, the chemical reaction in which two or more products are formed from a single reactant is called a decomposition reaction.
The details of a decomposition process are not always well defined. Nevertheless, some activation energy is generally needed to break the involved bonds and, as such, higher temperatures generally accelerate decomposition. The net reaction can be an endothermic process, or, in the case of spontaneous decompositions, an exothermic process.
The stability of a chemical compound is eventually limited when exposed to extreme environmental conditions such as heat, radiation, humidity, or the acidity of a solvent. Because of this, chemical decomposition is often an undesired chemical reaction. However, chemical decomposition can also be desirable, such as in various waste treatment processes.
For example, this method is employed for several analytical techniques, notably mass spectrometry, traditional gravimetric analysis, and thermogravimetric analysis. Additionally, decomposition reactions are used today for a number of other reasons in the production of a wide variety of products. One of these is the explosive breakdown reaction of sodium azide (NaN3) into nitrogen gas (N2) and sodium (Na). It is this process which powers the life-saving airbags present in virtually all of today's automobiles.
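As a rough, back-of-the-envelope sketch of the airbag chemistry just mentioned (2 NaN3 → 2 Na + 3 N2), the snippet below estimates the volume of nitrogen released by a charge of sodium azide. The 60 g charge size and the use of the ideal gas law at 25 °C and 1 atm are assumptions made only for illustration.

```python
# Estimate of the N2 gas volume from the decomposition 2 NaN3 -> 2 Na + 3 N2.
R = 0.08206         # L atm / (mol K)
T, P = 298.15, 1.0  # temperature in K and pressure in atm (assumed conditions)

M_NaN3 = 22.99 + 3 * 14.01  # molar mass of sodium azide, g/mol
mass_NaN3 = 60.0            # g, assumed charge size

mol_NaN3 = mass_NaN3 / M_NaN3
mol_N2 = mol_NaN3 * 3 / 2   # stoichiometry: 3 mol N2 per 2 mol NaN3
volume_N2 = mol_N2 * R * T / P

print(f"{mass_NaN3:.0f} g NaN3 gives {mol_N2:.2f} mol N2, about {volume_N2:.0f} L of gas")
```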
Decomposition reactions can be generally classed into three categories: thermal, electrolytic, and photolytic decomposition reactions.
Reaction formula
In the breakdown of a compound into its constituent parts, the generalized reaction for chemical decomposition is:
AB → A + B (AB represents the reactant that begins the reaction, and A and B represent the products of the reaction)
An example is the electrolysis of water to the gases hydrogen and oxygen:
2 H2O(l) → 2 H2(g) + O2(g)
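The equation above can be checked mechanically by counting atoms on each side. The sketch below does this for the electrolysis of water with a deliberately simple formula parser; it handles only plain formulas of the kind used in this section and is not a general chemistry tool.

```python
import re
from collections import Counter

def count_atoms(formula, coefficient=1):
    """Count atoms in a simple formula such as 'H2O' or 'CO2'."""
    atoms = Counter()
    for element, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        atoms[element] += coefficient * (int(number) if number else 1)
    return atoms

def side_total(species):
    """Sum atom counts over a list of (coefficient, formula) pairs."""
    total = Counter()
    for coefficient, formula in species:
        total += count_atoms(formula, coefficient)
    return total

reactants = [(2, "H2O")]
products = [(2, "H2"), (1, "O2")]

print("reactants:", dict(side_total(reactants)))
print("products: ", dict(side_total(products)))
print("balanced: ", side_total(reactants) == side_total(products))
```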
Additional examples
An example of a spontaneous decomposition (one requiring no addition of an external energy source) is that of hydrogen peroxide, which slowly decomposes into water and oxygen:
2 H2O2 → 2 H2O + O2
This reaction is one of the exceptions to the endothermic nature of decomposition reactions.
Other reactions involving decomposition do require the input of external energy. This energy can be in the form of heat, radiation, electricity, or light. The latter is the reason some chemical compounds, such as many prescription medicines, are kept and stored in dark bottles, which reduce or eliminate the possibility of light reaching them and initiating decomposition.
When heated, carbonates will decompose. A notable exception is carbonic acid (H2CO3), which needs no heating. Commonly seen as the "fizz" in carbonated beverages, carbonic acid will spontaneously decompose over time into carbon dioxide and water. The reaction is written as:
H2CO3 → H2O + CO2
Other carbonates will decompose when heated to produce their corresponding metal oxide and carbon dioxide. The following equation is an example, where M represents the given metal:
MCO3 → MO + CO2
A specific example is that involving calcium carbonate:
CaCO3 → CaO + CO2
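A small worked sketch of this reaction, of the kind a thermogravimetric analysis would quantify: it computes how much mass a calcium carbonate sample loses as CO2 on complete decomposition. The 10 g sample size is an assumption, and the atomic masses are rounded values.

```python
# Mass loss for CaCO3 -> CaO + CO2 (rounded atomic masses, assumed sample size).
M_Ca, M_C, M_O = 40.08, 12.01, 16.00
M_CaCO3 = M_Ca + M_C + 3 * M_O
M_CO2 = M_C + 2 * M_O

sample_mass = 10.0  # g of CaCO3, assumed
mass_CO2_lost = sample_mass * M_CO2 / M_CaCO3
mass_CaO_left = sample_mass - mass_CO2_lost

print(f"CO2 released: {mass_CO2_lost:.2f} g ({100 * M_CO2 / M_CaCO3:.1f}% of the sample)")
print(f"CaO residue:  {mass_CaO_left:.2f} g")
```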
Metal chlorates also decompose when heated. In this type of decomposition reaction, a metal chloride and oxygen gas are the products. Here, again, M represents the metal:
2 MClO3 → 2 MCl + 3 O2
A common decomposition of a chlorate is in the reaction of potassium chlorate where oxygen is the product. This can be written as:
2 KClO3 → 2 KCl + 3 O2
See also
Analytical chemistry
Thermal decomposition
References
External links
https://quizlet.com/42968634/types-of-decomposition-reactions-flash-cards/ PDF
Biodegradation database
Inorganic chemistry
Organic chemistry
Chemical reactions | 0.782848 | 0.990169 | 0.775151 |
Phlogiston theory | The phlogiston theory, a superseded scientific theory, postulated the existence of a fire-like element dubbed phlogiston contained within combustible bodies and released during combustion. The name comes from the Ancient Greek φλογιστόν (phlogistón, "burning up"), from φλόξ (phlóx, "flame"). The idea of such a substance was first proposed in 1667 by Johann Joachim Becher and later put together more formally in 1703 by Georg Ernst Stahl. Phlogiston theory attempted to explain chemical processes such as combustion and rusting, now collectively known as oxidation. The theory was challenged by the weight increase that accompanies these processes and was abandoned before the end of the 18th century following experiments by Antoine Lavoisier in the 1770s and by other scientists. Phlogiston theory led to experiments that ultimately resulted in the identification, and naming (1777), of oxygen by Joseph Priestley and Antoine Lavoisier, respectively.
Theory
Phlogiston theory states that phlogisticated substances contain phlogiston and that they dephlogisticate when burned, releasing stored phlogiston, which is absorbed by the air. Growing plants then absorb this phlogiston, which is why air does not spontaneously combust and also why plant matter burns. This method of accounting for combustion was inverse to the oxygen theory by Antoine Lavoisier.
In general, substances that burned in the air were said to be rich in phlogiston; the fact that combustion soon ceased in an enclosed space was taken as clear-cut evidence that air had the capacity to absorb only a finite amount of phlogiston. When the air had become completely phlogisticated it would no longer serve to support the combustion of any material, nor would a metal heated in it yield a calx; nor could phlogisticated air support life. Breathing was thought to take phlogiston out of the body.
Joseph Black's Scottish student Daniel Rutherford discovered nitrogen in 1772, and the pair used the theory to explain his results. The residue of air left after burning, in fact, a mixture of nitrogen and carbon dioxide, was sometimes referred to as phlogisticated air, having taken up all of the phlogiston. Conversely, when Joseph Priestley discovered oxygen, he believed it to be dephlogisticated air, capable of combining with more phlogiston and thus supporting combustion for longer than ordinary air.
History
Empedocles had formulated the classical theory that there were four elements—water, earth, fire, and air—and Aristotle reinforced this idea by characterising them as moist, dry, hot, and cold. Fire was thus thought of as a substance, and burning was seen as a process of decomposition that applied only to compounds. Experience had shown that burning was not always accompanied by a loss of material, and a better theory was needed to account for this.
Johann Joachim Becher
In 1667, Johann Joachim Becher published his book Physica subterranea, which contained the first instance of what would become the phlogiston theory. In his book, Becher eliminated fire and air from the classical element model and replaced them with three forms of earth: terra lapidea, terra fluida, and terra pinguis. Terra pinguis was the element that imparted oily, sulphurous, or combustible properties. Becher believed that terra pinguis was a key feature of combustion and was released when combustible substances were burned. Becher did not have much to do with phlogiston theory as we know it now, but he had a large influence on his student Stahl. Becher's main contribution was the start of the theory itself, however much of it was changed after him. Becher's idea was that combustible substances contain an ignitable matter, the terra pinguis.
Georg Ernst Stahl
In 1703, Georg Ernst Stahl, a professor of medicine and chemistry at Halle, proposed a variant of the theory in which he renamed Becher's terra pinguis to phlogiston, and it was in this form that the theory probably had its greatest influence. The term 'phlogiston' itself was not something that Stahl invented. There is evidence that the word was used as early as 1606, and in a way that was very similar to what Stahl was using it for. The term was derived from a Greek word meaning to inflame. The following paragraph describes Stahl's view of phlogiston:
Stahl's first definition of phlogiston appeared in his Zymotechnia fundamentalis, published in 1697. His most quoted definition was found in his 1723 treatise on chemistry, the Fundamenta chymiae. According to Stahl, phlogiston was a substance that was not able to be put into a bottle but could be transferred nonetheless. To him, wood was just a combination of ash and phlogiston, and making a metal was as simple as getting a metal calx and adding phlogiston. Soot was almost pure phlogiston, which is why heating it with a metallic calx transforms the calx into the metal, and Stahl attempted to prove that the phlogiston in soot and in sulphur was identical by converting sulphates to liver of sulphur using charcoal. He did not account for the increase in weight on combustion of tin and lead that were known at the time.
J. H. Pott
Johann Heinrich Pott, a student of one of Stahl's students, expanded the theory and attempted to make it much more understandable to a general audience. He compared phlogiston to light or fire, saying that all three were substances whose natures were widely understood but not easily defined. He thought that phlogiston should not be considered as a particle but as an essence that permeates substances, arguing that in a pound of any substance, one could not simply pick out the particles of phlogiston. Pott also observed that when certain substances are burned they increase in mass instead of losing the mass of the phlogiston as it escapes; according to him, phlogiston was the basic fire principle and could not be obtained by itself. Flames were considered to be a mix of phlogiston and water, while a phlogiston-and-earthy mixture could not burn properly. Phlogiston permeates everything in the universe, and it could be released as heat when combined with an acid. Pott proposed the following properties:
The form of phlogiston consists of a circular movement around its axis.
When homogeneous it cannot be consumed or dissipated in a fire.
The reason it causes expansion in most bodies is unknown, but not accidental. It is proportional to the compactness of the texture of the bodies or to the intimacy of their constitution.
The increase of weight during calcination is evident only after a long time, and is due either to the fact that the particles of the body become more compact, decrease the volume and hence increase the density, as in the case of lead, or to the fact that little heavy particles of air become lodged in the substance, as in the case of powdered zinc oxide.
Air attracts the phlogiston of bodies.
When set in motion, phlogiston is the chief active principle in nature of all inanimate bodies.
It is the basis of colours.
It is the principal agent in fermentation.
Pott's formulations proposed little new theory; he merely supplied further details and rendered existing theory more approachable to the common man.
Others
Johann Juncker also created a very complete picture of phlogiston. When reading Stahl's work, he assumed that phlogiston was in fact very material. He, therefore, came to the conclusion that phlogiston has the property of levity, or that it makes the compound that it is in much lighter than it would be without the phlogiston. He also showed that air was needed for combustion by putting substances in a sealed flask and trying to burn them.
Guillaume-François Rouelle brought the theory of phlogiston to France, where he was a very influential scientist and teacher, popularizing the theory very quickly. Many of his students became very influential scientists in their own right, Lavoisier included. The French viewed phlogiston as a very subtle principle that vanishes in all analysis, yet it is in all bodies. Essentially they followed straight from Stahl's theory.
Giovanni Antonio Giobert introduced Lavoisier's work in Italy. Giobert won a prize competition from the Academy of Letters and Sciences of Mantua in 1792 for his work refuting phlogiston theory. He presented a paper in Turin on 18 March 1792, entitled (in translation) "Chemical examination of the doctrine of phlogiston and the doctrine of pneumatists in relation to the nature of water", which is considered the most original defence of Lavoisier's theory of water composition to appear in Italy.
Challenge and demise
Eventually, quantitative experiments revealed problems, including the fact that some metals gained weight after they burned, even though they were supposed to have lost phlogiston.
Some phlogiston proponents explained this by concluding that phlogiston has negative mass; others, such as Louis-Bernard Guyton de Morveau, gave the more conventional argument that it is lighter than air. However, a more detailed analysis based on Archimedes' principle and the densities of magnesium and its combustion product showed that just being lighter than air could not account for the increase in weight. Stahl himself did not address the problem of the metals that burn gaining weight, but those who followed his school of thought were the ones that worked on this problem.
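The buoyancy argument can be checked with a rough calculation. The sketch below, using rounded handbook densities and molar masses and an assumed 1 g sample, compares the true mass gain when magnesium burns to magnesium oxide (2 Mg + O2 → 2 MgO) with the change in the air-buoyancy correction; the latter turns out to be thousands of times too small to explain the gain.

```python
# Can air buoyancy explain the weight gain of burning magnesium? (rounded values)
M_Mg, M_O = 24.31, 16.00                       # g/mol
rho_Mg, rho_MgO, rho_air = 1.74, 3.58, 0.0012  # g/cm^3

mass_Mg = 1.000                           # g, assumed sample
mass_MgO = mass_Mg * (M_Mg + M_O) / M_Mg  # complete conversion to MgO
true_gain = mass_MgO - mass_Mg

# A balance reads the true mass minus the mass of air displaced by the sample.
buoyancy_Mg = rho_air * (mass_Mg / rho_Mg)
buoyancy_MgO = rho_air * (mass_MgO / rho_MgO)
apparent_gain = (mass_MgO - buoyancy_MgO) - (mass_Mg - buoyancy_Mg)

print(f"true mass gain:                {true_gain:.3f} g")
print(f"change in buoyancy correction: {buoyancy_MgO - buoyancy_Mg:+.5f} g")
print(f"gain as read on a balance:     {apparent_gain:.3f} g")
```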
During the eighteenth century, as it became clear that metals gained weight after they were oxidized, phlogiston was increasingly regarded as a principle rather than a material substance. By the end of the eighteenth century, for the few chemists who still used the term phlogiston, the concept was linked to hydrogen. Joseph Priestley, for example, in referring to the reaction of steam on iron, fully acknowledged that the iron gains weight as it binds with oxygen to form a calx, iron oxide, but held that the iron also loses "the basis of inflammable air (hydrogen), and this is the substance or principle, to which we give the name phlogiston". Following Lavoisier's description of oxygen as the oxidizing principle (hence its name, from Ancient Greek words meaning "sharp" and "birth", referring to oxygen's supposed role in the formation of acids), Priestley described phlogiston as the alkaline principle.
Phlogiston remained the dominant theory until the 1770s when Antoine-Laurent de Lavoisier showed that combustion requires a gas that has weight (specifically, oxygen) and could be measured by means of weighing closed vessels. The use of closed vessels by Lavoisier and earlier by the Russian scientist Mikhail Lomonosov also negated the buoyancy that had disguised the weight of the gases of combustion, and culminated in the principle of mass conservation. These observations solved the mass paradox and set the stage for the new oxygen theory of combustion. The British chemist Elizabeth Fulhame demonstrated through experiment that many oxidation reactions occur only in the presence of water, that they directly involve water, and that water is regenerated and is detectable at the end of the reaction. Based on her experiments, she disagreed with some of the conclusions of Lavoisier as well as with the phlogiston theorists that he critiqued. Her book on the subject appeared in print soon after Lavoisier's execution for Farm-General membership during the French Revolution.
Experienced chemists who supported Stahl's phlogiston theory attempted to respond to the challenges suggested by Lavoisier and the newer chemists. In doing so, phlogiston theory became more complicated and assumed too much, contributing to the overall demise of the theory. Many people tried to remodel their theories on phlogiston to have the theory work with what Lavoisier was doing in his experiments. Pierre Macquer reworded his theory many times, and even though he is said to have thought the theory of phlogiston was doomed, he stood by phlogiston and tried to make the theory work.
See also
References
External links
1667 introductions
1667 in science
Combustion
Obsolete theories in chemistry
Misidentified chemical elements
Obsolete theories in physics | 0.777313 | 0.997171 | 0.775113 |
Hess's law | Hess’ law of constant heat summation, also known simply as Hess' law, is a relationship in physical chemistry named after Germain Hess, a Swiss-born Russian chemist and physician who published it in 1840. The law states that the total enthalpy change during the complete course of a chemical reaction is independent of the sequence of steps taken.
Hess' law is now understood as an expression of the fact that the enthalpy of a chemical process is independent of the path taken from the initial to the final state (i.e. enthalpy is a state function). According to the first law of thermodynamics, the enthalpy change in a system due to a reaction at constant pressure is equal to the heat absorbed (or the negative of the heat released), which can be determined by calorimetry for many reactions. The values are usually stated for reactions with the same initial and final temperatures and pressures (while conditions are allowed to vary during the course of the reactions). Hess' law can be used to determine the overall energy required for a chemical reaction that can be divided into synthetic steps that are individually easier to characterize. This affords the compilation of standard enthalpies of formation, which may be used to predict the enthalpy change in complex synthesis.
Theory
Hess’ law states that the change of enthalpy in a chemical reaction is the same regardless of whether the reaction takes place in one step or several steps, provided the initial and final states of the reactants and products are the same. Enthalpy is an extensive property, meaning that its value is proportional to the system size. Because of this, the enthalpy change is proportional to the number of moles participating in a given reaction.
In other words, if a chemical change takes place by several different routes, the overall enthalpy change is the same, regardless of the route by which the chemical change occurs (provided the initial and final condition are the same). If this were not true, then one could violate the first law of thermodynamics.
Hess' law allows the enthalpy change (ΔH) for a reaction to be calculated even when it cannot be measured directly. This is accomplished by performing basic algebraic operations based on the chemical equations of reactions using previously determined values for the enthalpies of formation.
Combination of chemical equations leads to a net or overall equation. If the enthalpy changes are known for all the equations in the sequence, their sum will be the enthalpy change for the net equation. If the net enthalpy change is negative, the reaction is exothermic and is more likely to be spontaneous; positive ΔH values correspond to endothermic reactions. (Entropy also plays an important role in determining spontaneity, as some reactions with a positive enthalpy change are nevertheless spontaneous due to an entropy increase in the reaction system.)
Use of enthalpies of formation
Hess' law states that enthalpy changes are additive. Thus the value of the standard enthalpy of reaction can be calculated from standard enthalpies of formation of products and reactants as follows:

ΔH°reaction = Σ νp ΔfH°(products) − Σ νr ΔfH°(reactants)

Here, the first sum is over all products and the second over all reactants, νp and νr are the stoichiometric coefficients of products and reactants respectively, ΔfH° are the standard enthalpies of formation of products and reactants respectively, and the ° superscript indicates standard state values. This may be considered as the sum of two (real or fictitious) reactions:
Reactants → Elements (in their standard states)
and Elements → Products
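A minimal sketch of this bookkeeping in code, using the formation enthalpies that appear in the first example below (ΔfH° of CO2 = −393.5 kJ/mol and of CO = −110.5 kJ/mol, with elements in their standard states taken as zero). The function simply forms the weighted sum for products minus reactants.

```python
# Standard reaction enthalpy from standard formation enthalpies (kJ/mol).
dHf = {"C(graphite)": 0.0, "O2": 0.0, "CO": -110.5, "CO2": -393.5}

def reaction_enthalpy(reactants, products):
    """Each argument is a list of (stoichiometric coefficient, species)."""
    return (sum(nu * dHf[s] for nu, s in products)
            - sum(nu * dHf[s] for nu, s in reactants))

# C(graphite) + O2 -> CO2  (the direct step in the example below)
print(reaction_enthalpy([(1, "C(graphite)"), (1, "O2")], [(1, "CO2")]))  # -393.5

# CO + 1/2 O2 -> CO2
print(reaction_enthalpy([(1, "CO"), (0.5, "O2")], [(1, "CO2")]))         # -283.0
```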
Examples
Given:
Cgraphite + O2(g) → CO2(g) (ΔH = −393.5 kJ/mol) (direct step)
Cgraphite + 1/2 O2(g) → CO(g) (ΔH = −110.5 kJ/mol)
CO(g) + 1/2 O2(g) → CO2(g) (ΔH = −283.0 kJ/mol)
Reaction (a) is the sum of reactions (b) and (c), for which the total ΔH = −393.5 kJ/mol, which is equal to ΔH in (a).
Given:
B2O3(s) + 3 H2O(g) → 3 O2(g) + B2H6(g) (ΔH = 2035 kJ/mol)
H2O(l) → H2O(g) (ΔH = 44 kJ/mol)
H2(g) + 1/2 O2(g) → H2O(l) (ΔH = −286 kJ/mol)
2 B(s) + 3 H2(g) → B2H6(g) (ΔH = 36 kJ/mol)
Find the ΔfH of:
2 B(s) + 3/2 O2(g) → B2O3(s)
After multiplying the equations (and their enthalpy changes) by appropriate factors and reversing the direction when necessary, the result is:
B2H6(g) + 3 O2(g) → B2O3(s) + 3 H2O(g) (ΔH = 2035 × (−1) = −2035 kJ/mol)
3 H2O(g) → 3 H2O(l) (ΔH = 44 × (−3) = −132 kJ/mol)
3 H2O(l) → 3 H2(g) + (3/2) O2(g) (ΔH = −286 × (−3) = 858 kJ/mol)
2 B(s) + 3 H2(g) → B2H6(g) (ΔH = 36 kJ/mol)
Adding these equations and canceling out the common terms on both sides, we obtain
2 B(s) + 3/2 O2(g) → B2O3(s) (ΔH = −1273 kJ/mol)
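The arithmetic of this example can be reproduced with a few lines of code: each given equation is assigned the multiplier used above (negative values reverse an equation) and the scaled enthalpy changes are summed.

```python
# Reproduce the combination above; all values in kJ/mol as quoted in the example.
given = {
    "B2O3 + 3 H2O(g) -> 3 O2 + B2H6": 2035.0,
    "H2O(l) -> H2O(g)":                 44.0,
    "H2 + 1/2 O2 -> H2O(l)":          -286.0,
    "2 B + 3 H2 -> B2H6":               36.0,
}
multipliers = [-1, -3, -3, 1]  # reverse; reverse and triple; reverse and triple; keep

dH_target = sum(m * dH for m, dH in zip(multipliers, given.values()))
print(f"dH(2 B + 3/2 O2 -> B2O3) = {dH_target:.0f} kJ/mol")  # -1273
```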
Extension to free energy and entropy
The concepts of Hess' law can be expanded to include changes in entropy and in Gibbs free energy, since these are also state functions. The Bordwell thermodynamic cycle is an example of such an extension that takes advantage of easily measured equilibria and redox potentials to determine experimentally inaccessible Gibbs free energy values. Combining ΔGo values from Bordwell thermodynamic cycles and ΔHo values found with Hess’ law can be helpful in determining entropy values that have not been measured directly and therefore need to be calculated through alternative paths.
For the free energy:

ΔG°reaction = Σ νp ΔfG°(products) − Σ νr ΔfG°(reactants)
For entropy, the situation is a little different. Because entropy can be measured as an absolute value, not relative to those of the elements in their reference states (as with ΔH° and ΔG°), there is no need to use the entropy of formation; one simply uses the absolute entropies for products and reactants:

ΔS°reaction = Σ νp S°(products) − Σ νr S°(reactants)
Applications
Hess' law is useful in the determination of enthalpies of the following:
Heats of formation of unstable intermediates like CO(g) and NO(g).
Heat changes in phase transitions and allotropic transitions.
Lattice energies of ionic substances by constructing Born–Haber cycles if the electron affinity to form the anion is known, or
Electron affinities using a Born–Haber cycle with a theoretical lattice energy.
See also
Thermochemistry
Thermodynamics
References
Further reading
External links
Hess' paper (1840) on which his law is based (at ChemTeam site)
a Hess’ Law experiment
Chemical thermodynamics
Physical chemistry
Thermochemistry | 0.782745 | 0.990237 | 0.775103 |
Glycolysis | Glycolysis is the metabolic pathway that converts glucose into pyruvate and, in most organisms, occurs in the liquid part of cells (the cytosol). The free energy released in this process is used to form the high-energy molecules adenosine triphosphate (ATP) and reduced nicotinamide adenine dinucleotide (NADH). Glycolysis is a sequence of ten reactions catalyzed by enzymes.
The wide occurrence of glycolysis in other species indicates that it is an ancient metabolic pathway. Indeed, the reactions that make up glycolysis and its parallel pathway, the pentose phosphate pathway, can occur in the oxygen-free conditions of the Archean oceans even in the absence of enzymes, catalyzed by metal ions, meaning this is a plausible prebiotic pathway for abiogenesis.
The most common type of glycolysis is the Embden–Meyerhof–Parnas (EMP) pathway, which was discovered by Gustav Embden, Otto Meyerhof, and Jakub Karol Parnas. Glycolysis also refers to other pathways, such as the Entner–Doudoroff pathway and various heterofermentative and homofermentative pathways. However, the discussion here will be limited to the Embden–Meyerhof–Parnas pathway.
The glycolysis pathway can be separated into two phases:
Investment phase – wherein ATP is consumed
Yield phase – wherein more ATP is produced than originally consumed
Overview
The overall reaction of glycolysis is:

Glucose + 2 NAD+ + 2 ADP + 2 Pi → 2 Pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O
The use of symbols in this equation makes it appear unbalanced with respect to oxygen atoms, hydrogen atoms, and charges. Atom balance is maintained by the two phosphate (Pi) groups:
Each exists in the form of a hydrogen phosphate anion (HPO42−), dissociating to contribute 2 H+ overall
Each liberates an oxygen atom when it binds to an adenosine diphosphate (ADP) molecule, contributing 2O overall
Charges are balanced by the difference between ADP and ATP. In the cellular environment, all three hydroxyl groups of ADP dissociate into −O− and H+, giving ADP3−, and this ion tends to exist in an ionic bond with Mg2+, giving ADPMg−. ATP behaves identically except that it has four hydroxyl groups, giving ATPMg2−. When these differences along with the true charges on the two phosphate groups are considered together, the net charges of −4 on each side are balanced.
For simple fermentations, the metabolism of one molecule of glucose to two molecules of pyruvate has a net yield of two molecules of ATP. Most cells will then carry out further reactions to "repay" the used NAD+ and produce a final product of ethanol or lactic acid. Many bacteria use inorganic compounds as hydrogen acceptors to regenerate the NAD+.
Cells performing aerobic respiration synthesize much more ATP, but not as part of glycolysis. These further aerobic reactions use pyruvate, and NADH + H+ from glycolysis. Eukaryotic aerobic respiration produces approximately 34 additional molecules of ATP for each glucose molecule, however most of these are produced by a mechanism vastly different from the substrate-level phosphorylation in glycolysis.
The lower-energy production, per glucose, of anaerobic respiration relative to aerobic respiration, results in greater flux through the pathway under hypoxic (low-oxygen) conditions, unless alternative sources of anaerobically oxidizable substrates, such as fatty acids, are found.
History
The pathway of glycolysis as it is known today took almost 100 years to fully elucidate. The combined results of many smaller experiments were required in order to understand the intricacies of the entire pathway.
The first steps in understanding glycolysis began in the nineteenth century with the wine industry. For economic reasons, the French wine industry sought to investigate why wine sometimes turned distasteful, instead of fermenting into alcohol. French scientist Louis Pasteur researched this issue during the 1850s, and the results of his experiments began the long road to elucidating the pathway of glycolysis. His experiments showed that fermentation occurs by the action of living microorganisms, yeasts, and that yeast's glucose consumption decreased under aerobic conditions of fermentation, in comparison to anaerobic conditions (the Pasteur effect).
Insight into the component steps of glycolysis was provided by the non-cellular fermentation experiments of Eduard Buchner during the 1890s. Buchner demonstrated that the conversion of glucose to ethanol was possible using a non-living extract of yeast, due to the action of enzymes in the extract. This experiment not only revolutionized biochemistry, but also allowed later scientists to analyze this pathway in a more controlled laboratory setting. In a series of experiments (1905–1911), scientists Arthur Harden and William Young discovered more pieces of glycolysis. They discovered the regulatory effects of ATP on glucose consumption during alcohol fermentation. They also shed light on the role of one compound as a glycolysis intermediate: fructose 1,6-bisphosphate.
The elucidation of fructose 1,6-bisphosphate was accomplished by measuring CO2 levels when yeast juice was incubated with glucose. CO2 production increased rapidly, then slowed down. Harden and Young noted that this process would restart if an inorganic phosphate (Pi) was added to the mixture. Harden and Young deduced that this process produced organic phosphate esters, and further experiments allowed them to extract fructose diphosphate (F-1,6-DP).
Arthur Harden and William Young, along with Nick Sheppard, determined in a second experiment that a heat-sensitive high-molecular-weight subcellular fraction (the enzymes) and a heat-insensitive low-molecular-weight cytoplasm fraction (ADP, ATP and NAD+ and other cofactors) are required together for fermentation to proceed. This experiment began by observing that dialyzed (purified) yeast juice could not ferment or even create a sugar phosphate. This mixture was rescued with the addition of undialyzed yeast extract that had been boiled. Boiling the yeast extract renders all proteins inactive (as it denatures them). The ability of boiled extract plus dialyzed juice to complete fermentation suggests that the cofactors were non-protein in character.
In the 1920s Otto Meyerhof was able to link together some of the many individual pieces of glycolysis discovered by Buchner, Harden, and Young. Meyerhof and his team were able to extract different glycolytic enzymes from muscle tissue, and combine them to artificially create the pathway from glycogen to lactic acid.
In one paper, Meyerhof and scientist Renate Junowicz-Kockolaty investigated the reaction that splits fructose 1,6-diphosphate into the two triose phosphates. Previous work proposed that the split occurred via 1,3-diphosphoglyceraldehyde plus an oxidizing enzyme and cozymase. Meyerhof and Junowicz found that the equilibrium constants for the isomerase and aldolase reactions were not affected by inorganic phosphates or any other cozymase or oxidizing enzymes. They further removed diphosphoglyceraldehyde as a possible intermediate in glycolysis.
With all of these pieces available by the 1930s, Gustav Embden proposed a detailed, step-by-step outline of that pathway we now know as glycolysis. The biggest difficulties in determining the intricacies of the pathway were due to the very short lifetime and low steady-state concentrations of the intermediates of the fast glycolytic reactions. By the 1940s, Meyerhof, Embden and many other biochemists had finally completed the puzzle of glycolysis. The understanding of the isolated pathway has been expanded in the subsequent decades, to include further details of its regulation and integration with other metabolic pathways.
Sequence of reactions
Summary of reactions
Preparatory phase
The first five steps of glycolysis are regarded as the preparatory (or investment) phase, since they consume energy to convert the glucose into two three-carbon sugar phosphates (G3P).
Once glucose enters the cell, the first step is phosphorylation of glucose by a family of enzymes called hexokinases to form glucose 6-phosphate (G6P). This reaction consumes ATP, but it acts to keep the glucose concentration inside the cell low, promoting continuous transport of blood glucose into the cell through the plasma membrane transporters. In addition, phosphorylation blocks the glucose from leaking out – the cell lacks transporters for G6P, and free diffusion out of the cell is prevented due to the charged nature of G6P. Glucose may alternatively be formed from the phosphorolysis or hydrolysis of intracellular starch or glycogen.
In animals, an isozyme of hexokinase called glucokinase is also used in the liver, which has a much lower affinity for glucose (Km in the vicinity of normal glycemia), and differs in regulatory properties. The different substrate affinity and alternate regulation of this enzyme are a reflection of the role of the liver in maintaining blood sugar levels.
Cofactors: Mg2+
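The affinity difference between hexokinase and liver glucokinase described above can be illustrated with a simple Michaelis-Menten sketch. The Km values below (about 0.1 mM for hexokinase and about 8 mM for glucokinase) are typical textbook figures used as assumptions, and both enzymes are normalised to the same Vmax; the point is only that glucokinase activity keeps tracking glucose around normal blood concentrations while hexokinase is already saturated.

```python
def mm_rate(substrate_mM, Km_mM, Vmax=1.0):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return Vmax * substrate_mM / (Km_mM + substrate_mM)

Km_hexokinase = 0.1   # mM, assumed typical value
Km_glucokinase = 8.0  # mM, assumed typical value (near normal glycemia, ~5 mM)

for glucose_mM in (1.0, 5.0, 10.0):
    v_hk = mm_rate(glucose_mM, Km_hexokinase)
    v_gk = mm_rate(glucose_mM, Km_glucokinase)
    print(f"[glucose] = {glucose_mM:4.1f} mM   hexokinase {v_hk:.2f} Vmax   glucokinase {v_gk:.2f} Vmax")
```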
G6P is then rearranged into fructose 6-phosphate (F6P) by glucose phosphate isomerase. Fructose can also enter the glycolytic pathway by phosphorylation at this point.
The change in structure is an isomerization, in which the G6P has been converted to F6P. The reaction requires an enzyme, phosphoglucose isomerase, to proceed. This reaction is freely reversible under normal cell conditions. However, it is often driven forward because of a low concentration of F6P, which is constantly consumed during the next step of glycolysis. Under conditions of high F6P concentration, this reaction readily runs in reverse. This phenomenon can be explained through Le Chatelier's Principle. Isomerization to a keto sugar is necessary for carbanion stabilization in the fourth reaction step (below).
The energy expenditure of another ATP in this step is justified in 2 ways: The glycolytic process (up to this step) becomes irreversible, and the energy supplied destabilizes the molecule. Because the reaction catalyzed by phosphofructokinase 1 (PFK-1) is coupled to the hydrolysis of ATP (an energetically favorable step) it is, in essence, irreversible, and a different pathway must be used to do the reverse conversion during gluconeogenesis. This makes the reaction a key regulatory point (see below).
Furthermore, the second phosphorylation event is necessary to allow the formation of two charged groups (rather than only one) in the subsequent step of glycolysis, ensuring the prevention of free diffusion of substrates out of the cell.
The same reaction can also be catalyzed by pyrophosphate-dependent phosphofructokinase (PFP or PPi-PFK), which is found in most plants, some bacteria, archaea, and protists, but not in animals. This enzyme uses pyrophosphate (PPi) as a phosphate donor instead of ATP. It is a reversible reaction, increasing the flexibility of glycolytic metabolism. A rarer ADP-dependent PFK enzyme variant has been identified in archaeal species.
Cofactors: Mg2+
Destabilizing the molecule in the previous reaction allows the hexose ring to be split by aldolase into two triose sugars: dihydroxyacetone phosphate (a ketose), and glyceraldehyde 3-phosphate (an aldose). There are two classes of aldolases: class I aldolases, present in animals and plants, and class II aldolases, present in fungi and bacteria; the two classes use different mechanisms in cleaving the ketose ring.
Electrons delocalized in the carbon-carbon bond cleavage associate with the alcohol group. The resulting carbanion is stabilized by the structure of the carbanion itself via resonance charge distribution and by the presence of a charged ion prosthetic group.
Triosephosphate isomerase rapidly interconverts dihydroxyacetone phosphate with glyceraldehyde 3-phosphate (GADP) that proceeds further into glycolysis. This is advantageous, as it directs dihydroxyacetone phosphate down the same pathway as glyceraldehyde 3-phosphate, simplifying regulation.
Pay-off phase
The second half of glycolysis is known as the pay-off phase, characterised by a net gain of the energy-rich molecules ATP and NADH. Since glucose leads to two triose sugars in the preparatory phase, each reaction in the pay-off phase occurs twice per glucose molecule. This yields 2 NADH molecules and 4 ATP molecules, leading to a net gain of 2 NADH molecules and 2 ATP molecules from the glycolytic pathway per glucose.
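The tally just described can be written down as a small bookkeeping sketch, counting ATP and NADH per molecule of glucose through the two phases; the step list is a simplified summary, not a kinetic model.

```python
# ATP/NADH bookkeeping per glucose for the Embden-Meyerhof-Parnas pathway.
steps = [
    # (step, ATP change, NADH change)
    ("hexokinase (glucose -> G6P)",            -1, 0),
    ("phosphofructokinase-1 (F6P -> F1,6BP)",  -1, 0),
    ("GAPDH (2x G3P -> 2x 1,3-BPG)",            0, 2),
    ("phosphoglycerate kinase (2x)",            2, 0),
    ("pyruvate kinase (2x)",                    2, 0),
]

net_atp = sum(atp for _, atp, _ in steps)
net_nadh = sum(nadh for _, _, nadh in steps)
print(f"ATP consumed: 2, ATP produced: 4, net ATP: {net_atp:+d}")
print(f"net NADH: {net_nadh:+d}")
```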
The aldehyde groups of the triose sugars are oxidised, and inorganic phosphate is added to them, forming 1,3-bisphosphoglycerate.
The hydrogen is used to reduce two molecules of NAD+, a hydrogen carrier, to give NADH + H+ for each triose.
Hydrogen atom balance and charge balance are both maintained because the phosphate (Pi) group actually exists in the form of a hydrogen phosphate anion, which dissociates to contribute the extra H+ ion and gives a net charge of -3 on both sides.
Here, arsenate, an anion akin to inorganic phosphate, may replace phosphate as a substrate to form 1-arseno-3-phosphoglycerate. This, however, is unstable and readily hydrolyzes to form 3-phosphoglycerate, the intermediate in the next step of the pathway. As a consequence of bypassing this step, the molecule of ATP generated from 1,3-bisphosphoglycerate in the next reaction will not be made, even though the reaction proceeds. As a result, arsenate is an uncoupler of glycolysis.
This step is the enzymatic transfer of a phosphate group from 1,3-bisphosphoglycerate to ADP by phosphoglycerate kinase, forming ATP and 3-phosphoglycerate. At this step, glycolysis has reached the break-even point: 2 molecules of ATP were consumed, and 2 new molecules have now been synthesized. This step, one of the two substrate-level phosphorylation steps, requires ADP; thus, when the cell has plenty of ATP (and little ADP), this reaction does not occur. Because ATP decays relatively quickly when it is not metabolized, this is an important regulatory point in the glycolytic pathway.
ADP actually exists as ADPMg−, and ATP as ATPMg2−, balancing the charges at −5 both sides.
Cofactors: Mg2+
Phosphoglycerate mutase isomerises 3-phosphoglycerate into 2-phosphoglycerate.
Enolase next converts 2-phosphoglycerate to phosphoenolpyruvate. This reaction is an elimination reaction involving an E1cB mechanism.
Cofactors: 2 Mg2+, one "conformational" ion to coordinate with the carboxylate group of the substrate, and one "catalytic" ion that participates in the dehydration.
A final substrate-level phosphorylation now forms a molecule of pyruvate and a molecule of ATP by means of the enzyme pyruvate kinase. This serves as an additional regulatory step, similar to the phosphoglycerate kinase step.
Cofactors: Mg2+
Biochemical logic
The existence of more than one point of regulation indicates that intermediates between those points enter and leave the glycolysis pathway by other processes. For example, in the first regulated step, hexokinase converts glucose into glucose-6-phosphate. Instead of continuing through the glycolysis pathway, this intermediate can be converted into glucose storage molecules, such as glycogen or starch. The reverse reaction, breaking down, e.g., glycogen, produces mainly glucose-6-phosphate; very little free glucose is formed in the reaction. The glucose-6-phosphate so produced can enter glycolysis after the first control point.
In the second regulated step (the third step of glycolysis), phosphofructokinase converts fructose-6-phosphate into fructose-1,6-bisphosphate, which then is converted into glyceraldehyde-3-phosphate and dihydroxyacetone phosphate. The dihydroxyacetone phosphate can be removed from glycolysis by conversion into glycerol-3-phosphate, which can be used to form triglycerides. Conversely, triglycerides can be broken down into fatty acids and glycerol; the latter, in turn, can be converted into dihydroxyacetone phosphate, which can enter glycolysis after the second control point.
Free energy changes
The change in free energy, ΔG, for each step in the glycolysis pathway can be calculated using ΔG = ΔG°′ + RT ln Q, where Q is the reaction quotient. This requires knowing the concentrations of the metabolites. All of these values are available for erythrocytes, with the exception of the concentrations of NAD+ and NADH. The ratio of NAD+ to NADH in the cytoplasm is approximately 1000, which makes the oxidation of glyceraldehyde-3-phosphate (step 6) more favourable.
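As a worked illustration of this formula, the Python sketch below evaluates ΔG = ΔG°′ + RT ln Q for a single hypothetical step; the ΔG°′ value and the reaction quotient are placeholder numbers chosen only to show the calculation, not measured erythrocyte values.

import math

R = 8.314e-3    # gas constant in kJ/(mol·K)
T = 310.0       # K, roughly body temperature

def delta_g(delta_g0_prime, q):
    # Actual free energy change from the standard value and the reaction quotient Q.
    return delta_g0_prime + R * T * math.log(q)

# Hypothetical step: ΔG°′ = +1.7 kJ/mol, with Q held at 0.05 by downstream
# consumption of the product.
print(delta_g(1.7, 0.05))    # about -6.0 kJ/mol: favourable under these conditions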
Using the measured metabolite concentrations at each step, together with the standard free energy changes, the actual free energy change can be calculated. (Neglecting this distinction is very common; for example, the ΔG of ATP hydrolysis in cells is not the standard free energy change of ATP hydrolysis quoted in textbooks.)
From measuring the physiological concentrations of metabolites in an erythrocyte it seems that about seven of the steps in glycolysis are in equilibrium for that cell type. Three of the steps—the ones with large negative free energy changes—are not in equilibrium and are referred to as irreversible; such steps are often subject to regulation.
Step 5 is set apart from the other steps because it is a side-reaction that can decrease or increase the concentration of the intermediate glyceraldehyde-3-phosphate. That compound is converted to dihydroxyacetone phosphate by the enzyme triose phosphate isomerase, which is a catalytically perfect enzyme; its rate is so fast that the reaction can be assumed to be in equilibrium. The fact that ΔG is not zero indicates that the actual concentrations in the erythrocyte are not accurately known.
Regulation
The enzymes that catalyse glycolysis are regulated via a range of biological mechanisms in order to control overall flux through the pathway. This is vital both for homeostasis in a static environment and for metabolic adaptation to a changing environment or need. The details of regulation for some enzymes are highly conserved between species, whereas others vary widely.
Gene Expression: Firstly, the cellular concentrations of glycolytic enzymes are modulated through the regulation of gene expression by transcription factors, with several glycolytic enzymes themselves acting as regulatory protein kinases in the nucleus.
Allosteric inhibition and activation by metabolites: In particular end-product inhibition of regulated enzymes by metabolites such as ATP serves as negative feedback regulation of the pathway.
Allosteric inhibition and activation by Protein-protein interactions (PPI). Indeed, some proteins interact with and regulate multiple glycolytic enzymes.
Post-translational modification (PTM). In particular, phosphorylation and dephosphorylation is a key mechanism of regulation of pyruvate kinase in the liver.
Localization
Regulation by insulin in animals
In animals, regulation of blood glucose levels by the pancreas in conjunction with the liver is a vital part of homeostasis. The beta cells in the pancreatic islets are sensitive to the blood glucose concentration. A rise in the blood glucose concentration causes them to release insulin into the blood, which has an effect particularly on the liver, but also on fat and muscle cells, causing these tissues to remove glucose from the blood. When the blood sugar falls the pancreatic beta cells cease insulin production, but, instead, stimulate the neighboring pancreatic alpha cells to release glucagon into the blood. This, in turn, causes the liver to release glucose into the blood by breaking down stored glycogen, and by means of gluconeogenesis. If the fall in the blood glucose level is particularly rapid or severe, other glucose sensors cause the release of epinephrine from the adrenal glands into the blood. This has the same action as glucagon on glucose metabolism, but its effect is more pronounced. In the liver glucagon and epinephrine cause the phosphorylation of the key, regulated enzymes of glycolysis, fatty acid synthesis, cholesterol synthesis, gluconeogenesis, and glycogenolysis. Insulin has the opposite effect on these enzymes. The phosphorylation and dephosphorylation of these enzymes (ultimately in response to the glucose level in the blood) is the dominant manner by which these pathways are controlled in the liver, fat, and muscle cells. Thus the phosphorylation of phosphofructokinase inhibits glycolysis, whereas its dephosphorylation through the action of insulin stimulates glycolysis.
Regulated Enzymes in Glycolysis
The three regulatory enzymes are hexokinase (or glucokinase in the liver), phosphofructokinase, and pyruvate kinase. The flux through the glycolytic pathway is adjusted in response to conditions both inside and outside the cell. The internal factors that regulate glycolysis do so primarily to provide ATP in adequate quantities for the cell's needs. The external factors act primarily on the liver, fat tissue, and muscles, which can remove large quantities of glucose from the blood after meals (thus preventing hyperglycemia by storing the excess glucose as fat or glycogen, depending on the tissue type). The liver is also capable of releasing glucose into the blood between meals, during fasting, and during exercise, thus preventing hypoglycemia by means of glycogenolysis and gluconeogenesis. These latter reactions coincide with the halting of glycolysis in the liver.
In addition, hexokinase and glucokinase act independently of the hormonal effects as controls at the entry points of glucose into the cells of different tissues. Hexokinase responds to the glucose-6-phosphate (G6P) level in the cell, while glucokinase responds to the sugar level in the blood, imparting entirely intracellular controls of the glycolytic pathway in different tissues (see below).
When glucose has been converted into G6P by hexokinase or glucokinase, it can either be converted to glucose-1-phosphate (G1P) for conversion to glycogen, or it is alternatively converted by glycolysis to pyruvate, which enters the mitochondrion where it is converted into acetyl-CoA and then into citrate. Excess citrate is exported from the mitochondrion back into the cytosol, where ATP citrate lyase regenerates acetyl-CoA and oxaloacetate (OAA). The acetyl-CoA is then used for fatty acid synthesis and cholesterol synthesis, two important ways of utilizing excess glucose when its concentration is high in blood. The regulated enzymes catalyzing these reactions perform these functions when they have been dephosphorylated through the action of insulin on the liver cells. Between meals, during fasting, exercise or hypoglycemia, glucagon and epinephrine are released into the blood. This causes liver glycogen to be converted back to G6P, and then converted to glucose by the liver-specific enzyme glucose 6-phosphatase and released into the blood. Glucagon and epinephrine also stimulate gluconeogenesis, which converts non-carbohydrate substrates into G6P, which joins the G6P derived from glycogen, or substitutes for it when the liver glycogen stores have been depleted. This is critical for brain function, since the brain utilizes glucose as an energy source under most conditions. The simultaneous phosphorylation of, particularly, phosphofructokinase, but also, to a certain extent, pyruvate kinase, prevents glycolysis from occurring at the same time as gluconeogenesis and glycogenolysis.
Hexokinase and glucokinase
All cells contain the enzyme hexokinase, which catalyzes the conversion of glucose that has entered the cell into glucose-6-phosphate (G6P). Since the cell membrane is impervious to G6P, hexokinase essentially acts to transport glucose into the cells from which it can then no longer escape. Hexokinase is inhibited by high levels of G6P in the cell. Thus the rate of entry of glucose into cells partially depends on how fast G6P can be disposed of by glycolysis, and by glycogen synthesis (in the cells which store glycogen, namely liver and muscles).
Glucokinase, unlike hexokinase, is not inhibited by G6P. It occurs in liver cells, and phosphorylates the glucose entering the cell to form G6P only when glucose in the blood is abundant. As the first step in the glycolytic pathway in the liver, it therefore imparts an additional layer of control of the glycolytic pathway in this organ.
Phosphofructokinase
Phosphofructokinase is an important control point in the glycolytic pathway, since it is one of the irreversible steps and has key allosteric effectors, AMP and fructose 2,6-bisphosphate (F2,6BP).
F2,6BP is a very potent activator of phosphofructokinase (PFK-1) that is synthesized when F6P is phosphorylated by a second phosphofructokinase (PFK2). In the liver, when blood sugar is low and glucagon elevates cAMP, PFK2 is phosphorylated by protein kinase A. The phosphorylation inactivates PFK2, and another domain on this protein becomes active as fructose bisphosphatase-2, which converts F2,6BP back to F6P. Both glucagon and epinephrine cause high levels of cAMP in the liver. The result of lower levels of liver F2,6BP is a decrease in activity of phosphofructokinase and an increase in activity of fructose 1,6-bisphosphatase, so that gluconeogenesis (in essence, "glycolysis in reverse") is favored. This is consistent with the role of the liver in such situations, since the response of the liver to these hormones is to release glucose to the blood.
ATP competes with AMP for the allosteric effector site on the PFK enzyme. ATP concentrations in cells are much higher than those of AMP, typically 100-fold higher, but the concentration of ATP does not change more than about 10% under physiological conditions, whereas a 10% drop in ATP results in a 6-fold increase in AMP. Thus, the relevance of ATP as an allosteric effector is questionable. An increase in AMP is a consequence of a decrease in energy charge in the cell.
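The amplification of the AMP signal can be illustrated numerically. The Python sketch below assumes the adenylate kinase equilibrium 2 ADP ⇌ ATP + AMP with an equilibrium constant taken as 1 and a fixed total adenine nucleotide pool; the starting concentrations are hypothetical, and the exact fold-change depends on them, but the qualitative result, a several-fold rise in AMP for a 10% fall in ATP, follows directly.

import math

K_EQ = 1.0              # adenylate kinase equilibrium constant [ATP][AMP]/[ADP]^2, taken as 1
atp0, adp0 = 5.0, 0.5   # hypothetical starting concentrations in mM
amp0 = K_EQ * adp0**2 / atp0        # 0.05 mM, i.e. roughly 100-fold below ATP
pool = atp0 + adp0 + amp0           # total adenine nucleotide pool, held fixed

def re_equilibrate(atp):
    # Split the remainder of the pool between ADP and AMP so that the
    # adenylate kinase equilibrium is restored at the new ATP level.
    rest = pool - atp                           # ADP + AMP
    # AMP = K_EQ*ADP**2/ATP, so (K_EQ/ATP)*ADP**2 + ADP - rest = 0
    a, b, c = K_EQ / atp, 1.0, -rest
    adp = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return adp, rest - adp

atp1 = 0.9 * atp0                               # a 10% drop in ATP
adp1, amp1 = re_equilibrate(atp1)
print(f"AMP rises {amp1 / amp0:.1f}-fold for a 10% fall in ATP")   # about 3.4-fold with these numbers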
Citrate inhibits phosphofructokinase when tested in vitro by enhancing the inhibitory effect of ATP. However, it is doubtful that this is a meaningful effect in vivo, because citrate in the cytosol is utilized mainly for conversion to acetyl-CoA for fatty acid and cholesterol synthesis.
TIGAR, a p53-induced enzyme, is responsible for the regulation of phosphofructokinase and acts to protect against oxidative stress. TIGAR is a single enzyme with dual function that regulates F2,6BP. It can behave as a phosphatase (fructose-2,6-bisphosphatase) which cleaves the phosphate at carbon-2 producing F6P. It can also behave as a kinase (PFK2) adding a phosphate onto carbon-2 of F6P which produces F2,6BP. In humans, the TIGAR protein is encoded by the C12orf5 gene. The TIGAR enzyme will hinder the forward progression of glycolysis by creating a build-up of fructose-6-phosphate (F6P) which is isomerized into glucose-6-phosphate (G6P). The accumulation of G6P will shunt carbons into the pentose phosphate pathway.
Pyruvate kinase
The final step of glycolysis is catalysed by pyruvate kinase to form pyruvate and another ATP. It is regulated by a range of different transcriptional, covalent and non-covalent regulation mechanisms, which can vary widely in different tissues. For example, in the liver, pyruvate kinase is regulated based on glucose availability. During fasting (no glucose available), glucagon activates protein kinase A which phosphorylates pyruvate kinase to inhibit it. An increase in blood sugar leads to secretion of insulin, which activates protein phosphatase 1, leading to dephosphorylation and re-activation of pyruvate kinase. These controls prevent pyruvate kinase from being active at the same time as the enzymes that catalyze the reverse reaction (pyruvate carboxylase and phosphoenolpyruvate carboxykinase), preventing a futile cycle. Conversely, the isoform of pyruvate kinase found in muscle is not affected by protein kinase A (which is activated by adrenaline in that tissue), so that glycolysis remains active in muscles even during fasting.
Post-glycolysis processes
The overall process of glycolysis is:
Glucose + 2 NAD+ + 2 ADP + 2 Pi → 2 Pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O
If glycolysis were to continue indefinitely, all of the NAD+ would be used up, and glycolysis would stop. To allow glycolysis to continue, organisms must be able to oxidize NADH back to NAD+. How this is performed depends on which external electron acceptor is available.
Anoxic regeneration of NAD+
One method of doing this is to simply have the pyruvate do the oxidation; in this process, pyruvate is converted to lactate (the conjugate base of lactic acid) in a process called lactic acid fermentation:
Pyruvate + NADH + H+ → Lactate + NAD+
This process occurs in the bacteria involved in making yogurt (the lactic acid causes the milk to curdle). This process also occurs in animals under hypoxic (or partially anaerobic) conditions, found, for example, in overworked muscles that are starved of oxygen. In many tissues, this is a cellular last resort for energy; most animal tissue cannot tolerate anaerobic conditions for an extended period of time.
Some organisms, such as yeast, convert NADH back to NAD+ in a process called ethanol fermentation. In this process, the pyruvate is converted first to acetaldehyde and carbon dioxide, and then to ethanol.
Lactic acid fermentation and ethanol fermentation can occur in the absence of oxygen. This anaerobic fermentation allows many single-cell organisms to use glycolysis as their only energy source.
Anoxic regeneration of NAD+ is only an effective means of energy production during short, intense exercise in vertebrates, for a period ranging from 10 seconds to 2 minutes during a maximal effort in humans. (At lower exercise intensities it can sustain muscle activity in diving animals, such as seals, whales and other aquatic vertebrates, for very much longer periods of time.) Under these conditions NAD+ is replenished by NADH donating its electrons to pyruvate to form lactate. This produces 2 ATP molecules per glucose molecule, or about 5% of glucose's energy potential (38 ATP molecules in bacteria). But the speed at which ATP is produced in this manner is about 100 times that of oxidative phosphorylation. The pH in the cytoplasm quickly drops when hydrogen ions accumulate in the muscle, eventually inhibiting the enzymes involved in glycolysis.
The burning sensation in muscles during hard exercise can be attributed to the release of hydrogen ions during the shift to glucose fermentation from glucose oxidation to carbon dioxide and water, when aerobic metabolism can no longer keep pace with the energy demands of the muscles. These hydrogen ions form a part of lactic acid. The body falls back on this less efficient but faster method of producing ATP under low oxygen conditions. This is thought to have been the primary means of energy production in earlier organisms before oxygen reached high concentrations in the atmosphere between 2000 and 2500 million years ago, and thus would represent a more ancient form of energy production than the aerobic replenishment of NAD+ in cells.
The liver in mammals gets rid of this excess lactate by transforming it back into pyruvate under aerobic conditions; see Cori cycle.
Fermentation of pyruvate to lactate is sometimes also called "anaerobic glycolysis"; however, glycolysis ends with the production of pyruvate regardless of the presence or absence of oxygen.
In the above two examples of fermentation, NADH is oxidized by transferring two electrons to pyruvate. However, anaerobic bacteria use a wide variety of compounds as the terminal electron acceptors in cellular respiration: nitrogenous compounds, such as nitrates and nitrites; sulfur compounds, such as sulfates, sulfites, sulfur dioxide, and elemental sulfur; carbon dioxide; iron compounds; manganese compounds; cobalt compounds; and uranium compounds.
Aerobic regeneration of NAD+ and further catabolism of pyruvate
In aerobic eukaryotes, a complex mechanism has developed to use the oxygen in air as the final electron acceptor, in a process called oxidative phosphorylation. Aerobic prokaryotes, which lack mitochondria, use a variety of simpler mechanisms.
Firstly, the NADH + H+ generated by glycolysis has to be transferred to the mitochondrion to be oxidized, and thus to regenerate the NAD+ necessary for glycolysis to continue. However, the inner mitochondrial membrane is impermeable to NADH and NAD+. Use is therefore made of two "shuttles" to transport the electrons from NADH across the mitochondrial membrane. They are the malate-aspartate shuttle and the glycerol phosphate shuttle. In the former the electrons from NADH are transferred to cytosolic oxaloacetate to form malate. The malate then traverses the inner mitochondrial membrane into the mitochondrial matrix, where it is reoxidized by NAD+ forming intra-mitochondrial oxaloacetate and NADH. The oxaloacetate is then re-cycled to the cytosol via its conversion to aspartate which is readily transported out of the mitochondrion. In the glycerol phosphate shuttle electrons from cytosolic NADH are transferred to dihydroxyacetone phosphate to form glycerol-3-phosphate which readily traverses the outer mitochondrial membrane. Glycerol-3-phosphate is then reoxidized to dihydroxyacetone phosphate, donating its electrons to FAD instead of NAD+. This reaction takes place on the inner mitochondrial membrane, allowing FADH2 to donate its electrons directly to coenzyme Q (ubiquinone) which is part of the electron transport chain which ultimately transfers electrons to molecular oxygen (O2), with the formation of water, and the release of energy eventually captured in the form of ATP.
The glycolytic end-product, pyruvate (plus NAD+), is converted to acetyl-CoA, CO2, and NADH + H+ within the mitochondria in a process called pyruvate decarboxylation.
The resulting acetyl-CoA enters the citric acid cycle (or Krebs Cycle), where the acetyl group of the acetyl-CoA is converted into carbon dioxide by two decarboxylation reactions with the formation of yet more intra-mitochondrial NADH + H+.
The intra-mitochondrial NADH + H+ is oxidized to NAD+ by the electron transport chain, using oxygen as the final electron acceptor to form water. The energy released during this process is used to create a hydrogen ion (or proton) gradient across the inner membrane of the mitochondrion.
Finally, the proton gradient is used to produce about 2.5 ATP for every NADH + H+ oxidized in a process called oxidative phosphorylation.
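These conversion factors allow a rough overall tally. The Python sketch below estimates the aerobic ATP yield per glucose under the commonly quoted approximations of about 2.5 ATP per NADH and 1.5 ATP per FADH2; real yields vary with the shuttle used and with proton leak, so the figure of roughly 30–32 ATP per glucose should be read as an estimate.

# Rough aerobic ATP yield per glucose, using ~2.5 ATP/NADH and ~1.5 ATP/FADH2.
ATP_PER_NADH, ATP_PER_FADH2 = 2.5, 1.5

substrate_level_atp = 2 + 2      # glycolysis + citric acid cycle (as GTP)
nadh  = 2 + 2 + 6                # glycolysis + pyruvate decarboxylation + citric acid cycle
fadh2 = 2                        # citric acid cycle

total = substrate_level_atp + nadh * ATP_PER_NADH + fadh2 * ATP_PER_FADH2
print(f"approximate ATP per glucose: {total:.0f}")   # 32 (closer to 30 with the glycerol phosphate shuttle)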
Conversion of carbohydrates into fatty acids and cholesterol
The pyruvate produced by glycolysis is an important intermediary in the conversion of carbohydrates into fatty acids and cholesterol. This occurs via the conversion of pyruvate into acetyl-CoA in the mitochondrion. However, this acetyl CoA needs to be transported into cytosol where the synthesis of fatty acids and cholesterol occurs. This cannot occur directly. To obtain cytosolic acetyl-CoA, citrate (produced by the condensation of acetyl CoA with oxaloacetate) is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to mitochondrion as malate (and then back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). The cytosolic acetyl-CoA can be carboxylated by acetyl-CoA carboxylase into malonyl CoA, the first committed step in the synthesis of fatty acids, or it can be combined with acetoacetyl-CoA to form 3-hydroxy-3-methylglutaryl-CoA (HMG-CoA) which is the rate limiting step controlling the synthesis of cholesterol. Cholesterol can be used as is, as a structural component of cellular membranes, or it can be used to synthesize the steroid hormones, bile salts, and vitamin D.
Conversion of pyruvate into oxaloacetate for the citric acid cycle
Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix where they can either be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, or they can be carboxylated (by pyruvate carboxylase) to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle, and is therefore an anaplerotic reaction (from the Greek meaning to "fill up"), increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in heart and skeletal muscle) are suddenly increased by activity.
In the citric acid cycle all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that that additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence the addition of oxaloacetate greatly increases the amounts of all the citric acid intermediates, thereby increasing the cycle's capacity to metabolize acetyl CoA, converting its acetate component into CO2 and water, with the release of enough energy to form 11 ATP and 1 GTP molecule for each additional molecule of acetyl CoA that combines with oxaloacetate in the cycle.
To cataplerotically remove oxaloacetate from the citric cycle, malate can be transported from the mitochondrion into the cytoplasm, decreasing the amount of oxaloacetate that can be regenerated. Furthermore, citric acid intermediates are constantly used to form a variety of substances such as the purines, pyrimidines and porphyrins.
Intermediates for other pathways
This article concentrates on the catabolic role of glycolysis with regard to converting potential chemical energy to usable chemical energy during the oxidation of glucose to pyruvate. Many of the metabolites in the glycolytic pathway are also used by anabolic pathways, and, as a consequence, flux through the pathway is critical to maintain a supply of carbon skeletons for biosynthesis.
The following metabolic pathways, among many others, are all strongly reliant on glycolysis as a source of metabolites:
Pentose phosphate pathway, which begins with the dehydrogenation of glucose-6-phosphate, the first intermediate to be produced by glycolysis, produces various pentose sugars, and NADPH for the synthesis of fatty acids and cholesterol.
Glycogen synthesis also starts with glucose-6-phosphate at the beginning of the glycolytic pathway.
Glycerol, for the formation of triglycerides and phospholipids, is produced from the glycolytic intermediate glyceraldehyde-3-phosphate.
Various post-glycolytic pathways:
Fatty acid synthesis
Cholesterol synthesis
The citric acid cycle which in turn leads to:
Amino acid synthesis
Nucleotide synthesis
Tetrapyrrole synthesis
Although gluconeogenesis and glycolysis share many intermediates the one is not functionally a branch or tributary of the other. There are two regulatory steps in both pathways which, when active in the one pathway, are automatically inactive in the other. The two processes can therefore not be simultaneously active. Indeed, if both sets of reactions were highly active at the same time the net result would be the hydrolysis of four high energy phosphate bonds (two ATP and two GTP) per reaction cycle.
NAD+ is the oxidizing agent in glycolysis, as it is in most other energy yielding metabolic reactions (e.g. beta-oxidation of fatty acids, and during the citric acid cycle). The NADH thus produced is primarily used to ultimately transfer electrons to O2 to produce water, or, when O2 is not available, to produce compounds such as lactate or ethanol (see Anoxic regeneration of NAD+ above). NADH is rarely used for synthetic processes, the notable exception being gluconeogenesis. During fatty acid and cholesterol synthesis the reducing agent is NADPH. This difference exemplifies a general principle that NADPH is consumed during biosynthetic reactions, whereas NADH is generated in energy-yielding reactions. The source of the NADPH is two-fold. When malate is oxidatively decarboxylated by "NADP+-linked malic enzyme", pyruvate, CO2, and NADPH are formed. NADPH is also formed by the pentose phosphate pathway which converts glucose into ribose, which can be used in synthesis of nucleotides and nucleic acids, or it can be catabolized to pyruvate.
Glycolysis in disease
Diabetes
Cellular uptake of glucose occurs in response to insulin signals, and glucose is subsequently broken down through glycolysis, lowering blood sugar levels. However, insulin resistance or low insulin levels seen in diabetes result in hyperglycemia, where glucose levels in the blood rise and glucose is not properly taken up by cells. Hepatocytes further contribute to this hyperglycemia through gluconeogenesis. Glycolysis in hepatocytes controls hepatic glucose production, and when glucose is overproduced by the liver without having a means of being broken down by the body, hyperglycemia results.
Genetic diseases
Glycolytic mutations are generally rare due to the importance of the metabolic pathway; the majority of mutations that do occur result in an inability of the cell to respire, and therefore cause the death of the cell at an early stage. However, some mutations (glycogen storage diseases and other inborn errors of carbohydrate metabolism) are seen, one notable example being pyruvate kinase deficiency, which leads to chronic hemolytic anemia.
In combined malonic and methylmalonic aciduria (CMAMMA) due to ACSF3 deficiency, glycolysis is reduced by approximately 50%, which is caused by reduced lipoylation of mitochondrial enzymes such as the pyruvate dehydrogenase complex and α-ketoglutarate dehydrogenase complex.
Cancer
Malignant tumor cells perform glycolysis at a rate that is ten times faster than their noncancerous tissue counterparts. During their genesis, limited capillary support often results in hypoxia (decreased O2 supply) within the tumor cells. Thus, these cells rely on anaerobic metabolic processes such as glycolysis for ATP (adenosine triphosphate). Some tumor cells overexpress specific glycolytic enzymes which result in higher rates of glycolysis. Often these enzymes are isoenzymes of traditional glycolysis enzymes that vary in their susceptibility to traditional feedback inhibition. The increase in glycolytic activity ultimately counteracts the effects of hypoxia by generating sufficient ATP from this anaerobic pathway. This phenomenon was first described in 1930 by Otto Warburg and is referred to as the Warburg effect. The Warburg hypothesis claims that cancer is primarily caused by dysfunctionality in mitochondrial metabolism, rather than because of the uncontrolled growth of cells.
A number of theories have been advanced to explain the Warburg effect. One such theory suggests that the increased glycolysis is a normal protective process of the body and that malignant change could be primarily caused by energy metabolism.
This high glycolysis rate has important medical applications, as high aerobic glycolysis by malignant tumors is utilized clinically to diagnose and monitor treatment responses of cancers by imaging uptake of 2-18F-2-deoxyglucose (FDG) (a radioactive modified hexokinase substrate) with positron emission tomography (PET).
There is ongoing research to affect mitochondrial metabolism and treat cancer by reducing glycolysis and thus starving cancerous cells in various new ways, including a ketogenic diet.
Alternative nomenclature
Some of the metabolites in glycolysis have alternative names and nomenclature. In part, this is because some of them are common to other pathways, such as the Calvin cycle.
Structure of glycolysis components in Fischer projections and polygonal model
The intermediates of glycolysis can be depicted in Fischer projections, which show the chemical changes step by step; such depictions can be compared with the polygonal model representation.
See also
Carbohydrate catabolism
Citric acid cycle
Cori cycle
Fermentation (biochemistry)
Gluconeogenesis
Glycolytic oscillation
Glycogenoses (glycogen storage diseases)
Inborn errors of carbohydrate metabolism
Pentose phosphate pathway
Pyruvate decarboxylation
Triose kinase
References
External links
A Detailed Glycolysis Animation provided by IUBMB (Adobe Flash Required)
The Glycolytic enzymes in Glycolysis at RCSB PDB
Glycolytic cycle with animations at wdv.com
Metabolism, Cellular Respiration and Photosynthesis - The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
The chemical logic behind glycolysis at ufp.pt
Expasy biochemical pathways poster at ExPASy
metpath: Interactive representation of glycolysis
Biochemical reactions
Carbohydrates
Cellular respiration
Metabolic pathways | 0.775863 | 0.998978 | 0.77507 |
Biocatalysis | Biocatalysis refers to the use of living (biological) systems or their parts to speed up (catalyze) chemical reactions. In biocatalytic processes, natural catalysts, such as enzymes, perform chemical transformations on organic compounds. Both enzymes that have been more or less isolated and enzymes still residing inside living cells are employed for this task. Modern biotechnology, specifically directed evolution, has made the production of modified or non-natural enzymes possible. This has enabled the development of enzymes that can catalyze novel small molecule transformations that may be difficult or impossible using classical synthetic organic chemistry. Utilizing natural or modified enzymes to perform organic synthesis is termed chemoenzymatic synthesis; the reactions performed by the enzyme are classified as chemoenzymatic reactions.
History
Biocatalysis underpins some of the oldest chemical transformations known to humans, for brewing predates recorded history. The oldest records of brewing are about 6000 years old and refer to the Sumerians.
The employment of enzymes and whole cells has been important for many industries for centuries. The most obvious uses have been in the food and drink businesses, where the production of wine, beer, cheese, etc. depends on the activities of microorganisms.
More than one hundred years ago, biocatalysis was employed to do chemical transformations on non-natural man-made organic compounds, with the last 30 years seeing a substantial increase in the application of biocatalysis to produce fine chemicals, especially for the pharmaceutical industry.
Since biocatalysis deals with enzymes and microorganisms, it is historically classified separately from "homogeneous catalysis" and "heterogeneous catalysis". However, mechanistically speaking, biocatalysis is simply a special case of heterogeneous catalysis.
Advantages of chemoenzymatic synthesis
-Enzymes are environmentally benign, being completely degraded in the environment.
-Most enzymes typically function under mild or biological conditions, which minimizes problems of undesired side-reactions such as decomposition, isomerization, racemization and rearrangement, which often plague traditional methodology.
-Enzymes selected for chemoenzymatic synthesis can be immobilized on a solid support. These immobilized enzymes demonstrate improved stability and re-usability.
-Through the development of protein engineering, specifically site-directed mutagenesis and directed evolution, enzymes can be modified to enable non-natural reactivity. Modifications may also allow for a broader substrate range, an enhanced reaction rate, or improved catalyst turnover.
-Enzymes exhibit extreme selectivity towards their substrates. Typically enzymes display three major types of selectivity:
Chemoselectivity: Since the purpose of an enzyme is to act on a single type of functional group, other sensitive functionalities, which would normally react to a certain extent under chemical catalysis, survive. As a result, biocatalytic reactions tend to be "cleaner" and laborious purification of product(s) from impurities emerging through side-reactions can largely be omitted.
Regioselectivity and diastereoselectivity: Due to their complex three-dimensional structure, enzymes may distinguish between functional groups which are chemically situated in different regions of the substrate molecule.
Enantioselectivity: Since almost all enzymes are made from L-amino acids, enzymes are chiral catalysts. As a consequence, any type of chirality present in the substrate molecule is "recognized" upon the formation of the enzyme-substrate complex. Thus a prochiral substrate may be transformed into an optically active product and both enantiomers of a racemic substrate may react at different rates.
These reasons, and especially the latter, are the major reasons why synthetic chemists have become interested in biocatalysis. This interest in turn is mainly due to the need to synthesize enantiopure compounds as chiral building blocks for Pharmaceutical drugs and agrochemicals.
Asymmetric biocatalysis
The use of biocatalysis to obtain enantiopure compounds can be divided into two different methods:
Kinetic resolution of a racemic mixture
Biocatalyzed asymmetric synthesis
In kinetic resolution of a racemic mixture, the presence of a chiral object (the enzyme) converts one of the stereoisomers of the reactant into its product at a greater reaction rate than for the other reactant stereoisomer. The stereochemical mixture has now been transformed into a mixture of two different compounds, making them separable by normal methodology.
Biocatalyzed kinetic resolution is utilized extensively in the purification of racemic mixtures of synthetic amino acids. Many popular amino acid synthesis routes, such as the Strecker Synthesis, result in a mixture of R and S enantiomers. This mixture can be purified by (I) acylating the amine using an anhydride and then (II) selectively deacylating only the L enantiomer using hog kidney acylase. These enzymes are typically extremely selective for one enantiomer leading to very large differences in rate, allowing for selective deacylation. Finally the two products are now separable by classical techniques, such as chromatography.
The maximum yield in such kinetic resolutions is 50%, since a yield of more than 50% means that some of the wrong isomer has also reacted, giving a lower enantiomeric excess. Such reactions must therefore be terminated before equilibrium is reached. If it is possible to perform such resolutions under conditions where the two substrate enantiomers are racemizing continuously, all substrate may in theory be converted into enantiopure product. This is called dynamic resolution.
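The trade-off between conversion and enantiopurity in a (non-dynamic) kinetic resolution can be made concrete with a short simulation. The Python sketch below treats the two enantiomers as reacting by independent first-order kinetics whose rate constants differ by an assumed selectivity factor E; the chosen E and time points are arbitrary, but the output shows the generic behaviour: the product is most enantiopure at low conversion, while the remaining substrate becomes highly enantiopure only once conversion exceeds 50%.

import math

E = 25.0                 # assumed selectivity factor k_fast / k_slow
k_fast, k_slow = 1.0, 1.0 / E

def resolution(t, a0=1.0, b0=1.0):
    # First-order consumption of a racemate: a is the fast-reacting enantiomer,
    # b the slow-reacting one.
    a, b = a0 * math.exp(-k_fast * t), b0 * math.exp(-k_slow * t)
    conversion = 1 - (a + b) / (a0 + b0)
    ee_substrate = (b - a) / (a + b)             # excess of the slow enantiomer left behind
    pa, pb = a0 - a, b0 - b                      # product formed from each enantiomer
    ee_product = (pa - pb) / (pa + pb)
    return conversion, ee_substrate, ee_product

for t in (0.3, 0.8, 1.5, 3.0):
    c, ees, eep = resolution(t)
    print(f"conversion {c:5.1%}   ee(substrate) {ees:5.1%}   ee(product) {eep:5.1%}")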
In biocatalyzed asymmetric synthesis, a non-chiral unit becomes chiral in such a way that the different possible stereoisomers are formed in different quantities. The chirality is introduced into the substrate by influence of enzyme, which is chiral. Yeast is a biocatalyst for the enantioselective reduction of ketones.
The Baeyer–Villiger oxidation is another example of a biocatalytic reaction. In one study, a specially designed mutant of Candida antarctica was found to be an effective catalyst for the Michael addition of acrolein with acetylacetone at 20 °C in the absence of additional solvent.
Another study demonstrates how racemic nicotine (mixture of S and R-enantiomers 1 in scheme 3) can be deracemized in a one-pot procedure involving a monoamine oxidase isolated from Aspergillus niger which is able to oxidize only the amine S-enantiomer to the imine 2 and involving an ammonia–borane reducing couple which can reduce the imine 2 back to the amine 1. In this way the S-enantiomer will continuously be consumed by the enzyme while the R-enantiomer accumulates. It is even possible to stereoinvert pure S to pure R.
Photoredox enabled biocatalysis
Recently, photoredox catalysis has been applied to biocatalysis, enabling unique, previously inaccessible transformations. Photoredox chemistry relies upon light to generate free radical intermediates. These radical intermediates are achiral, so racemic mixtures of product are obtained when no external chiral environment is provided. Enzymes can provide this chiral environment within the active site, stabilizing a particular conformation and favoring the formation of one enantiopure product. Photoredox enabled biocatalysis reactions fall into two categories:
Internal coenzyme/cofactor photocatalyst
External photocatalyst
Certain common hydrogen atom transfer (HAT) cofactors (NADPH and flavin) can operate as single electron transfer (SET) reagents. Although these species are capable of HAT without irradiation, their redox potentials are enhanced by nearly 2.0 V upon visible light irradiation. When paired with their respective enzymes (typically ene-reductases), this phenomenon has been utilized by chemists to develop enantioselective reduction methodologies. For example, medium-sized lactams can be synthesized in the chiral environment of an ene-reductase through a reductive, Baldwin-favored radical cyclization terminated by enantioselective HAT from NADPH.
The second category of photoredox enabled biocatalytic reactions uses an external photocatalyst (PC). Many types of PCs with a large range of redox potentials can be utilized, allowing for greater tunability of reactivity compared to using a cofactor. Rose bengal, an external PC, was utilized in tandem with an oxidoreductase to enantioselectively deacylate medium-sized alpha-acyl ketones.
Using an external PC has some downsides. For example, external PCs typically complicate reaction design because the PC may react with both the bound and unbound substrate. If a reaction occurs between the unbound substrate and the PC, enantioselectivity is lost and other side reactions may occur.
Agricultural uses
Bioenzymes are also biocatalysts. They are prepared by fermenting organic waste, jaggery and water in a 3:1:10 ratio for three months. The preparation increases the soil microbe population and speeds up composting and decomposition, which is why it is counted among catalysts. It is regarded as an effective organic liquid fertilizer and soil conditioner, and is diluted with water before use.
Further reading
Kim, Jinhyun; Lee, Sahng Ha; Tieves, Florian; Paul, Caroline E.; Hollmann, Frank; Park, Chan Beum (5 July 2019). "Nicotinamide adenine dinucleotide as a photocatalyst". Science Advances. 5 (7): eaax0501. doi:10.1126/sciadv.aax0501.
See also
List of enzymes
Industrial enzymes
References
External links
Austrian Centre of Industrial Biotechnology official website
The Centre of Excellence for Biocatalysis - CoEBio3
The University of Exeter - Biocatalysis Centre
Center for Biocatalysis and Bioprocessing - The University of Iowa
TU Delft - Biocatalysis & Organic Chemistry (BOC)
KTH Stockholm - Biocatalysis Research Group
Institute of Technical Biocatalysis at the Hamburg University of Technology (TUHH)
Biocascades Project
Enzymes
Organic chemistry
Catalysis | 0.795722 | 0.974044 | 0.775069 |
Paleobiology | Paleobiology (or palaeobiology) is an interdisciplinary field that combines the methods and findings found in both the earth sciences and the life sciences. Paleobiology is not to be confused with geobiology, which focuses more on the interactions between the biosphere and the physical Earth.
Paleobiological research uses biological field research of current biota and of fossils millions of years old to answer questions about the molecular evolution and the evolutionary history of life. In this scientific quest, macrofossils, microfossils and trace fossils are typically analyzed. However, the 21st-century biochemical analysis of DNA and RNA samples offers much promise, as does the biometric construction of phylogenetic trees.
An investigator in this field is known as a paleobiologist.
Important research areas
Paleobotany applies the principles and methods of paleobiology to flora, especially green land plants, but also including the fungi and seaweeds (algae). See also mycology, phycology and dendrochronology.
Paleozoology uses the methods and principles of paleobiology to understand fauna, both vertebrates and invertebrates. See also vertebrate and invertebrate paleontology, as well as paleoanthropology.
Micropaleontology applies paleobiologic principles and methods to archaea, bacteria, protists and microscopic pollen/spores. See also microfossils and palynology.
Paleovirology examines the evolutionary history of viruses on paleobiological timescales.
Paleobiochemistry uses the methods and principles of organic chemistry to detect and analyze molecular-level evidence of ancient life, both microscopic and macroscopic.
Paleoecology examines past ecosystems, climates, and geographies so as to better comprehend prehistoric life.
Taphonomy analyzes the post-mortem history (for example, decay and decomposition) of an individual organism in order to gain insight on the behavior, death and environment of the fossilized organism.
Paleoichnology analyzes the tracks, borings, trails, burrows, impressions, and other trace fossils left by ancient organisms in order to gain insight into their behavior and ecology.
Stratigraphic paleobiology studies long-term secular changes, as well as the (short-term) bed-by-bed sequence of changes, in organismal characteristics and behaviors. See also stratification, sedimentary rocks and the geologic time scale.
Evolutionary developmental paleobiology examines the evolutionary aspects of the modes and trajectories of growth and development in the evolution of life – clades both extinct and extant. See also adaptive radiation, cladistics, evolutionary biology, developmental biology and phylogenetic tree.
Paleobiologists
The founder or "father" of modern paleobiology was Baron Franz Nopcsa (1877 to 1933), a Hungarian scientist trained at the University of Vienna. He initially termed the discipline "paleophysiology".
However, credit for coining the word paleobiology itself should go to Professor Charles Schuchert. He proposed the term in 1904 so as to initiate "a broad new science" joining "traditional paleontology with the evidence and insights of geology and isotopic chemistry."
On the other hand, Charles Doolittle Walcott, a Smithsonian adventurer, has been cited as the "founder of Precambrian paleobiology". Although best known as the discoverer of the mid-Cambrian Burgess shale animal fossils, in 1883 this American curator found the "first Precambrian fossil cells known to science" – a stromatolite reef then known as Cryptozoon algae. In 1899 he discovered the first acritarch fossil cells, a Precambrian algal phytoplankton he named Chuaria. Lastly, in 1914, Walcott reported "minute cells and chains of cell-like bodies" belonging to Precambrian purple bacteria.
Later 20th-century paleobiologists have also figured prominently in finding Archaean and Proterozoic eon microfossils: In 1954, Stanley A. Tyler and Elso S. Barghoorn described 2.1 billion-year-old cyanobacteria and fungi-like microflora at their Gunflint Chert fossil site. Eleven years later, Barghoorn and J. William Schopf reported finely-preserved Precambrian microflora at their Bitter Springs site of the Amadeus Basin, Central Australia.
In 1993, Schopf discovered O2-producing blue-green bacteria at his 3.5 billion-year-old Apex Chert site in Pilbara Craton, Marble Bar, in the northwestern part of Western Australia. So paleobiologists were at last homing in on the origins of the Precambrian "Oxygen catastrophe".
During the early part of the 21st century, two paleobiologists, Anjali Goswami and Thomas Halliday, studied the evolution of mammaliaforms during the Mesozoic and Cenozoic eras (between 299 million and 12,000 years ago). Additionally, they uncovered and studied the morphological disparity and rapid evolutionary rates of living organisms near the end and in the aftermath of the Cretaceous mass extinction (145 million to 66 million years ago).
Paleobiologic journals
Acta Palaeontologica Polonica
Biology and Geology
Historical Biology
PALAIOS
Palaeogeography, Palaeoclimatology, Palaeoecology
Paleobiology (journal)
Paleoceanography
Paleobiology in the general press
Books written for the general public on this topic include the following:
The Rise and Reign of the Mammals: A New History, from the Shadow of the Dinosaurs to Us written by Steve Brusatte
Otherlands: A Journey Through Earth's Extinct Worlds written by Thomas Halliday
Introduction to Paleobiology and the Fossil Record (2020) written by Michael J. Benton and David A. T. Harper
See also
History of biology
History of paleontology
History of invertebrate paleozoology
Molecular paleontology
Taxonomy of commonly fossilised invertebrates
Treatise on Invertebrate Paleontology
Footnotes
Derek E.G. Briggs and Peter R. Crowther, eds. (2003). Palaeobiology II. Malden, Massachusetts: Blackwell Publishing. The second edition of an acclaimed British textbook.
Robert L. Carroll (1998). Patterns and Processes of Vertebrate Evolution. Cambridge Paleobiology Series. Cambridge, England: Cambridge University Press. Applies paleobiology to the adaptive radiation of fishes and quadrupeds.
Matthew T. Carrano, Timothy Gaudin, Richard Blob, and John Wible, eds. (2006). Amniote Paleobiology: Perspectives on the Evolution of Mammals, Birds and Reptiles. Chicago: University of Chicago Press. This new book describes paleobiological research into land vertebrates of the Mesozoic and Cenozoic eras.
Robert B. Eckhardt (2000). Human Paleobiology. Cambridge Studies in Biology and Evolutionary Anthropology. Cambridge, England: Cambridge University Press. This book connects paleoanthropology and archeology to the field of paleobiology.
Douglas H. Erwin (2006). Extinction: How Life on Earth Nearly Ended 250 Million Years Ago. Princeton: Princeton University Press. An investigation by a paleobiologist into the many theories as to what happened during the catastrophic Permian-Triassic transition.
Brian Keith Hall and Wendy M. Olson, eds. (2003). Keywords and Concepts in Evolutionary Biology. Cambridge, Massachusetts: Harvard University Press.
David Jablonski, Douglas H. Erwin, and Jere H. Lipps (1996). Evolutionary Paleobiology. Chicago: University of Chicago Press, 492 pages. A fine American textbook.
Masatoshi Nei and Sudhir Kumar (2000). Molecular Evolution and Phylogenetics. Oxford, England: Oxford University Press. This text links DNA/RNA analysis to the evolutionary "tree of life" in paleobiology.
Donald R. Prothero (2004). Bringing Fossils to Life: An Introduction to Paleobiology. New York: McGraw Hill. An acclaimed book for the novice fossil-hunter and young adults.
Mark Ridley, ed. (2004). Evolution. Oxford, England: Oxford University Press. An anthology of analytical studies in paleobiology.
Raymond Rogers, David Eberth, and Tony Fiorillo (2007). Bonebeds: Genesis, Analysis and Paleobiological Significance. Chicago: University of Chicago Press. A new book regarding the fossils of vertebrates, especially tetrapods on land during the Mesozoic and Cenozoic eras.
Thomas J. M. Schopf, ed. (1972). Models in Paleobiology. San Francisco: Freeman, Cooper. A much-cited, seminal classic in the field discussing methodology and quantitative analysis.
Thomas J.M. Schopf (1980). Paleoceanography. Cambridge, Massachusetts: Harvard University Press. A later book by the noted paleobiologist. This text discusses ancient marine ecology.
J. William Schopf (2001). Cradle of Life: The Discovery of Earth's Earliest Fossils. Princeton: Princeton University Press. The use of biochemical and ultramicroscopic analysis to analyze microfossils of bacteria and archaea.
Paul Selden and John Nudds (2005). Evolution of Fossil Ecosystems. Chicago: University of Chicago Press. A recent analysis and discussion of paleoecology.
David Sepkoski. Rereading the Fossil Record: The Growth of Paleobiology as an Evolutionary Discipline (University of Chicago Press; 2012) 432 pages. A history since the mid-19th century, with a focus on the "revolutionary" era of the 1970s and early 1980s and the work of Stephen Jay Gould and David Raup.
Paul Tasch (1980). Paleobiology of the Invertebrates. New York: John Wiley & Sons. Applies statistics to the evolution of sponges, cnidarians, worms, brachiopods, bryozoa, mollusks, and arthropods.
Shuhai Xiao and Alan J. Kaufman, eds. (2006). Neoproterozoic Geobiology and Paleobiology. New York: Springer Science+Business Media. This new book describes research into the fossils of the earliest multicellular animals and plants, especially the Ediacaran period invertebrates and algae.
Bernard Ziegler and R. O. Muir (1983). Introduction to Palaeobiology. Chichester, England: E. Horwood. A classic, British introductory textbook.
External links
Paleobiology website of the National Museum of Natural History (Smithsonian) in Washington, D.C. (archived 11 March 2007)
The Paleobiology Database
Developmental biology
Evolutionary biology
Subfields of paleontology | 0.791994 | 0.978565 | 0.775017 |
Hydroxy group | In chemistry, a hydroxy or hydroxyl group is a functional group with the chemical formula and composed of one oxygen atom covalently bonded to one hydrogen atom. In organic chemistry, alcohols and carboxylic acids contain one or more hydroxy groups. Both the negatively charged anion , called hydroxide, and the neutral radical , known as the hydroxyl radical, consist of an unbonded hydroxy group.
According to IUPAC definitions, the term hydroxyl refers to the hydroxyl radical only, while the functional group is called a hydroxy group.
Properties
Water, alcohols, carboxylic acids, and many other hydroxy-containing compounds can be readily deprotonated due to a large difference between the electronegativity of oxygen (3.5) and that of hydrogen (2.1). Hydroxy-containing compounds engage in intermolecular hydrogen bonding, increasing the electrostatic attraction between molecules and thus leading to higher boiling and melting points than are found for compounds that lack this functional group. Organic compounds, which are often poorly soluble in water, become water-soluble when they contain two or more hydroxy groups, as illustrated by sugars and amino acids.
Occurrence
The hydroxy group is pervasive in chemistry and biochemistry. Many inorganic compounds contain hydroxyl groups, including sulfuric acid, the chemical compound produced on the largest scale industrially.
Hydroxy groups participate in the dehydration reactions that link simple biological molecules into long chains. The joining of a fatty acid to glycerol to form a triacylglycerol removes the −OH from the carboxy end of the fatty acid. The joining of two aldehyde sugars to form a disaccharide removes the −OH from the anomeric carbon (derived from the aldehyde group) of one sugar. The creation of a peptide bond to link two amino acids to make a protein removes the −OH from the carboxy group of one amino acid.
Hydroxyl radical
Hydroxyl radicals are highly reactive and undergo chemical reactions that make them short-lived. When biological systems are exposed to hydroxyl radicals, they can cause damage to cells, including those in humans, where they can react with DNA, lipids, and proteins.
Planetary observations
Airglow of the Earth
The Earth's night sky is illuminated by diffuse light, called airglow, that is produced by radiative transitions of atoms and molecules. Among the most intense such features observed in the Earth's night sky is a group of infrared transitions at wavelengths between 700 nanometers and 900 nanometers. In 1950, Aden Meinel showed that these were transitions of the hydroxyl molecule, OH.
Surface of the Moon
In 2009, India's Chandrayaan-1 satellite and the National Aeronautics and Space Administration (NASA) Cassini spacecraft and Deep Impact probe each detected evidence of water in the form of hydroxyl fragments on the Moon. As reported by Richard Kerr, "A spectrometer [the Moon Mineralogy Mapper, also known as "M3"] detected an infrared absorption at a wavelength of 3.0 micrometers that only water or hydroxyl—a hydrogen and an oxygen bound together—could have created." NASA also reported in 2009 that the LCROSS probe revealed an ultraviolet emission spectrum consistent with hydroxyl presence.
On 26 October 2020, NASA reported definitive evidence of water on the sunlit surface of the Moon, in the vicinity of the crater Clavius, obtained by the Stratospheric Observatory for Infrared Astronomy (SOFIA). The Faint Object infraRed CAmera for the SOFIA Telescope (FORCAST) detected emission bands at a wavelength of 6.1 micrometers that are present in water but not in hydroxyl. The abundance of water on the Moon's surface was inferred to be equivalent to the contents of a 12-ounce bottle of water per cubic meter of lunar soil.
The Chang'e 5 probe, which landed on the Moon on 1 December 2020, carried a mineralogical spectrometer that could measure infrared reflectance spectra of lunar rock and regolith. The reflectance spectrum of a rock sample at a wavelength of 2.85 micrometers indicated localized water/hydroxyl concentrations as high as 180 parts per million.
Atmosphere of Venus
The Venus Express orbiter collected Venus science data from April 2006 until December 2014. In 2008, Piccioni, et al. reported measurements of night-side airglow emission in the atmosphere of Venus made with the Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) on Venus Express. They attributed emission bands in wavelength ranges of 1.40 - 1.49 micrometers and 2.6 - 3.14 micrometers to vibrational transitions of OH. This was the first evidence for OH in the atmosphere of any planet other than Earth's.
Atmosphere of Mars
In 2013, OH near-infrared spectra were observed in the night glow in the polar winter atmosphere of Mars by use of the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM).
Exoplanets
In 2021, evidence for OH in the dayside atmosphere of the exoplanet WASP-33b was found in its emission spectrum at wavelengths between 1 and 2 micrometers. Evidence for OH in the atmosphere of exoplanet WASP-76b was subsequently found. Both WASP-33b and WASP-76b are ultra-hot Jupiters and it is likely that any water in their atmospheres is present as dissociated ions.
See also
Hydronium
Ion
Oxide
Hydroxylation
References
External links
Alcohols
Functional groups
Hydroxides | 0.779582 | 0.994111 | 0.774991 |
Fugacity | In chemical thermodynamics, the fugacity of a real gas is an effective partial pressure which replaces the mechanical partial pressure in an accurate computation of chemical equilibrium. It is equal to the pressure of an ideal gas which has the same temperature and molar Gibbs free energy as the real gas.
Fugacities are determined experimentally or estimated from various models such as a Van der Waals gas that are closer to reality than an ideal gas. The real gas pressure and fugacity are related through the dimensionless fugacity coefficient φ:
φ = f/P.
For an ideal gas, fugacity and pressure are equal, and so φ = 1. Taken at the same temperature and pressure, the difference between the molar Gibbs free energies of a real gas and the corresponding ideal gas is equal to RT ln φ.
The fugacity is closely related to the thermodynamic activity. For a gas, the activity is simply the fugacity divided by a reference pressure to give a dimensionless quantity. This reference pressure is called the standard state and normally chosen as 1 atmosphere or 1 bar.
Accurate calculations of chemical equilibrium for real gases should use the fugacity rather than the pressure. The thermodynamic condition for chemical equilibrium is that the total chemical potential of reactants is equal to that of products. If the chemical potential of each gas is expressed as a function of fugacity, the equilibrium condition may be transformed into the familiar reaction quotient form (or law of mass action) except that the pressures are replaced by fugacities.
For a condensed phase (liquid or solid) in equilibrium with its vapor phase, the chemical potential is equal to that of the vapor, and therefore the fugacity is equal to the fugacity of the vapor. This fugacity is approximately equal to the vapor pressure when the vapor pressure is not too high.
Pure substance
Fugacity is closely related to the chemical potential μ. In a pure substance, μ is equal to the Gibbs energy Gm for a mole of the substance, and
dμ = dGm = −Sm dT + Vm dP,
where T and P are the temperature and pressure, Vm is the volume per mole and Sm is the entropy per mole.
Gas
For an ideal gas the equation of state can be written as P V_m = R T,
where R is the ideal gas constant. The differential change of the chemical potential between two states of slightly different pressures but equal temperature (i.e., dT = 0) is given by dμ = V_m dP = RT dP/P = RT d(ln P),
where ln P is the natural logarithm of P.
For real gases the equation of state will depart from the simpler one, and the result above derived for an ideal gas will only be a good approximation provided that (a) the typical size of the molecule is negligible compared to the average distance between the individual molecules, and (b) the short range behavior of the inter-molecular potential can be neglected, i.e., when the molecules can be considered to rebound elastically off each other during molecular collisions. In other words, real gases behave like ideal gases at low pressures and high temperatures. At moderately high pressures, attractive interactions between molecules reduce the pressure compared to the ideal gas law; and at very high pressures, the sizes of the molecules are no longer negligible and repulsive forces between molecules increase the pressure. At low temperatures, molecules are more likely to stick together instead of rebounding elastically.
The ideal gas law can still be used to describe the behavior of a real gas if the pressure is replaced by a fugacity f, defined so that dμ = RT d(ln f)
and f/P → 1 as P → 0.
That is, at low pressures f is the same as the pressure, so it has the same units as pressure. The ratio φ = f/P
is called the fugacity coefficient.
If a reference state is denoted by a zero superscript, then integrating the equation for the chemical potential gives μ = μ⁰ + RT ln(f/f⁰).
Note this can also be expressed with a = f/f⁰, a dimensionless quantity, called the activity.
Numerical example: Nitrogen gas (N2) at 0 °C and a pressure of P = 100 atmospheres (atm) has a fugacity of f = 97.03 atm. This means that the molar Gibbs energy of real nitrogen at a pressure of 100 atm is equal to the molar Gibbs energy of nitrogen as an ideal gas at 97.03 atm. The fugacity coefficient is φ = 97.03 atm / 100 atm = 0.9703.
The contribution of nonideality to the molar Gibbs energy of a real gas is equal to RT ln φ. For nitrogen at 100 atm, G_m = G_m,ideal + RT ln 0.9703, which is less than the ideal value because of intermolecular attractive forces. Finally, the activity is just 97.03 without units.
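As a rough illustration of the size of this correction, the sketch below evaluates RT ln φ for a gas at 0 °C; the fugacity coefficient used (0.97) is an assumed, representative value for a mildly non-ideal gas, not a measured one.

```python
import math

R = 8.314          # gas constant, J/(mol*K)
T = 273.15         # temperature, K (0 degrees Celsius)
phi = 0.97         # assumed fugacity coefficient, for illustration only

# Contribution of non-ideality to the molar Gibbs energy: RT ln(phi)
delta_G = R * T * math.log(phi)
print(f"RT ln(phi) = {delta_G:.1f} J/mol")   # negative: attractive forces lower G
```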
Condensed phase
The fugacity of a condensed phase (liquid or solid) is defined the same way as for a gas: dμ = RT d(ln f)
and f/P → 1 as P → 0.
It is difficult to measure fugacity in a condensed phase directly; but if the condensed phase is saturated (in equilibrium with the vapor phase), the chemical potentials of the two phases are equal. Combined with the above definition, this implies that the fugacity of the condensed phase is equal to the fugacity of its vapor: f_condensed = f_vapor.
When calculating the fugacity of the compressed phase, one can generally assume the volume is constant. At constant temperature, the change in fugacity as the pressure goes from the saturation pressure P_sat to P is f/f_sat = exp( V_m (P - P_sat) / (RT) ).
This fraction is known as the Poynting factor. Using f_sat = φ_sat P_sat, where φ_sat is the fugacity coefficient, f = φ_sat P_sat exp( V_m (P - P_sat) / (RT) ).
This equation allows the fugacity to be calculated using tabulated values for saturated vapor pressure. Often the pressure is low enough for the vapor phase to be considered an ideal gas, so the fugacity coefficient is approximately equal to 1.
Unless pressures are very high, the Poynting factor is usually small and the exponential term is near 1. Frequently, the fugacity of the pure liquid is used as a reference state when defining and using mixture activity coefficients.
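A minimal sketch of this calculation for liquid water at 25 °C follows; the saturation pressure (about 3.17 kPa), the molar volume (about 1.8 × 10⁻⁵ m³/mol) and the assumption φ_sat ≈ 1 are approximate textbook values used only for illustration.

```python
import math

R = 8.314            # J/(mol*K)
T = 298.15           # K
P_sat = 3.17e3       # saturation vapor pressure of water at 25 C, Pa (approximate)
V_m = 1.8e-5         # molar volume of liquid water, m^3/mol (approximate)
phi_sat = 1.0        # vapor treated as ideal at this low pressure
P = 10e6             # total pressure of interest, Pa (100 bar, illustrative)

# Poynting correction: f = phi_sat * P_sat * exp(V_m * (P - P_sat) / (R*T))
poynting = math.exp(V_m * (P - P_sat) / (R * T))
f = phi_sat * P_sat * poynting
print(f"Poynting factor = {poynting:.4f}, liquid fugacity = {f/1e3:.2f} kPa")
```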
Mixture
The fugacity is most useful in mixtures. It does not add any new information compared to the chemical potential, but it has computational advantages. As the molar fraction of a component goes to zero, the chemical potential diverges but the fugacity goes to zero. In addition, there are natural reference states for fugacity (for example, an ideal gas makes a natural reference state for gas mixtures since the fugacity and pressure converge at low pressure).
Gases
In a mixture of gases, the fugacity of each component i has a similar definition, with partial molar quantities instead of molar quantities (e.g., G_i instead of G_m and V_i instead of V_m): dμ_i = RT d(ln f_i)
and f_i/P_i → 1 as the total pressure goes to zero,
where P_i is the partial pressure of component i. The partial pressures obey Dalton's law: P_i = y_i P,
where P is the total pressure and y_i is the mole fraction of the component (so the partial pressures add up to the total pressure). The fugacities commonly obey a similar law called the Lewis and Randall rule: f_i = y_i f_i*,
where f_i* is the fugacity that component i would have if the entire gas had that composition at the same temperature and pressure. Both laws are expressions of an assumption that the gases behave independently.
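A short sketch of the Lewis and Randall rule for an assumed two-component gas is given below; the mole fractions and the pure-gas fugacity coefficients are made-up illustrative numbers, not data for any particular mixture.

```python
# Lewis and Randall rule: f_i = y_i * f_i*, with f_i* = phi_i* * P
# (phi_i* is the fugacity coefficient of pure i at the mixture's T and P).
P = 50.0                          # total pressure, bar (illustrative)
y = {"A": 0.7, "B": 0.3}          # mole fractions (illustrative)
phi_pure = {"A": 0.95, "B": 0.88} # assumed pure-gas fugacity coefficients

for comp, yi in y.items():
    partial_pressure = yi * P              # Dalton's law
    fugacity = yi * phi_pure[comp] * P     # Lewis and Randall rule
    print(f"{comp}: P_i = {partial_pressure:.1f} bar, f_i = {fugacity:.1f} bar")
```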
Liquids
In a liquid mixture, the fugacity of each component is equal to that of a vapor component in equilibrium with the liquid. In an ideal solution, the fugacities obey the Lewis-Randall rule: f_i = x_i f_i*,
where x_i is the mole fraction in the liquid and f_i* is the fugacity of the pure liquid phase. This is a good approximation when the component molecules have similar size, shape and polarity.
In a dilute solution with two components, the component with the larger molar fraction (the solvent) may still obey Raoult's law even if the other component (the solute) has different properties. That is because its molecules experience essentially the same environment that they do in the absence of the solute. By contrast, each solute molecule is surrounded by solvent molecules, so it obeys a different law known as Henry's law. By Henry's law, the fugacity of the solute is proportional to its concentration. The constant of proportionality (a measured Henry's constant) depends on whether the concentration is represented by the mole fraction, molality or molarity.
Temperature and pressure dependence
The pressure dependence of fugacity (at constant temperature) is given by (∂ ln f / ∂P)_T = V_m/(RT)
and is always positive.
The temperature dependence at constant pressure is (∂ ln f / ∂T)_P = ΔH_m/(RT²),
where ΔH_m is the change in molar enthalpy as the gas expands, liquid vaporizes, or solid sublimates into a vacuum. Also, if the pressure is , then
Since the temperature and entropy are positive, decreases with increasing temperature.
Measurement
The fugacity can be deduced from measurements of volume as a function of pressure at constant temperature. In that case, RT ln(f/P) = ∫₀^P (V_m - RT/p) dp.
This integral can also be calculated using an equation of state.
The integral can be recast in an alternative form using the compressibility factor Z = P V_m/(RT).
Then ln(f/P) = ∫₀^P (Z - 1) dp/p.
This is useful because of the theorem of corresponding states: If the pressure and temperature at the critical point of the gas are P_c and T_c, we can define reduced properties P_r = P/P_c and T_r = T/T_c. Then, to a good approximation, most gases have the same value of Z for the same reduced temperature and pressure. However, in geochemical applications, this principle ceases to be accurate at pressures where metamorphism occurs.
For a gas obeying the van der Waals equation, the explicit formula for the fugacity coefficient is RT ln φ = RT ln( RT/(P(V_m - b)) ) + RT b/(V_m - b) - 2a/V_m.
This formula is based on the molar volume. Since the pressure and the molar volume are related through the equation of state, a typical procedure would be to choose a volume, calculate the corresponding pressure, and then evaluate the right-hand side of the equation.
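A minimal numerical sketch of that procedure is shown below, using approximate literature van der Waals constants for nitrogen (a ≈ 0.137 Pa·m⁶/mol², b ≈ 3.87 × 10⁻⁵ m³/mol); the chosen molar volume is arbitrary, and the constants should be checked against a data table before relying on the numbers.

```python
import math

R = 8.314        # J/(mol*K)
T = 273.15       # K
a = 0.137        # Pa*m^6/mol^2, approximate van der Waals constant for N2
b = 3.87e-5      # m^3/mol, approximate van der Waals constant for N2

V = 2.0e-4       # chosen molar volume, m^3/mol (arbitrary illustration)

# Step 1: pressure from the van der Waals equation of state
P = R * T / (V - b) - a / V**2

# Step 2: explicit expression for the fugacity coefficient (divided through by RT)
ln_phi = math.log(R * T / (P * (V - b))) + b / (V - b) - 2 * a / (R * T * V)
print(f"P = {P/1e5:.1f} bar, phi = {math.exp(ln_phi):.4f}")
```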
History
The word fugacity is derived from the Latin fugere, to flee. In the sense of an "escaping tendency", it was introduced to thermodynamics in 1901 by the American chemist Gilbert N. Lewis and popularized in an influential textbook by Lewis and Merle Randall, Thermodynamics and the Free Energy of Chemical Substances, in 1923. The "escaping tendency" referred to the flow of matter between phases and played a similar role to that of temperature in heat flow.
See also
Electrochemical potential
Excess chemical potential
Fugacity capacity
Multimedia fugacity model
Thermodynamic equilibrium
References
Further reading
External links
Video lectures
Thermodynamics, University of Colorado-Boulder, 2011
Introduction to fugacity: Where did it come from?
What is fugacity?
What is fugacity in mixtures?
Physical chemistry
Chemical thermodynamics
Thermodynamic properties
State functions
Bioaccumulation
Bioaccumulation is the gradual accumulation of substances, such as pesticides or other chemicals, in an organism. Bioaccumulation occurs when an organism absorbs a substance faster than it can be lost or eliminated by catabolism and excretion. Thus, the longer the biological half-life of a toxic substance, the greater the risk of chronic poisoning, even if environmental levels of the toxin are not very high. Bioaccumulation, for example in fish, can be predicted by models. Hypotheses for molecular size cutoff criteria for use as bioaccumulation potential indicators are not supported by data. Biotransformation can strongly modify bioaccumulation of chemicals in an organism.
Toxicity induced by metals is associated with bioaccumulation and biomagnification. Storage or uptake of a metal faster than it is metabolized and excreted leads to the accumulation of that metal. With proper knowledge of bioaccumulation, the presence of various chemicals and harmful substances in the environment can be analyzed and assessed, which helps guide chemical control and usage.
An organism can take up chemicals by breathing, absorbing through skin or swallowing. When the concentration of a chemical is higher within the organism compared to its surroundings (air or water), it is referred to as bioconcentration. Biomagnification is another process related to bioaccumulation, in which the concentration of a chemical or metal increases as it moves up from one trophic level to another. Naturally, the process of bioaccumulation is necessary for an organism to grow and develop; however, the accumulation of harmful substances can also occur.
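Bioaccumulation and bioconcentration are often described with simple one-compartment toxicokinetic models, in which a chemical enters an organism at a rate proportional to its concentration in the surrounding water and is lost by first-order elimination. The sketch below illustrates such a model; the rate constants and water concentration are hypothetical values chosen only to show the shape of the curve.

```python
import math

k_uptake = 50.0    # uptake clearance, L/(kg*day) -- hypothetical
k_elim = 0.10      # elimination rate constant, 1/day -- hypothetical
C_water = 0.002    # chemical concentration in water, mg/L -- hypothetical

# One-compartment model: dC/dt = k_uptake*C_water - k_elim*C
# Analytical solution from a clean start (C(0) = 0):
def tissue_conc(t_days):
    return (k_uptake / k_elim) * C_water * (1.0 - math.exp(-k_elim * t_days))

bcf = k_uptake / k_elim   # steady-state bioconcentration factor, L/kg
print(f"Bioconcentration factor at steady state: {bcf:.0f} L/kg")
for t in (1, 10, 30, 100):
    print(f"day {t:>3}: tissue concentration = {tissue_conc(t):.3f} mg/kg")
```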
Examples
Terrestrial examples
An example of poisoning in the workplace can be seen from the phrase "mad as a hatter" (18th and 19th century England). Mercury was used in stiffening the felt that was used to make hats. This forms organic species such as methylmercury, which is lipid-soluble (fat-soluble), and tends to accumulate in the brain, resulting in mercury poisoning. Other lipid-soluble poisons include tetraethyllead compounds (the lead in leaded petrol), and DDT. These compounds are stored in the body fat, and when the fatty tissues are used for energy, the compounds are released and cause acute poisoning.
Strontium-90, part of the fallout from atomic bombs, is chemically similar enough to calcium that it is taken up in forming bones, where its radiation can cause damage for a long time.
Some animal species use bioaccumulation as a mode of defense: by consuming toxic plants or animal prey, an animal may accumulate the toxin, which then presents a deterrent to a potential predator. One example is the tobacco hornworm, which concentrates nicotine to a toxic level in its body as it consumes tobacco plants. Poisoning of small consumers can be passed along the food chain to affect the consumers later in the chain.
Other compounds that are not normally considered toxic can be accumulated to toxic levels in organisms. The classic example is vitamin A, which becomes concentrated in the livers of carnivores, e.g. polar bears: as pure carnivores that feed on other carnivores (seals), they accumulate extremely large amounts of vitamin A in their livers. It was known by the native peoples of the Arctic that the livers of carnivores should not be eaten, but Arctic explorers have suffered hypervitaminosis A from eating the livers of bears; and there has been at least one example of similar poisoning of Antarctic explorers eating husky dog livers. One notable example of this is the expedition of Sir Douglas Mawson, whose exploration companion died from eating the liver of one of their dogs.
Aquatic examples
Coastal fish (such as the smooth toadfish) and seabirds (such as the Atlantic puffin) are often monitored for heavy metal bioaccumulation. Methylmercury gets into freshwater systems through industrial emissions and rain. As its concentration increases up the food web, it can reach dangerous levels for both fish and the humans who rely on fish as a food source.
Fish are typically assessed for bioaccumulation when they have been exposed to chemicals that are in their aqueous phases. Commonly tested fish species include the common carp, rainbow trout, and bluegill sunfish. Generally, fish are exposed to bioconcentration and bioaccumulation of organic chemicals in the environment through lipid layer uptake of water-borne chemicals. In other cases, the fish are exposed through ingestion/digestion of substances or organisms in the aquatic environment which contain the harmful chemicals.
Naturally produced toxins can also bioaccumulate. The marine algal blooms known as "red tides" can result in local filter-feeding organisms such as mussels and oysters becoming toxic; coral reef fish can be responsible for the poisoning known as ciguatera when they accumulate a toxin called ciguatoxin from reef algae. In some eutrophic aquatic systems, biodilution can occur. This is a decrease in a contaminant with an increase in trophic level, due to higher concentrations of algae and bacteria diluting the concentration of the pollutant.
Wetland acidification can raise the chemical or metal concentrations, which leads to increased bioavailability in marine plants and freshwater biota. Plants situated there, which include both rooted and submerged plants, can be influenced by the bioavailability of metals.
Studies of turtles as model species
Bioaccumulation in turtles occurs when synthetic organic contaminants (i.e., PFAS), heavy metals, or high levels of trace elements enter a singular organism, potentially affecting their health. Although there are ongoing studies of bioaccumulation in turtles, factors like pollution, climate change, and shifting landscape can affect the amounts of these toxins in the ecosystem.
The most common elements studied in turtles are mercury, cadmium, arsenic, and selenium. Heavy metals are released into rivers, streams, lakes, oceans, and other aquatic environments, and the plants that live in these environments will absorb the metals. Since the levels of trace elements are high in aquatic ecosystems, turtles will naturally consume trace elements across these aquatic environments by eating plants and sediments. Once these substances enter the bloodstream and muscle tissue, they will increase in concentration and will become toxic to the turtles, perhaps causing metabolic, endocrine system, and reproductive failure.
Some marine turtles are used as experimental subjects to analyze bioaccumulation because of their shoreline habitats, which facilitate the collection of blood samples and other data. The turtle species are very diverse and contribute greatly to biodiversity, so many researchers find it valuable to collect data from various species. Freshwater turtles are another model species for investigating bioaccumulation. Due to their relatively limited home range, freshwater turtles can be associated with a particular catchment and its chemical contaminant profile.
Developmental effects of turtles
Toxic concentrations in turtle eggs may damage the developmental process of the turtle. For example, in the Australian freshwater short-neck turtle (Emydura macquarii macquarii), environmental PFAS concentrations were bioaccumulated by the mother and then offloaded into her eggs, which impacted developmental metabolic processes and fat stores. Furthermore, there is evidence that PFAS impacted the gut microbiome in exposed turtles.
Toxic levels of heavy metals have also been observed to decrease egg-hatching rates in the Amazon River turtle, Podocnemis expansa. In this turtle's eggs, the heavy metals reduce the fat content and change how water is filtered through the embryo, which can affect the survival rate of the egg.
See also
Biomagnification (magnification of toxins with increasing trophic level)
Chelation therapy
Drug accumulation ratio
Environmental impact of pesticides
International POPs Elimination Network
Persistent organic pollutants
Phytoremediation (removal of pollutants by bioaccumulation in plants)
References
External links
Bioaccumulation & Biomagnification
Biomagnification graphic
Biomagnification Definition Page
Criteria used by the PBT Profiler
Bioaccumulation & Biotransformation
Biodegradable waste management
Biodegradation
Ecotoxicology
Food chains
Pollution
Species
Management science
Management science (or managerial science) is a wide and interdisciplinary study of solving complex problems and making strategic decisions as it pertains to institutions, corporations, governments and other types of organizational entities. It is closely related to management, economics, business, engineering, management consulting, and other fields. It uses various scientific research-based principles, strategies, and analytical methods including mathematical modeling, statistics and numerical algorithms and aims to improve an organization's ability to enact rational and accurate management decisions by arriving at optimal or near optimal solutions to complex decision problems.
Management science looks to help businesses achieve goals using a number of scientific methods. The field was initially an outgrowth of applied mathematics, where early challenges were problems relating to the optimization of systems which could be modeled linearly, i.e., determining the optima (maximum value of profit, assembly line performance, crop yield, bandwidth, etc. or minimum of loss, risk, costs, etc.) of some objective function. Today, the discipline of management science may encompass a diverse range of managerial and organizational activity as it regards to a problem which is structured in mathematical or other quantitative form in order to derive managerially relevant insights and solutions.
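As a toy illustration of the kind of linear optimization mentioned above, the sketch below maximizes profit for two hypothetical products subject to machine-time and labour constraints; all coefficients are invented for the example.

```python
# Maximize profit 40*x1 + 30*x2 subject to resource limits (all numbers invented).
# scipy's linprog minimizes, so the objective coefficients are negated.
from scipy.optimize import linprog

c = [-40, -30]                      # negative profit per unit of products 1 and 2
A_ub = [[2, 1],                     # machine hours used per unit
        [1, 2]]                     # labour hours used per unit
b_ub = [100, 80]                    # available machine and labour hours

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("Optimal production plan:", res.x)       # expected: 40 units of 1, 20 of 2
print("Maximum profit:", -res.fun)             # expected: 2200
```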
Overview
Management science is concerned with a number of areas of study:
Developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems. The models used can often be represented mathematically, but sometimes computer-based, visual or verbal representations are used as well or instead.
Designing and developing new and better models of organizational excellence.
Helping to improve, stabilize or otherwise manage profit margins in enterprises.
Management science research can be done on three levels:
The fundamental level lies in three mathematical disciplines: probability, optimization, and dynamical systems theory.
The modeling level is about building models, analyzing them mathematically, gathering and analyzing data, implementing models on computers, solving them, experimenting with them—all this is part of management science research on the modeling level. This level is mainly instrumental, and driven mainly by statistics and econometrics.
The application level, just as in any other engineering and economics disciplines, strives to make a practical impact and be a driver for change in the real world.
The management scientist's mandate is to use rational, systematic and science-based techniques to inform and improve decisions of all kinds. The techniques of management science are not restricted to business applications but may be applied to military, medical, public administration, charitable groups, political groups or community groups. The norm for scholars in management science is to focus their work in a certain area or subfield of management like public administration, finance, calculus, information and so forth.
History
Although management science as it exists now covers a myriad of topics having to do with coming up with solutions that increase the efficiency of a business, it was not even a field of study in the not too distant past. There are a number of businessmen and management specialists who can receive credit for the creation of the idea of management science. Most commonly, however, the founder of the field is considered to be Frederick Winslow Taylor in the early 20th century. Likewise, administration expert Luther Gulick and management expert Peter Drucker both had an impact on the development of management science in the 1930s and 1940s. Drucker is quoted as having said that, "the purpose of the corporation is to be economically efficient." This thought process is foundational to management science. Even before the influence of these men, there was Louis Brandeis who became known as "the people's lawyer". In 1910, Brandeis was the creator of a new business approach which he coined as "scientific management", a term that is often falsely attributed to the aforementioned Frederick Winslow Taylor.
These men represent some of the earliest ideas of management science at its conception. After the idea was born, it was further explored around the time of World War II. It was at this time that management science became more than an idea and was put into practice. This sort of experimentation was essential to the development of the field as it is known today.
The origins of management science can be traced to operations research, which became influential during World War II when the Allied forces recruited scientists of various disciplines to assist with military operations. In these early applications, the scientists used simple mathematical models to make efficient use of limited technologies and resources. The application of these models to the corporate sector became known as management science.
In 1967 Stafford Beer characterized the field of management science as "the business use of operations research".
Theory
Some of the fields that management science involves include:
Contract theory
Data mining
Decision analysis
Engineering
Forecasting
Marketing
Finance
Operations
Game theory
Industrial engineering
Logistics
Management consulting
Mathematical modeling
Optimization
Operational research
Probability and statistics
Project management
Psychology
Simulation
Social network / Transportation forecasting models
Sociology
Supply chain management
Applications
The applications of management science are diverse, allowing its use in many fields. Below are examples of the applications of management science.
In finance, management science is instrumental in portfolio optimization, risk management, and investment strategies. By employing mathematical models, analysts can assess market trends, optimize asset allocation, and mitigate financial risks, contributing to more informed and strategic decision-making.
In healthcare, management science plays a crucial role in optimizing resource allocation, patient scheduling, and facility management. Mathematical models aid healthcare professionals in streamlining operations, reducing waiting times, and improving overall efficiency in the delivery of care.
Logistics and supply chain management benefit significantly from management science applications. Optimization algorithms assist in route planning, inventory management, and demand forecasting, enhancing the efficiency of the entire supply chain.
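For instance, a classic inventory-management result used in this area is the economic order quantity (EOQ), which balances ordering and holding costs; the demand and cost figures in the sketch below are hypothetical.

```python
import math

D = 12000    # annual demand, units/year (hypothetical)
S = 150.0    # fixed cost per order (hypothetical)
H = 2.5      # holding cost per unit per year (hypothetical)

# Economic order quantity: EOQ = sqrt(2*D*S / H)
eoq = math.sqrt(2 * D * S / H)
orders_per_year = D / eoq
print(f"EOQ = {eoq:.0f} units per order, about {orders_per_year:.1f} orders per year")
```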
In manufacturing, management science supports process optimization, production planning, and quality control. Mathematical models help identify bottlenecks, reduce production costs, and enhance overall productivity.
Furthermore, management science contributes to strategic decision-making in project management, marketing, and human resources. By leveraging quantitative techniques, organizations can make data-driven decisions, allocate resources effectively, and enhance overall performance across diverse functional areas.
In summary, the applications of management science are far-reaching, providing valuable insights and solutions across a spectrum of industries, ultimately fostering more efficient and effective decision-making processes.
See also
Fayolism
Institute for Operations Research and the Management Sciences
John von Neumann Theory Prize
Managerial economics
Management engineering
Management cybernetics
Innovation management
Organization studies
Outline of management
References
Further reading
Kenneth R. Baker, Dean H. Kropp (1985). Management Science: An Introduction to the Use of Decision Models
David Charles Heinze (1982). Management Science: Introductory Concepts and Applications
Lee J. Krajewski, Howard E. Thompson (1981). "Management Science: Quantitative Methods in Context"
Thomas W. Knowles (1989). Management science: Building and Using Models
Kamlesh Mathur, Daniel Solow (1994). Management Science: The Art of Decision Making
Laurence J. Moore, Sang M. Lee, Bernard W. Taylor (1993). Management Science
William Thomas Morris (1968). Management Science: A Bayesian Introduction.
William E. Pinney, Donald B. McWilliams (1987). Management Science: An Introduction to Quantitative Analysis for Management
Gerald E. Thompson (1982). Management Science: An Introduction to Modern Quantitative Analysis and Decision Making. New York: McGraw-Hill Publishing Co.
Operations research
Behavioural sciences
Biomechanics
Biomechanics is the study of the structure, function and motion of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics. Biomechanics is a branch of biophysics.
Today computational mechanics goes far beyond pure mechanics, and involves other physical actions: chemistry, heat and mass transfer, electric and magnetic stimuli and many others.
Etymology
The word "biomechanics" (1899) and the related "biomechanical" (1856) come from the Ancient Greek βίος bios "life" and μηχανική, mēchanikē "mechanics", to refer to the study of the mechanical principles of living organisms, particularly their movement and structure.
Subfields
Biofluid mechanics
Biological fluid mechanics, or biofluid mechanics, is the study of both gas and liquid fluid flows in or around biological organisms. An often studied liquid biofluid problem is that of blood flow in the human cardiovascular system. Under certain mathematical circumstances, blood flow can be modeled by the Navier–Stokes equations. In vivo whole blood is assumed to be an incompressible Newtonian fluid. However, this assumption fails when considering forward flow within arterioles. At the microscopic scale, the effects of individual red blood cells become significant, and whole blood can no longer be modeled as a continuum. When the diameter of the blood vessel is just slightly larger than the diameter of the red blood cell the Fahraeus–Lindquist effect occurs and there is a decrease in wall shear stress. However, as the diameter of the blood vessel decreases further, the red blood cells have to squeeze through the vessel and often can only pass in a single file. In this case, the inverse Fahraeus–Lindquist effect occurs and the wall shear stress increases.
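For fully developed laminar (Poiseuille) flow in a straight vessel, the wall shear stress can be estimated from the flow rate, viscosity and radius as τ_w = 4μQ/(πR³). The sketch below applies this to rough, order-of-magnitude values for a medium-sized artery; the numbers are illustrative, not patient data.

```python
import math

mu = 3.5e-3    # apparent blood viscosity, Pa*s (rough value)
Q = 5.0e-6     # volumetric flow rate, m^3/s (about 300 mL/min, illustrative)
R = 2.0e-3     # vessel radius, m (illustrative)

# Poiseuille wall shear stress: tau_w = 4*mu*Q / (pi*R^3)
tau_w = 4 * mu * Q / (math.pi * R**3)
print(f"Wall shear stress ~ {tau_w:.2f} Pa")
```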
An example of a gaseous biofluids problem is that of human respiration. Recently, respiratory systems in insects have been studied for bioinspiration for designing improved microfluidic devices.
Biotribology
Biotribology is the study of friction, wear and lubrication of biological systems, especially human joints such as hips and knees. In general, these processes are studied in the context of contact mechanics and tribology.
Additional aspects of biotribology include analysis of subsurface damage resulting from two surfaces coming in contact during motion, i.e. rubbing against each other, such as in the evaluation of tissue-engineered cartilage.
Comparative biomechanics
Comparative biomechanics is the application of biomechanics to non-human organisms, whether used to gain greater insights into humans (as in physical anthropology) or into the functions, ecology and adaptations of the organisms themselves. Common areas of investigation are animal locomotion and feeding, as these have strong connections to the organism's fitness and impose high mechanical demands. Animal locomotion has many manifestations, including running, jumping and flying. Locomotion requires energy to overcome friction, drag, inertia, and gravity, though which factor predominates varies with environment.
Comparative biomechanics overlaps strongly with many other fields, including ecology, neurobiology, developmental biology, ethology, and paleontology, to the extent of commonly publishing papers in the journals of these other fields. Comparative biomechanics is often applied in medicine (with regards to common model organisms such as mice and rats) as well as in biomimetics, which looks to nature for solutions to engineering problems.
Computational biomechanics
Computational biomechanics is the application of engineering computational tools, such as the finite element method, to study the mechanics of biological systems. Computational models and simulations are used to predict the relationship between parameters that are otherwise challenging to test experimentally, or used to design more relevant experiments, reducing the time and costs of experiments. Mechanical modeling using finite element analysis has been used to interpret the experimental observation of plant cell growth to understand how they differentiate, for instance. In medicine, over the past decade, the finite element method has become an established alternative to in vivo surgical assessment. One of the main advantages of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomy without being subject to ethical restrictions. This has led FE modeling (or other discretization techniques) to the point of becoming ubiquitous in several fields of biomechanics, while several projects have even adopted an open-source philosophy (e.g., BioSpine and SOniCS, as well as the SOFA and FEniCS frameworks and FEBio).
Computational biomechanics is an essential ingredient in surgical simulation, which is used for surgical planning, assistance, and training. In this case, numerical (discretization) methods are used to compute, as fast as possible, a system's response to boundary conditions such as forces, heat and mass transfer, and electrical and magnetic stimuli.
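As a toy illustration of the finite element idea described above, the sketch below solves a one-dimensional linear-elastic bar (a crude stand-in for a tissue or implant segment) fixed at one end and loaded at the other; the element count, modulus, area and load are arbitrary illustrative values, and real surgical simulators use far richer models.

```python
import numpy as np

# 1D bar: n_el linear elements, fixed at node 0, axial force F at the free end.
n_el = 4              # number of elements (illustrative)
L = 0.1               # total length, m
E = 1.0e7             # Young's modulus, Pa (soft-tissue-like, illustrative)
A = 1.0e-4            # cross-sectional area, m^2
F = 2.0               # applied axial force, N

le = L / n_el
k_el = E * A / le * np.array([[1.0, -1.0], [-1.0, 1.0]])   # element stiffness

# Assemble the global stiffness matrix
n_nodes = n_el + 1
K = np.zeros((n_nodes, n_nodes))
for e in range(n_el):
    K[e:e+2, e:e+2] += k_el

# Load vector and boundary condition u(0) = 0
f = np.zeros(n_nodes)
f[-1] = F
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])   # eliminate the fixed degree of freedom

print("Nodal displacements (m):", u)
print("Tip displacement, exact F*L/(E*A):", F * L / (E * A))
```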
Continuum biomechanics
The mechanical analysis of biomaterials and biofluids is usually carried forth with the concepts of continuum mechanics. This assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure. In other words, the mechanical characteristics of these materials rely on physical phenomena occurring in multiple levels, from the molecular all the way up to the tissue and organ levels.
Biomaterials are classified into two groups: hard and soft tissues. Mechanical deformation of hard tissues (like wood, shell and bone) may be analysed with the theory of linear elasticity. On the other hand, soft tissues (like skin, tendon, muscle, and cartilage) usually undergo large deformations, and thus, their analysis relies on the finite strain theory and computer simulations. The interest in continuum biomechanics is spurred by the need for realism in the development of medical simulation.
Neuromechanics
Neuromechanics uses a biomechanical approach to better understand how the brain and nervous system interact to control the body. During motor tasks, motor units activate a set of muscles to perform a specific movement, which can be modified via motor adaptation and learning. In recent years, neuromechanical experiments have been enabled by combining motion capture tools with neural recordings.
Plant biomechanics
The application of biomechanical principles to plants, plant organs and cells has developed into the subfield of plant biomechanics. Application of biomechanics for plants ranges from studying the resilience of crops to environmental stress to development and morphogenesis at cell and tissue scale, overlapping with mechanobiology.
Sports biomechanics
In sports biomechanics, the laws of mechanics are applied to human movement in order to gain a greater understanding of athletic performance and to reduce sport injuries as well. It focuses on the application of the scientific principles of mechanical physics to understand the movements and actions of human bodies and sports implements such as cricket bats, hockey sticks and javelins. Elements of mechanical engineering (e.g., strain gauges), electrical engineering (e.g., digital filtering), computer science (e.g., numerical methods), gait analysis (e.g., force platforms), and clinical neurophysiology (e.g., surface EMG) are common methods used in sports biomechanics.
Biomechanics in sports can be stated as the body's muscular, joint, and skeletal actions while executing a given task, skill, or technique. Understanding biomechanics relating to sports skills has the greatest implications for sports performance, rehabilitation and injury prevention, and sports mastery. As noted by Doctor Michael Yessis, one could say that the best athlete is the one that executes his or her skill the best.
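Digital filtering of motion-capture or force-platform signals is one of the routine steps mentioned above. The sketch below applies a zero-lag low-pass Butterworth filter to a synthetic noisy marker trajectory using SciPy; the sampling rate, cutoff frequency and signal are all made up for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200.0          # sampling rate, Hz (typical for motion capture, illustrative)
cutoff = 6.0        # low-pass cutoff, Hz (a common choice for gait data, illustrative)

# Synthetic marker trajectory: slow movement plus high-frequency noise
t = np.arange(0, 2, 1 / fs)
raw = np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.random.randn(t.size)

# 4th-order Butterworth low-pass, applied forward and backward for zero phase lag
b, a = butter(4, cutoff / (fs / 2), btype="low")
smooth = filtfilt(b, a, raw)

print("RMS of removed noise:", np.sqrt(np.mean((raw - smooth) ** 2)))
```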
Vascular biomechanics
The main topic of vascular biomechanics is the description of the mechanical behaviour of vascular tissues.
It is well known that cardiovascular disease is the leading cause of death worldwide. The vascular system in the human body is the main component that is supposed to maintain pressure and allow for blood flow and chemical exchanges. Studying the mechanical properties of these complex tissues improves the possibility of better understanding cardiovascular diseases and drastically improves personalized medicine.
Vascular tissues are inhomogeneous with a strongly nonlinear behaviour. Generally this study involves complex geometry with intricate load conditions and material properties. The correct description of these mechanisms is based on the study of physiology and biological interaction. It is therefore necessary to study wall mechanics and hemodynamics together with their interaction.
It is also necessary to note that the vascular wall is a dynamic structure in continuous evolution. This evolution directly follows the chemical and mechanical environment in which the tissues are immersed, such as wall shear stress or biochemical signaling.
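A first estimate of the mechanical load on a vessel wall often comes from the thin-walled (Laplace) approximation, in which the circumferential (hoop) stress is σ = P·r/t. The sketch below evaluates this for rough aortic dimensions; the pressure, radius and wall thickness are order-of-magnitude values for illustration, not measurements.

```python
P = 13.3e3     # transmural pressure, Pa (~100 mmHg, illustrative)
r = 0.012      # inner radius, m (illustrative aortic value)
t = 0.002      # wall thickness, m (illustrative)

# Thin-walled approximation (Laplace's law) for circumferential stress
sigma_hoop = P * r / t
print(f"Hoop stress ~ {sigma_hoop/1e3:.0f} kPa")
```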
Immunomechanics
The emerging field of immunomechanics focuses on characterising the mechanical properties of immune cells and their functional relevance. The mechanics of immune cells can be characterised using various force spectroscopy approaches such as acoustic force spectroscopy and optical tweezers, and these measurements can be performed under physiological conditions (e.g. temperature). Furthermore, one can study the link between immune cell mechanics and immunometabolism and immune signalling. The term "immunomechanics" is sometimes used interchangeably with immune cell mechanobiology or cell mechanoimmunology.
Other applied subfields of biomechanics include
Allometry
Animal locomotion and Gait analysis
Biotribology
Biofluid mechanics
Cardiovascular biomechanics
Comparative biomechanics
Computational biomechanics
Ergonomy
Forensic Biomechanics
Human factors engineering and occupational biomechanics
Injury biomechanics
Implant (medicine), Orthotics and Prosthesis
Kinaesthetics
Kinesiology (kinetics + physiology)
Musculoskeletal and orthopedic biomechanics
Rehabilitation
Soft body dynamics
Sports biomechanics
History
Antiquity
Aristotle, a student of Plato, can be considered the first bio-mechanic because of his work with animal anatomy. Aristotle wrote the first book on the motion of animals, De Motu Animalium, or On the Movement of Animals. He saw animals' bodies as mechanical systems, and pursued questions such as the physiological difference between imagining the performance of an action and actually performing it. In another work, On the Parts of Animals, he provided an accurate description of how the ureter uses peristalsis to carry urine from the kidneys to the bladder.
With the rise of the Roman Empire, technology became more popular than philosophy and the next bio-mechanic arose. Galen (129 AD-210 AD), physician to Marcus Aurelius, wrote his famous work, On the Function of the Parts (about the human body). This would be the world's standard medical book for the next 1,400 years.
Renaissance
The next major biomechanic would not appear until the 1490s, with the studies of human anatomy and biomechanics by Leonardo da Vinci. He had a great understanding of science and mechanics and studied anatomy in a mechanical context: he analyzed muscle forces as acting along lines connecting origins and insertions, and studied joint function. These studies could be considered studies in the realm of biomechanics. Da Vinci is also known for mimicking some animal features in his machines. For example, he studied the flight of birds to find means by which humans could fly; and because horses were the principal source of mechanical power in that time, he studied their muscular systems to design machines that would better benefit from the forces applied by this animal.
In 1543, Galen's work, On the Function of the Parts, was challenged by Andreas Vesalius at the age of 29. Vesalius published his own work, On the Structure of the Human Body. In this work, Vesalius corrected many errors made by Galen, which would not be globally accepted for many centuries. With the death of Copernicus came a new desire to understand and learn about the world around people and how it works. On his deathbed, Copernicus published his work, On the Revolutions of the Heavenly Spheres. This work not only revolutionized science and physics, but also the development of mechanics and later bio-mechanics.
Galileo Galilei, the father of mechanics and a part-time biomechanic, was born 21 years after the death of Copernicus. Over his years of science, Galileo made many biomechanical observations. For example, he discovered that "animals' masses increase disproportionately to their size, and their bones must consequently also disproportionately increase in girth, adapting to loadbearing rather than mere size. The bending strength of a tubular structure such as a bone is increased relative to its weight by making it hollow and increasing its diameter. Marine animals can be larger than terrestrial animals because the water's buoyancy relieves their tissues of weight."
Galileo Galilei was interested in the strength of bones and suggested that bones are hollow because this affords maximum strength with minimum weight. He noted that animals' bone masses increased disproportionately to their size. Consequently, bones must also increase disproportionately in girth rather than mere size. This is because the bending strength of a tubular structure (such as a bone) is much more efficient relative to its weight. Mason suggests that this insight was one of the first grasps of the principles of biological optimization.
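Galileo's observation can be made quantitative with the section properties of a circular cross-section: bending resistance scales with the second moment of area I = π(D⁴ − d⁴)/64, while weight scales with the cross-sectional area. The sketch below compares a solid rod with a hollow tube of the same mass; the dimensions are arbitrary illustrative choices.

```python
import math

def area(D, d=0.0):
    # Cross-sectional area of a (possibly hollow) circular section
    return math.pi * (D**2 - d**2) / 4.0

def second_moment(D, d=0.0):
    # Second moment of area about the bending axis
    return math.pi * (D**4 - d**4) / 64.0

# Solid rod of diameter 20 mm (arbitrary)
D_solid = 0.020
A_solid = area(D_solid)

# Hollow tube with twice the outer diameter, inner diameter chosen for equal mass
D_out = 0.040
d_in = math.sqrt(D_out**2 - D_solid**2)   # same cross-sectional area => same weight
assert abs(area(D_out, d_in) - A_solid) < 1e-12

ratio = second_moment(D_out, d_in) / second_moment(D_solid)
print(f"Equal-weight hollow tube is {ratio:.1f}x stiffer in bending than the solid rod")
```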
In the 17th century, Descartes suggested a philosophic system whereby all living systems, including the human body (but not the soul), are simply machines ruled by the same mechanical laws, an idea that did much to promote and sustain biomechanical study.
Industrial era
The next major bio-mechanic, Giovanni Alfonso Borelli, embraced Descartes' mechanical philosophy and studied walking, running, jumping, the flight of birds, the swimming of fish, and even the piston action of the heart within a mechanical framework. He could determine the position of the human center of gravity, calculate and measure inspired and expired air volumes, and he showed that inspiration is muscle-driven and expiration is due to tissue elasticity.
Borelli was the first to understand that "the levers of the musculature system magnify motion rather than force, so that muscles must produce much larger forces than those resisting the motion". Influenced by the work of Galileo, whom he personally knew, he had an intuitive understanding of static equilibrium in various joints of the human body well before Newton published the laws of motion. His work is often considered the most important in the history of bio-mechanics because he made so many new discoveries that opened the way for the future generations to continue his work and studies.
It was many years after Borelli before the field of bio-mechanics made any major leaps. After that time, more and more scientists took to learning about the human body and its functions. There are not many notable scientists from the 19th or 20th century in bio-mechanics because the field is far too vast now to attribute one thing to one person. However, the field is continuing to grow every year and continues to make advances in discovering more about the human body. Because the field became so popular, many institutions and labs have opened over the last century and people continue doing research. With the creation of the American Society of Biomechanics in 1977, the field continues to grow and make many new discoveries.
In the 19th century Étienne-Jules Marey used cinematography to scientifically investigate locomotion. He opened the field of modern 'motion analysis' by being the first to correlate ground reaction forces with movement. In Germany, the brothers Ernst Heinrich Weber and Wilhelm Eduard Weber hypothesized a great deal about human gait, but it was Christian Wilhelm Braune who significantly advanced the science using recent advances in engineering mechanics. During the same period, the engineering mechanics of materials began to flourish in France and Germany under the demands of the Industrial Revolution. This led to the rebirth of bone biomechanics when the railroad engineer Karl Culmann and the anatomist Hermann von Meyer compared the stress patterns in a human femur with those in a similarly shaped crane. Inspired by this finding Julius Wolff proposed the famous Wolff's law of bone remodeling.
Applications
The study of biomechanics ranges from the inner workings of a cell to the movement and development of limbs, to the mechanical properties of soft tissue, and bones. Some simple examples of biomechanics research include the investigation of the forces that act on limbs, the aerodynamics of bird and insect flight, the hydrodynamics of swimming in fish, and locomotion in general across all forms of life, from individual cells to whole organisms. With growing understanding of the physiological behavior of living tissues, researchers are able to advance the field of tissue engineering, as well as develop improved treatments for a wide array of pathologies including cancer.
Biomechanics is also applied to studying human musculoskeletal systems. Such research utilizes force platforms to study human ground reaction forces and infrared videography to capture the trajectories of markers attached to the human body to study human 3D motion. Research also applies electromyography to study muscle activation, investigating muscle responses to external forces and perturbations.
Biomechanics is widely used in orthopedic industry to design orthopedic implants for human joints, dental parts, external fixations and other medical purposes. Biotribology is a very important part of it. It is a study of the performance and function of biomaterials used for orthopedic implants. It plays a vital role to improve the design and produce successful biomaterials for medical and clinical purposes. One such example is in tissue engineered cartilage. The dynamic loading of joints considered as impact is discussed in detail by Emanuel Willert.
It is also tied to the field of engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems. Applied mechanics, most notably mechanical engineering disciplines such as continuum mechanics, mechanism analysis, structural analysis, kinematics and dynamics play prominent roles in the study of biomechanics.
Usually biological systems are much more complex than man-built systems. Numerical methods are hence applied in almost every biomechanical study. Research is done in an iterative process of hypothesis and verification, including several steps of modeling, computer simulation and experimental measurements.
See also
Biomechatronics
Biomedical engineering
Cardiovascular System Dynamics Society
Evolutionary physiology
Forensic biomechanics
International Society of Biomechanics
List of biofluid mechanics research groups
Mechanics of human sexuality
OpenSim (simulation toolkit)
Physical oncology
References
Further reading
External links
Biomechanics and Movement Science Listserver (Biomch-L)
Biomechanics Links
A Genealogy of Biomechanics
Motor control
Quantum chemistry
Quantum chemistry, also called molecular quantum mechanics, is a branch of physical chemistry focused on the application of quantum mechanics to chemical systems, particularly towards the quantum-mechanical calculation of electronic contributions to physical and chemical properties of molecules, materials, and solutions at the atomic level. These calculations include systematically applied approximations intended to make calculations computationally feasible while still capturing as much information about important contributions to the computed wave functions as well as to observable properties such as structures, spectra, and thermodynamic properties. Quantum chemistry is also concerned with the computation of quantum effects on molecular dynamics and chemical kinetics.
Chemists rely heavily on spectroscopy through which information regarding the quantization of energy on a molecular scale can be obtained. Common methods are infra-red (IR) spectroscopy, nuclear magnetic resonance (NMR) spectroscopy, and scanning probe microscopy. Quantum chemistry may be applied to the prediction and verification of spectroscopic data as well as other experimental data.
Many quantum chemistry studies are focused on the electronic ground state and excited states of individual atoms and molecules as well as the study of reaction pathways and transition states that occur during chemical reactions. Spectroscopic properties may also be predicted. Typically, such studies assume the electronic wave function is adiabatically parameterized by the nuclear positions (i.e., the Born–Oppenheimer approximation). A wide variety of approaches are used, including semi-empirical methods, density functional theory, Hartree–Fock calculations, quantum Monte Carlo methods, and coupled cluster methods.
Understanding electronic structure and molecular dynamics through the development of computational solutions to the Schrödinger equation is a central goal of quantum chemistry. Progress in the field depends on overcoming several challenges, including the need to increase the accuracy of the results for small molecular systems, and to also increase the size of large molecules that can be realistically subjected to computation, which is limited by scaling considerations — the computation time increases as a power of the number of atoms.
History
Some view the birth of quantum chemistry as starting with the discovery of the Schrödinger equation and its application to the hydrogen atom. However, a 1927 article of Walter Heitler (1904–1981) and Fritz London is often recognized as the first milestone in the history of quantum chemistry. This was the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. However, prior to this a critical conceptual framework was provided by Gilbert N. Lewis in his 1916 paper The Atom and the Molecule, wherein Lewis developed the first working model of valence electrons. Important contributions were also made by Yoshikatsu Sugiura and S.C. Wang. A series of articles by Linus Pauling, written throughout the 1930s, integrated the work of Heitler, London, Sugiura, Wang, Lewis, and John C. Slater on the concept of valence and its quantum-mechanical basis into a new theoretical framework. Many chemists were introduced to the field of quantum chemistry by Pauling's 1939 text The Nature of the Chemical Bond and the Structure of Molecules and Crystals: An Introduction to Modern Structural Chemistry, wherein he summarized this work (referred to widely now as valence bond theory) and explained quantum mechanics in a way which could be followed by chemists. The text soon became a standard text at many universities. In 1937, Hans Hellmann appears to have been the first to publish a book on quantum chemistry, in the Russian and German languages.
In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding. In addition to the investigators mentioned above, important progress and critical contributions were made in the early years of this field by Irving Langmuir, Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Hans Hellmann, Maria Goeppert Mayer, Erich Hückel, Douglas Hartree, John Lennard-Jones, and Vladimir Fock.
Electronic structure
The electronic structure of an atom or molecule is the quantum state of its electrons. The first step in solving a quantum chemical problem is usually solving the Schrödinger equation (or Dirac equation in relativistic quantum chemistry) with the electronic molecular Hamiltonian, usually making use of the Born–Oppenheimer (B–O) approximation. This is called determining the electronic structure of the molecule. An exact solution for the non-relativistic Schrödinger equation can only be obtained for the hydrogen atom (though exact solutions for the bound state energies of the hydrogen molecular ion within the B-O approximation have been identified in terms of the generalized Lambert W function). Since all other atomic and molecular systems involve the motions of three or more "particles", their Schrödinger equations cannot be solved analytically and so approximate and/or computational solutions must be sought. The process of seeking computational solutions to these problems is part of the field known as computational chemistry.
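A tiny illustration of this "approximate solution" theme is the variational treatment of the hydrogen atom with a single Gaussian trial function. In atomic units the energy expectation value is E(α) = (3/2)α − 2√(2α/π), and minimizing over the width parameter α gives roughly −0.424 hartree versus the exact −0.5 hartree; the sketch below does this minimization numerically.

```python
import numpy as np

# Variational energy of a single normalized Gaussian exp(-alpha*r^2)
# for the hydrogen atom, in atomic units (hartree):
#   E(alpha) = 1.5*alpha - 2*sqrt(2*alpha/pi)
alphas = np.linspace(0.01, 2.0, 20000)
E = 1.5 * alphas - 2.0 * np.sqrt(2.0 * alphas / np.pi)

i = np.argmin(E)
print(f"best alpha ~ {alphas[i]:.4f}, E ~ {E[i]:.4f} hartree (exact: -0.5)")
# Analytic optimum for comparison: alpha = 8/(9*pi), E = -4/(3*pi) ~ -0.4244
```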
Valence bond theory
As mentioned above, Heitler and London's method was extended by Slater and Pauling to become the valence-bond (VB)
method. In this method, attention is primarily devoted to the pairwise interactions between atoms, and this method therefore correlates closely with classical chemists' drawings of bonds. It focuses on how the atomic orbitals of an atom combine to give individual chemical bonds when a molecule is formed, incorporating the two key concepts of orbital hybridization and resonance.
Molecular orbital theory
An alternative approach to valence bond theory was developed in 1929 by Friedrich Hund and Robert S. Mulliken, in which electrons are described by mathematical functions delocalized over an entire molecule. The Hund–Mulliken approach or molecular orbital (MO) method is less intuitive to chemists, but has turned out capable of predicting spectroscopic properties better than the VB method. This approach is the conceptual basis of the Hartree–Fock method and further post-Hartree–Fock methods.
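A minimal numerical example in the molecular-orbital spirit is the Hückel treatment of the π system of 1,3-butadiene: the Hamiltonian is written as H = αI + βA, with A the adjacency matrix of the four-carbon chain, and diagonalizing A gives the orbital energies in units of β relative to α. The sketch below carries this out.

```python
import numpy as np

# Adjacency matrix of the 4-carbon chain in 1,3-butadiene (Hückel pi system)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Orbital energies are E = alpha + x*beta, where x are the eigenvalues of A.
x = np.linalg.eigvalsh(A)
print("Eigenvalues (energies in units of beta relative to alpha):")
print(np.round(np.sort(x)[::-1], 3))   # expected: 1.618, 0.618, -0.618, -1.618

# With 4 pi electrons filling the two lowest orbitals (beta < 0),
# the total pi energy is 4*alpha + 2*(1.618 + 0.618)*beta = 4*alpha + 4.472*beta.
```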
Density functional theory
The Thomas–Fermi model was developed independently by Thomas and Fermi in 1927. This was the first attempt to describe many-electron systems on the basis of electronic density instead of wave functions, although it was not very successful in the treatment of entire molecules. The method did provide the basis for what is now known as density functional theory (DFT). Modern-day DFT uses the Kohn–Sham method, where the density functional is split into four terms: the Kohn–Sham kinetic energy, an external potential, and exchange and correlation energies. A large part of the focus on developing DFT is on improving the exchange and correlation terms. Though this method is less developed than post-Hartree–Fock methods, its significantly lower computational requirements (scaling typically no worse than n³ with respect to n basis functions, for the pure functionals) allow it to tackle larger polyatomic molecules and even macromolecules. This computational affordability and often comparable accuracy to MP2 and CCSD(T) (post-Hartree–Fock methods) has made it one of the most popular methods in computational chemistry.
Chemical dynamics
A further step can consist of solving the Schrödinger equation with the total molecular Hamiltonian in order to study the motion of molecules. Direct solution of the Schrödinger equation is called quantum dynamics, whereas its solution within the semiclassical approximation is called semiclassical dynamics. Purely classical simulations of molecular motion are referred to as molecular dynamics (MD). Another approach to dynamics is a hybrid framework known as mixed quantum-classical dynamics; yet another hybrid framework uses the Feynman path integral formulation to add quantum corrections to molecular dynamics, which is called path integral molecular dynamics. Statistical approaches, using for example classical and quantum Monte Carlo methods, are also possible and are particularly useful for describing equilibrium distributions of states.
Adiabatic chemical dynamics
In adiabatic dynamics, interatomic interactions are represented by single scalar potentials called potential energy surfaces. This is the Born–Oppenheimer approximation introduced by Born and Oppenheimer in 1927. Pioneering applications of this in chemistry were performed by Rice and Ramsperger in 1927 and Kassel in 1928, and generalized into the RRKM theory in 1952 by Marcus who took the transition state theory developed by Eyring in 1935 into account. These methods enable simple estimates of unimolecular reaction rates from a few characteristics of the potential surface.
Non-adiabatic chemical dynamics
Non-adiabatic dynamics consists of taking the interaction between several coupled potential energy surfaces (corresponding to different electronic quantum states of the molecule). The coupling terms are called vibronic couplings. The pioneering work in this field was done by Stueckelberg, Landau, and Zener in the 1930s, in their work on what is now known as the Landau–Zener transition. Their formula allows the transition probability between two adiabatic potential curves in the neighborhood of an avoided crossing to be calculated. Spin-forbidden reactions are one type of non-adiabatic reactions where at least one change in spin state occurs when progressing from reactant to product.
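The Landau–Zener result gives the probability of a diabatic (surface-hopping) passage through an avoided crossing as P = exp(−2π|V₁₂|² / (ħ v |ΔF|)), where V₁₂ is the diabatic coupling, v the nuclear velocity through the crossing and ΔF the difference of the diabatic potential slopes. The sketch below evaluates this for made-up parameter values, purely to show the strong dependence on the coupling.

```python
import math

hbar = 1.054571817e-34   # J*s

def landau_zener(V12, v, dF):
    """Probability of a diabatic (non-adiabatic) passage through an avoided crossing.

    V12 : diabatic coupling (J)
    v   : nuclear velocity at the crossing (m/s)
    dF  : difference of the diabatic potential slopes (N)
    """
    return math.exp(-2.0 * math.pi * V12**2 / (hbar * v * abs(dF)))

# Made-up parameters, chosen only to illustrate the trend
v = 1000.0        # m/s
dF = 1.0e-9       # N
for V12_eV in (0.001, 0.01, 0.05):
    V12 = V12_eV * 1.602176634e-19     # convert eV to J
    print(f"V12 = {V12_eV:.3f} eV -> diabatic hop probability = {landau_zener(V12, v, dF):.3e}")
```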
See also
Atomic physics
Computational chemistry
Condensed matter physics
Car–Parrinello molecular dynamics
Electron localization function
International Academy of Quantum Molecular Science
Molecular modelling
Physical chemistry
Quantum computational chemistry
List of quantum chemistry and solid-state physics software
QMC@Home
Quantum Aspects of Life
Quantum electrochemistry
Relativistic quantum chemistry
Theoretical physics
Spin forbidden reactions
References
Sources
Gavroglu, Kostas; Ana Simões: Neither Physics nor Chemistry: A History of Quantum Chemistry, MIT Press, 2011,
Karplus M., Porter R.N. (1971). Atoms and Molecules. An introduction for students of physical chemistry, Benjamin–Cummings Publishing Company,
Considers the extent to which chemistry and especially the periodic system has been reduced to quantum mechanics.
External links
The Sherrill Group – Notes
ChemViz Curriculum Support Resources
Early ideas in the history of quantum chemistry
Cloze test
A cloze test (also cloze deletion test or occlusion test) is an exercise, test, or assessment in which a portion of text is masked and the participant is asked to fill in the masked portion of text. Cloze tests require the ability to understand the context and vocabulary in order to identify the correct language or part of speech that belongs in the deleted passages. This exercise is commonly administered for the assessment of native and second language learning and instruction.
The word cloze is derived from closure in Gestalt theory. The exercise was first described by Wilson L. Taylor in 1953.
Words may be deleted from the text in question either mechanically (every nth word) or selectively, depending on exactly what aspect it is intended to test for. The methodology is the subject of extensive academic literature; nonetheless, teachers commonly devise ad hoc tests.
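A mechanical (every-nth-word) deletion is straightforward to automate. The sketch below blanks out every fifth word of a passage; it deliberately ignores punctuation handling and answer scoring, which real test generators must deal with.

```python
def make_cloze(text, n=5, blank="________"):
    """Replace every nth word with a blank and return the cloze text plus the answers."""
    words = text.split()
    answers = []
    for i in range(n - 1, len(words), n):
        answers.append(words[i])
        words[i] = blank
    return " ".join(words), answers

passage = ("The quick brown fox jumps over the lazy dog while the "
           "curious cat watches quietly from the garden wall")
cloze, answers = make_cloze(passage, n=5)
print(cloze)
print("Answers:", answers)
```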
Examples
A language teacher may give the following passage to students: "Today, I went to the ________ and bought some milk and eggs. I knew it was going to rain, but I forgot to take my ________, and ended up getting wet on the way."
Students would then be required to fill in the blanks with words that would best complete the passage. The context in language and content terms is essential in most, if not all, cloze tests. The first blank is preceded by "the"; therefore, a noun, an adjective or an adverb must follow. However, a conjunction follows the blank; the sentence would not be grammatically correct if anything other than a noun were in the blank. The words "milk and eggs" are important for deciding which noun to put in the blank; "supermarket" is a possible answer; depending on the student, however, the first blank could be store, supermarket, shop, shops, market, or grocer while umbrella, brolly or raincoat could fit the second. A possible completed passage would be: "Today, I went to the supermarket and bought some milk and eggs. I knew it was going to rain, but I forgot to take my umbrella, and ended up getting wet on the way."
Besides use for testing linguistic fluency, a cloze test may also be used for testing factual knowledge, for example: "________ is the anaerobic catabolism of glucose." Possible answers would then include lactic acid fermentation, anaerobic glycolysis, and anaerobic respiration.
Assessment
The definition of success in a given cloze test varies, depending on the broader goals behind the exercise. Assessment may depend on whether the exercise is objective (i.e. students are given a list of words to use in a cloze) or subjective (i.e. students are to fill in a cloze with words that would make a given sentence grammatically correct).
Given the above passage, students' answers may then vary depending on their vocabulary skills and their personal opinions. However, the placement of the blank at the end of the sentence restricts the possible words that may complete the sentence; following an adverb and finishing the sentence, the word is most likely an adjective. Romantic, chivalrous or gallant may, for example, occupy the blank, as well as foolish or cheesy. Using those answers, a teacher may ask students to reflect on the opinions drawn from the given cloze.
Recent research using eye-tracking suggests that cloze/gapfill items where a selection of words is given as options may test different kinds of reading skills depending on the language abilities of the participants taking the test. Lower-ability test takers are thought to be more likely to concentrate on the information contained in the words immediately surrounding the gap, while higher-ability test takers are thought to be able to use a wider context window; a similar contrast has been observed between more capable large language models, such as ChatGPT, and less capable older models.
A number of the methodological problems pointed out by researchers regarding the open-ended cloze item (for example, readers must supply a correct word from long-term memory, and it is unclear how to score acceptable responses that are not the exact deleted word) can be solved by the use of carefully designed multiple-choice cloze items, as illustrated by a sample test and practice activity from a pilot study in a rural Latin American community. Mostow and associates also showed how this approach is both practical and informative.
Implementation
In addition to its use in testing, cloze deletion can be used in learning, particularly language learning, but also for learning facts. This may be done manually – for example, by covering sections of a text with paper, or highlighting sections of text with a highlighter, then covering the line with a colored ruler in the complementary color (say, a red ruler for a green highlighter) so the highlighted text disappears; this is popular in Japan, for instance. Cloze deletion can also be used as part of spaced repetition software. For example, the SuperMemo and Anki applications feature semi-automated creation of cloze tests.
Programming software to accept all synonyms of a word as valid correct answers to a cloze test is a challenge, as all potential synonyms must be considered. An important concept that applies during the automatic creation of cloze tests by software is word clozability. Word clozability is defined as: "How often do participants who know this word guess it correctly when it is clozed in a sentence that they haven't seen before?"
Words that have a large number of synonyms will have a low word clozability score, as the likelihood that the given word will be guessed correctly is reduced. Words that are specific and have few synonyms will have a high clozability score.
Cloze deletion can also be applied to a graphic organizer, wherein a diagram, map, grid, or image is presented and contextual clues must be used to fill in some labels. In particular, when learning an image-heavy subject, such as anatomy, a user of Anki may employ an image occlusion to occlude parts of an image.
Comparison to other testing methodologies
Glover (1989) compared different forms of recall and their effectiveness after time passed for forgetting to occur. Glover referred to cloze tests as cued recall, which was found to be less effective than free recall testing (a generic cue was given to the pupil, who was expected to recall all they knew), but more effective than recognition tests.
Natural language processing
The cloze test is often used as an evaluation task in natural language processing (NLP) to assess the performance of trained language models. The task has a few different variants, such as predicting the answer for the blank with or without candidate options being provided, or predicting the ending sentence of a story or passage. Since the design of the BERT encoder, it has also been used in pre-training language models, in which case it is known as masked language modelling.
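As an illustrative sketch of the masked-language-modelling use just described (assuming the Hugging Face transformers library and the bert-base-uncased checkpoint, neither of which is specified by this article, and a made-up test sentence), a cloze-style query to a pretrained model can be as short as:

    # Minimal cloze ("fill-mask") query against a pretrained masked language model.
    # Assumes the "transformers" package is installed; the model and the sentence
    # are illustrative choices only.
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # BERT-style models mark the blank with the literal token "[MASK]".
    predictions = fill_mask("Today I went to the [MASK] and bought some milk and eggs.")

    for p in predictions:  # top candidates, highest probability first
        print(f"{p['token_str']:>12}  p={p['score']:.3f}")

Each returned candidate plays the role of a test taker's answer, and the probability the model assigns to the original deleted word can serve as a simple evaluation score.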
See also
Communicative competence
English language learning and teaching
Form letter
Mad Libs
Sentence completion tests
References
More Information
Language assessment
Ad hoc
Ad hoc is a Latin phrase meaning literally for this. In English, it typically signifies a solution designed for a specific purpose, problem, or task rather than a generalized solution adaptable to collateral instances (compare with a priori).
Common examples include ad hoc committees and commissions created at the national or international level for a specific task, and the term is often used to describe arbitration (ad hoc arbitration). In other fields, the term could refer to a military unit created under special circumstances (see task force), a handcrafted network protocol (e.g., ad hoc network), a temporary collaboration among geographically-linked franchise locations (of a given national brand) to issue advertising coupons, or a purpose-specific equation in mathematics or science.
Ad hoc can also function as an adjective describing temporary, provisional, or improvised methods to deal with a particular problem, the tendency of which has given rise to the noun adhocism. This concept highlights the flexibility and adaptability often required in problem-solving across various domains.
In everyday language, "ad hoc" is sometimes used informally to describe improvised or makeshift solutions, emphasizing their temporary nature and specific applicability to immediate circumstances.
Styling
Style guides disagree on whether Latin phrases like ad hoc should be italicized. The trend is not to use italics. For example, The Chicago Manual of Style recommends that familiar Latin phrases that are listed in the Webster's Dictionary, including "ad hoc", not be italicized.
Hypothesis
In science and philosophy, ad hoc means the addition of extraneous hypotheses to a theory to save it from being falsified. Ad hoc hypotheses compensate for anomalies not anticipated by the theory in its unmodified form.
Scientists are often skeptical of scientific theories that rely on frequent, unsupported adjustments to sustain them. Ad hoc hypotheses are often characteristic of pseudo-scientific subjects such as homeopathy.
In the military
In the military, ad hoc units are created during unpredictable situations, when the cooperation between different units is suddenly needed for fast action, or from remnants of previous units which have been overrun or otherwise whittled down.
In governance
In national and sub-national governance, ad hoc bodies may be established to deal with specific problems not easily accommodated by the current structure of governance or to address multi-faceted issues spanning several areas of governance. In the UK and other commonwealth countries, ad hoc Royal Commissions may be set up to address specific questions as directed by parliament.
In diplomacy
In diplomacy, a government may appoint special envoys, diplomats who serve on an ad hoc basis, since such envoys' offices may either not be retained by a future government or may exist only for the duration of a relevant cause.
Networking
The term ad hoc networking typically refers to a system of network elements that combine to form a network requiring little or no planning.
See also
Ad hoc testing
Ad infinitum
Ad libitum
Adhocracy
Democracy
Heuristic
House rule
Russell's teapot
Inductive reasoning
Confirmation bias
Cherry picking
References
Further reading
External links
Latin words and phrases
Endergonic reaction
In chemical thermodynamics, an endergonic reaction (also called a heat-absorbing nonspontaneous reaction or an unfavorable reaction) is a chemical reaction in which the standard change in free energy is positive, and an additional driving force is needed to perform this reaction. In layman's terms, the total amount of useful energy is negative (it takes more energy to start the reaction than what is received out of it) so the total energy is a net negative result, as opposed to a net positive result in an exergonic reaction. Another way to phrase this is that useful energy must be absorbed from the surroundings into the workable system for the reaction to happen.
Under constant temperature and constant pressure conditions, this means that the change in the standard Gibbs free energy would be positive,
ΔG° > 0
for the reaction at standard state (i.e. at standard pressure (1 bar), and standard concentrations (1 molar) of all the reagents).
In metabolism, an endergonic process is anabolic, meaning that energy is stored; in many such anabolic processes, energy is supplied by coupling the reaction to adenosine triphosphate (ATP) and consequently resulting in a high energy, negatively charged organic phosphate and positive adenosine diphosphate.
Equilibrium constant
The equilibrium constant for the reaction is related to ΔG° by the relation:
K = exp(−ΔG° / RT)
where T is the absolute temperature and R is the gas constant. A positive value of ΔG° therefore implies
K < 1,
so that starting from molar stoichiometric quantities such a reaction would move backwards toward equilibrium, not forwards.
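As a rough numerical illustration of the relation above (the value of +20 kJ/mol is an arbitrary example, not a figure from this article):

    import math

    R = 8.314        # gas constant, J/(mol*K)
    T = 298.15       # room temperature, K
    dG0 = 20_000.0   # example standard free-energy change, J/mol (+20 kJ/mol)

    # K = exp(-dG0 / (R*T)); a positive dG0 gives K < 1
    K = math.exp(-dG0 / (R * T))
    print(f"K = {K:.2e}")  # about 3e-4: equilibrium strongly favours the reactants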
Nevertheless, endergonic reactions are quite common in nature, especially in biochemistry and physiology. Examples of endergonic reactions in cells include protein synthesis, and the Na+/K+ pump which drives nerve conduction and muscle contraction.
Gibbs free energy for endergonic reactions
All physical and chemical systems in the universe follow the second law of thermodynamics and proceed in a downhill, i.e., exergonic, direction. Thus, left to itself, any physical or chemical system will proceed, according to the second law of thermodynamics, in a direction that tends to lower the free energy of the system, and thus to expend energy in the form of work. These reactions occur spontaneously.
A chemical reaction is endergonic when it is non-spontaneous. Thus in this type of reaction the Gibbs free energy increases. The entropy is included in any change of the Gibbs free energy. This differs from an endothermic reaction, where the entropy is not included. The Gibbs free energy is calculated with the Gibbs–Helmholtz equation:
ΔG = ΔH − TΔS
where:
T = temperature in kelvins (K)
ΔG = change in the Gibbs free energy
ΔS = change in entropy (at 298 K), as ΔS = Σ S(products) − Σ S(reactants)
ΔH = change in enthalpy (at 298 K), as ΔH = Σ H(products) − Σ H(reactants)
A chemical reaction progresses non-spontaneously when the Gibbs free energy increases; in that case the ΔG is positive. In exergonic reactions the ΔG is negative and in endergonic reactions the ΔG is positive:
ΔG < 0: exergonic
ΔG > 0: endergonic
where ΔG equals the change in the Gibbs free energy after completion of a chemical reaction.
Making endergonic reactions happen
Endergonic reactions can be achieved if they are either pulled or pushed by an exergonic (stability increasing, negative change in free energy) process. Of course, in all cases the net reaction of the total system (the reaction under study plus the puller or pusher reaction) is exergonic.
Pull
Reagents can be pulled through an endergonic reaction, if the reaction products are cleared rapidly by a subsequent exergonic reaction. The concentration of the products of the endergonic reaction thus always remains low, so the reaction can proceed.
A classic example of this might be the first stage of a reaction which proceeds via a transition state. The process of getting to the top of the activation energy barrier to the transition state is endergonic. However, the reaction can proceed because having reached the transition state, it rapidly evolves via an exergonic process to the more stable final products.
Push
Endergonic reactions can be pushed by coupling them to another reaction which is strongly exergonic, through a shared intermediate.
This is often how biological reactions proceed. For example, on its own the reaction
X + Y → XY
may be too endergonic to occur. However, it may be possible to make it occur by coupling it to a strongly exergonic reaction – such as, very often, the decomposition of ATP into ADP and inorganic phosphate ions, ATP → ADP + Pi, so that
X + Y + ATP → XY + ADP + Pi
This kind of reaction, with the ATP decomposition supplying the free energy needed to make an endergonic reaction occur, is so common in cell biochemistry that ATP is often called the "universal energy currency" of all living organisms.
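A back-of-the-envelope sketch of such coupling, using approximate textbook values chosen only for illustration (about +14 kJ/mol for an endergonic biosynthetic step and about −30.5 kJ/mol for ATP hydrolysis under standard conditions):

    # Coupling an endergonic step to ATP hydrolysis: the overall reaction is
    # exergonic if the summed free-energy change is negative. Values are
    # illustrative approximations, not data from this article.
    dG_endergonic = +14.0   # kJ/mol, e.g. a biosynthetic step X + Y -> XY
    dG_ATP        = -30.5   # kJ/mol, approximate value for ATP -> ADP + Pi

    dG_net = dG_endergonic + dG_ATP
    print(f"net dG = {dG_net:+.1f} kJ/mol ->",
          "exergonic" if dG_net < 0 else "endergonic")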
See also
Exergonic
Exergonic reaction
Exothermic
Endothermic
Exothermic reaction
Endothermic reaction
Endotherm
Exotherm
References
Thermochemistry
Thermodynamic processes
Quantification (science)
In mathematics and empirical science, quantification (or quantitation) is the act of counting and measuring that maps human sense observations and experiences into quantities. Quantification in this sense is fundamental to the scientific method.
Natural science
Some measure of the undisputed general importance of quantification in the natural sciences can be gleaned from the following comments:
"these are mere facts, but they are quantitative facts and the basis of science."
It seems to be held as universally true that "the foundation of quantification is measurement."
There is little doubt that "quantification provided a basis for the objectivity of science."
In ancient times, "musicians and artists ... rejected quantification, but merchants, by definition, quantified their affairs, in order to survive, made them visible on parchment and paper."
Any reasonable "comparison between Aristotle and Galileo shows clearly that there can be no unique lawfulness discovered without detailed quantification."
Even today, "universities use imperfect instruments called 'exams' to indirectly quantify something they call knowledge."
This meaning of quantification comes under the heading of pragmatics.
In some instances in the natural sciences a seemingly intangible concept may be quantified by creating a scale—for example, a pain scale in medical research, or a discomfort scale at the intersection of meteorology and human physiology such as the heat index measuring the combined perceived effect of heat and humidity, or the wind chill factor measuring the combined perceived effects of cold and wind.
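To make one such scale concrete, the sketch below evaluates a commonly cited form of the North American wind-chill index (air temperature in degrees Celsius, wind speed in km/h); the coefficients follow the widely used 2001 formulation, and the input values are arbitrary examples:

    # Wind chill index (2001 North American formula): perceived temperature from
    # air temperature T (deg C) and wind speed v (km/h at 10 m). Intended for
    # temperatures at or below about 10 C and winds above roughly 5 km/h.
    def wind_chill(T, v):
        return 13.12 + 0.6215 * T - 11.37 * v ** 0.16 + 0.3965 * T * v ** 0.16

    print(f"{wind_chill(-10.0, 30.0):.1f} C perceived at -10 C in a 30 km/h wind")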
Social sciences
In the social sciences, quantification is an integral part of economics and psychology. Both disciplines gather data – economics by empirical observation and psychology by experimentation – and both use statistical techniques such as regression analysis to draw conclusions from it.
In some instances a seemingly intangible property may be quantified by asking subjects to rate something on a scale—for example, a happiness scale or a quality-of-life scale—or by the construction of a scale by the researcher, as with the index of economic freedom. In other cases, an unobservable variable may be quantified by replacing it with a proxy variable with which it is highly correlated—for example, per capita gross domestic product is often used as a proxy for standard of living or quality of life.
Frequently in the use of regression, the presence or absence of a trait is quantified by employing a dummy variable, which takes on the value 1 in the presence of the trait or the value 0 in the absence of the trait.
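A minimal sketch of how such a dummy variable might be constructed in practice (the pandas library, the column names, and the toy data are assumptions made for this example, not part of the text above):

    import pandas as pd

    # Toy data: code the presence/absence of a trait as a 0/1 dummy variable.
    df = pd.DataFrame({"person": ["A", "B", "C", "D"],
                       "employed": ["yes", "no", "yes", "yes"]})

    # Explicit mapping to 1 (trait present) / 0 (trait absent) ...
    df["employed_dummy"] = (df["employed"] == "yes").astype(int)

    # ... or the equivalent pandas helper for categorical columns.
    dummies = pd.get_dummies(df["employed"], prefix="employed")

    print(df)
    print(dummies)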
Quantitative linguistics is an area of linguistics that relies on quantification. For example, indices of grammaticalization of morphemes, such as phonological shortness, dependence on surroundings, and fusion with the verb, have been developed and found to be significantly correlated across languages with stage of evolution of function of the morpheme.
Hard versus soft science
The ease of quantification is one of the features used to distinguish hard and soft sciences from each other. Scientists often consider hard sciences to be more scientific or rigorous, but this is disputed by social scientists who maintain that appropriate rigor includes the qualitative evaluation of the broader contexts of qualitative data. In some social sciences such as sociology, quantitative data are difficult to obtain, either because laboratory conditions are not present or because the issues involved are conceptual but not directly quantifiable. Thus in these cases qualitative methods are preferred.
See also
Calibration
Internal standard
Isotope dilution
Physical quantity
Quantitative analysis (chemistry)
Standard addition
References
Further reading
Crosby, Alfred W. (1996) The Measure of Reality: Quantification and Western Society, 1250–1600. Cambridge University Press.
Wiese, Heike, 2003. Numbers, Language, and the Human Mind. Cambridge University Press. .
Philosophy of science
Analytical chemistry
Pharmacy
Pharmacy is the science and practice of discovering, producing, preparing, dispensing, reviewing and monitoring medications, aiming to ensure the safe, effective, and affordable use of medicines. It is an interdisciplinary science that links health sciences with pharmaceutical sciences and natural sciences. The professional practice is becoming more clinically oriented, as most drugs are now manufactured by the pharmaceutical industry. Based on the setting, pharmacy practice is classified as either community or institutional pharmacy. Providing direct patient care in community or institutional pharmacies is considered clinical pharmacy.
The scope of pharmacy practice includes more traditional roles such as compounding and dispensing of medications. It also includes more modern services related to health care including clinical services, reviewing medications for safety and efficacy, and providing drug information with patient counselling. Pharmacists, therefore, are experts on drug therapy and are the primary health professionals who optimize the use of medication for the benefit of the patients.
An establishment in which pharmacy (in the first sense) is practiced is called a pharmacy (this term is more common in the United States) or chemists (which is more common in Great Britain, though pharmacy is also used). In the United States and Canada, drugstores commonly sell medicines, as well as miscellaneous items such as confectionery, cosmetics, office supplies, toys, hair care products and magazines, and occasionally refreshments and groceries.
In its investigation of herbal and chemical ingredients, the work of the apothecary may be regarded as a precursor of the modern sciences of chemistry and pharmacology, prior to the formulation of the scientific method.
Disciplines
The field of pharmacy can generally be divided into various disciplines:
Pharmaceutics and Computational Pharmaceutics
Pharmacokinetics and Pharmacodynamics
Medicinal Chemistry and Pharmacognosy
Pharmacology
Pharmacy Practice
Pharmacoinformatics
Pharmacogenomics
The boundaries between these disciplines and with other sciences, such as biochemistry, are not always clear-cut.
Often, collaborative teams from various disciplines (pharmacists and other scientists) work together toward the introduction of new therapeutics and methods for patient care. However, pharmacy is not a basic or biomedical science in its typical form. Medicinal chemistry is also a distinct branch of synthetic chemistry combining pharmacology, organic chemistry, and chemical biology.
Pharmacology is sometimes considered the fourth discipline of pharmacy. Although pharmacology is essential to the study of pharmacy, it is not specific to pharmacy. Both disciplines are distinct. Those who wish to practice both pharmacy (patient-oriented) and pharmacology (a biomedical science requiring the scientific method) receive separate training and degrees unique to either discipline.
Pharmacoinformatics is considered another new discipline, for systematic drug discovery and development with efficiency and safety.
Pharmacogenomics is the study of genetic-linked variants that affect patient clinical responses, allergies, and metabolism of drugs.
Professionals
The World Health Organization estimates that there are at least 2.6 million pharmacists and other pharmaceutical personnel worldwide.
Pharmacists
Pharmacists are healthcare professionals with specialized education and training who perform various roles to ensure optimal health outcomes for their patients through the quality use of medicines. Pharmacists may also be small business proprietors, owning the pharmacy in which they practice. Since pharmacists know about the mode of action of a particular drug, and its metabolism and physiological effects on the human body in great detail, they play an important role in optimization of drug treatment for an individual.
Pharmacists are represented internationally by the International Pharmaceutical Federation (FIP), an NGO linked with World Health Organization (WHO). They are represented at the national level by professional organisations such as the Royal Pharmaceutical Society in the UK, Pharmaceutical Society of Australia (PSA), Canadian Pharmacists Association (CPhA), Indian Pharmacist Association (IPA), Pakistan Pharmacists Association (PPA), American Pharmacists Association (APhA), and the Malaysian Pharmaceutical Society (MPS).
In some cases, the representative body is also the registering body, which is responsible for the regulation and ethics of the profession.
In the United States, specializations in pharmacy practice recognized by the Board of Pharmacy Specialties include: cardiovascular, infectious disease, oncology, pharmacotherapy, nuclear, nutrition, and psychiatry. The Commission for Certification in Geriatric Pharmacy certifies pharmacists in geriatric pharmacy practice. The American Board of Applied Toxicology certifies pharmacists and other medical professionals in applied toxicology.
Pharmacy support staff
Pharmacy technicians
Pharmacy technicians support the work of pharmacists and other health professionals by performing a variety of pharmacy-related functions, including dispensing prescription drugs and other medical devices to patients and instructing on their use. They may also perform administrative duties in pharmaceutical practice, such as reviewing prescription requests with medical offices and insurance companies to ensure correct medications are provided and payment is received.
Legislation requires the supervision of certain pharmacy technicians' activities by a pharmacist. The majority of pharmacy technicians work in community pharmacies. In hospital pharmacies, pharmacy technicians may be managed by other senior pharmacy technicians. In the UK, the role of the pharmacy technician in hospital pharmacy has grown, and responsibility has been passed on to them to manage the pharmacy department and specialized areas of pharmacy practice, allowing pharmacists time to specialize in their expert field as medication consultants and to spend more time working with patients and in research. Pharmacy technicians are registered with the General Pharmaceutical Council (GPhC). The GPhC is the regulator of pharmacists, pharmacy technicians, and pharmacy premises.
In the US, pharmacy technicians perform their duties under the supervision of pharmacists. Although they may perform, under supervision, most dispensing, compounding and other tasks, they are not generally allowed to perform the role of counseling patients on the proper use of their medications. Some states have a legally mandated pharmacist-to-pharmacy technician ratio.
Dispensing assistants
Dispensing assistants are commonly referred to as "dispensers" and in community pharmacies perform largely the same tasks as a pharmacy technician. They work under the supervision of pharmacists and are involved in preparing (dispensing and labelling) medicines for provision to patients.
Healthcare assistants/medicines counter assistants
In the UK, this group of staff can sell certain medicines (including pharmacy only and general sales list medicines) over the counter. They cannot prepare prescription-only medicines for supply to patients.
History
The earliest known compilation of medicinal substances was the Sushruta Samhita, an Indian Ayurvedic treatise attributed to Sushruta in the 6th century BC. However, the earliest text as preserved dates to the 3rd or 4th century AD.
Many Sumerian (4th millennium BC – early 2nd millennium BC) cuneiform clay tablets record prescriptions for medicine.
Ancient Egyptian pharmacological knowledge was recorded in various papyri such as the Ebers Papyrus of 1550 BC, and the Edwin Smith Papyrus of the 16th century BC.
In Ancient Greece, Diocles of Carystus (4th century BC) was one of several men studying the medicinal properties of plants. He wrote several treatises on the topic. The Greek physician Pedanius Dioscorides is famous for writing a five-volume book in his native Greek, Περί ύλης ιατρικής, in the 1st century AD. Its Latin translation, De Materia Medica (Concerning medical substances), was used as a basis for many medieval texts and was built upon by many Middle Eastern scientists during the Islamic Golden Age, who themselves derived their knowledge from earlier Greek and Byzantine medicine.
Pharmacy in China dates at least to the earliest known Chinese manual, the Shennong Bencao Jing (The Divine Farmer's Herb-Root Classic), dating back to the 1st century AD. It was compiled during the Han dynasty and was attributed to the mythical Shennong. Earlier literature included lists of prescriptions for specific ailments, exemplified by a manuscript "Recipes for 52 Ailments", found in the Mawangdui, sealed in 168 BC.
In Japan, at the end of the Asuka period (538–710) and the early Nara period (710–794), the men who fulfilled roles similar to those of modern pharmacists were highly respected. The place of pharmacists in society was expressly defined in the Taihō Code (701) and re-stated in the Yōrō Code (718). Ranked positions in the pre-Heian Imperial court were established; and this organizational structure remained largely intact until the Meiji Restoration (1868). In this highly stable hierarchy, the pharmacists—and even pharmacist assistants—were assigned status superior to all others in health-related fields such as physicians and acupuncturists. In the Imperial household, the pharmacist was even ranked above the two personal physicians of the Emperor.
There is a stone sign for a pharmacy shop with a tripod, a mortar, and a pestle opposite one for a doctor in the Arcadian Way in Ephesus near Kusadasi in Turkey. The current Ephesus dates back to 400 BC and was the site of the Temple of Artemis, one of the seven wonders of the world.
In Baghdad the first pharmacies, or drug stores, were established in 754, under the Abbasid Caliphate during the Islamic Golden Age. By the 9th century, these pharmacies were state-regulated.
The advances made in the Middle East in botany and chemistry led medicine in medieval Islam substantially to develop pharmacology. Muhammad ibn Zakarīya Rāzi (Rhazes) (865–915), for instance, acted to promote the medical uses of chemical compounds. Abu al-Qasim al-Zahrawi (Abulcasis) (936–1013) pioneered the preparation of medicines by sublimation and distillation. His Liber servitoris is of particular interest, as it provides the reader with recipes and explains how to prepare the "simples" from which were compounded the complex drugs then generally used. Sabur Ibn Sahl (d 869), was, however, the first physician to record his findings in a pharmacopoeia, describing a large variety of drugs and remedies for ailments. Al-Biruni (973–1050) wrote one of the most valuable Islamic works on pharmacology, entitled Kitab al-Saydalah (The Book of Drugs), in which he detailed the properties of drugs and outlined the role of pharmacy and the functions and duties of the pharmacist. Avicenna, too, described no less than 700 preparations, their properties, modes of action, and their indications. He devoted in fact a whole volume to simple drugs in The Canon of Medicine. Of great impact were also the works by al-Maridini of Baghdad and Cairo, and Ibn al-Wafid (1008–1074), both of which were printed in Latin more than fifty times, appearing as De Medicinis universalibus et particularibus by 'Mesue' the younger, and the Medicamentis simplicibus by 'Abenguefit'. Peter of Abano (1250–1316) translated and added a supplement to the work of al-Maridini under the title De Veneris. Al-Muwaffaq's contributions in the field are also pioneering. Living in the 10th century, he wrote The foundations of the true properties of Remedies, amongst others describing arsenious oxide, and being acquainted with silicic acid. He made clear distinction between sodium carbonate and potassium carbonate, and drew attention to the poisonous nature of copper compounds, especially copper vitriol, and also lead compounds. He also describes the distillation of sea-water for drinking.
In Europe, pharmacy-like shops began to appear during the 12th century. In 1240, emperor Frederic II issued a decree by which the physician's and the apothecary's professions were separated.
There are pharmacies in Europe that have been in operation since medieval times. In Florence, Italy, the director of the museum in the former Santa Maria Novella pharmacy says that the pharmacy there dates back to 1221. In Trier (Germany), the Löwen-Apotheke has been in operation since 1241, making it the oldest pharmacy in Europe in continuous operation. In Dubrovnik (Croatia), a pharmacy that first opened in 1317 is located inside the Franciscan monastery; it is the second-oldest pharmacy in Europe that is still operating. In the Town Hall Square of Tallinn (Estonia), there is a pharmacy dating from at least 1422. The medieval Esteve Pharmacy, located in Llívia, a Catalan enclave close to Puigcerdà, is a museum: the building dates back to the 15th century and the museum keeps albarellos from the 16th and 17th centuries, old prescription books and antique drugs.
Practice areas
Pharmacists practice in a variety of areas including community pharmacies, infusion pharmacies, hospitals, clinics, insurance companies, medical communication companies, research facilities, pharmaceutical companies, extended care facilities, psychiatric hospitals, and regulatory agencies. Pharmacists themselves may have expertise in a medical specialty.
Community pharmacy
A pharmacy (also known as a chemist in Australia, New Zealand and the British Isles; or drugstore in North America; retail pharmacy in industry terminology; or apothecary, historically) is where most pharmacists practice the profession of pharmacy. It is the community pharmacy in which the dichotomy of the profession exists; health professionals who are also retailers.
Community pharmacies usually consist of a retail storefront with a dispensary, where medications are stored and dispensed. According to Sharif Kaf al-Ghazal, the opening of the first drugstores are recorded by Muslim pharmacists in Baghdad in 754 AD.
Hospital pharmacy
Pharmacies within hospitals differ considerably from community pharmacies. Some pharmacists in hospital pharmacies may have more complex clinical medication management issues, and pharmacists in community pharmacies often have more complex business and customer relations issues.
Because of the complexity of medications including specific indications, effectiveness of treatment regimens, safety of medications (i.e., drug interactions) and patient compliance issues (in the hospital and at home), many pharmacists practicing in hospitals gain more education and training after pharmacy school through a pharmacy practice residency, sometimes followed by another residency in a specific area. Those pharmacists are often referred to as clinical pharmacists and they often specialize in various disciplines of pharmacy.
For example, there are pharmacists who specialize in hematology/oncology, HIV/AIDS, infectious disease, critical care, emergency medicine, toxicology, nuclear pharmacy, pain management, psychiatry, anti-coagulation clinics, herbal medicine, neurology/epilepsy management, pediatrics, neonatal pharmacists and more.
Hospital pharmacies can often be found within the premises of the hospital. Hospital pharmacies usually stock a larger range of medications, including more specialized medications, than would be feasible in the community setting. Most hospital medications are unit-dose, or a single dose of medicine. Hospital pharmacists and trained pharmacy technicians compound sterile products for patients, including total parenteral nutrition (TPN) and other medications given intravenously. This is a complex process that requires adequate training of personnel, quality assurance of products, and adequate facilities.
Several hospital pharmacies have decided to outsource high-risk preparations and some other compounding functions to companies who specialize in compounding. The high cost of medications and drug-related technology and the potential impact of medications and pharmacy services on patient-care outcomes and patient safety require hospital pharmacies to perform at the highest level possible.
Clinical pharmacy
Pharmacists provide direct patient care services that optimize the use of medication and promote health, wellness, and disease prevention. Clinical pharmacists care for patients in all health care settings, but the clinical pharmacy movement initially began inside hospitals and clinics. Clinical pharmacists often collaborate with physicians and other healthcare professionals to improve pharmaceutical care. Clinical pharmacists are now an integral part of the interdisciplinary approach to patient care. They often participate in patient care rounds for drug product selection. In the UK, clinical pharmacists can also prescribe some medications for patients on the NHS or privately, after completing a non-medical prescribers course to become an Independent Prescriber.
The clinical pharmacist's role involves creating a comprehensive drug therapy plan for patient-specific problems, identifying goals of therapy, and reviewing all prescribed medications prior to dispensing and administration to the patient. The review process often involves an evaluation of the appropriateness of drug therapy (e.g., drug choice, dose, route, frequency, and duration of therapy) and its efficacy. Research shows that pharmacist led strategies reduce errors related to medication use. The pharmacist must also consider potential drug interactions, adverse drug reactions, and patient drug allergies while they design and initiate a drug therapy plan.
Ambulatory care pharmacy
Since the emergence of modern clinical pharmacy, ambulatory care pharmacy practice has emerged as a unique pharmacy practice setting. Ambulatory care pharmacy is based primarily on pharmacotherapy services that a pharmacist provides in a clinic. Pharmacists in this setting often do not dispense drugs, but rather see patients in-office visits to manage chronic disease states.
In the U.S. federal health care system (including the VA, the Indian Health Service, and NIH) ambulatory care pharmacists are given full independent prescribing authority. In some states, such as North Carolina and New Mexico, these pharmacist clinicians are given collaborative prescriptive and diagnostic authority. In 2011 the Board of Pharmaceutical Specialties approved ambulatory care pharmacy practice as a separate board certification. The official designation for pharmacists who pass the ambulatory care pharmacy specialty certification exam will be Board Certified Ambulatory Care Pharmacist and these pharmacists will carry the initials BCACP.
Compounding pharmacy/industrial pharmacy
Compounding involves preparing drugs in forms that are different from the generic prescription standard. This may include altering the strength, ingredients, or dosage form. Compounding is a way to create custom drugs for patients who may not be able to take the medication in its standard form, such as due to an allergy or difficulty swallowing. Compounding is necessary for these patients to still be able to properly get the prescriptions they need.
One area of compounding is preparing drugs in new dosage forms. For example, if a drug manufacturer only provides a drug as a tablet, a compounding pharmacist might make a medicated lollipop that contains the drug. Patients who have difficulty swallowing the tablet may prefer to suck the medicated lollipop instead.
Another form of compounding is by mixing different strengths (g, mg, mcg) of capsules or tablets to yield the desired amount of medication indicated by the physician, physician assistant, nurse practitioner, or clinical pharmacist practitioner. This form of compounding is found at community or hospital pharmacies or in-home administration therapy.
Compounding pharmacies specialize in compounding, although many also dispense the same non-compounded drugs that patients can obtain from community pharmacies.
Consultant pharmacy
Consultant pharmacy practice focuses more on medication regimen review (i.e. "cognitive services") than on actual dispensing of drugs. Consultant pharmacists most typically work in nursing homes, but are increasingly branching into other institutions and non-institutional settings. Traditionally consultant pharmacists were usually independent business owners, though in the United States many now work for a large pharmacy management company such as Omnicare, Kindred Healthcare or PharMerica. This trend may be gradually reversing as consultant pharmacists begin to work directly with patients, primarily because many elderly people are now taking numerous medications but continue to live outside of institutional settings. Some community pharmacies employ consultant pharmacists and/or provide consulting services.
The main principle of consultant pharmacy was developed by Hepler and Strand in 1990.
Veterinary pharmacy
Veterinary pharmacies, sometimes called animal pharmacies, may fall in the category of hospital pharmacy, retail pharmacy or mail-order pharmacy. Veterinary pharmacies stock different varieties and different strengths of medications to fulfill the pharmaceutical needs of animals. Because the needs of animals, as well as the regulations on veterinary medicine, are often very different from those related to people, in some jurisdictions veterinary pharmacy may be kept separate from regular pharmacies.
Nuclear pharmacy
Nuclear pharmacy focuses on preparing radioactive materials for diagnostic tests and for treating certain diseases. Nuclear pharmacists undergo additional training specific to handling radioactive materials, and unlike in community and hospital pharmacies, nuclear pharmacists typically do not interact directly with patients.
Military pharmacy
Military pharmacy is a different working environment from civilian practice because military pharmacy technicians perform duties such as evaluating medication orders, preparing medication orders, and dispensing medications. This would be illegal in civilian pharmacies, where these duties are required to be performed by a licensed registered pharmacist. In the US military, state laws that prevent technicians from counseling patients or doing the final medication check prior to dispensing to patients (rather than a pharmacist being solely responsible for these duties) do not apply.
Pharmacy informatics
Pharmacy informatics is the combination of pharmacy practice science and applied information science. Pharmacy informaticists work in many practice areas of pharmacy, however, they may also work in information technology departments or for healthcare information technology vendor companies. As a practice area and specialist domain, pharmacy informatics is growing quickly to meet the needs of major national and international patient information projects and health system interoperability goals. Pharmacists in this area are trained to participate in medication management system development, deployment, and optimization.
Specialty pharmacy
Specialty pharmacies supply high-cost injectable, oral, infused, or inhaled medications that are used for chronic and complex disease states such as cancer, hepatitis, and rheumatoid arthritis. Unlike a traditional community pharmacy where prescriptions for any common medication can be brought in and filled, specialty pharmacies carry novel medications that need to be properly stored, administered, carefully monitored, and clinically managed. In addition to supplying these drugs, specialty pharmacies also provide lab monitoring, adherence counseling, and assist patients with cost-containment strategies needed to obtain their expensive specialty drugs. In the US, it is currently the fastest-growing sector of the pharmaceutical industry with 19 of 28 newly FDA approved medications in 2013 being specialty drugs.
Due to the demand for clinicians who can properly manage these specific patient populations, the Specialty Pharmacy Certification Board has developed a new certification exam to certify specialty pharmacists. Along with the 100 questions computerized multiple-choice exam, pharmacists must also complete 3,000 hours of specialty pharmacy practice within the past three years as well as 30 hours of specialty pharmacist continuing education within the past two years.
Pharmaceutical sciences
The pharmaceutical sciences are a group of interdisciplinary areas of study concerned with the design, manufacturing, action, delivery, and classification of drugs. They apply knowledge from chemistry (inorganic, physical, biochemical and analytical), biology (anatomy, physiology, biochemistry, cell biology, and molecular biology), epidemiology, statistics, chemometrics, mathematics, physics, and chemical engineering.
The pharmaceutical sciences are further subdivided into several specific specialties, with four main branches:
Pharmacology: the study of the biochemical and physiological effects of drugs on human beings.
    Pharmacodynamics: the study of the cellular and molecular interactions of drugs with their receptors. Simply "What the drug does to the body"
    Pharmacokinetics: the study of the factors that control the concentration of drug at various sites in the body. Simply "What the body does to the drug" (a minimal worked sketch follows this list)
    Pharmaceutical toxicology: the study of the harmful or toxic effects of drugs.
    Pharmacogenomics: the study of the inheritance of characteristic patterns of interaction between drugs and organisms.
Pharmaceutical chemistry: the study of drug design to optimize pharmacokinetics and pharmacodynamics, and synthesis of new drug molecules (Medicinal Chemistry).
Pharmaceutics: the study and design of drug formulation for optimum delivery, stability, pharmacokinetics, and patient acceptance.
Pharmacognosy: the study of medicines derived from natural sources.
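As the sketch promised in the pharmacokinetics entry above, the classic one-compartment model for a single intravenous bolus dose describes the plasma concentration as C(t) = (dose / Vd) * exp(-k * t); the dose, volume of distribution, and elimination rate constant below are hypothetical values chosen only for illustration:

    import math

    # One-compartment IV-bolus model: C(t) = (dose / Vd) * exp(-k * t).
    # All parameter values are hypothetical, for illustration only.
    dose = 500.0   # mg, single IV bolus
    Vd   = 40.0    # L, apparent volume of distribution
    k    = 0.173   # 1/h, first-order elimination rate constant

    half_life = math.log(2) / k          # t1/2 = ln(2)/k, here about 4 h
    print(f"elimination half-life = {half_life:.1f} h")

    for t in range(0, 25, 4):                     # hours after the dose
        conc = (dose / Vd) * math.exp(-k * t)     # plasma concentration, mg/L
        print(f"t = {t:2d} h   C = {conc:5.2f} mg/L")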
As new discoveries advance and extend the pharmaceutical sciences, subspecialties continue to be added to this list. Importantly, as knowledge advances, boundaries between these specialty areas of pharmaceutical sciences are beginning to blur. Many fundamental concepts are common to all pharmaceutical sciences. These shared fundamental concepts further the understanding of their applicability to all aspects of pharmaceutical research and drug therapy.
Pharmacocybernetics (also known as pharma-cybernetics, cybernetic pharmacy, and cyber pharmacy) is an emerging field that describes the science of supporting drug and medication use through the application and evaluation of informatics and internet technologies, so as to improve the pharmaceutical care of patients.
Society and culture
Etymology
The word pharmacy is derived from Old French farmacie "substance, such as a food or in the form of a medicine which has a laxative effect" from Medieval Latin pharmacia from Greek pharmakeia "a medicine", which itself derives from pharmakon, meaning "drug, poison, spell" (which is etymologically related to pharmakos).
Separation of prescribing and dispensing
Separation of prescribing and dispensing, also called dispensing separation, is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug.
In the Western world there are centuries of tradition for separating pharmacists from physicians. In Asian countries, it is traditional for physicians to also provide drugs.
In contemporary times, researchers and health policy analysts have considered these traditions and their effects more deeply. Advocates for separation and advocates for combining make similar claims for their conflicting perspectives, each saying that their preferred arrangement reduces conflict of interest in the healthcare industry and unnecessary health care and lowers costs, while the opposite arrangement causes those problems. Research in various places reports mixed outcomes in different circumstances.
Environmental impacts
In 2022 the Organisation for Economic Co-operation and Development proposed that pharmaceutical companies should be required to collect and destroy unused or expired medicines that they have put on the market, in order to reduce public health risks around the misuse of medicines obtained from waste bins, the development of antimicrobial-resistant bacteria from the discharge of antibiotics into environmental systems, and "economic losses" from wasted healthcare resources. Potentially harmful concentrations of pharmaceutical waste have been detected in more than a quarter of water samples taken from 258 rivers around the world. The OECD recommends that medicines should be collected separately from household waste and that "marketplaces and redistribution platforms for unused close-to-expiry-date medicines" should be set up. Such extended producer responsibility schemes are already running in France, Spain and Portugal.
The future of pharmacy
In the coming decades, pharmacists are expected to become more integral within the health care system. Rather than simply dispensing medication, pharmacists are increasingly expected to be compensated for their patient care skills. In particular, Medication Therapy Management (MTM) includes the clinical services that pharmacists can provide for their patients. Such services include a thorough analysis of all medication (prescription, non-prescription, and herbals) currently being taken by an individual. The result is a reconciliation of medication and patient education resulting in increased patient health outcomes and decreased costs to the health care system.
This shift has already commenced in some countries; for instance, pharmacists in Australia receive remuneration from the Australian Government for conducting comprehensive Home Medicines Reviews. In Canada, pharmacists in certain provinces have limited prescribing rights (as in Alberta and British Columbia) or are remunerated by their provincial government for expanded services such as medication reviews (MedsCheck in Ontario). In the United Kingdom, pharmacists who undertake additional training can obtain prescribing rights, and they are also paid by the government for conducting medicine use reviews. In Scotland, the pharmacist can write prescriptions for Scottish-registered patients for their regular medications, for the majority of drugs except controlled drugs, when the patient is unable to see their doctor, as could happen if they are away from home or the doctor is unavailable. In the United States, pharmaceutical care or clinical pharmacy has had an evolving influence on the practice of pharmacy. Moreover, the Doctor of Pharmacy (Pharm. D.) degree is now required before entering practice and some pharmacists now complete one or two years of residency or fellowship training following graduation. In addition, consultant pharmacists, who traditionally operated primarily in nursing homes, are now expanding into direct consultation with patients, under the banner of "senior care pharmacy".
In addition to patient care, pharmacies will be a focal point for medication adherence initiatives. There is enough evidence to show that integrated pharmacy-based initiatives significantly impact adherence for chronic patients. For example, a study published by the NIH shows that "pharmacy based interventions improved patients' medication adherence rates by 2.1 percent and increased physicians' initiation rates by 38 percent, compared to the control group".
Pharmacy journals
List of pharmaceutical sciences journals
Symbols
The symbols most commonly associated with pharmacy are the mortar and pestle (North America) and the ℞ (medical prescription) character, which is often written as "Rx" in typed text; the green Greek cross in France, Argentina, the United Kingdom, Belgium, Ireland, Italy, Spain, and India; and the Bowl of Hygieia, often used on its own in the Netherlands but which may be seen combined with other symbols elsewhere. Other common symbols include conical measures and (in the US) caduceuses in pharmacy logos. A red stylized letter A is used in Germany and Austria (from Apotheke, the German word for pharmacy, from the same Greek root as the English word "apothecary"). The show globe was used in the US until the early 20th century; the Gaper in the Netherlands is increasingly rare.
See also
Bachelor of Pharmacy, Master of Pharmacy, Doctor of Pharmacy
Notes
References
Sources
Asai, T. (1985). Nyokan Tūkai. Tokyo: Kōdan-Sha.
Titsingh, Isaac, ed. (1834). [Siyun-sai Rin-siyo/Hayashi Gahō, 1652], Nipon o daï itsi ran; ou, Annales des empereurs du Japon. Paris: Oriental Translation Fund of Great Britain and Ireland (in French).
Pharmacy Consulting Services | McKesson – A landmark study in hospital pharmacy performance based on an extensive literature review and the collective experience of the Health Systems Pharmacy Executive Alliance.
External links
Navigator History of Pharmacy Collection of internet resources related to the history of pharmacy
Soderlund Pharmacy Museum – Information about the history of the American Drugstore
The Lloyd Library Library of botanical, medical, pharmaceutical, and scientific books and periodicals, and works of allied sciences
American Institute of the History of Pharmacy American Institute of the History of Pharmacy—resources in the history of pharmacy
International Pharmaceutical Federation (FIP) Federation representing national associations of pharmacists and pharmaceutical scientists. Information and resources relating to pharmacy education, practice, science and policy
Medicinal chemistry
Symbols
Greek words and phrases
Generative grammar
Generative grammar is a research tradition in linguistics that aims to explain the cognitive basis of language by formulating and testing explicit models of humans' subconscious grammatical knowledge. Generative linguists, or generativists, tend to share certain working assumptions such as the competence–performance distinction and the notion that some domain-specific aspects of grammar are partly innate in humans. These assumptions are rejected in non-generative approaches such as usage-based models of language. Generative linguistics includes work in core areas such as syntax, semantics, phonology, psycholinguistics, and language acquisition, with additional extensions to topics including biolinguistics and music cognition.
Generative grammar began in the late 1950s with the work of Noam Chomsky, though its roots include earlier approaches such as structural linguistics. The earliest version of Chomsky's model was called Transformational grammar, with subsequent iterations known as Government and binding theory and the Minimalist program. Other present-day generative models include Optimality theory, Categorial grammar, and Tree-adjoining grammar.
Principles
Generative grammar is an umbrella term for a variety of approaches to linguistics. What unites these approaches is the goal of uncovering the cognitive basis of language by formulating and testing explicit models of humans' subconscious grammatical knowledge.
Cognitive science
Generative grammar studies language as part of cognitive science. Thus, research in the generative tradition involves formulating and testing hypotheses about the mental processes that allow humans to use language.
Like other approaches in linguistics, generative grammar engages in linguistic description rather than linguistic prescription.
Explicitness and generality
Generative grammar proposes models of language consisting of explicit rule systems, which make testable falsifiable predictions. This is different from traditional grammar where grammatical patterns are often described more loosely. These models are intended to be parsimonious, capturing generalizations in the data with as few rules as possible. For example, because English imperative tag questions obey the same restrictions that second person future declarative tags do, Paul Postal proposed that the two constructions are derived from the same underlying structure. By adopting this hypothesis, he was able to capture the restrictions on tags with a single rule. This kind of reasoning is commonplace in generative research.
Particular theories within generative grammar have been expressed using a variety of formal systems, many of which are modifications or extensions of context free grammars.
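As a toy illustration of such an explicit rule system, the sketch below generates sentences from a small context-free grammar; the grammar, lexicon, and output sentences are invented for this example and are not drawn from the article:

    import itertools
    import random

    # A tiny context-free grammar: each nonterminal maps to a list of expansions.
    GRAMMAR = {
        "S":   [["NP", "VP"]],
        "NP":  [["Det", "N"]],
        "VP":  [["V", "NP"]],
        "Det": [["the"], ["a"]],
        "N":   [["cat"], ["dog"], ["mouse"]],
        "V":   [["chased"], ["saw"]],
    }

    def generate(symbol="S"):
        """Rewrite nonterminals until only terminal words remain."""
        if symbol not in GRAMMAR:            # terminal: return the word itself
            return [symbol]
        expansion = random.choice(GRAMMAR[symbol])
        return list(itertools.chain.from_iterable(generate(s) for s in expansion))

    for _ in range(3):
        print(" ".join(generate()))          # e.g. "the dog chased a mouse"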
Competence versus performance
Generative grammar generally distinguishes linguistic competence and linguistic performance. Competence is the collection of subconscious rules that one knows when one knows a language; performance is the system which puts these rules to use. This distinction is related to the broader notion of Marr's levels used in other cognitive sciences, with competence corresponding to Marr's computational level.
For example, generative theories generally provide competence-based explanations for why English speakers would judge the sentence in (1) as odd. In these explanations, the sentence would be ungrammatical because the rules of English only generate sentences where demonstratives agree with the grammatical number of their associated noun.
(1) *That cats is eating the mouse.
By contrast, generative theories generally provide performance-based explanations for the oddness of center-embedding sentences like the one in (2). According to such explanations, the grammar of English could in principle generate such sentences, but doing so in practice is so taxing on working memory that the sentence ends up being unparsable.
(2) *The cat that the dog that the man fed chased meowed.
In general, performance-based explanations deliver a simpler theory of grammar at the cost of additional assumptions about memory and parsing. As a result, the choice between a competence-based explanation and a performance-based explanation for a given phenomenon is not always obvious and can require investigating whether the additional assumptions are supported by independent evidence. For example, while many generative models of syntax explain island effects by positing constraints within the grammar, it has also been argued that some or all of these constraints are in fact the result of limitations on performance.
Non-generative approaches often do not posit any distinction between competence and performance. For instance, usage-based models of language assume that grammatical patterns arise as the result of usage.
Innateness and universality
A major goal of generative research is to figure out which aspects of linguistic competence are innate and which are not. Within generative grammar, it is generally accepted that at least some domain-specific aspects are innate, and the term "universal grammar" is often used as a placeholder for whichever those turn out to be.
The idea that at least some aspects are innate is motivated by poverty of the stimulus arguments. For example, one famous poverty of the stimulus argument concerns the acquisition of yes-no questions in English. This argument starts from the observation that children only make mistakes compatible with rules targeting hierarchical structure even though the examples which they encounter could have been generated by a simpler rule that targets linear order. In other words, children seem to ignore the possibility that the question rule is as simple as "switch the order of the first two words" and immediately jump to alternatives that rearrange constituents in tree structures. This is taken as evidence that children are born knowing that grammatical rules involve hierarchical structure, even though they have to figure out what those rules are. The empirical basis of poverty of the stimulus arguments has been challenged by Geoffrey Pullum and others, leading to back-and-forth debate in the language acquisition literature. Recent work has also suggested that some recurrent neural network architectures are able to learn hierarchical structure without an explicit constraint.
Within generative grammar, there are a variety of theories about what universal grammar consists of. One notable hypothesis proposed by Hagit Borer holds that the fundamental syntactic operations are universal and that all variation arises from different feature-specifications in the lexicon. On the other hand, a strong hypothesis adopted in some variants of Optimality Theory holds that humans are born with a universal set of constraints, and that all variation arises from differences in how these constraints are ranked. In a 2002 paper, Noam Chomsky, Marc Hauser and W. Tecumseh Fitch proposed that universal grammar consists solely of the capacity for hierarchical phrase structure.
In day-to-day research, the notion that universal grammar exists motivates analyses in terms of general principles. As much as possible, facts about particular languages are derived from these general principles rather than from language-specific stipulations.
Subfields
Research in generative grammar spans a number of subfields. These subfields are also studied in non-generative approaches.
Syntax
Syntax studies the rule systems which combine smaller units such as morphemes into larger units such as phrases and sentences. Within generative syntax, prominent approaches include Minimalism, Government and binding theory, Lexical-functional grammar (LFG), and Head-driven phrase structure grammar (HPSG).
Phonology
Phonology studies the rule systems which organize linguistic sounds. For example, research in phonology includes work on phonotactic rules which govern which phonemes can be combined, as well as those that determine the placement of stress, tone, and other suprasegmental elements. Within generative grammar, a prominent approach to phonology is Optimality Theory.
Semantics
Semantics studies the rule systems that determine expressions' meanings. Within generative grammar, semantics is a species of formal semantics, providing compositional models of how the denotations of sentences are computed on the basis of the meanings of the individual morphemes and their syntactic structure.
Extensions
Music
Generative grammar has been applied to music theory and analysis since the 1980s. One notable approach is Fred Lerdahl and Ray Jackendoff's Generative theory of tonal music, which formalized and extended ideas from Schenkerian analysis.
Biolinguistics
Recent work in generative-inspired biolinguistics has proposed that universal grammar consists solely of syntactic recursion, and that it arose recently in humans as the result of a random genetic mutation. Generative-inspired biolinguistics has not uncovered any particular genes responsible for language. While some prospects were raised at the discovery of the FOXP2 gene, there is not enough support for the idea that it is 'the grammar gene' or that it had much to do with the relatively recent emergence of syntactical speech.
History
As a distinct research tradition, generative grammar began in the late 1950s with the work of Noam Chomsky. However, its roots include earlier structuralist approaches such as glossematics which themselves had older roots, for instance in the work of the ancient Indian grammarian Pāṇini. Military funding to generative research was an important factor in its early spread in the 1960s.
The initial version of generative syntax was called transformational grammar. In transformational grammar, rules called transformations mapped a level of representation called deep structure to another level of representation called surface structure. The semantic interpretation of a sentence was represented by its deep structure, while the surface structure provided its pronunciation. For example, an active sentence such as "The doctor examined the patient" and its passive counterpart "The patient was examined by the doctor" had the same deep structure. The difference in surface structures arises from the application of the passivization transformation, which was assumed to not affect meaning. This assumption was challenged in the 1960s by the discovery of examples such as "Everyone in the room knows two languages" and "Two languages are known by everyone in the room".
After the Linguistics wars of the late 1960s and early 1970s, Chomsky developed a revised model of syntax called Government and binding theory, which eventually grew into Minimalism. In the aftermath of those disputes, a variety of other generative models of syntax were proposed including relational grammar, Lexical-functional grammar (LFG), and Head-driven phrase structure grammar (HPSG).
Generative phonology originally focused on rewrite rules, in a system commonly known as SPE Phonology after the 1968 book The Sound Pattern of English by Chomsky and Morris Halle. In the 1990s, this approach was largely replaced by Optimality theory, which was able to capture generalizations called conspiracies which needed to be stipulated in SPE phonology.
Semantics emerged as a subfield of generative linguistics during the late 1970s, with the pioneering work of Richard Montague. Montague proposed a system called Montague grammar which consisted of interpretation rules mapping expressions from a bespoke model of syntax to formulas of intensional logic. Subsequent work by Barbara Partee, Irene Heim, Tanya Reinhart, and others showed that the key insights of Montague Grammar could be incorporated into more syntactically plausible systems.
See also
Cognitive linguistics
Cognitive revolution
Digital infinity
Formal grammar
Functional theories of grammar
Generative lexicon
Generative metrics
Generative principle
Generative semantics
Generative systems
Parsing
Phrase structure rules
Syntactic Structures
References
Further reading
Chomsky, Noam. 1965. Aspects of the theory of syntax. Cambridge, Massachusetts: MIT Press.
Hurford, J. (1990) Nativist and functional explanations in language acquisition. In I. M. Roca (ed.), Logical Issues in Language Acquisition, 85–136. Foris, Dordrecht.
Grammar
Grammar frameworks
Noam Chomsky
Cognitive musicology
Mechanism of action
In pharmacology, the term mechanism of action (MOA) refers to the specific biochemical interaction through which a drug substance produces its pharmacological effect. A mechanism of action usually includes mention of the specific molecular targets to which the drug binds, such as an enzyme or receptor. Receptor sites have specific affinities for drugs based on the chemical structure of the drug, as well as the specific action that occurs there.
Drugs that do not bind to receptors produce their corresponding therapeutic effect by simply interacting with chemical or physical properties in the body. Common examples of drugs that work in this way are antacids and laxatives.
In contrast, a mode of action (MoA) describes functional or anatomical changes, at the cellular level, resulting from the exposure of a living organism to a substance.
Importance
Elucidating the mechanism of action of novel drugs and medications is important for several reasons:
In the case of anti-infective drug development, the information permits anticipation of problems relating to clinical safety. Drugs disrupting the cytoplasmic membrane or electron transport chain, for example, are more likely to cause toxicity problems than those targeting components of the cell wall (peptidoglycan or β-glucans) or 70S ribosome, structures which are absent in human cells.
By knowing the interaction between a certain site of a drug and a receptor, other drugs can be formulated in a way that replicates this interaction, thus producing the same therapeutic effects. Indeed, this method is used to create new drugs.
It can help identify which patients are most likely to respond to treatment. Because the breast cancer medication trastuzumab is known to target protein HER2, for example, tumors can be screened for the presence of this molecule to determine whether or not the patient will benefit from trastuzumab therapy.
It can enable better dosing because the drug's effects on the target pathway can be monitored in the patient. Statin dosage, for example, is usually determined by measuring the patient's blood cholesterol levels.
It allows drugs to be combined in such a way that the likelihood of drug resistance emerging is reduced. By knowing what cellular structure an anti-infective or anticancer drug acts upon, it is possible to administer a cocktail that inhibits multiple targets simultaneously, thereby reducing the risk that a single mutation in microbial or tumor DNA will lead to drug resistance and treatment failure.
It may allow other indications for the drug to be identified. Discovery that sildenafil inhibits phosphodiesterase-5 (PDE-5) proteins, for example, enabled this drug to be repurposed for pulmonary arterial hypertension treatment, since PDE-5 is expressed in pulmonary hypertensive lungs.
Determination
Microscopy-based methods
Bioactive compounds induce phenotypic changes in target cells, changes that are observable by microscopy and that can give insight into the mechanism of action of the compound.
With antibacterial agents, the conversion of target cells to spheroplasts can be an indication that peptidoglycan synthesis is being inhibited, and filamentation of target cells can be an indication that PBP3, FtsZ, or DNA synthesis is being inhibited. Other antibacterial agent-induced changes include ovoid cell formation, pseudomulticellular forms, localized swelling, bulge formation, blebbing, and peptidoglycan thickening. In the case of anticancer agents, bleb formation can be an indication that the compound is disrupting the plasma membrane.
A current limitation of this approach is the time required to manually generate and interpret data, but advances in automated microscopy and image analysis software may help resolve this.
Direct biochemical methods
Direct biochemical methods include methods in which a protein or a small molecule, such as a drug candidate, is labeled and is traced throughout the body. This is the most direct approach for finding the target proteins that bind to small molecules of interest, such as a basic representation of a drug scaffold, in order to identify the pharmacophore of the drug. Due to the physical interactions between the labeled molecule and a protein, biochemical methods can be used to determine the toxicity, efficacy, and mechanism of action of the drug.
Computation inference methods
Computational inference methods are primarily used to predict protein targets for small-molecule drugs based on computer-based pattern recognition. However, this method can also be used to find new targets for existing or newly developed drugs. Once the pharmacophore of the drug molecule has been identified, pattern-recognition profiling can be carried out to identify a new target. This provides insight into a possible mechanism of action, since it is known which functional components of the drug are responsible for interacting with a certain area on a protein, thus leading to a therapeutic effect.
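As a purely illustrative sketch of this kind of pattern recognition, the following Python snippet ranks hypothetical reference compounds with known targets by structural-fingerprint similarity (Tanimoto coefficient) to a query drug; the fingerprints, compound names, and targets are invented placeholders, and real workflows would use cheminformatics toolkits and curated target databases.

def tanimoto(a, b):
    """Tanimoto similarity between two sets of structural-feature bits."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Hypothetical reference library: fingerprint feature sets and known protein targets.
reference = {
    "compound_A": ({1, 4, 7, 9, 12}, "kinase_X"),
    "compound_B": ({2, 3, 7, 8}, "receptor_Y"),
    "compound_C": ({1, 4, 9, 12, 15}, "kinase_X"),
}

query_fingerprint = {1, 4, 7, 12, 15}  # fingerprint of the drug under study

ranked = sorted(
    ((tanimoto(query_fingerprint, fp), name, target)
     for name, (fp, target) in reference.items()),
    reverse=True,
)
print(ranked)  # the most similar reference compounds suggest candidate targets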
Omics based methods
Omics based methods use omics technologies, such as chemoproteomics, reverse genetics and genomics, transcriptomics, and proteomics, to identify the potential targets of the compound of interest. Reverse genetics and genomics approaches, for instance, use genetic perturbation (e.g. CRISPR-Cas9 or siRNA) in combination with the compound to identify genes whose knockdown or knockout abolishes the pharmacological effect of the compound. On the other hand, transcriptomics and proteomics profiles of the compound can be compared with the profiles of compounds with known targets. Using computational inference, it is then possible to make hypotheses about the mechanism of action of the compound, which can subsequently be tested.
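The profile-comparison idea can be sketched in a few lines of Python: the expression signature of the compound of interest is correlated against signatures of compounds with known targets, and a strong correlation suggests a shared mechanism that can then be tested. All numbers and compound names below are invented toy values (statistics.correlation requires Python 3.10 or later).

from statistics import correlation  # Pearson correlation; Python 3.10+

# Toy log fold-change signature of the compound of interest across five genes.
query_signature = [2.1, -1.3, 0.4, 3.0, -0.8]

# Toy signatures of reference compounds whose targets are already known.
known_profiles = {
    "HSP90_inhibitor": [1.9, -1.1, 0.6, 2.7, -0.9],
    "proteasome_inhibitor": [-0.5, 2.2, -1.8, 0.1, 1.4],
}

for name, profile in known_profiles.items():
    print(name, round(correlation(query_signature, profile), 3))
# A strong correlation (here with the HSP90 inhibitor) suggests a shared
# mechanism of action, a hypothesis that can then be tested experimentally.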
Drugs with known MOA
There are many drugs in which the mechanism of action is known. One example is aspirin.
Aspirin
The mechanism of action of aspirin involves irreversible inhibition of the enzyme cyclooxygenase, thereby suppressing the production of prostaglandins and thromboxanes and thus reducing pain and inflammation. This mechanism of action is specific to aspirin and is not constant for all nonsteroidal anti-inflammatory drugs (NSAIDs). Rather, aspirin is the only NSAID that irreversibly inhibits COX-1.
Drugs with unknown MOA
Some drug mechanisms of action are still unknown. However, even though the mechanism of action of a certain drug is unknown, the drug still functions; it is just unknown or unclear how the drug interacts with receptors and produces its therapeutic effect.
Mode of action
In some literature articles, the terms "mechanism of action" and "mode of action" are used interchangeably, typically referring to the way in which the drug interacts and produces a medical effect. However, a mode of action in fact describes functional or anatomical changes, at the cellular level, resulting from the exposure of a living organism to a substance. This differs from a mechanism of action, which is a more specific term focusing on the interaction between the drug itself and an enzyme or receptor, and on the particular form of that interaction, whether through inhibition, activation, agonism, or antagonism. Furthermore, the term "mechanism of action" is primarily used in pharmacology, whereas "mode of action" appears more often in microbiology or certain other areas of biology.
See also
Mode of action (MoA)
Pharmacodynamics
Chemoproteomics
References
Pharmacology
Pharmacodynamics
Medicinal chemistry
Avogadro's law
Avogadro's law (sometimes referred to as Avogadro's hypothesis or Avogadro's principle) or Avogadro-Ampère's hypothesis is an experimental gas law relating the volume of a gas to the amount of substance of gas present. The law is a specific case of the ideal gas law. A modern statement is:
Avogadro's law states that "equal volumes of all gases, at the same temperature and pressure, have the same number of molecules."
For a given mass of an ideal gas, the volume and amount (moles) of the gas are directly proportional if the temperature and pressure are constant.
The law is named after Amedeo Avogadro who, in 1812, hypothesized that two given samples of an ideal gas, of the same volume and at the same temperature and pressure, contain the same number of molecules. As an example, equal volumes of gaseous hydrogen and nitrogen contain the same number of molecules when they are at the same temperature and pressure, and observe ideal gas behavior. In practice, real gases show small deviations from the ideal behavior and the law holds only approximately, but is still a useful approximation for scientists.
Mathematical definition
The law can be written as
V ∝ n
or
V/n = k
where
V is the volume of the gas;
n is the amount of substance of the gas (measured in moles);
k is a constant for a given temperature and pressure.
This law describes how, under the same conditions of temperature and pressure, equal volumes of all gases contain the same number of molecules. For comparing the same substance under two different sets of conditions, the law can be usefully expressed as follows:
V1/n1 = V2/n2
The equation shows that, as the number of moles of gas increases, the volume of the gas also increases in proportion. Similarly, if the number of moles of gas is decreased, then the volume also decreases. Thus, the number of molecules or atoms in a specific volume of ideal gas is independent of their size or the molar mass of the gas.
Derivation from the ideal gas law
The derivation of Avogadro's law follows directly from the ideal gas law, i.e.
PV = nRT
where R is the gas constant, T is the Kelvin temperature, and P is the pressure (in pascals).
Solving for V/n, we thus obtain
V/n = RT/P
Compare that to
V/n = k
which is a constant for a fixed pressure and a fixed temperature.
An equivalent formulation of the ideal gas law can be written using the Boltzmann constant kB, as
PV = N kB T
where N is the number of particles in the gas, and the ratio of R over kB is equal to the Avogadro constant.
In this form, since V/N is a constant at fixed temperature and pressure, we have
V/N = k′ = kB T / P
If T and P are taken at standard conditions for temperature and pressure (STP), then k′ = 1/n0, where n0 is the Loschmidt constant.
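A small numerical check of this derivation (assuming nothing beyond the ideal gas law and a standard value of R) confirms that V/n is the same constant k = RT/P regardless of the amount of gas:

R = 8.314462618   # molar gas constant, J/(mol*K)
T = 273.15        # temperature, K
P = 100_000.0     # pressure, Pa (100 kPa)

k = R * T / P     # the constant in Avogadro's law, m^3 per mole
for n in (0.5, 1.0, 2.0):      # amount of gas, mol
    V = n * R * T / P          # volume from the ideal gas law
    print(n, V / n, k)         # V/n equals k for every n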
Historical account and influence
Avogadro's hypothesis (as it was known originally) was formulated in the same spirit as earlier empirical gas laws like Boyle's law (1662), Charles's law (1787) and Gay-Lussac's law (1808). The hypothesis was first published by Amedeo Avogadro in 1811, and it reconciled Dalton's atomic theory with the "incompatible" idea of Joseph Louis Gay-Lussac that some gases were composites of different fundamental substances (molecules) in integer proportions. In 1814, independently of Avogadro, André-Marie Ampère published the same law with similar conclusions. As Ampère was better known in France, the hypothesis was usually referred to there as Ampère's hypothesis, and later also as the Avogadro–Ampère hypothesis or even the Ampère–Avogadro hypothesis.
Experimental studies carried out by Charles Frédéric Gerhardt and Auguste Laurent on organic chemistry demonstrated that Avogadro's law explained why the same quantities of molecules in a gas have the same volume. Nevertheless, related experiments with some inorganic substances showed seeming exceptions to the law. This apparent contradiction was finally resolved by Stanislao Cannizzaro, as announced at Karlsruhe Congress in 1860, four years after Avogadro's death. He explained that these exceptions were due to molecular dissociations at certain temperatures, and that Avogadro's law determined not only molecular masses, but atomic masses as well.
Ideal gas law
Boyle, Charles and Gay-Lussac laws, together with Avogadro's law, were combined by Émile Clapeyron in 1834, giving rise to the ideal gas law. At the end of the 19th century, later developments from scientists like August Krönig, Rudolf Clausius, James Clerk Maxwell and Ludwig Boltzmann gave rise to the kinetic theory of gases, a microscopic theory from which the ideal gas law can be derived as a statistical result of the movement of atoms/molecules in a gas.
Avogadro constant
Avogadro's law provides a way to calculate the quantity of gas in a receptacle. Thanks to this discovery, Johann Josef Loschmidt, in 1865, was able for the first time to estimate the size of a molecule. His calculation gave rise to the concept of the Loschmidt constant, a ratio between macroscopic and atomic quantities. In 1910, Millikan's oil drop experiment determined the charge of the electron; using it with the Faraday constant (derived by Michael Faraday in 1834), one is able to determine the number of particles in a mole of substance. At the same time, precision experiments by Jean Baptiste Perrin led to the definition of the Avogadro number as the number of molecules in one gram-molecule of oxygen. Perrin named the number to honor Avogadro for his discovery of the namesake law. Later standardization of the International System of Units led to the modern definition of the Avogadro constant.
Molar volume
At standard temperature and pressure (100 kPa and 273.15 K), we can use Avogadro's law to find the molar volume of an ideal gas:
Vm = V/n = RT/P ≈ 22.71 L/mol
Similarly, at standard atmospheric pressure (101.325 kPa) and 0 °C (273.15 K):
Vm ≈ 22.41 L/mol
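These molar volumes can be reproduced with a few lines of Python from the ideal gas law, using only the gas constant and the two reference conditions quoted above:

R = 8.314462618   # molar gas constant, J/(mol*K)
T = 273.15        # temperature, K

for label, P in (("STP (100 kPa)", 100_000.0), ("1 atm (101.325 kPa)", 101_325.0)):
    V_m = R * T / P                    # molar volume, m^3/mol
    print(label, round(V_m * 1000, 3), "L/mol")
# Prints approximately 22.711 L/mol and 22.414 L/mol respectively.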
Notes
References
Gas laws
Amount of substance
Exploratory data analysis
In statistics, exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often using statistical graphics and other data visualization methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling, and thereby contrasts with traditional hypothesis testing. Exploratory data analysis has been promoted by John Tukey since 1970 to encourage statisticians to explore the data, and possibly formulate hypotheses that could lead to new data collection and experiments. EDA is different from initial data analysis (IDA), which focuses more narrowly on checking assumptions required for model fitting and hypothesis testing, and handling missing values and making transformations of variables as needed. EDA encompasses IDA.
Overview
Tukey defined data analysis in 1961 as: "Procedures for analyzing data, techniques for interpreting the results of such procedures, ways of planning the gathering of data to make its analysis easier, more precise or more accurate, and all the machinery and results of (mathematical) statistics which apply to analyzing data."
Exploratory data analysis is a technique for analyzing and investigating data sets in order to summarize their main characteristics. A main advantage of EDA is that it provides visualizations of the data after the analysis has been conducted.
Tukey's championing of EDA encouraged the development of statistical computing packages, especially S at Bell Labs. The S programming language inspired the systems S-PLUS and R. This family of statistical-computing environments featured vastly improved dynamic visualization capabilities, which allowed statisticians to identify outliers, trends and patterns in data that merited further study.
Tukey's EDA was related to two other developments in statistical theory: robust statistics and nonparametric statistics, both of which tried to reduce the sensitivity of statistical inferences to errors in formulating statistical models. Tukey promoted the use of the five-number summary of numerical data—the two extremes (maximum and minimum), the median, and the quartiles—because the median and quartiles, being functions of the empirical distribution, are defined for all distributions, unlike the mean and standard deviation; moreover, the quartiles and median are more robust to skewed or heavy-tailed distributions than traditional summaries (the mean and standard deviation). The packages S, S-PLUS, and R included routines using resampling statistics, such as the jackknife of Quenouille and Tukey and Efron's bootstrap, which are nonparametric and robust (for many problems).
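The point about robustness can be illustrated with a short Python sketch (toy numbers, standard library only): adding a single extreme outlier barely moves the median and quartiles, while the mean and standard deviation shift drastically.

import statistics

data = [2, 3, 3, 4, 5, 5, 6, 7, 8, 9]
data_with_outlier = data + [500]

def five_number_summary(xs):
    xs = sorted(xs)
    q1, median, q3 = statistics.quantiles(xs, n=4)   # quartile cut points
    return min(xs), q1, median, q3, max(xs)

for xs in (data, data_with_outlier):
    print(five_number_summary(xs),
          "mean:", round(statistics.mean(xs), 1),
          "sd:", round(statistics.stdev(xs), 1))
# Only the maximum of the five-number summary reflects the outlier, while the
# mean and standard deviation change drastically.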
Exploratory data analysis, robust statistics, nonparametric statistics, and the development of statistical programming languages facilitated statisticians' work on scientific and engineering problems. Such problems included the fabrication of semiconductors and the understanding of communications networks, which concerned Bell Labs. These statistical developments, all championed by Tukey, were designed to complement the analytic theory of testing statistical hypotheses, particularly the Laplacian tradition's emphasis on exponential families.
Development
John W. Tukey wrote the book Exploratory Data Analysis in 1977. Tukey held that too much emphasis in statistics was placed on statistical hypothesis testing (confirmatory data analysis); more emphasis needed to be placed on using data to suggest hypotheses to test. In particular, he held that confusing the two types of analyses and employing them on the same set of data can lead to systematic bias owing to the issues inherent in testing hypotheses suggested by the data.
The objectives of EDA are to:
Enable unexpected discoveries in the data
Suggest hypotheses about the causes of observed phenomena
Assess assumptions on which statistical inference will be based
Support the selection of appropriate statistical tools and techniques
Provide a basis for further data collection through surveys or experiments
Many EDA techniques have been adopted into data mining. They are also being taught to young students as a way to introduce them to statistical thinking.
Techniques and tools
There are a number of tools that are useful for EDA, but EDA is characterized more by the attitude taken than by particular techniques.
Typical graphical techniques used in EDA are:
Box plot
Histogram
Multi-vari chart
Run chart
Pareto chart
Scatter plot (2D/3D)
Stem-and-leaf plot
Parallel coordinates
Odds ratio
Targeted projection pursuit
Heat map
Bar chart
Horizon graph
Glyph-based visualization methods such as PhenoPlot and Chernoff faces
Projection methods such as grand tour, guided tour and manual tour
Interactive versions of these plots
Dimensionality reduction:
Multidimensional scaling
Principal component analysis (PCA)
Multilinear PCA
Nonlinear dimensionality reduction (NLDR)
Iconography of correlations
Typical quantitative techniques are:
Median polish
Trimean
Ordination
History
Many EDA ideas can be traced back to earlier authors, for example:
Francis Galton emphasized order statistics and quantiles.
Arthur Lyon Bowley used precursors of the stemplot and five-number summary (Bowley actually used a "seven-figure summary", including the extremes, deciles and quartiles, along with the median—see his Elementary Manual of Statistics (3rd edn., 1920), p. 62, where he defines "the maximum and minimum, median, quartiles and two deciles" as the "seven positions").
Andrew Ehrenberg articulated a philosophy of data reduction (see his book of the same name).
The Open University course Statistics in Society (MDST 242), took the above ideas and merged them with Gottfried Noether's work, which introduced statistical inference via coin-tossing and the median test.
Example
Findings from EDA are orthogonal to the primary analysis task. To illustrate, consider an example from Cook et al. where the analysis task is to find the variables which best predict the tip that a dining party will give to the waiter. The variables available in the data collected for this task are: the tip amount, total bill, payer gender, smoking/non-smoking section, time of day, day of the week, and size of the party. The primary analysis task is approached by fitting a regression model where the tip rate is the response variable. The fitted model is
(tip rate) = 0.18 - 0.01 × (party size)
which says that as the size of the dining party increases by one person (leading to a higher bill), the tip rate will decrease by 1%, on average.
However, exploring the data reveals other interesting features not described by this model.
What is learned from the plots is different from what is illustrated by the regression model, even though the experiment was not designed to investigate any of these other trends. The patterns found by exploring the data suggest hypotheses about tipping that may not have been anticipated in advance, and which could lead to interesting follow-up experiments where the hypotheses are formally stated and tested by collecting new data.
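The primary analysis described above can be sketched as follows, assuming seaborn's bundled "tips" example dataset (fetched over the network on first use), which contains the variables listed; the fitted coefficients depend on the data and fitting choices, so they may differ slightly from the quoted model.

import numpy as np
import seaborn as sns

tips = sns.load_dataset("tips")                 # total_bill, tip, size, ...
tip_rate = tips["tip"] / tips["total_bill"]

slope, intercept = np.polyfit(tips["size"], tip_rate, deg=1)
print(f"(tip rate) = {intercept:.2f} + {slope:.3f} x (party size)")

# Exploratory plots of the same data, e.g. a scatter plot of tip rate against
# total bill split by smoking section, can then reveal structure that this
# one-line regression summary hides.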
Software
JMP, an EDA package from SAS Institute.
KNIME, Konstanz Information Miner – Open-Source data exploration platform based on Eclipse.
Minitab, an EDA and general statistics package widely used in industrial and corporate settings.
Orange, an open-source data mining and machine learning software suite.
Python, an open-source programming language widely used in data mining and machine learning.
R, an open-source programming language for statistical computing and graphics. Together with Python, it is one of the most popular languages for data science.
TinkerPlots an EDA software for upper elementary and middle school students.
Weka an open source data mining package that includes visualization and EDA tools such as targeted projection pursuit.
See also
Anscombe's quartet, on importance of exploration
Data dredging
Predictive analytics
Structured data analysis (statistics)
Configural frequency analysis
Descriptive statistics
References
Bibliography
Andrienko, N & Andrienko, G (2005) Exploratory Analysis of Spatial and Temporal Data. A Systematic Approach. Springer.
Cook, D. and Swayne, D.F. (with A. Buja, D. Temple Lang, H. Hofmann, H. Wickham, M. Lawrence) (2007-12-12). Interactive and Dynamic Graphics for Data Analysis: With R and GGobi. Springer. ISBN 9780387717616.
Hoaglin, D C; Mosteller, F & Tukey, John Wilder (Eds) (1985). Exploring Data Tables, Trends and Shapes. ISBN 978-0-471-09776-1.
Hoaglin, D C; Mosteller, F & Tukey, John Wilder (Eds) (1983). Understanding Robust and Exploratory Data Analysis. ISBN 978-0-471-09777-8.
Young, F. W., Valero-Mora, P. and Friendly, M. (2006) Visual Statistics: Seeing your data with Dynamic Interactive Graphics. Wiley. ISBN 978-0-471-68160-1
Jambu, M. (1991) Exploratory and Multivariate Data Analysis. Academic Press. ISBN 0123800900
S. H. C. DuToit, A. G. W. Steyn, R. H. Stumpf (1986) Graphical Exploratory Data Analysis. Springer ISBN 978-1-4612-9371-2
Leinhardt, G., Leinhardt, S., Exploratory Data Analysis: New Tools for the Analysis of Empirical Data, Review of Research in Education, Vol. 8, 1980 (1980), pp. 85–157.
Theus, M., Urbanek, S. (2008), Interactive Graphics for Data Analysis: Principles and Examples, CRC Press, Boca Raton, FL,
External links
Carnegie Mellon University – free online course on Probability and Statistics, with a module on EDA
Exploratory data analysis chapter: engineering statistics handbook
Degeneracy (biology)
Within biological systems, degeneracy occurs when structurally dissimilar components/pathways can perform similar functions (i.e. are effectively interchangeable) under certain conditions, but perform distinct functions in other conditions. Degeneracy is thus a relational property that requires comparing the behavior of two or more components. In particular, if degeneracy is present in a pair of components, then there will exist conditions where the pair will appear functionally redundant but other conditions where they will appear functionally distinct.
Note that this use of the term has practically no relevance to the questionably meaningful concept of evolutionarily degenerate populations that have lost ancestral functions.
Biological examples
Examples of degeneracy are found in the genetic code, when many different nucleotide sequences encode the same polypeptide; in protein folding, when different polypeptides fold to be structurally and functionally equivalent; in protein functions, when overlapping binding functions and similar catalytic specificities are observed; in metabolism, when multiple, parallel biosynthetic and catabolic pathways may coexist.
More generally, degeneracy is observed in proteins of every functional class (e.g. enzymatic, structural, or regulatory), protein complex assemblies, ontogenesis, the nervous system, cell signalling (crosstalk) and numerous other biological contexts reviewed in the literature.
Contribution to robustness
Degeneracy contributes to the robustness of biological traits through several mechanisms. Degenerate components compensate for one another under conditions where they are functionally redundant, thus providing robustness against component or pathway failure. Because degenerate components are somewhat different, they tend to harbor unique sensitivities, so that a targeted attack such as a specific inhibitor is less likely to present a risk to all components at once. There are numerous biological examples where degeneracy contributes to robustness in this way. For instance, gene families can encode diverse proteins with many distinctive roles, yet sometimes these proteins can compensate for each other during lost or suppressed gene expression, as seen in the developmental roles of the adhesin gene family in Saccharomyces. Nutrients can be metabolized by distinct metabolic pathways that are effectively interchangeable for certain metabolites, even though the total effects of each pathway are not identical. In cancer, therapies targeting the EGF receptor are thwarted by the co-activation of alternate receptor tyrosine kinases (RTKs) that have partial functional overlap with the EGF receptor (and are therefore degenerate) but are not targeted by the same specific EGF receptor inhibitor. Other examples from various levels of biological organization can be found in the literature.
Theory
Several theoretical developments have outlined links between degeneracy and important biological measurements related to robustness, complexity, and evolvability. These include:
Theoretical arguments supported by simulations have proposed that degeneracy can lead to distributed forms of robustness in protein interaction networks. Those authors suggest that similar phenomena are likely to arise in other biological networks and potentially may contribute to the resilience of ecosystems as well.
Tononi et al. have found evidence that degeneracy is inseparable from the existence of hierarchical complexity in neural populations. They argue that the link between degeneracy and complexity is likely to be much more general.
Fairly abstract simulations have supported the hypothesis that degeneracy fundamentally alters the propensity for a genetic system to access novel heritable phenotypes and that degeneracy could therefore be a precondition for open-ended evolution.
The three hypotheses above have been integrated in work proposing that degeneracy plays a central role in the open-ended evolution of biological complexity. In the same article, it was argued that the absence of degeneracy within many designed (abiotic) complex systems may help to explain why robustness appears to be in conflict with flexibility and adaptability, as seen in software, systems engineering, and artificial life.
See also
Canalisation
Equifinality
References
Further reading
Because there are many distinct types of systems that undergo heritable variation and selection (see Universal Darwinism), degeneracy has become a highly interdisciplinary topic. The following provides a brief roadmap to the application and study of degeneracy within different disciplines.
Animal Communication
Cultural Variation
Ecosystems
Epigenetics
History and philosophy of science
Systems biology
Evolution
Immunology
Cohen, I.R., U. Hershberg, and S. Solomon, 2004. Antigen-receptor degeneracy and immunological paradigms. Molecular Immunology, 40(14–15), pp. 993–996.
Tieri, P., G.C. Castellani, D. Remondini, S. Valensin, J. Loroni, S. Salvioli, and C. Franceschi, Capturing degeneracy of the immune system. In Silico Immunology. Springer, 2007.
Artificial life, Computational intelligence
Andrews, P.S. and J. Timmis, A Computational Model of Degeneracy in a Lymph Node. Lecture Notes in Computer Science, 2006. 4163: p. 164.
Mendao, M., J. Timmis, P.S. Andrews, and M. Davies. The Immune System in Pieces: Computational Lessons from Degeneracy in the Immune System. in Foundations of Computational Intelligence (FOCI). 2007.
Whitacre, J.M. and A. Bender. Degenerate neutrality creates evolvable fitness landscapes. in WorldComp-2009. 2009. Las Vegas, Nevada, USA.
Whitacre, J.M., P. Rohlfshagen, X. Yao, and A. Bender. The role of degenerate robustness in the evolvability of multi-agent systems in dynamic environments. in PPSN XI. 2010. Kraków, Poland.
Fernandez-Leon, J.A. (2011). Evolving cognitive-behavioural dependencies in situated agents for behavioural robustness. BioSystems 106, pp. 94–110.
Fernandez-Leon, J.A. (2011). Behavioural robustness: a link between distributed mechanisms and coupled transient dynamics. BioSystems 105, Elsevier, pp. 49–61.
Fernandez-Leon, J.A. (2010). Evolving experience-dependent robust behaviour in embodied agents. BioSystems 103:1, Elsevier, pp. 45–56.
Brain
Price, C. and K. Friston, Degeneracy and cognitive anatomy. Trends in Cognitive Sciences, 2002. 6(10) pp. 416–421.
Tononi, G., O. Sporns, and G.M. Edelman, Measures of degeneracy and redundancy in biological networks. Proceedings of the National Academy of Sciences, USA, 1999. 96(6) pp. 3257–3262.
Mason, P.H. (2014) What is normal? A historical survey and neuroanthropological perspective, in Jens Clausen and Neil Levy. (Eds.) Handbook of Neuroethics, Springer, pp. 343–363.
Linguistics
Oncology
Tian, T., S. Olson, J.M. Whitacre, and A. Harding, The origins of cancer robustness and evolvability. Integrative Biology, 2011. 3: pp. 17–30.
Peer Review
Lehky, S., Peer Evaluation and Selection Systems: Adaptation and Maladaptation of Individuals and Groups through Peer Review. 2011: BioBitField Press.
Researchers
Duarte Araujo
Sergei Atamas
Andrew Barron
Keith Davids
Gerald Edelman
Ryszard Maleszka
Paul Mason
Ludovic Seifert
Ricard Sole
Giulio Tononi
James Whitacre
External links
degeneracy research community
Biological concepts
Biology theories
Evolutionarily significant biological phenomena
Systems biology
Evolutionary dynamics
Evolutionary processes
Aspen HYSYS
Aspen HYSYS (or simply HYSYS) is a chemical process simulator currently developed by AspenTech used to mathematically model chemical processes, from unit operations to full chemical plants and refineries. HYSYS is able to perform many of the core calculations of chemical engineering, including those concerned with mass balance, energy balance, vapor-liquid equilibrium, heat transfer, mass transfer, chemical kinetics, fractionation, and pressure drop. HYSYS is used extensively in industry and academia for steady-state and dynamic simulation, process design, performance modeling, and optimization.
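HYSYS itself is closed-source commercial software, so no real HYSYS code can be shown here; the following hand-rolled Python sketch only illustrates the flavour of one core calculation a process simulator performs: a bubble-point vapor-liquid equilibrium estimate for an ideal benzene/toluene mixture using Raoult's law and approximate Antoine coefficients.

# Antoine equation: log10(Psat [mmHg]) = A - B / (C + T [degC]); approximate coefficients.
ANTOINE = {
    "benzene": (6.90565, 1211.033, 220.790),
    "toluene": (6.95464, 1344.800, 219.480),
}

def psat_mmHg(component, T_C):
    A, B, C = ANTOINE[component]
    return 10.0 ** (A - B / (C + T_C))

def bubble_point_pressure(x, T_C):
    """Total pressure and vapour composition for liquid mole fractions x (Raoult's law)."""
    partial = {c: xi * psat_mmHg(c, T_C) for c, xi in x.items()}
    P = sum(partial.values())
    y = {c: p / P for c, p in partial.items()}
    return P, y

P, y = bubble_point_pressure({"benzene": 0.4, "toluene": 0.6}, T_C=90.0)
print(round(P, 1), "mmHg", {c: round(v, 3) for c, v in y.items()})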
Etymology
HYSYS is a portmanteau formed from Hyprotech, the name of the company which created the software, and Systems.
History
HYSYS was first conceived and created by the Canadian company Hyprotech, founded by researchers from the University of Calgary. The HYSYS Version 1.1 Reference Volume was published in 1996.
In May 2002, AspenTech acquired Hyprotech, including HYSYS. Following a 2004 ruling by the United States Federal Trade Commission, AspenTech was forced to divest its Hyprotech assets, including HYSYS source code, ultimately selling these to Honeywell. Honeywell was also able to hire a number of HYSYS developers, ultimately mobilizing these resources to produce UniSim. The divestment agreement specified that Aspentech would retain rights to market and develop most Hyprotech products (including HYSYS) royalty-free. As of 2024, AspenTech continues to produce HYSYS.
References
Chemical synthesis
Chemical engineering software
Fluid dynamics
In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids — liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space and modelling fission weapon detonation.
Fluid dynamics offers a systematic structure—which underlies these practical disciplines—that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves the calculation of various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time.
Before the twentieth century, "hydrodynamics" was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases.
Equations
The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum, and conservation of energy (also known as the First Law of Thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds transport theorem.
In addition to the above, fluids are assumed to obey the continuum assumption. At small scale, all fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption assumes that fluids are continuous, rather than discrete. Consequently, it is assumed that properties such as density, pressure, temperature, and flow velocity are well-defined at infinitesimally small points in space and vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored.
For fluids that are sufficiently dense to be a continuum, do not contain ionized species, and have flow velocities that are small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier–Stokes equations—which is a non-linear set of differential equations that describes the flow of a fluid whose stress depends linearly on flow velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in several ways, all of which make them easier to solve. Some of the simplifications allow some simple fluid dynamics problems to be solved in closed form.
In addition to the mass, momentum, and energy conservation equations, a thermodynamic equation of state that gives the pressure as a function of other thermodynamic variables is required to completely describe the problem. An example of this would be the perfect gas equation of state:
p = ρRT/M
where p is pressure, ρ is density, and T is the absolute temperature, while R is the universal gas constant and M is the molar mass for a particular gas. A constitutive relation may also be useful.
Conservation laws
Three conservation laws are used to solve fluid dynamics problems, and may be written in integral or differential form. The conservation laws may be applied to a region of the flow called a control volume. A control volume is a discrete volume in space through which fluid is assumed to flow. The integral formulations of the conservation laws are used to describe the change of mass, momentum, or energy within the control volume. Differential formulations of the conservation laws apply Stokes' theorem to yield an expression that may be interpreted as the integral form of the law applied to an infinitesimally small volume (at a point) within the flow.
Classifications
Compressible versus incompressible flow
All fluids are compressible to an extent; that is, changes in pressure or temperature cause changes in density. However, in many situations the changes in pressure and temperature are sufficiently small that the changes in density are negligible. In this case the flow can be modelled as an incompressible flow. Otherwise the more general compressible flow equations must be used.
Mathematically, incompressibility is expressed by saying that the density ρ of a fluid parcel does not change as it moves in the flow field, that is,
Dρ/Dt = 0
where D/Dt is the material derivative, which is the sum of local and convective derivatives. This additional constraint simplifies the governing equations, especially in the case when the fluid has a uniform density.
For flow of gases, to determine whether to use compressible or incompressible fluid dynamics, the Mach number of the flow is evaluated. As a rough guide, compressible effects can be ignored at Mach numbers below approximately 0.3. For liquids, whether the incompressible assumption is valid depends on the fluid properties (specifically the critical pressure and temperature of the fluid) and the flow conditions (how close to the critical pressure the actual flow pressure becomes). Acoustic problems always require allowing compressibility, since sound waves are compression waves involving changes in pressure and density of the medium through which they propagate.
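The Mach-number rule of thumb can be illustrated with a short sketch (ideal-gas speed of sound for air; the 0.3 threshold is only a guide, not a sharp limit):

import math

def mach_number(velocity, temperature_K, gamma=1.4, R_specific=287.05):
    a = math.sqrt(gamma * R_specific * temperature_K)   # speed of sound in air, m/s
    return velocity / a

for v in (30.0, 100.0, 250.0):                          # flow speed, m/s
    M = mach_number(v, temperature_K=288.15)
    verdict = "incompressible model reasonable" if M < 0.3 else "compressibility matters"
    print(v, round(M, 2), verdict)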
Newtonian versus non-Newtonian fluids
All fluids, except superfluids, are viscous, meaning that they exert some resistance to deformation: neighbouring parcels of fluid moving at different velocities exert viscous forces on each other. The velocity gradient is referred to as a strain rate; it has dimensions of inverse time, T⁻¹. Isaac Newton showed that for many familiar fluids such as water and air, the stress due to these viscous forces is linearly related to the strain rate. Such fluids are called Newtonian fluids. The coefficient of proportionality is called the fluid's viscosity; for Newtonian fluids, it is a fluid property that is independent of the strain rate.
Non-Newtonian fluids have a more complicated, non-linear stress-strain behaviour. The sub-discipline of rheology describes the stress-strain behaviours of such fluids, which include emulsions and slurries, some viscoelastic materials such as blood and some polymers, and sticky liquids such as latex, honey and lubricants.
Inviscid versus viscous versus Stokes flow
The dynamic of fluid parcels is described with the help of Newton's second law. An accelerating parcel of fluid is subject to inertial effects.
The Reynolds number is a dimensionless quantity which characterises the magnitude of inertial effects compared to the magnitude of viscous effects. A low Reynolds number indicates that viscous forces are very strong compared to inertial forces. In such cases, inertial forces are sometimes neglected; this flow regime is called Stokes or creeping flow.
In contrast, high Reynolds numbers indicate that the inertial effects have more effect on the velocity field than the viscous (friction) effects. In high Reynolds number flows, the flow is often modeled as an inviscid flow, an approximation in which viscosity is completely neglected. Eliminating viscosity allows the Navier–Stokes equations to be simplified into the Euler equations. The integration of the Euler equations along a streamline in an inviscid flow yields Bernoulli's equation. When, in addition to being inviscid, the flow is irrotational everywhere, Bernoulli's equation can completely describe the flow everywhere. Such flows are called potential flows, because the velocity field may be expressed as the gradient of a potential energy expression.
This idea can work fairly well when the Reynolds number is high. However, problems such as those involving solid boundaries may require that the viscosity be included. Viscosity cannot be neglected near solid boundaries because the no-slip condition generates a thin region of large strain rate, the boundary layer, in which viscosity effects dominate and which thus generates vorticity. Therefore, to calculate net forces on bodies (such as wings), viscous flow equations must be used: inviscid flow theory fails to predict drag forces, a limitation known as the d'Alembert's paradox.
A commonly used model, especially in computational fluid dynamics, is to use two flow models: the Euler equations away from the body, and boundary layer equations in a region close to the body. The two solutions can then be matched with each other, using the method of matched asymptotic expansions.
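A rough sketch of how the Reynolds number is used to judge which regime applies (the fluid properties are illustrative, and the cutoffs are order-of-magnitude guides rather than sharp boundaries):

def reynolds(rho, velocity, length, mu):
    """Reynolds number Re = rho * V * L / mu."""
    return rho * velocity * length / mu

# Water (rho ~ 1000 kg/m^3, mu ~ 1.0e-3 Pa*s) in a 0.05 m pipe at 1 m/s:
Re = reynolds(rho=1000.0, velocity=1.0, length=0.05, mu=1.0e-3)
print(f"Re = {Re:.0f}")

if Re < 1:
    print("creeping (Stokes) flow: viscous forces dominate")
else:
    print("inertia-dominated: an inviscid model may be useful away from boundaries,"
          " but viscosity must be retained near walls (boundary layers)")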
Steady versus unsteady flow
A flow that is not a function of time is called steady flow. Steady-state flow refers to the condition where the fluid properties at a point in the system do not change over time. Time dependent flow is known as unsteady (also called transient). Whether a particular flow is steady or unsteady, can depend on the chosen frame of reference. For instance, laminar flow over a sphere is steady in the frame of reference that is stationary with respect to the sphere. In a frame of reference that is stationary with respect to a background flow, the flow is unsteady.
Turbulent flows are unsteady by definition. A turbulent flow can, however, be statistically stationary. The random velocity field is statistically stationary if all statistics are invariant under a shift in time. This roughly means that all statistical properties are constant in time. Often, the mean field is the object of interest, and this is constant too in a statistically stationary flow.
Steady flows are often more tractable than otherwise similar unsteady flows. The governing equations of a steady problem have one dimension fewer (time) than the governing equations of the same problem without taking advantage of the steadiness of the flow field.
Laminar versus turbulent flow
Turbulence is flow characterized by recirculation, eddies, and apparent randomness. Flow in which turbulence is not exhibited is called laminar. The presence of eddies or recirculation alone does not necessarily indicate turbulent flow—these phenomena may be present in laminar flow as well. Mathematically, turbulent flow is often represented via a Reynolds decomposition, in which the flow is broken down into the sum of an average component and a perturbation component.
It is believed that turbulent flows can be described well through the use of the Navier–Stokes equations. Direct numerical simulation (DNS), based on the Navier–Stokes equations, makes it possible to simulate turbulent flows at moderate Reynolds numbers. Restrictions depend on the power of the computer used and the efficiency of the solution algorithm. The results of DNS have been found to agree well with experimental data for some flows.
Most flows of interest have Reynolds numbers much too high for DNS to be a viable option, given the state of computational power for the next few decades. Any flight vehicle large enough to carry a human (L > 3 m), moving faster than 20 m/s (72 km/h), is well beyond the limit of DNS simulation (Re = 4 million). Transport aircraft wings (such as on an Airbus A300 or Boeing 747) have Reynolds numbers of 40 million (based on the wing chord dimension). Solving these real-life flow problems requires turbulence models for the foreseeable future. Reynolds-averaged Navier–Stokes equations (RANS) combined with turbulence modelling provides a model of the effects of the turbulent flow. Such modelling mainly provides the additional momentum transfer by the Reynolds stresses, although the turbulence also enhances the heat and mass transfer. Another promising methodology is large eddy simulation (LES), especially in the form of detached eddy simulation (DES) — a combination of LES and RANS turbulence modelling.
Other approximations
There are a large number of other possible approximations to fluid dynamic problems. Some of the more commonly used are listed below.
The Boussinesq approximation neglects variations in density except to calculate buoyancy forces. It is often used in free convection problems where density changes are small.
Lubrication theory and Hele–Shaw flow exploits the large aspect ratio of the domain to show that certain terms in the equations are small and so can be neglected.
Slender-body theory is a methodology used in Stokes flow problems to estimate the force on, or flow field around, a long slender object in a viscous fluid.
The shallow-water equations can be used to describe a layer of relatively inviscid fluid with a free surface, in which surface gradients are small.
Darcy's law is used for flow in porous media, and works with variables averaged over several pore-widths.
In rotating systems, the quasi-geostrophic equations assume an almost perfect balance between pressure gradients and the Coriolis force. It is useful in the study of atmospheric dynamics.
Multidisciplinary types
Flows according to Mach regimes
While many flows (such as flow of water through a pipe) occur at low Mach numbers (subsonic flows), many flows of practical interest in aerodynamics or in turbomachines occur at high fractions of M = 1 (transonic flows) or in excess of it (supersonic or even hypersonic flows). New phenomena occur at these regimes such as instabilities in transonic flow, shock waves for supersonic flow, or non-equilibrium chemical behaviour due to ionization in hypersonic flows. In practice, each of those flow regimes is treated separately.
Reactive versus non-reactive flows
Reactive flows are flows that are chemically reactive, which finds its applications in many areas, including combustion (IC engine), propulsion devices (rockets, jet engines, and so on), detonations, fire and safety hazards, and astrophysics. In addition to conservation of mass, momentum and energy, conservation of individual species (for example, mass fraction of methane in methane combustion) need to be derived, where the production/depletion rate of any species are obtained by simultaneously solving the equations of chemical kinetics.
Magnetohydrodynamics
Magnetohydrodynamics is the multidisciplinary study of the flow of electrically conducting fluids in electromagnetic fields. Examples of such fluids include plasmas, liquid metals, and salt water. The fluid flow equations are solved simultaneously with Maxwell's equations of electromagnetism.
Relativistic fluid dynamics
Relativistic fluid dynamics studies the macroscopic and microscopic fluid motion at large velocities comparable to the velocity of light. This branch of fluid dynamics accounts for the relativistic effects both from the special theory of relativity and the general theory of relativity. The governing equations are derived in Riemannian geometry for Minkowski spacetime.
Fluctuating hydrodynamics
This branch of fluid dynamics augments the standard hydrodynamic equations with stochastic fluxes that model thermal fluctuations. As formulated by Landau and Lifshitz, a white noise contribution obtained from the fluctuation-dissipation theorem of statistical mechanics is added to the viscous stress tensor and heat flux.
Terminology
The concept of pressure is central to the study of both fluid statics and fluid dynamics. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be measured using an aneroid, Bourdon tube, mercury column, or various other methods.
Some of the terminology that is necessary in the study of fluid dynamics is not found in other similar areas of study. In particular, some of the terminology used in fluid dynamics is not used in fluid statics.
Characteristic numbers
Terminology in incompressible fluid dynamics
The concepts of total pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the usual sense—they cannot be measured using an aneroid, Bourdon tube or mercury column.) To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field.
A point in a fluid flow where the flow has come to rest (that is to say, speed is equal to zero adjacent to some solid body immersed in the fluid flow) is of special significance. It is of such importance that it is given a special name—a stagnation point. The static pressure at the stagnation point is of special significance and is given its own name—stagnation pressure. In incompressible flows, the stagnation pressure at a stagnation point is equal to the total pressure throughout the flow field.
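For incompressible flow these relations are simple enough to sketch directly: the dynamic pressure is q = ½ρv², and by Bernoulli's equation the stagnation (total) pressure is the static pressure plus the dynamic pressure (the sea-level air values below are approximate).

def dynamic_pressure(rho, v):
    return 0.5 * rho * v ** 2

def stagnation_pressure(p_static, rho, v):
    return p_static + dynamic_pressure(rho, v)

# Air near sea level (rho ~ 1.225 kg/m^3, static pressure ~ 101325 Pa) at 50 m/s:
q = dynamic_pressure(1.225, 50.0)
p0 = stagnation_pressure(101325.0, 1.225, 50.0)
print(round(q, 1), "Pa dynamic pressure;", round(p0, 1), "Pa stagnation pressure")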
Terminology in compressible fluid dynamics
In a compressible fluid, it is convenient to define the total conditions (also called stagnation conditions) for all thermodynamic state properties (such as total temperature, total enthalpy, total speed of sound). These total flow conditions are a function of the fluid velocity and have different values in frames of reference with different motion.
To avoid potential ambiguity when referring to the properties of the fluid associated with the state of the fluid rather than its motion, the prefix "static" is commonly used (such as static temperature and static enthalpy). Where there is no prefix, the fluid property is the static condition (so "density" and "static density" mean the same thing). The static conditions are independent of the frame of reference.
Because the total flow conditions are defined by isentropically bringing the fluid to rest, there is no need to distinguish between total entropy and static entropy as they are always equal by definition. As such, entropy is most commonly referred to as simply "entropy".
See also
List of publications in fluid dynamics
List of fluid dynamicists
References
Further reading
Originally published in 1879, the 6th extended edition appeared first in 1932.
Originally published in 1938.
Encyclopedia: Fluid dynamics Scholarpedia
External links
National Committee for Fluid Mechanics Films (NCFMF), containing films on several subjects in fluid dynamics (in RealMedia format)
Gallery of fluid motion, "a visual record of the aesthetic and science of contemporary fluid mechanics," from the American Physical Society
List of Fluid Dynamics books
Piping
Aerodynamics
Continuum mechanics
Total synthesis
Total synthesis is the complete chemical synthesis of a complex molecule, often a natural product, from simple, commercially available precursors. It usually refers to a process not involving the aid of biological processes, which distinguishes it from semisynthesis. Syntheses may sometimes conclude at a precursor with further known synthetic pathways to a target molecule, in which case it is known as a formal synthesis. Total synthesis target molecules can be natural products, medicinally important active ingredients, known intermediates, or molecules of theoretical interest. Total synthesis targets can also be organometallic or inorganic, though these are rarely encountered. Total synthesis projects often require a wide diversity of reactions and reagents, and consequently demand broad chemical knowledge and training to be successful.
Often, the aim is to discover a new route of synthesis for a target molecule for which there already exist known routes. Sometimes, however, no route exists, and chemists wish to find a viable route for the first time. Total synthesis is particularly important for the discovery of new chemical reactions and new chemical reagents, as well as establishing synthetic routes for medicinally important compounds.
Scope and definitions
There are numerous classes of natural products to which total synthesis is applied. These include (but are not limited to) terpenes, alkaloids, polyketides, and polyethers. Total synthesis targets are sometimes referred to by their organismal origin, such as plant, marine, or fungal. The term total synthesis is less frequently, but still accurately, applied to the synthesis of natural polypeptides and polynucleotides. The peptide hormones oxytocin and vasopressin were isolated and their total syntheses first reported in 1954. It is not uncommon for natural product targets to feature structural components of several natural product classes.
Aims
Although this was not true historically (see the history of the steroid cortisone), total synthesis in the modern age has largely been an academic endeavor (in terms of the manpower applied to such problems). Industrial chemical needs often differ from academic focuses. Commercial entities may pick up particular avenues of total synthesis effort and expend considerable resources on particular natural product targets, especially when semisynthesis can be applied to complex, natural product-derived drugs. Even so, for decades there has been a continuing discussion regarding the value of total synthesis as an academic enterprise. While there are some outliers, the general opinion is that total synthesis has changed in recent decades, will continue to change, and will remain an integral part of chemical research. Within these changes, there has been increasing focus on improving the practicality and marketability of total synthesis methods. The group of Phil S. Baran at Scripps, a notable pioneer of practical synthesis, has endeavored to create scalable and highly efficient syntheses with more immediate uses outside of academia.
History
In 1828, Friedrich Wöhler discovered that an organic substance, urea, could be produced from inorganic starting materials. This was an important conceptual milestone in chemistry: it was the first example of the synthesis of a substance that had been known only as a product of living processes. Wöhler obtained urea by treating silver cyanate with ammonium chloride, a simple, one-step synthesis:
AgNCO + NH4Cl → (NH2)2CO + AgCl
Camphor was a scarce and expensive natural product with worldwide demand. Haller and Blanc had synthesized it from camphoric acid; however, the precursor, camphoric acid, was of unknown structure. When the Finnish chemist Gustav Komppa synthesized camphoric acid from diethyl oxalate and 3,3-dimethylpentanoic acid in 1904, the structure of the precursors allowed contemporary chemists to infer the complicated ring structure of camphor. Shortly thereafter, William Perkin published another synthesis of camphor. The work on the total chemical synthesis of camphor allowed Komppa to begin industrial production of the compound in Tainionkoski, Finland, in 1907.
The American chemist Robert Burns Woodward was a pre-eminent figure in developing total syntheses of complex organic molecules, some of his targets being cholesterol, cortisone, strychnine, lysergic acid, reserpine, chlorophyll, colchicine, vitamin B12, and prostaglandin F-2a.
Vincent du Vigneaud was awarded the 1955 Nobel Prize in Chemistry for the total synthesis of the natural polypeptides oxytocin and vasopressin, reported in 1954, with the citation "for his work on biochemically important sulphur compounds, especially for the first synthesis of a polypeptide hormone."
Elias James Corey won the Nobel Prize in Chemistry in 1990 for lifetime achievement in total synthesis and for the development of retrosynthetic analysis.
List of notable total syntheses
Quinine total synthesis
Vitamin B12 total synthesis
Strychnine total synthesis
Paclitaxel (Taxol) total synthesis
Cholesterol total synthesis
References
External links
The Organic Synthesis Archive
Total Synthesis Highlights
Total Synthesis News
Total syntheses schemes with reaction and reagent indices
Group Meeting Problems in Organic Chemistry
Organic synthesis
Aliphatic compound
In organic chemistry, hydrocarbons (compounds composed solely of carbon and hydrogen) are divided into two classes: aromatic compounds and aliphatic compounds (from Greek aleiphar, "fat, oil"). Aliphatic compounds can be saturated (in which all the C-C bonds are single, so that the structure is completed, or "saturated", by hydrogen), like hexane, or unsaturated, like hexene and hexyne. Open-chain compounds, whether straight or branched, which contain no rings of any type, are always aliphatic. Cyclic compounds can be aliphatic if they are not aromatic.
Structure
Aliphatic compounds can be saturated, joined by single bonds (alkanes), or unsaturated, with double bonds (alkenes) or triple bonds (alkynes). If other elements (heteroatoms) are bound to the carbon chain, the most common being oxygen, nitrogen, sulfur, and chlorine, it is no longer a hydrocarbon, and therefore no longer an aliphatic compound. However, such compounds may still be referred to as aliphatic if the hydrocarbon portion of the molecule is aliphatic, e.g. aliphatic amines, to differentiate them from aromatic amines.
The least complex aliphatic compound is methane (CH4).
Properties
Most aliphatic compounds are flammable, allowing the use of hydrocarbons as fuel, such as methane in natural gas for stoves or heating; butane in torches and lighters; various aliphatic (as well as aromatic) hydrocarbons in liquid transportation fuels like petrol/gasoline, diesel, and jet fuel; and other uses such as ethyne (acetylene) in welding.
Examples of aliphatic compounds
The most important aliphatic compounds are:
n-, iso- and cyclo-alkanes (saturated hydrocarbons)
n-, iso- and cyclo-alkenes and -alkynes (unsaturated hydrocarbons).
Important examples of low-molecular-weight aliphatic compounds can be found in the list below (sorted by the number of carbon atoms):
References
Organic compounds
A New Kind of Science
A New Kind of Science is a book by Stephen Wolfram, published by his company Wolfram Research under the imprint Wolfram Media in 2002. It contains an empirical and systematic study of computational systems such as cellular automata. Wolfram calls these systems simple programs and argues that the scientific philosophy and methods appropriate for the study of simple programs are relevant to other fields of science.
Contents
Computation and its implications
The thesis of A New Kind of Science (NKS) is twofold: that the nature of computation must be explored experimentally, and that the results of these experiments have great relevance to understanding the physical world.
Simple programs
The basic subject of Wolfram's "new kind of science" is the study of simple abstract rules—essentially, elementary computer programs. In almost any class of computational system, one very quickly finds instances of great complexity among its simplest cases, after the same simple set of rules has been applied repeatedly to its own output in a self-reinforcing cycle. This seems to be true regardless of the components of the system and the details of its setup. Systems explored in the book include, amongst others, cellular automata in one, two, and three dimensions; mobile automata; Turing machines in one and two dimensions; several varieties of substitution and network systems; recursive functions; nested recursive functions; combinators; tag systems; register machines; and reversal-addition. For a program to qualify as simple, there are several requirements:
Its operation can be completely explained by a simple graphical illustration.
It can be completely explained in a few sentences of human language.
It can be implemented in a computer language using just a few lines of code.
The number of its possible variations is small enough so that all of them can be computed.
Generally, simple programs tend to have a very simple abstract framework. Simple cellular automata, Turing machines, and combinators are examples of such frameworks, while more complex cellular automata do not necessarily qualify as simple programs. It is also possible to invent new frameworks, particularly to capture the operation of natural systems. The remarkable feature of simple programs is that a significant percentage of them are capable of producing great complexity. Simply enumerating all possible variations of almost any class of programs quickly leads one to examples that do unexpected and interesting things. This leads to the question: if the program is so simple, where does the complexity come from? In a sense, there is not enough room in the program's definition to directly encode all the things the program can do. Therefore, simple programs can be seen as a minimal example of emergence. A logical deduction from this phenomenon is that if the details of the program's rules have little direct relationship to its behavior, then it is very difficult to directly engineer a simple program to perform a specific behavior. An alternative approach is to try to engineer a simple overall computational framework, and then do a brute-force search through all of the possible components for the best match.
Simple programs are capable of a remarkable range of behavior. Some have been proven to be universal computers. Others exhibit properties familiar from traditional science, such as thermodynamic behavior, continuum behavior, conserved quantities, percolation, sensitive dependence on initial conditions, and others. They have been used as models of traffic, material fracture, crystal growth, biological growth, and various sociological, geological, and ecological phenomena. Another feature of simple programs is that, according to the book, making them more complicated seems to have little effect on their overall complexity. A New Kind of Science argues that this is evidence that simple programs are enough to capture the essence of almost any complex system.
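A minimal Python sketch (illustrative code, not code from the book) of one of the simplest such programs, an elementary one-dimensional two-color cellular automaton: the rule number between 0 and 255 encodes the update table, and even these few lines generate highly complex patterns from a single black cell for rules such as 30 or 110.

def eca_step(cells, rule):
    # One update of an elementary cellular automaton on a periodic row of 0/1 cells.
    # Bit i of the rule number gives the new cell value for neighborhood value i,
    # where the neighborhood value is left*4 + center*2 + right.
    table = [(rule >> i) & 1 for i in range(8)]
    n = len(cells)
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

def run_eca(rule, width=79, steps=30):
    # Evolve from a single "on" cell in the middle and print each row.
    cells = [0] * width
    cells[width // 2] = 1
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = eca_step(cells, rule)

run_eca(30)    # rule 30: irregular, seemingly random triangle of cells
run_eca(110)   # rule 110: the rule later discussed as Turing-complete

The entire behavioral difference between a nested, a random-looking, and a universal automaton lies in the single integer passed to run_eca, which is the sense in which the program's definition has "no room" to encode its behavior directly.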
Mapping and mining the computational universe
In order to study simple rules and their often-complex behaviour, Wolfram argues that it is necessary to systematically explore all of these computational systems and document what they do. He further argues that this study should become a new branch of science, like physics or chemistry. The basic goal of this field is to understand and characterize the computational universe using experimental methods.
The proposed new branch of scientific exploration admits many different forms of scientific production. For instance, qualitative classifications are often the results of initial forays into the computational jungle. On the other hand, explicit proofs that certain systems compute this or that function are also admissible. There are also some forms of production that are in some ways unique to this field of study: for example, the discovery of computational mechanisms that emerge in different systems but in bizarrely different forms.
Another type of production involves the creation of programs for the analysis of computational systems. In the NKS framework, these themselves should be simple programs, and subject to the same goals and methodology. An extension of this idea is that the human mind is itself a computational system, and hence providing it with raw data in as effective a way as possible is crucial to research. Wolfram believes that programs and their analysis should be visualized as directly as possible, and exhaustively examined by the thousands or more. Since this new field concerns abstract rules, it can in principle address issues relevant to other fields of science. However, in general Wolfram's idea is that novel ideas and mechanisms can be discovered in the computational universe, where they can be represented in their simplest forms, and then other fields can choose among these discoveries for those they find relevant.
Systematic abstract science
While Wolfram advocates simple programs as a scientific discipline, he also argues that its methodology will revolutionize other fields of science. The basis of his argument is that the study of simple programs is the minimal possible form of science, grounded equally in both abstraction and empirical experimentation. Every aspect of the methodology advocated in NKS is optimized to make experimentation as direct, easy, and meaningful as possible while maximizing the chances that the experiment will do something unexpected. Just as this methodology allows computational mechanisms to be studied in their simplest forms, Wolfram argues that the process of doing so engages with the mathematical basis of the physical world, and therefore has much to offer the sciences.
Wolfram argues that the computational realities of the universe make science hard for fundamental reasons. But he also argues that by understanding the importance of these realities, we can learn to use them in our favor. For instance, instead of reverse engineering our theories from observation, we can enumerate systems and then try to match them to the behaviors we observe. A major theme of NKS is investigating the structure of the possibility space. Wolfram argues that science is far too ad hoc, in part because the models used are too complicated and unnecessarily organized around the limited primitives of traditional mathematics. Wolfram advocates using models whose variations are enumerable and whose consequences are straightforward to compute and analyze.
Philosophical underpinnings
Computational irreducibility
Wolfram argues that one of his achievements is in providing a coherent system of ideas that justifies computation as an organizing principle of science. For instance, he argues that the concept of computational irreducibility (that some complex computations are not amenable to short-cuts and cannot be "reduced"), is ultimately the reason why computational models of nature must be considered in addition to traditional mathematical models. Likewise, his idea of intrinsic randomness generation—that natural systems can generate their own randomness, rather than using chaos theory or stochastic perturbations—implies that computational models do not need to include explicit randomness.
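A sketch of intrinsic randomness generation (illustrative code, not from the book): the center column of the rule 30 automaton, started from a single black cell, is completely deterministic yet looks statistically random, and it has been used as the basis of a pseudorandom generator.

def rule30_center_bits(n_bits, width=4001):
    # Successive values of the center cell of rule 30 started from one "on" cell.
    # Rule 30 in Boolean form: new_cell = left XOR (center OR right).
    cells = [0] * width
    center = width // 2
    cells[center] = 1
    bits = []
    for _ in range(n_bits):
        bits.append(cells[center])
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return bits

bits = rule30_center_bits(64)
print("".join(str(b) for b in bits))  # deterministic but irregular bit stream
print(sum(bits))                      # the count of 1s tends toward half for long runs

No external source of randomness appears anywhere in this code; the irregularity is produced by the rule itself, which is the point of the argument.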
Principle of computational equivalence
Based on his experimental results, Wolfram developed the principle of computational equivalence (PCE): the principle states that systems found in the natural world can perform computations up to a maximal ("universal") level of computational power. Most systems can attain this level. Systems, in principle, compute the same things as a computer. Computation is therefore simply a question of translating input and outputs from one system to another. Consequently, most systems are computationally equivalent. Proposed examples of such systems are the workings of the human brain and the evolution of weather systems.
The principle can be restated as follows: almost all processes that are not obviously simple are of equivalent sophistication. From this principle, Wolfram draws an array of concrete deductions which he argues reinforce his theory. Possibly the most important among these is an explanation as to why we experience randomness and complexity: often, the systems we analyze are just as sophisticated as we are. Thus, complexity is not a special quality of systems, like for instance the concept of "heat," but simply a label for all systems whose computations are sophisticated. Wolfram argues that understanding this makes possible the "normal science" of the NKS paradigm.
Applications and results
There are a number of specific results and ideas in the NKS book, and they can be organized into several themes. One common theme of examples and applications is demonstrating how little complexity it takes to achieve interesting behavior, and how the proper methodology can discover this behavior.
First, there are several cases where the NKS book introduces what was, during the book's composition, the simplest known system in some class that has a particular characteristic. Some examples include the first primitive recursive function that results in complexity, the smallest universal Turing machine, and the shortest axiom for propositional calculus. In a similar vein, Wolfram also demonstrates many simple programs that exhibit phenomena like phase transitions, conserved quantities, continuum behavior, and thermodynamics that are familiar from traditional science. Simple computational models of natural systems like shell growth, fluid turbulence, and phyllotaxis are a final category of applications that fall in this theme.
Another common theme is taking facts about the computational universe as a whole and using them to reason about fields in a holistic way. For instance, Wolfram discusses how facts about the computational universe inform evolutionary theory, SETI, free will, computational complexity theory, and philosophical fields like ontology, epistemology, and even postmodernism.
Wolfram suggests that the theory of computational irreducibility may provide a resolution to the existence of free will in a nominally deterministic universe. He posits that the computational process in the brain of the being with free will is actually complex enough so that it cannot be captured in a simpler computation, due to the principle of computational irreducibility. Thus, while the process is indeed deterministic, there is no better way to determine the being's will than, in essence, to run the experiment and let the being exercise it.
The book also contains a number of individual results—both experimental and analytic—about what a particular automaton computes, or what its characteristics are, using some methods of analysis.
The book contains a new technical result in describing the Turing completeness of the Rule 110 cellular automaton. Very small Turing machines can simulate Rule 110, which Wolfram demonstrates using a 2-state 5-symbol universal Turing machine. Wolfram conjectures that a particular 2-state 3-symbol Turing machine is universal. In 2007, as part of commemorating the book's fifth anniversary, Wolfram's company offered a $25,000 prize for proof that this Turing machine is universal. Alex Smith, a computer science student from Birmingham, UK, won the prize later that year by proving Wolfram's conjecture.
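To give a sense of how small such machines are, here is a minimal Turing-machine interpreter in Python together with an illustrative 2-state, 2-symbol machine (the classic "busy beaver", not one of the machines analyzed in the book):

def run_turing_machine(rules, start_state, halt_state, max_steps=10000):
    # rules maps (state, symbol) -> (symbol_to_write, head_move, next_state),
    # with head_move in {-1, +1}; the tape is a dict of cells defaulting to 0.
    tape, head, state, steps = {}, 0, start_state, 0
    while state != halt_state and steps < max_steps:
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return tape, steps

# The 2-state, 2-symbol busy beaver: halts after 6 steps having written four 1s.
bb2 = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "HALT"),
}
tape, steps = run_turing_machine(bb2, "A", "HALT")
print(steps, sum(tape.values()))  # prints: 6 4

The machines discussed in the book and in the prize problem have rule tables only a few entries larger than this one, which is what makes their universality striking.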
Reception
Periodicals gave A New Kind of Science coverage, including articles in The New York Times, Newsweek, Wired, and The Economist. Some scientists criticized the book as abrasive and arrogant, and perceived a fatal flaw—that simple systems such as cellular automata are not complex enough to describe the degree of complexity present in evolved systems, and observed that Wolfram ignored the research categorizing the complexity of systems. Although critics accept Wolfram's result showing universal computation, they view it as minor and dispute Wolfram's claim of a paradigm shift. Others found that the work contained valuable insights and refreshing ideas. Wolfram addressed his critics in a series of blog posts.
Scientific philosophy
A tenet of NKS is that the simpler the system, the more likely a version of it will recur in a wide variety of more complicated contexts. Therefore, NKS argues that systematically exploring the space of simple programs will lead to a base of reusable knowledge. However, many scientists believe that of all possible parameters, only some actually occur in the universe. For instance, of all possible permutations of the symbols making up an equation, most will be essentially meaningless. NKS has also been criticized for asserting that the behavior of simple systems is somehow representative of all systems.
Methodology
A common criticism of NKS is that it does not follow established scientific methodology. For instance, NKS does not establish rigorous mathematical definitions, nor does it attempt to prove theorems; and most formulas and equations are written in Mathematica rather than standard notation. Along these lines, NKS has also been criticized for being heavily visual, with much information conveyed by pictures that do not have formal meaning. It has also been criticized for not using modern research in the field of complexity, particularly the works that have studied complexity from a rigorous mathematical perspective. And it has been criticized for misrepresenting chaos theory.
Utility
NKS has been criticized for not providing specific results that would be immediately applicable to ongoing scientific research. There has also been criticism, implicit and explicit, that the study of simple programs has little connection to the physical universe, and hence is of limited value. Steven Weinberg has pointed out that no real world system has been explained using Wolfram's methods in a satisfactory fashion. Mathematician Steven G. Krantz wrote, "Just because Wolfram can cook up a cellular automaton that seems to produce the spot pattern on a leopard, may we safely conclude that he understands the mechanism by which the spots are produced on the leopard, or why the spots are there, or what function (evolutionary or mating or camouflage or other) they perform?"
Principle of computational equivalence (PCE)
The principle of computational equivalence (PCE) has been criticized for being vague, unmathematical, and for not making directly verifiable predictions. It has also been criticized for being contrary to the spirit of research in mathematical logic and computational complexity theory, which seek to make fine-grained distinctions between levels of computational sophistication, and for wrongly conflating different kinds of universality property. Moreover, critics such as Ray Kurzweil have argued that it ignores the distinction between hardware and software; while two computers may be equivalent in power, it does not follow that any two programs they might run are also equivalent. Others suggest it is little more than a rechristening of the Church–Turing thesis.
The fundamental theory (NKS Chapter 9)
Wolfram's speculations about a direction towards a fundamental theory of physics have been criticized as vague and obsolete. Scott Aaronson, professor of computer science at the University of Texas at Austin, also claims that Wolfram's methods cannot be compatible with both special relativity and Bell's theorem violations, and hence cannot explain the observed results of Bell tests.
Edward Fredkin and Konrad Zuse pioneered the idea of a computable universe, Zuse in his book Calculating Space, which suggested that the world might be like a cellular automaton, and Fredkin by later developing the idea further with a toy model called Salt. It has been claimed that NKS tries to take these ideas as its own, but Wolfram's model of the universe is a rewriting network rather than a cellular automaton, as Wolfram himself has suggested that a cellular automaton cannot account for relativistic features such as the absence of an absolute time frame. Jürgen Schmidhuber has also charged that his work on Turing machine-computable physics was stolen without attribution, namely his idea of enumerating possible Turing-computable universes.
In a 2002 review of NKS, the Nobel laureate and elementary particle physicist Steven Weinberg wrote, "Wolfram himself is a lapsed elementary particle physicist, and I suppose he can't resist trying to apply his experience with digital computer programs to the laws of nature. This has led him to the view (also considered in a 1981 paper by Richard Feynman) that nature is discrete rather than continuous. He suggests that space consists of a set of isolated points, like cells in a cellular automaton, and that even time flows in discrete steps. Following an idea of Edward Fredkin, he concludes that the universe itself would then be an automaton, like a giant computer. It's possible, but I can't see any motivation for these speculations, except that this is the sort of system that Wolfram and others have become used to in their work on computers. So might a carpenter, looking at the moon, suppose that it is made of wood."
Natural selection
Wolfram's claim that natural selection is not the fundamental cause of complexity in biology has led journalist Chris Lavers to state that Wolfram does not understand the theory of evolution.
Originality
NKS has been heavily criticized as not being original or important enough to justify its title and claims.
The authoritative manner in which NKS presents a vast number of examples and arguments has been criticized as leading the reader to believe that each of these ideas was original to Wolfram; in particular, one of the most substantial new technical results presented in the book, that the rule 110 cellular automaton is Turing complete, was not proven by Wolfram, who credits the proof to his research assistant, Matthew Cook. However, the notes section at the end of the book acknowledges many of the discoveries made by other scientists, citing their names together with historical facts, although not in the form of a traditional bibliography. Additionally, the idea that very simple rules often generate great complexity is already an established one in science, particularly in chaos theory and complex systems.
See also
Digital physics
Scientific reductionism
Calculating Space
Marcus Hutter's "Universal Artificial Intelligence" algorithm
References
External links
A New Kind of Science free E-Book
What We've Learned from NKS, a YouTube playlist with an extensive discussion of each NKS chapter (as of 2022, Stephen Wolfram discusses the NKS chapters in view of recent developments and the Wolfram Physics Project)
2002 non-fiction books
Algorithmic art
Cellular automata
Computer science books
Complex systems theory
Mathematics and art
Metatheory of science
Science books
Self-organization
Systems theory books
Wolfram Research
Computational science