Enzyme kinetics
Enzyme kinetics is the study of the rates of enzyme-catalysed chemical reactions. In enzyme kinetics, the reaction rate is measured and the effects of varying the conditions of the reaction are investigated. Studying an enzyme's kinetics in this way can reveal the catalytic mechanism of this enzyme, its role in metabolism, how its activity is controlled, and how a drug or a modifier (inhibitor or activator) might affect the rate.
An enzyme (E) is a protein molecule that serves as a biological catalyst to facilitate and accelerate a chemical reaction in the body. It does this through binding of another molecule, its substrate (S), which the enzyme acts upon to form the desired product. The substrate binds to the active site of the enzyme to produce an enzyme-substrate complex ES, and is transformed into an enzyme-product complex EP and from there to product P, via a transition state ES*. The series of steps is known as the mechanism:
E + S ⇄ ES ⇄ ES* ⇄ EP ⇄ E + P
This example assumes the simplest case of a reaction with one substrate and one product. Such cases exist: for example, a mutase such as phosphoglucomutase catalyses the transfer of a phosphate group from one position to another, and isomerase is a more general term for an enzyme that catalyses any one-substrate one-product reaction, such as triosephosphate isomerase. However, such enzymes are not very common, and are heavily outnumbered by enzymes that catalyse two-substrate two-product reactions: these include, for example, the NAD-dependent dehydrogenases such as alcohol dehydrogenase, which catalyses the oxidation of ethanol by NAD+. Reactions with three or four substrates or products are less common, but they exist. There is no necessity for the number of products to be equal to the number of substrates; for example, glyceraldehyde 3-phosphate dehydrogenase has three substrates and two products.
When enzymes bind multiple substrates, such as dihydrofolate reductase (shown right), enzyme kinetics can also show the sequence in which these substrates bind and the sequence in which products are released. An example of enzymes that bind a single substrate and release multiple products are proteases, which cleave one protein substrate into two polypeptide products. Others join two substrates together, such as DNA polymerase linking a nucleotide to DNA. Although these mechanisms are often a complex series of steps, there is typically one rate-determining step that determines the overall kinetics. This rate-determining step may be a chemical reaction or a conformational change of the enzyme or substrates, such as those involved in the release of product(s) from the enzyme.
Knowledge of the enzyme's structure is helpful in interpreting kinetic data. For example, the structure can suggest how substrates and products bind during catalysis; what changes occur during the reaction; and even the role of particular amino acid residues in the mechanism. Some enzymes change shape significantly during the mechanism; in such cases, it is helpful to determine the enzyme structure with and without bound substrate analogues that do not undergo the enzymatic reaction.
Not all biological catalysts are protein enzymes: RNA-based catalysts such as ribozymes and ribosomes are essential to many cellular functions, such as RNA splicing and translation. The main difference between ribozymes and enzymes is that RNA catalysts are composed of nucleotides, whereas enzymes are composed of amino acids. Ribozymes also perform a more limited set of reactions, although their reaction mechanisms and kinetics can be analysed and classified by the same methods.
General principles
The reaction catalysed by an enzyme uses exactly the same reactants and produces exactly the same products as the uncatalysed reaction. Like other catalysts, enzymes do not alter the position of equilibrium between substrates and products. However, unlike uncatalysed chemical reactions, enzyme-catalysed reactions display saturation kinetics. For a given enzyme concentration and for relatively low substrate concentrations, the reaction rate increases linearly with substrate concentration; the enzyme molecules are largely free to catalyse the reaction, and increasing substrate concentration means an increasing rate at which the enzyme and substrate molecules encounter one another. However, at relatively high substrate concentrations, the reaction rate asymptotically approaches the theoretical maximum; the enzyme active sites are almost all occupied by substrates resulting in saturation, and the reaction rate is determined by the intrinsic turnover rate of the enzyme. The substrate concentration midway between these two limiting cases is denoted by KM. Thus, KM is the substrate concentration at which the reaction velocity is half of the maximum velocity.
The two important properties of enzyme kinetics are how easily the enzyme can be saturated with a substrate, and the maximum rate it can achieve. Knowing these properties suggests what an enzyme might do in the cell and can show how the enzyme will respond to changes in these conditions.
Enzyme assays
Enzyme assays are laboratory procedures that measure the rate of enzyme reactions. Since enzymes are not consumed by the reactions they catalyse, enzyme assays usually follow changes in the concentration of either substrates or products to measure the rate of reaction. There are many methods of measurement. Spectrophotometric assays observe the change in the absorbance of light between products and reactants; radiometric assays involve the incorporation or release of radioactivity to measure the amount of product made over time. Spectrophotometric assays are most convenient since they allow the rate of the reaction to be measured continuously. Although radiometric assays require the removal and counting of samples (i.e., they are discontinuous assays) they are usually extremely sensitive and can measure very low levels of enzyme activity. An analogous approach is to use mass spectrometry to monitor the incorporation or release of stable isotopes as the substrate is converted into product. Occasionally an assay fails; systematic troubleshooting of the assay conditions is then required before reliable rate measurements can be made.
The most sensitive enzyme assays use lasers focused through a microscope to observe changes in single enzyme molecules as they catalyse their reactions. These measurements either use changes in the fluorescence of cofactors during an enzyme's reaction mechanism, or of fluorescent dyes added onto specific sites of the protein to report movements that occur during catalysis. These studies provide a new view of the kinetics and dynamics of single enzymes, as opposed to traditional enzyme kinetics, which observes the average behaviour of populations of millions of enzyme molecules.
An example progress curve for an enzyme assay is shown above. The enzyme produces product at an initial rate that is approximately linear for a short period after the start of the reaction. As the reaction proceeds and substrate is consumed, the rate continuously slows (so long as the substrate is not still at saturating levels). To measure the initial (and maximal) rate, enzyme assays are typically carried out while the reaction has progressed only a few percent towards total completion. The length of the initial rate period depends on the assay conditions and can range from milliseconds to hours. However, equipment for rapidly mixing liquids allows fast kinetic measurements at initial rates of less than one second. These very rapid assays are essential for measuring pre-steady-state kinetics, which are discussed below.
Most enzyme kinetics studies concentrate on this initial, approximately linear part of enzyme reactions. However, it is also possible to measure the complete reaction curve and fit this data to a non-linear rate equation. This way of measuring enzyme reactions is called progress-curve analysis. This approach is useful as an alternative to rapid kinetics when the initial rate is too fast to measure accurately.
The Standards for Reporting Enzymology Data Guidelines provide minimum information required to comprehensively report kinetic and equilibrium data from investigations of enzyme activities including corresponding experimental conditions. The guidelines have been developed to report functional enzyme data with rigor and robustness.
Single-substrate reactions
Enzymes with single-substrate mechanisms include isomerases such as triosephosphate isomerase or bisphosphoglycerate mutase, intramolecular lyases such as adenylate cyclase and the hammerhead ribozyme, an RNA lyase. However, some enzymes that only have a single substrate do not fall into this category of mechanisms. Catalase is an example of this, as the enzyme reacts with a first molecule of hydrogen peroxide substrate, becomes oxidised and is then reduced by a second molecule of substrate. Although a single substrate is involved, the existence of a modified enzyme intermediate means that the mechanism of catalase is actually a ping–pong mechanism, a type of mechanism that is discussed in the Multi-substrate reactions section below.
Michaelis–Menten kinetics
As enzyme-catalysed reactions are saturable, their rate of catalysis does not show a linear response to increasing substrate. If the initial rate of the reaction is measured over a range of substrate concentrations (denoted as [S]), the initial reaction rate increases as [S] increases, as shown on the right. However, as [S] gets higher, the enzyme becomes saturated with substrate and the initial rate reaches Vmax, the enzyme's maximum rate.
The Michaelis–Menten kinetic model of a single-substrate reaction is shown on the right. There is an initial bimolecular reaction between the enzyme E and substrate S to form the enzyme–substrate complex ES. The rate of the enzymatic reaction increases with increasing substrate concentration up to a certain level called Vmax; at Vmax, an increase in substrate concentration does not cause any increase in reaction rate, as there is no more free enzyme (E) available for reacting with substrate (S). Here, the rate of reaction depends only on the ES complex, and the reaction becomes effectively zero-order with respect to substrate. Though the enzymatic mechanism for the unimolecular reaction ES → E + P can be quite complex, there is typically one rate-determining enzymatic step that allows this reaction to be modelled as a single catalytic step with an apparent unimolecular rate constant kcat.
If the reaction path proceeds over one or several intermediates, kcat will be a function of several elementary rate constants, whereas in the simplest case of a single elementary reaction (e.g. no intermediates) it will be identical to the elementary unimolecular rate constant k2. The apparent unimolecular rate constant kcat is also called turnover number, and denotes the maximum number of enzymatic reactions catalysed per second.
The Michaelis–Menten equation describes how the (initial) reaction rate v0 depends on the position of the substrate-binding equilibrium and the rate constant k2:
v_0 = \frac{V_\max [S]}{K_M + [S]} (Michaelis–Menten equation)
with the constants
V_\max = k_{cat} [E]_{\rm tot} = k_2 [E]_{\rm tot} \quad\text{and}\quad K_M = \frac{k_{-1} + k_2}{k_1}
This Michaelis–Menten equation is the basis for most single-substrate enzyme kinetics. Two crucial assumptions underlie this equation (apart from the general assumptions that the mechanism involves no intermediate or product inhibition, and that there is no allostericity or cooperativity). The first is the quasi-steady-state assumption (or pseudo-steady-state hypothesis): the concentration of the substrate-bound enzyme (and hence also of the unbound enzyme) changes much more slowly than those of the product and substrate, so the change of the complex over time can be set to zero, d[ES]/dt \approx 0. The second assumption is that the total enzyme concentration does not change over time, thus [E]_{\rm tot} = [E] + [ES] = \text{const}.
The Michaelis constant KM is experimentally defined as the concentration at which the rate of the enzyme reaction is half Vmax, which can be verified by substituting [S] = KM into the Michaelis–Menten equation and can also be seen graphically. If the rate-determining enzymatic step is slow compared to substrate dissociation, the Michaelis constant KM is roughly the dissociation constant KD of the ES complex.
If [S] is small compared to K_M, then the term K_M + [S] \approx K_M, and also very little ES complex is formed, thus [E]_{\rm tot} \approx [E]. Therefore, the rate of product formation is approximately
v_0 \approx \frac{k_{cat}}{K_M} [E][S]
Thus the product formation rate depends on the enzyme concentration as well as on the substrate concentration, and the equation resembles a bimolecular reaction with a corresponding pseudo-second-order rate constant k_{cat}/K_M. This constant is a measure of catalytic efficiency. The most efficient enzymes reach a k_{cat}/K_M in the range of 10^8–10^{10} M^{-1} s^{-1}. These enzymes are so efficient they effectively catalyse a reaction each time they encounter a substrate molecule and have thus reached an upper theoretical limit for efficiency (the diffusion limit); they are sometimes referred to as kinetically perfect enzymes. But most enzymes are far from perfect: the average values of k_{cat}/K_M and k_{cat} are about 10^5 M^{-1} s^{-1} and 10 s^{-1}, respectively.
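The hyperbolic dependence of rate on substrate concentration can be illustrated numerically. The short Python sketch below evaluates the Michaelis–Menten equation for illustrative parameter values (the Vmax, KM and substrate concentrations are arbitrary assumptions, not data from the text) and confirms that the rate at [S] = KM is half of Vmax.

```python
# A minimal sketch of the Michaelis–Menten rate law, assuming arbitrary
# illustrative values for Vmax and KM (not taken from any experiment).

def michaelis_menten(s, vmax, km):
    """Initial rate v0 at substrate concentration s."""
    return vmax * s / (km + s)

vmax = 100.0   # assumed maximum rate (e.g. µM/s)
km = 5.0       # assumed Michaelis constant (e.g. µM)

for s in [0.5, 1.0, 5.0, 50.0, 500.0]:
    print(f"[S] = {s:6.1f}   v0 = {michaelis_menten(s, vmax, km):6.2f}")

# At [S] = KM the rate is exactly Vmax/2:
assert abs(michaelis_menten(km, vmax, km) - vmax / 2) < 1e-12
```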
Direct use of the Michaelis–Menten equation for time course kinetic analysis
The observed velocities predicted by the Michaelis–Menten equation can be used to directly model the time course disappearance of substrate and the production of product through incorporation of the Michaelis–Menten equation into the equation for first order chemical kinetics. This can only be achieved however if one recognises the problem associated with the use of Euler's number in the description of first order chemical kinetics. i.e. e−k is a split constant that introduces a systematic error into calculations and can be rewritten as a single constant which represents the remaining substrate after each time period.
In 1983 Stuart Beal (and also independently Santiago Schnell and Claudio Mendoza in 1997) derived a closed-form solution for the time-course kinetics analysis of the Michaelis–Menten mechanism. The solution, known as the Schnell–Mendoza equation, has the form
\frac{[S]}{K_M} = W\left[ F(t) \right]
where W[ ] is the Lambert W function and where F(t) is
F(t) = \frac{[S]_0}{K_M} \exp\left( \frac{[S]_0}{K_M} - \frac{V_\max}{K_M} t \right)
This equation is encompassed by the equation below, obtained by Berberan-Santos, which is also valid when the initial substrate concentration is close to that of enzyme,
where W[ ] is again the Lambert-W function.
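For readers who want to evaluate such a closed-form time course numerically, the sketch below uses the Lambert W implementation in SciPy. It assumes the Schnell–Mendoza form given above with arbitrary illustrative values of Vmax, KM and [S]0; it is a sketch of the idea, not a reproduction of any published analysis.

```python
# Sketch: substrate concentration versus time from the closed-form
# (Lambert W) solution of the Michaelis–Menten mechanism.
# Parameter values are arbitrary assumptions for illustration only.
import numpy as np
from scipy.special import lambertw

vmax = 1.0    # maximal rate (concentration / time)
km = 2.0      # Michaelis constant (concentration)
s0 = 10.0     # initial substrate concentration

t = np.linspace(0.0, 30.0, 7)
f = (s0 / km) * np.exp(s0 / km - vmax * t / km)
s = km * lambertw(f).real   # principal branch; imaginary part is zero here

for ti, si in zip(t, s):
    print(f"t = {ti:5.1f}   [S] = {si:7.4f}")
```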
Linear plots of the Michaelis–Menten equation
The plot of v versus [S] above is not linear; although initially linear at low [S], it bends over to saturate at high [S]. Before the modern era of nonlinear curve-fitting on computers, this nonlinearity could make it difficult to estimate KM and Vmax accurately. Therefore, several researchers developed linearisations of the Michaelis–Menten equation, such as the Lineweaver–Burk plot, the Eadie–Hofstee diagram and the Hanes–Woolf plot. All of these linear representations can be useful for visualising data, but none should be used to determine kinetic parameters, as computer software is readily available that allows for more accurate determination by nonlinear regression methods.
The Lineweaver–Burk plot or double reciprocal plot is a common way of illustrating kinetic data. It is produced by taking the reciprocal of both sides of the Michaelis–Menten equation, giving
\frac{1}{v} = \frac{K_M}{V_\max}\frac{1}{[S]} + \frac{1}{V_\max}
As shown on the right, this is a linear form of the Michaelis–Menten equation of the form y = mx + c, producing a straight line with a y-intercept equivalent to 1/Vmax and an x-intercept representing −1/KM.
Naturally, no experimental values can be taken at negative 1/[S]; the lower limiting value 1/[S] = 0 (the y-intercept) corresponds to an infinite substrate concentration, where 1/v=1/Vmax as shown at the right; thus, the x-intercept is an extrapolation of the experimental data taken at positive concentrations. More generally, the Lineweaver–Burk plot skews the importance of measurements taken at low substrate concentrations and, thus, can yield inaccurate estimates of Vmax and KM. A more accurate linear plotting method is the Eadie–Hofstee plot. In this case, v is plotted against v/[S]. In the third common linear representation, the Hanes–Woolf plot, [S]/v is plotted against [S].
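As a concrete illustration of why direct nonlinear fitting is preferred over linearised plots, the sketch below generates noisy synthetic rate data from assumed "true" parameters and recovers KM and Vmax with scipy.optimize.curve_fit; all numbers are made up for the example.

```python
# Sketch: estimating KM and Vmax by nonlinear regression on synthetic data.
# The "true" parameters and noise level are arbitrary assumptions.
import numpy as np
from scipy.optimize import curve_fit

def mm(s, vmax, km):
    return vmax * s / (km + s)

rng = np.random.default_rng(0)
s = np.array([0.5, 1, 2, 4, 8, 16, 32, 64], dtype=float)  # substrate concentrations
v_true = mm(s, 100.0, 5.0)                                 # assumed Vmax=100, KM=5
v_obs = v_true + rng.normal(0.0, 2.0, size=s.size)         # add measurement noise

popt, pcov = curve_fit(mm, s, v_obs, p0=[80.0, 3.0])
print(f"fitted Vmax = {popt[0]:.1f}, fitted KM = {popt[1]:.2f}")
```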
In general, data normalisation can help diminish the amount of experimental work and can increase the reliability of the output, and is suitable for both graphical and numerical analysis.
Practical significance of kinetic constants
The study of enzyme kinetics is important for two basic reasons. Firstly, it helps explain how enzymes work, and secondly, it helps predict how enzymes behave in living organisms. The kinetic constants defined above, KM and Vmax, are critical to attempts to understand how enzymes work together to control metabolism.
Making these predictions is not trivial, even for simple systems. For example, oxaloacetate is formed by malate dehydrogenase within the mitochondrion. Oxaloacetate can then be consumed by citrate synthase, phosphoenolpyruvate carboxykinase or aspartate aminotransferase, feeding into the citric acid cycle, gluconeogenesis or aspartic acid biosynthesis, respectively. Being able to predict how much oxaloacetate goes into which pathway requires knowledge of the concentration of oxaloacetate as well as the concentration and kinetics of each of these enzymes. This aim of predicting the behaviour of metabolic pathways reaches its most complex expression in the synthesis of huge amounts of kinetic and gene expression data into mathematical models of entire organisms. Alternatively, one useful simplification of the metabolic modelling problem is to ignore the underlying enzyme kinetics and only rely on information about the reaction network's stoichiometry, a technique called flux balance analysis.
Michaelis–Menten kinetics with intermediate
One could also consider the less simple case
E + S ⇄ ES → EI → E + P
(with rate constants k1 and k−1 for substrate binding and release, k2 for the ES → EI step, and k3 for the EI → E + P step)
where a complex of the enzyme with an intermediate exists and the intermediate is converted into product in a second step. In this case we have a very similar equation,
v_0 = \frac{k_{cat} [S]}{K_M' + [S]} [E]_{\rm tot}
but the constants are different:
K_M' = \frac{k_3}{k_2 + k_3} \cdot \frac{k_{-1} + k_2}{k_1} \quad\text{and}\quad k_{cat} = \frac{k_2 k_3}{k_2 + k_3}
We see that in the limiting case k_3 \gg k_2, that is, when the last step EI → E + P is much faster than the previous step, we recover the original equation. Mathematically we then have k_{cat} \approx k_2 and K_M' \approx \frac{k_{-1} + k_2}{k_1}.
Multi-substrate reactions
Multi-substrate reactions follow complex rate equations that describe how the substrates bind and in what sequence. The analysis of these reactions is much simpler if the concentration of substrate A is kept constant and substrate B varied. Under these conditions, the enzyme behaves just like a single-substrate enzyme and a plot of v by [S] gives apparent KM and Vmax constants for substrate B. If a set of these measurements is performed at different fixed concentrations of A, these data can be used to work out what the mechanism of the reaction is. For an enzyme that takes two substrates A and B and turns them into two products P and Q, there are two types of mechanism: ternary complex and ping–pong.
Ternary-complex mechanisms
In these enzymes, both substrates bind to the enzyme at the same time to produce an EAB ternary complex. The order of binding can either be random (in a random mechanism) or substrates have to bind in a particular sequence (in an ordered mechanism). When a set of v by [S] curves (fixed A, varying B) from an enzyme with a ternary-complex mechanism are plotted in a Lineweaver–Burk plot, the set of lines produced will intersect.
Enzymes with ternary-complex mechanisms include glutathione S-transferase, dihydrofolate reductase and DNA polymerase. The following links show short animations of the ternary-complex mechanisms of the enzymes dihydrofolate reductase and DNA polymerase.
Ping–pong mechanisms
As shown on the right, enzymes with a ping-pong mechanism can exist in two states, E and a chemically modified form of the enzyme E*; this modified enzyme is known as an intermediate. In such mechanisms, substrate A binds, changes the enzyme to E* by, for example, transferring a chemical group to the active site, and is then released. Only after the first substrate is released can substrate B bind and react with the modified enzyme, regenerating the unmodified E form. When a set of v by [S] curves (fixed A, varying B) from an enzyme with a ping–pong mechanism are plotted in a Lineweaver–Burk plot, a set of parallel lines will be produced. This is called a secondary plot.
Enzymes with ping–pong mechanisms include some oxidoreductases such as thioredoxin peroxidase, transferases such as acylneuraminate cytidylyltransferase and serine proteases such as trypsin and chymotrypsin. Serine proteases are a very common and diverse family of enzymes, including digestive enzymes (trypsin, chymotrypsin, and elastase), several enzymes of the blood clotting cascade and many others. In these serine proteases, the E* intermediate is an acyl-enzyme species formed by the attack of an active site serine residue on a peptide bond in a protein substrate. A short animation showing the mechanism of chymotrypsin is linked here.
Reversible catalysis and the Haldane equation
External factors may limit the ability of an enzyme to catalyse a reaction in both directions (whereas the nature of a catalyst in itself means that it cannot catalyse just one direction, according to the principle of microscopic reversibility). We consider the case of an enzyme that catalyses the reaction in both directions:
E + S ⇄ ES ⇄ E + P
(with rate constants k1 and k−1 for the first step, and k2 and k−2 for the second)
The steady-state, initial rate of the reaction is
v = \frac{(k_1 k_2 [S] - k_{-1} k_{-2} [P])\, [E]_{\rm tot}}{k_{-1} + k_2 + k_1 [S] + k_{-2} [P]}
v is positive if the reaction proceeds in the forward direction (S → P) and negative otherwise.
Equilibrium requires that v = 0, which occurs when k_1 k_2 [S]_{eq} = k_{-1} k_{-2} [P]_{eq}. This shows that thermodynamics forces a relation between the values of the 4 rate constants.
The values of the forward and backward maximal rates, obtained for [S] → ∞, [P] = 0, and [P] → ∞, [S] = 0, respectively, are V_\max^f = k_2 [E]_{\rm tot} and V_\max^b = k_{-1} [E]_{\rm tot}, respectively. Their ratio k_2/k_{-1} is not equal to the equilibrium constant, which implies that thermodynamics does not constrain the ratio of the maximal rates. This explains why enzymes can be much "better catalysts" (in terms of maximal rates) in one particular direction of the reaction.
One can also derive the two Michaelis constants K_M^S = (k_{-1} + k_2)/k_1 and K_M^P = (k_{-1} + k_2)/k_{-2}. The Haldane equation is the relation
K_{eq} = \frac{[P]_{eq}}{[S]_{eq}} = \frac{V_\max^f / K_M^S}{V_\max^b / K_M^P} = \frac{k_1 k_2}{k_{-1} k_{-2}}
Therefore, thermodynamics constrains the ratio between the forward and backward V_\max/K_M values, not the ratio of the V_\max values.
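A quick numerical check of the Haldane relation can be written in a few lines of Python; the four rate constants and the enzyme concentration below are arbitrary assumed values, chosen only to illustrate that the kinetic and thermodynamic expressions for the equilibrium constant agree.

```python
# Sketch: verifying the Haldane relation K_eq = (Vf/KmS)/(Vb/KmP)
# for the reversible mechanism E + S <-> ES <-> E + P.
# Rate constants and [E]_tot are arbitrary illustrative values.
k1, km1, k2, km2 = 4.0, 1.5, 3.0, 0.5   # k1, k-1, k2, k-2
e_tot = 1.0

vf = k2 * e_tot                  # forward maximal rate
vb = km1 * e_tot                 # backward maximal rate
km_s = (km1 + k2) / k1           # Michaelis constant for S
km_p = (km1 + k2) / km2          # Michaelis constant for P

keq_kinetic = (vf / km_s) / (vb / km_p)
keq_thermo = (k1 * k2) / (km1 * km2)
print(keq_kinetic, keq_thermo)
assert abs(keq_kinetic - keq_thermo) < 1e-12
```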
Non-Michaelis–Menten kinetics
Many different enzyme systems follow non-Michaelis–Menten behavior. A few examples include the kinetics of self-catalytic enzymes, cooperative and allosteric enzymes, interfacial and intracellular enzymes, and processive enzymes. Some enzymes produce a sigmoid v by [S] plot, which often indicates cooperative binding of substrate to the active site. This means that the binding of one substrate molecule affects the binding of subsequent substrate molecules. This behavior is most common in multimeric enzymes with several interacting active sites. Here, the mechanism of cooperation is similar to that of hemoglobin, with binding of substrate to one active site altering the affinity of the other active sites for substrate molecules. Positive cooperativity occurs when binding of the first substrate molecule increases the affinity of the other active sites for substrate. Negative cooperativity occurs when binding of the first substrate decreases the affinity of the enzyme for other substrate molecules.
Allosteric enzymes include mammalian tyrosyl tRNA-synthetase, which shows negative cooperativity, and bacterial aspartate transcarbamoylase and phosphofructokinase, which show positive cooperativity.
Cooperativity is surprisingly common and can help regulate the responses of enzymes to changes in the concentrations of their substrates. Positive cooperativity makes enzymes much more sensitive to [S] and their activities can show large changes over a narrow range of substrate concentration. Conversely, negative cooperativity makes enzymes insensitive to small changes in [S].
The Hill equation is often used to describe the degree of cooperativity quantitatively in non-Michaelis–Menten kinetics. The derived Hill coefficient n measures how much the binding of substrate to one active site affects the binding of substrate to the other active sites. A Hill coefficient of <1 indicates negative cooperativity and a coefficient of >1 indicates positive cooperativity.
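In one common form, the Hill equation reads v = Vmax [S]^n / (K^n + [S]^n). The sketch below evaluates it for arbitrary illustrative parameters, showing how a Hill coefficient above 1 steepens the response to substrate compared with n = 1.

```python
# Sketch: comparing a hyperbolic (n = 1) and a cooperative (n = 4) response
# using the Hill equation. All parameter values are illustrative assumptions.
def hill(s, vmax, k_half, n):
    """Rate from the Hill equation; k_half is the [S] giving half-maximal rate."""
    return vmax * s**n / (k_half**n + s**n)

vmax, k_half = 1.0, 10.0
for s in [2.0, 5.0, 10.0, 20.0, 50.0]:
    print(f"[S] = {s:5.1f}   n=1: {hill(s, vmax, k_half, 1):.3f}   "
          f"n=4: {hill(s, vmax, k_half, 4):.3f}")
```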
Pre-steady-state kinetics
In the first moment after an enzyme is mixed with substrate, no product has been formed and no intermediates exist. The study of the next few milliseconds of the reaction is called pre-steady-state kinetics. Pre-steady-state kinetics is therefore concerned with the formation and consumption of enzyme–substrate intermediates (such as ES or E*) until their steady-state concentrations are reached.
This approach was first applied to the hydrolysis reaction catalysed by chymotrypsin. Often, the detection of an intermediate is a vital piece of evidence in investigations of what mechanism an enzyme follows. For example, in the ping–pong mechanisms that are shown above, rapid kinetic measurements can follow the release of product P and measure the formation of the modified enzyme intermediate E*. In the case of chymotrypsin, this intermediate is formed by an attack on the substrate by the nucleophilic serine in the active site and the formation of the acyl-enzyme intermediate.
In the figure to the right, the enzyme produces E* rapidly in the first few seconds of the reaction. The rate then slows as steady state is reached. This rapid burst phase of the reaction measures a single turnover of the enzyme. Consequently, the amount of product released in this burst, shown as the intercept on the y-axis of the graph, also gives the amount of functional enzyme which is present in the assay.
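Burst-phase data of this kind are often fitted to an empirical "burst" equation of the form P(t) = A(1 − e^(−λt)) + v_ss t, where the burst amplitude A estimates the concentration of functional enzyme. The sketch below fits synthetic data of this form with made-up parameters to illustrate the idea; it is not a reproduction of any specific experiment.

```python
# Sketch: fitting a burst-phase progress curve P(t) = A*(1 - exp(-lam*t)) + vss*t.
# The synthetic "data" and all parameters are assumptions for illustration.
import numpy as np
from scipy.optimize import curve_fit

def burst(t, amp, lam, vss):
    return amp * (1.0 - np.exp(-lam * t)) + vss * t

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 40)
p_obs = burst(t, 2.0, 1.5, 0.3) + rng.normal(0.0, 0.05, t.size)

popt, _ = curve_fit(burst, t, p_obs, p0=[1.0, 1.0, 0.1])
print(f"burst amplitude (≈ active enzyme) = {popt[0]:.2f}")
```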
Chemical mechanism
An important goal of measuring enzyme kinetics is to determine the chemical mechanism of an enzyme reaction, i.e., the sequence of chemical steps that transform substrate into product. The kinetic approaches discussed above will show at what rates intermediates are formed and inter-converted, but they cannot identify exactly what these intermediates are.
Kinetic measurements taken under various solution conditions or on slightly modified enzymes or substrates often shed light on this chemical mechanism, as they reveal the rate-determining step or intermediates in the reaction. For example, the breaking of a covalent bond to a hydrogen atom is a common rate-determining step. Which of the possible hydrogen transfers is rate determining can be shown by measuring the kinetic effects of substituting each hydrogen by deuterium, its stable isotope. The rate will change when the critical hydrogen is replaced, due to a primary kinetic isotope effect, which occurs because bonds to deuterium are harder to break than bonds to hydrogen. It is also possible to measure similar effects with other isotope substitutions, such as 13C/12C and 18O/16O, but these effects are more subtle.
Isotopes can also be used to reveal the fate of various parts of the substrate molecules in the final products. For example, it is sometimes difficult to discern the origin of an oxygen atom in the final product; since it may have come from water or from part of the substrate. This may be determined by systematically substituting oxygen's stable isotope 18O into the various molecules that participate in the reaction and checking for the isotope in the product. The chemical mechanism can also be elucidated by examining the kinetics and isotope effects under different pH conditions, by altering the metal ions or other bound cofactors, by site-directed mutagenesis of conserved amino acid residues, or by studying the behaviour of the enzyme in the presence of analogues of the substrate(s).
Enzyme inhibition and activation
Enzyme inhibitors are molecules that reduce or abolish enzyme activity, while enzyme activators are molecules that increase the catalytic rate of enzymes. These interactions can be either reversible (i.e., removal of the inhibitor restores enzyme activity) or irreversible (i.e., the inhibitor permanently inactivates the enzyme).
Reversible inhibitors
Traditionally reversible enzyme inhibitors have been classified as competitive, uncompetitive, or non-competitive, according to their effects on KM and Vmax. These different effects result from the inhibitor binding to the enzyme E, to the enzyme–substrate complex ES, or to both, respectively. The division into these classes arises from a problem in their derivation and results in the need to use two different binding constants for one binding event. The binding of an inhibitor and its effect on the enzymatic activity are two distinctly different things, another problem the traditional equations fail to acknowledge. The traditional treatment of noncompetitive inhibition assumes that binding of the inhibitor results in 100% inhibition of the bound enzyme, and fails to consider the possibility of anything in between. In noncompetitive inhibition, the inhibitor binds to an enzyme at its allosteric site; therefore, the binding affinity, or inverse of KM, of the substrate for the enzyme remains the same. On the other hand, the Vmax will decrease relative to an uninhibited enzyme. On a Lineweaver–Burk plot, the presence of a noncompetitive inhibitor is illustrated by a change in the y-intercept, defined as 1/Vmax. The x-intercept, defined as −1/KM, will remain the same. In competitive inhibition, the inhibitor binds to the enzyme at the active site, competing with the substrate. As a result, the KM will increase and the Vmax will remain the same. The common form of the inhibitory term also obscures the relationship between the inhibitor binding to the enzyme and its relationship to any other binding term, be it the Michaelis–Menten equation or a dose–response curve associated with ligand–receptor binding. To demonstrate the relationship the following rearrangement can be made:
Starting from the apparent maximal rate in the presence of inhibitor,
\frac{V_\max}{1 + \frac{[I]}{K_i}} = \frac{V_\max K_i}{K_i + [I]}
Adding zero to the numerator ([I] − [I]):
\frac{V_\max (K_i + [I] - [I])}{K_i + [I]}
Dividing each term by [I] + Ki:
V_\max - \frac{V_\max [I]}{[I] + K_i}
This notation demonstrates that, similar to the Michaelis–Menten equation, where the rate of reaction depends on the percent of the enzyme population interacting with substrate, the effect of the inhibitor is a result of the percent of the enzyme population interacting with inhibitor. The only problem with this equation in its present form is that it assumes absolute inhibition of the enzyme with inhibitor binding, when in fact there can be a wide range of effects anywhere from 100% inhibition of substrate turnover to just >0%. To account for this the equation can be easily modified to allow for different degrees of inhibition by including a delta Vmax term.
or
This term can then define the residual enzymatic activity present when the inhibitor is interacting with individual enzymes in the population. However, the inclusion of this term has the added value of allowing for the possibility of activation if the secondary Vmax term turns out to be higher than the initial term. To account for the possibility of activation as well, the notation can then be rewritten replacing the inhibitor "I" with a modifier term denoted here as "X".
While this terminology results in a simplified way of dealing with kinetic effects relating to the maximum velocity of the Michaelis–Menten equation, it highlights potential problems with the term used to describe effects relating to the KM. The KM relating to the affinity of the enzyme for the substrate should in most cases relate to potential changes in the binding site of the enzyme which would directly result from enzyme inhibitor interactions. As such a term similar to the one proposed above to modulate Vmax should be appropriate in most situations:
Irreversible inhibitors
Enzyme inhibitors can also irreversibly inactivate enzymes, usually by covalently modifying active site residues. These reactions, which may be called suicide substrates, follow exponential decay functions and are usually saturable. Below saturation, they follow first-order kinetics with respect to inhibitor. Irreversible inhibition can be classified into two distinct types. Affinity labelling is a type of irreversible inhibition where a highly reactive functional group modifies a catalytically critical residue on the protein of interest to bring about inhibition. Mechanism-based inhibition, on the other hand, involves binding of the inhibitor followed by enzyme-mediated alterations that transform the inhibitor into a reactive group that irreversibly modifies the enzyme.
Philosophical discourse on reversibility and irreversibility of inhibition
Having discussed reversible and irreversible inhibition in the two sections above, it should be pointed out that the concept of reversibility (or irreversibility) is a purely theoretical construct that depends entirely on the time-frame of the assay: a reversible inhibitor whose association and dissociation occur on the minute timescale will appear irreversible if an assay assesses the outcome over seconds, and vice versa. There is a continuum of inhibitor behaviors spanning reversibility and irreversibility for any given assay time frame. Inhibitors that show slow-onset behavior almost invariably also show tight binding to the protein target of interest.
Mechanisms of catalysis
The favoured model for the enzyme–substrate interaction is the induced fit model. This model proposes that the initial interaction between enzyme and substrate is relatively weak, but that these weak interactions rapidly induce conformational changes in the enzyme that strengthen binding. These conformational changes also bring catalytic residues in the active site close to the chemical bonds in the substrate that will be altered in the reaction. Conformational changes can be measured using circular dichroism or dual polarisation interferometry. After binding takes place, one or more mechanisms of catalysis lower the energy of the reaction's transition state by providing an alternative chemical pathway for the reaction. Mechanisms of catalysis include catalysis by bond strain; by proximity and orientation; by active-site proton donors or acceptors; covalent catalysis and quantum tunnelling.
Enzyme kinetics cannot prove which modes of catalysis are used by an enzyme. However, some kinetic data can suggest possibilities to be examined by other techniques. For example, a ping–pong mechanism with burst-phase pre-steady-state kinetics would suggest covalent catalysis might be important in this enzyme's mechanism. Alternatively, the observation of a strong pH effect on Vmax but not KM might indicate that a residue in the active site needs to be in a particular ionisation state for catalysis to occur.
History
In 1902 Victor Henri proposed a quantitative theory of enzyme kinetics, but at the time the experimental significance of the hydrogen ion concentration was not yet recognized. After Peter Lauritz Sørensen had defined the logarithmic pH scale and introduced the concept of buffering in 1909, the German chemist Leonor Michaelis and Dr. Maud Leonora Menten (a postdoctoral researcher in Michaelis's lab at the time) repeated Henri's experiments and confirmed his equation, which is now generally referred to as Michaelis–Menten kinetics (sometimes also Henri–Michaelis–Menten kinetics). Their work was further developed by G. E. Briggs and J. B. S. Haldane, who derived kinetic equations that are still widely considered a starting point in modeling enzymatic activity.
The major contribution of the Henri-Michaelis-Menten approach was to think of enzyme reactions in two stages. In the first, the substrate binds reversibly to the enzyme, forming the enzyme-substrate complex. This is sometimes called the Michaelis complex. The enzyme then catalyzes the chemical step in the reaction and releases the product. The kinetics of many enzymes is adequately described by the simple Michaelis-Menten model, but all enzymes have internal motions that are not accounted for in the model and can have significant contributions to the overall reaction kinetics. This can be modeled by introducing several Michaelis-Menten pathways that are connected with fluctuating rates, which is a mathematical extension of the basic Michaelis Menten mechanism.
Software
ENZO (Enzyme Kinetics) is a graphical interface tool for building kinetic models of enzyme catalyzed reactions. ENZO automatically generates the corresponding differential equations from a stipulated enzyme reaction scheme. These differential equations are processed by a numerical solver and a regression algorithm which fits the coefficients of differential equations to experimentally observed time course curves. ENZO allows rapid evaluation of rival reaction schemes and can be used for routine tests in enzyme kinetics.
See also
Protein dynamics
Diffusion limited enzyme
Langmuir adsorption model
Footnotes
α. Link: Interactive Michaelis–Menten kinetics tutorial (Java required)
β. Link: dihydrofolate reductase mechanism (Gif)
γ. Link: DNA polymerase mechanism (Gif)
δ. Link: Chymotrypsin mechanism (Flash required)
References
Further reading
Introductory
Advanced
External links
Animation of an enzyme assay — Shows effects of manipulating assay conditions
MACiE — A database of enzyme reaction mechanisms
ENZYME — Expasy enzyme nomenclature database
ENZO — Web application for easy construction and quick testing of kinetic models of enzyme catalyzed reactions.
ExCatDB — A database of enzyme catalytic mechanisms
BRENDA — Comprehensive enzyme database, giving substrates, inhibitors and reaction diagrams
SABIO-RK — A database of reaction kinetics
Joseph Kraut's Research Group, University of California San Diego — Animations of several enzyme reaction mechanisms
Symbolism and Terminology in Enzyme Kinetics — A comprehensive explanation of concepts and terminology in enzyme kinetics
An introduction to enzyme kinetics — An accessible set of on-line tutorials on enzyme kinetics
Enzyme kinetics animated tutorial — An animated tutorial with audio
Catalysis
Chemical kinetics
Chemical kinetics, also known as reaction kinetics, is the branch of physical chemistry that is concerned with understanding the rates of chemical reactions. It is different from chemical thermodynamics, which deals with the direction in which a reaction occurs but in itself tells nothing about its rate. Chemical kinetics includes investigations of how experimental conditions influence the speed of a chemical reaction and yield information about the reaction's mechanism and transition states, as well as the construction of mathematical models that can also describe the characteristics of a chemical reaction.
History
The pioneering work of chemical kinetics was done by German chemist Ludwig Wilhelmy in 1850. He experimentally studied the rate of inversion of sucrose and used the integrated rate law to determine the reaction kinetics of this reaction. His work was noticed 34 years later by Wilhelm Ostwald. After Wilhelmy, Peter Waage and Cato Guldberg published the law of mass action in 1864, which states that the speed of a chemical reaction is proportional to the quantity of the reacting substances.
Van 't Hoff studied chemical dynamics and in 1884 published his famous "Études de dynamique chimique". In 1901 he was awarded the first Nobel Prize in Chemistry "in recognition of the extraordinary services he has rendered by the discovery of the laws of chemical dynamics and osmotic pressure in solutions". After van 't Hoff, chemical kinetics deals with the experimental determination of reaction rates from which rate laws and rate constants are derived. Relatively simple rate laws exist for zero order reactions (for which reaction rates are independent of concentration), first order reactions, and second order reactions, and can be derived for others. Elementary reactions follow the law of mass action, but the rate law of stepwise reactions has to be derived by combining the rate laws of the various elementary steps, and can become rather complex. In consecutive reactions, the rate-determining step often determines the kinetics. In consecutive first order reactions, a steady state approximation can simplify the rate law. The activation energy for a reaction is experimentally determined through the Arrhenius equation and the Eyring equation. The main factors that influence the reaction rate include: the physical state of the reactants, the concentrations of the reactants, the temperature at which the reaction occurs, and whether or not any catalysts are present in the reaction.
Gorban and Yablonsky have suggested that the history of chemical dynamics can be divided into three eras. The first is the van 't Hoff wave searching for the general laws of chemical reactions and relating kinetics to thermodynamics. The second may be called the Semenov-Hinshelwood wave with emphasis on reaction mechanisms, especially for chain reactions. The third is associated with Aris and the detailed mathematical description of chemical reaction networks.
Factors affecting reaction rate
Nature of the reactants
The reaction rate varies depending upon what substances are reacting. Acid/base reactions, the formation of salts, and ion exchange are usually fast reactions. When covalent bond formation takes place between the molecules and when large molecules are formed, the reactions tend to be slower.
The nature and strength of bonds in reactant molecules greatly influence the rate of their transformation into products.
Physical state
The physical state (solid, liquid, or gas) of a reactant is also an important factor in the rate of change. When reactants are in the same phase, as in aqueous solution, thermal motion brings them into contact. However, when they are in separate phases, the reaction is limited to the interface between the reactants. Reaction can occur only at their area of contact; in the case of a liquid and a gas, at the surface of the liquid. Vigorous shaking and stirring may be needed to bring the reaction to completion. This means that the more finely divided a solid or liquid reactant is, the greater its surface area per unit volume and the more contact it has with the other reactant, and thus the faster the reaction. To make an analogy, when one starts a fire, one uses wood chips and small branches rather than large logs right away. In organic chemistry, on-water reactions are an exception to the rule that homogeneous reactions take place faster than heterogeneous reactions (those in which solute and solvent do not mix properly).
Surface area of solid state
In a solid, only those particles that are at the surface can be involved in a reaction. Crushing a solid into smaller parts means that more particles are present at the surface, and the frequency of collisions between these and reactant particles increases, and so reaction occurs more rapidly. For example, Sherbet (powder) is a mixture of very fine powder of malic acid (a weak organic acid) and sodium hydrogen carbonate. On contact with the saliva in the mouth, these chemicals quickly dissolve and react, releasing carbon dioxide and providing for the fizzy sensation. Also, fireworks manufacturers modify the surface area of solid reactants to control the rate at which the fuels in fireworks are oxidised, using this to create diverse effects. For example, finely divided aluminium confined in a shell explodes violently. If larger pieces of aluminium are used, the reaction is slower and sparks are seen as pieces of burning metal are ejected.
Concentration
The reactions are due to collisions of reactant species. The frequency with which the molecules or ions collide depends upon their concentrations. The more crowded the molecules are, the more likely they are to collide and react with one another. Thus, an increase in the concentrations of the reactants will usually result in the corresponding increase in the reaction rate, while a decrease in the concentrations will usually have a reverse effect. For example, combustion will occur more rapidly in pure oxygen than in air (21% oxygen).
The rate equation shows the detailed dependence of the reaction rate on the concentrations of reactants and other species present. The mathematical forms depend on the reaction mechanism. The actual rate equation for a given reaction is determined experimentally and provides information about the reaction mechanism. The mathematical expression of the rate equation is often given by
r = k \prod_i [A_i]^{m_i}
Here k is the reaction rate constant, [A_i] is the molar concentration of reactant i and m_i is the partial order of reaction for this reactant. The partial order for a reactant can only be determined experimentally and is often not indicated by its stoichiometric coefficient.
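As a small illustration of how a rate law of this form is evaluated, the sketch below computes the rate for a hypothetical reaction that is first order in A and second order in B; the rate constant, partial orders and concentrations are all assumed values chosen for the example.

```python
# Sketch: evaluating a generic rate law r = k * prod([A_i]**m_i).
# The rate constant, concentrations and partial orders are assumptions.
def rate(k, concentrations, orders):
    r = k
    for c, m in zip(concentrations, orders):
        r *= c ** m
    return r

k = 0.42                       # rate constant (units depend on the overall order)
conc = [0.10, 0.25]            # [A], [B] in mol/L
orders = [1, 2]                # first order in A, second order in B
print(f"rate = {rate(k, conc, orders):.3e} mol/(L*s)")
```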
Temperature
Temperature usually has a major effect on the rate of a chemical reaction. Molecules at a higher temperature have more thermal energy. Although collision frequency is greater at higher temperatures, this alone contributes only a very small proportion to the increase in rate of reaction. Much more important is the fact that the proportion of reactant molecules with sufficient energy to react (energy greater than activation energy: E > Ea) is significantly higher and is explained in detail by the Maxwell–Boltzmann distribution of molecular energies.
The effect of temperature on the reaction rate constant usually obeys the Arrhenius equation k = A e^{-E_a/(RT)}, where A is the pre-exponential factor or A-factor, Ea is the activation energy, R is the molar gas constant and T is the absolute temperature.
At a given temperature, the chemical rate of a reaction depends on the value of the A-factor, the magnitude of the activation energy, and the concentrations of the reactants. Usually, rapid reactions require relatively small activation energies.
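A short sketch of the Arrhenius temperature dependence is given below; the activation energy and pre-exponential factor are arbitrary illustrative values, not data for any particular reaction.

```python
# Sketch: rate constant versus temperature from the Arrhenius equation
# k = A * exp(-Ea / (R*T)). A and Ea are arbitrary illustrative values.
import math

R = 8.314            # molar gas constant, J/(mol*K)
A = 1.0e13           # pre-exponential factor, 1/s (assumed)
Ea = 80e3            # activation energy, J/mol (assumed)

def arrhenius(T):
    return A * math.exp(-Ea / (R * T))

for T in (290.0, 300.0, 310.0):
    print(f"T = {T:.0f} K   k = {arrhenius(T):.3e} 1/s")

# Ratio of rate constants for a 10 K rise near room temperature:
print(f"k(310)/k(300) = {arrhenius(310.0) / arrhenius(300.0):.2f}")
```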
The 'rule of thumb' that the rate of chemical reactions doubles for every 10 °C temperature rise is a common misconception. This may have been generalized from the special case of biological systems, where the Q10 (temperature coefficient) is often between 1.5 and 2.5.
The kinetics of rapid reactions can be studied with the temperature jump method. This involves using a sharp rise in temperature and observing the relaxation time of the return to equilibrium. A particularly useful form of temperature jump apparatus is a shock tube, which can rapidly increase a gas's temperature by more than 1000 degrees.
Catalysts
A catalyst is a substance that alters the rate of a chemical reaction but remains chemically unchanged afterwards. The catalyst increases the rate of the reaction by providing an alternative reaction mechanism with a lower activation energy. In autocatalysis a reaction product is itself a catalyst for that reaction, leading to positive feedback. Proteins that act as catalysts in biochemical reactions are called enzymes. Michaelis–Menten kinetics describe the rate of enzyme-mediated reactions. A catalyst does not affect the position of the equilibrium, as the catalyst speeds up the backward and forward reactions equally.
In certain organic molecules, specific substituents can have an influence on reaction rate in neighbouring group participation.
Pressure
Increasing the pressure in a gaseous reaction will increase the number of collisions between reactants, increasing the rate of reaction. This is because the activity of a gas is directly proportional to the partial pressure of the gas. This is similar to the effect of increasing the concentration of a solution.
In addition to this straightforward mass-action effect, the rate coefficients themselves can change due to pressure. The rate coefficients and products of many high-temperature gas-phase reactions change if an inert gas is added to the mixture; variations on this effect are called fall-off and chemical activation. These phenomena are due to exothermic or endothermic reactions occurring faster than heat transfer, causing the reacting molecules to have non-thermal energy distributions (non-Boltzmann distribution). Increasing the pressure increases the heat transfer rate between the reacting molecules and the rest of the system, reducing this effect.
Condensed-phase rate coefficients can also be affected by pressure, although rather high pressures are required for a measurable effect because ions and molecules are not very compressible. This effect is often studied using diamond anvils.
A reaction's kinetics can also be studied with a pressure jump approach. This involves making fast changes in pressure and observing the relaxation time of the return to equilibrium.
Absorption of light
The activation energy for a chemical reaction can be provided when one reactant molecule absorbs light of suitable wavelength and is promoted to an excited state. The study of reactions initiated by light is photochemistry, one prominent example being photosynthesis.
Experimental methods
The experimental determination of reaction rates involves measuring how the concentrations of reactants or products change over time. For example, the concentration of a reactant can be measured by spectrophotometry at a wavelength where no other reactant or product in the system absorbs light.
For reactions which take at least several minutes, it is possible to start the observations after the reactants have been mixed at the temperature of interest.
Fast reactions
For faster reactions, the time required to mix the reactants and bring them to a specified temperature may be comparable or longer than the half-life of the reaction. Special methods to start fast reactions without slow mixing step include
Stopped-flow methods, which can reduce the mixing time to the order of a millisecond. Stopped-flow methods have limitations: the time needed to mix gases or solutions must still be considered, so they are not suitable if the half-life is less than about a hundredth of a second.
Chemical relaxation methods such as temperature jump and pressure jump, in which a pre-mixed system initially at equilibrium is perturbed by rapid heating or depressurization so that it is no longer at equilibrium, and the relaxation back to equilibrium is observed. For example, this method has been used to study the neutralization H3O+ + OH− with a half-life of 1 μs or less under ordinary conditions.
Flash photolysis, in which a laser pulse produces highly excited species such as free radicals, whose reactions are then studied.
Equilibrium
While chemical kinetics is concerned with the rate of a chemical reaction, thermodynamics determines the extent to which reactions occur. In a reversible reaction, chemical equilibrium is reached when the rates of the forward and reverse reactions are equal (the principle of dynamic equilibrium) and the concentrations of the reactants and products no longer change. This is demonstrated by, for example, the Haber–Bosch process for combining nitrogen and hydrogen to produce ammonia. Chemical clock reactions such as the Belousov–Zhabotinsky reaction demonstrate that component concentrations can oscillate for a long time before finally attaining the equilibrium.
Free energy
In general terms, the free energy change (ΔG) of a reaction determines whether a chemical change will take place, but kinetics describes how fast the reaction is. A reaction can be very exothermic and have a very positive entropy change but will not happen in practice if the reaction is too slow. If a reactant can produce two products, the thermodynamically most stable one will form in general, except in special circumstances when the reaction is said to be under kinetic reaction control. The Curtin–Hammett principle applies when determining the product ratio for two reactants interconverting rapidly, each going to a distinct product. It is possible to make predictions about reaction rate constants for a reaction from free-energy relationships.
The kinetic isotope effect is the difference in the rate of a chemical reaction when an atom in one of the reactants is replaced by one of its isotopes.
Chemical kinetics provides information on residence time and heat transfer in a chemical reactor in chemical engineering and the molar mass distribution in polymer chemistry. It also provides information in corrosion engineering.
Applications and models
The mathematical models that describe chemical reaction kinetics provide chemists and chemical engineers with tools to better understand and describe chemical processes such as food decomposition, microorganism growth, stratospheric ozone decomposition, and the chemistry of biological systems. These models can also be used in the design or modification of chemical reactors to optimize product yield, more efficiently separate products, and eliminate environmentally harmful by-products. When performing catalytic cracking of heavy hydrocarbons into gasoline and light gas, for example, kinetic models can be used to find the temperature and pressure at which the highest yield of heavy hydrocarbons into gasoline will occur.
Chemical kinetics is frequently validated and explored through modelling in specialised software packages, which combine ordinary differential equation (ODE) solving with curve fitting.
Numerical methods
In some cases, equations are unsolvable analytically, but can be solved using numerical methods if data values are given. There are two different ways to do this, by either using software programmes or mathematical methods such as the Euler method. Examples of software for chemical kinetics are i) Tenua, a Java app which simulates chemical reactions numerically and allows comparison of the simulation to real data, ii) Python coding for calculations and estimates and iii) the Kintecus software compiler to model, regress, fit and optimize reactions.
Numerical integration: for a 1st order reaction A → B
The differential equation of the reactant A is:
\frac{d[A]}{dt} = -k[A]
It can also be expressed as the finite-difference approximation
\frac{[A]_{t+\Delta t} - [A]_t}{\Delta t} \approx -k[A]_t
which is the same as
[A]_{t+\Delta t} \approx [A]_t - k[A]_t\,\Delta t
To solve the differential equations with Euler and Runge-Kutta methods we need to have the initial values.
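The sketch below applies the explicit Euler update to this first-order decay and compares it with the exact exponential solution; the rate constant, initial concentration and step size are arbitrary assumptions for illustration.

```python
# Sketch: explicit Euler integration of d[A]/dt = -k[A] compared with the
# exact solution [A] = [A]0 * exp(-k*t). All numbers are illustrative.
import math

k = 0.5          # rate constant, 1/s (assumed)
a = 1.0          # initial concentration [A]0, mol/L (assumed)
dt = 0.1         # time step, s
t_end = 2.0

t = 0.0
while t < t_end - 1e-12:
    a = a - k * a * dt          # Euler update
    t += dt

exact = 1.0 * math.exp(-k * t_end)
print(f"Euler: {a:.4f}   exact: {exact:.4f}")
```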
See also
Autocatalytic reactions and order creation
Corrosion engineering
Detonation
Electrochemical kinetics
Flame speed
Heterogenous catalysis
Intrinsic low-dimensional manifold
MLAB chemical kinetics modeling package
Nonthermal surface reaction
PottersWheel Matlab toolbox to fit chemical rate constants to experimental data
Reaction progress kinetic analysis
References
External links
Chemistry applets
University of Waterloo
Chemical Kinetics of Gas Phase Reactions
Kinpy: Python code generator for solving kinetic equations
Reaction rate law and reaction profile - a question of temperature, concentration, solvent and catalyst - how fast will a reaction proceed (Video by SciFox on TIB AV-Portal)
Jacobus Henricus van 't Hoff
Thermochemistry
Thermochemistry is the study of the heat energy which is associated with chemical reactions and/or phase changes such as melting and boiling. A reaction may release or absorb energy, and a phase change may do the same. Thermochemistry focuses on the energy exchange between a system and its surroundings in the form of heat. Thermochemistry is useful in predicting reactant and product quantities throughout the course of a given reaction. In combination with entropy determinations, it is also used to predict whether a reaction is spontaneous or non-spontaneous, favorable or unfavorable.
Endothermic reactions absorb heat, while exothermic reactions release heat. Thermochemistry coalesces the concepts of thermodynamics with the concept of energy in the form of chemical bonds. The subject commonly includes calculations of such quantities as heat capacity, heat of combustion, heat of formation, enthalpy, entropy, and free energy.
Thermochemistry is one part of the broader field of chemical thermodynamics, which deals with the exchange of all forms of energy between system and surroundings, including not only heat but also various forms of work, as well the exchange of matter. When all forms of energy are considered, the concepts of exothermic and endothermic reactions are generalized to exergonic reactions and endergonic reactions.
History
Thermochemistry rests on two generalizations. Stated in modern terms, they are as follows:
Lavoisier and Laplace's law (1780): The energy change accompanying any transformation is equal and opposite to energy change accompanying the reverse process.
Hess' law of constant heat summation (1840): The energy change accompanying any transformation is the same whether the process occurs in one step or many.
These statements preceded the first law of thermodynamics (1845) and helped in its formulation.
Thermochemistry also involves the measurement of the latent heat of phase transitions. Joseph Black had already introduced the concept of latent heat in 1761, based on the observation that heating ice at its melting point did not raise the temperature but instead caused some ice to melt.
Gustav Kirchhoff showed in 1858 that the variation of the heat of reaction is given by the difference in heat capacity between products and reactants: dΔH / dT = ΔCp. Integration of this equation permits the evaluation of the heat of reaction at one temperature from measurements at another temperature.
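Assuming ΔCp is approximately constant over the temperature interval (a common simplification, not stated in the text above), integrating Kirchhoff's relation gives the form usually applied in practice:

```latex
\frac{\mathrm{d}\,\Delta H}{\mathrm{d}T} = \Delta C_p
\quad\Longrightarrow\quad
\Delta H(T_2) = \Delta H(T_1) + \int_{T_1}^{T_2} \Delta C_p \,\mathrm{d}T
\;\approx\; \Delta H(T_1) + \Delta C_p\,(T_2 - T_1)
```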
Calorimetry
The measurement of heat changes is performed using calorimetry, usually an enclosed chamber within which the change to be examined occurs. The temperature of the chamber is monitored either using a thermometer or thermocouple, and the temperature plotted against time to give a graph from which fundamental quantities can be calculated. Modern calorimeters are frequently supplied with automatic devices to provide a quick read-out of information, one example being the differential scanning calorimeter.
Systems
Several thermodynamic definitions are very useful in thermochemistry. A system is the specific portion of the universe that is being studied. Everything outside the system is considered the surroundings or environment. A system may be:
a (completely) isolated system which can exchange neither energy nor matter with the surroundings, such as an insulated bomb calorimeter
a thermally isolated system which can exchange mechanical work but not heat or matter, such as an insulated closed piston or balloon
a mechanically isolated system which can exchange heat but not mechanical work or matter, such as an uninsulated bomb calorimeter
a closed system which can exchange energy but not matter, such as an uninsulated closed piston or balloon
an open system which can exchange both matter and energy with the surroundings, such as a pot of boiling water
Processes
A system undergoes a process when one or more of its properties changes. A process relates to the change of state. An isothermal (same-temperature) process occurs when the temperature of the system remains constant. An isobaric (same-pressure) process occurs when the pressure of the system remains constant. A process is adiabatic when no heat exchange occurs.
See also
Calorimetry
Chemical kinetics
Cryochemistry
Differential scanning calorimetry
Isodesmic reaction
Important publications in thermochemistry
Photoelectron photoion coincidence spectroscopy
Principle of maximum work
Reaction Calorimeter
Thermodynamic databases for pure substances
Thermodynamics
Thomsen-Berthelot principle
Julius Thomsen
References
External links
Physical chemistry
Branches of thermodynamics
Heteroatom
In chemistry, a heteroatom is, strictly, any atom that is not carbon or hydrogen.
Organic chemistry
In practice, the term is usually used more specifically to indicate that non-carbon atoms have replaced carbon in the backbone of the molecular structure. Typical heteroatoms are nitrogen (N), oxygen (O), sulfur (S), phosphorus (P), chlorine (Cl), bromine (Br), and iodine (I), as well as the metals lithium (Li) and magnesium (Mg).
Proteins
It can also be used with highly specific meanings in specialised contexts. In the description of protein structure, in particular in the Protein Data Bank file format, a heteroatom record (HETATM) describes an atom as belonging to a small molecule cofactor rather than being part of a biopolymer chain.
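As an illustrative sketch only (the file name is hypothetical and the fixed-column slicing follows the standard PDB layout; it is not part of the text above), HETATM records can be separated from the ATOM records of a biopolymer chain by checking the record name at the start of each line:

```python
def heteroatom_records(pdb_path):
    """Yield (atom_name, residue_name, chain_id) for each HETATM record in a PDB file."""
    with open(pdb_path) as handle:
        for line in handle:
            if line.startswith("HETATM"):
                atom_name = line[12:16].strip()   # PDB fixed columns 13-16
                res_name = line[17:20].strip()    # columns 18-20 (e.g. HOH, HEM)
                chain_id = line[21].strip()       # column 22
                yield atom_name, res_name, chain_id

# Hypothetical usage: list the small-molecule cofactors/ligands in "example.pdb"
# for atom, res, chain in heteroatom_records("example.pdb"):
#     print(atom, res, chain)
```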
Zeolites
In the context of zeolites, the term heteroatom refers to partial isomorphous substitution of the typical framework atoms (silicon, aluminium, and phosphorus) by other elements such as beryllium, vanadium, and chromium. The goal is usually to adjust properties of the material (e.g., Lewis acidity) to optimize the material for a certain application (e.g., catalysis).
References
External links
Journal - Heteroatom Chemistry
Organic chemistry
Thermodynamic system
A thermodynamic system is a body of matter and/or radiation separate from its surroundings that can be studied using the laws of thermodynamics.
According to their internal processes, thermodynamic systems can be classified as passive or active: passive systems, in which there is only a redistribution of the available energy, and active systems, in which one type of energy is converted into another.
Depending on its interaction with the environment, a thermodynamic system may be an isolated system, a closed system, or an open system. An isolated system does not exchange matter or energy with its surroundings. A closed system may exchange heat, experience forces, and exert forces, but does not exchange matter. An open system can interact with its surroundings by exchanging both matter and energy.
The physical condition of a thermodynamic system at a given time is described by its state, which can be specified by the values of a set of thermodynamic state variables. A thermodynamic system is in thermodynamic equilibrium when there are no macroscopically apparent flows of matter or energy within it or between it and other systems.
Overview
Thermodynamic equilibrium is characterized not only by the absence of any flow of mass or energy, but by “the absence of any tendency toward change on a macroscopic scale.”
Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium. It uses the concept of thermodynamic processes, by which bodies pass from one equilibrium state to another by transfer of matter and energy between them. The term 'thermodynamic system' is used to refer to bodies of matter and energy in the special context of thermodynamics. The possible equilibria between bodies are determined by the physical properties of the walls that separate the bodies. Equilibrium thermodynamics in general does not measure time. Equilibrium thermodynamics is a relatively simple and well settled subject. One reason for this is the existence of a well defined physical quantity called 'the entropy of a body'.
Non-equilibrium thermodynamics, as a subject in physics, considers bodies of matter and energy that are not in states of internal thermodynamic equilibrium, but are usually participating in processes of transfer that are slow enough to allow description in terms of quantities that are closely related to thermodynamic state variables. It is characterized by presence of flows of matter and energy. For this topic, very often the bodies considered have smooth spatial inhomogeneities, so that spatial gradients, for example a temperature gradient, are well enough defined. Thus the description of non-equilibrium thermodynamic systems is a field theory, more complicated than the theory of equilibrium thermodynamics. Non-equilibrium thermodynamics is a growing subject, not an established edifice. Example theories and modeling approaches include the GENERIC formalism for complex fluids, viscoelasticity, and soft materials. In general, it is not possible to find an exactly defined entropy for non-equilibrium problems. For many non-equilibrium thermodynamical problems, an approximately defined quantity called 'time rate of entropy production' is very useful. Non-equilibrium thermodynamics is mostly beyond the scope of the present article.
Another kind of thermodynamic system is considered in most engineering. It takes part in a flow process. The account is in terms that approximate, well enough in practice in many cases, equilibrium thermodynamical concepts. This is mostly beyond the scope of the present article, and is set out in other articles, for example the article Flow process.
History
The classification of thermodynamic systems arose with the development of thermodynamics as a science.
Theoretical studies of thermodynamic processes in the period from the first theory of heat engines (Sadi Carnot, France, 1824) to the theory of dissipative structures (Ilya Prigogine, Belgium, 1971) mainly concerned the patterns of interaction of thermodynamic systems with the environment.
At the same time, thermodynamic systems were mainly classified as isolated, closed and open, with corresponding properties in various thermodynamic states, for example, in states close to equilibrium, nonequilibrium and strongly nonequilibrium.
In 2010, Boris Dobroborsky (Israel, Russia) proposed a classification of thermodynamic systems according to internal processes consisting in energy redistribution (passive systems) and energy conversion (active systems).
Passive systems
If there is a temperature difference inside the thermodynamic system, for example in a rod, one end of which is warmer than the other, then thermal energy transfer processes occur in it, in which the temperature of the colder part rises and that of the warmer part falls. As a result, after some time, the temperature in the rod will equalize – the rod will come to a state of thermodynamic equilibrium.
Active systems
If the process of converting one type of energy into another takes place inside a thermodynamic system, for example, in chemical reactions, in electric or pneumatic motors, when one solid body rubs against another, then the processes of energy release or absorption will occur, and the thermodynamic system will always tend to a non-equilibrium state with respect to the environment.
Systems in equilibrium
In isolated systems it is consistently observed that as time goes on internal rearrangements diminish and stable conditions are approached. Pressures and temperatures tend to equalize, and matter arranges itself into one or a few relatively homogeneous phases. A system in which all processes of change have gone practically to completion is considered in a state of thermodynamic equilibrium. The thermodynamic properties of a system in equilibrium are unchanging in time. Equilibrium system states are much easier to describe in a deterministic manner than non-equilibrium states. In some cases, when analyzing a thermodynamic process, one can assume that each intermediate state in the process is at equilibrium. Such a process is called quasistatic.
For a process to be reversible, each step in the process must be reversible. For a step in a process to be reversible, the system must be in equilibrium throughout the step. That ideal cannot be accomplished in practice because no step can be taken without perturbing the system from equilibrium, but the ideal can be approached by making changes slowly.
The very existence of thermodynamic equilibrium, defining states of thermodynamic systems, is the essential, characteristic, and most fundamental postulate of thermodynamics, though it is only rarely cited as a numbered law. According to Bailyn, the commonly rehearsed statement of the zeroth law of thermodynamics is a consequence of this fundamental postulate. In reality, practically nothing in nature is in strict thermodynamic equilibrium, but the postulate of thermodynamic equilibrium often provides very useful idealizations or approximations, both theoretically and experimentally; experiments can provide scenarios of practical thermodynamic equilibrium.
In equilibrium thermodynamics the state variables do not include fluxes because in a state of thermodynamic equilibrium all fluxes have zero values by definition. Equilibrium thermodynamic processes may involve fluxes but these must have ceased by the time a thermodynamic process or operation is complete bringing a system to its eventual thermodynamic state. Non-equilibrium thermodynamics allows its state variables to include non-zero fluxes, which describe transfers of mass or energy or entropy between a system and its surroundings.
Walls
A system is enclosed by walls that bound it and connect it to its surroundings. Often a wall restricts passage across it by some form of matter or energy, making the connection indirect. Sometimes a wall is no more than an imaginary two-dimensional closed surface through which the connection to the surroundings is direct.
A wall can be fixed (e.g. a constant volume reactor) or moveable (e.g. a piston). For example, in a reciprocating engine, a fixed wall means the piston is locked at its position; then, a constant volume process may occur. In that same engine, a piston may be unlocked and allowed to move in and out. Ideally, a wall may be declared adiabatic, diathermal, impermeable, permeable, or semi-permeable. Actual physical materials that provide walls with such idealized properties are not always readily available.
The system is delimited by walls or boundaries, either actual or notional, across which conserved (such as matter and energy) or unconserved (such as entropy) quantities can pass into and out of the system. The space outside the thermodynamic system is known as the surroundings, a reservoir, or the environment. The properties of the walls determine what transfers can occur. A wall that allows transfer of a quantity is said to be permeable to it, and a thermodynamic system is classified by the permeabilities of its several walls. A transfer between system and surroundings can arise by contact, such as conduction of heat, or by long-range forces such as an electric field in the surroundings.
A system with walls that prevent all transfers is said to be isolated. This is an idealized conception, because in practice some transfer is always possible, for example by gravitational forces. It is an axiom of thermodynamics that an isolated system eventually reaches internal thermodynamic equilibrium, when its state no longer changes with time.
The walls of a closed system allow transfer of energy as heat and as work, but not of matter, between it and its surroundings. The walls of an open system allow transfer both of matter and of energy. This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is here used.
Anything that passes across the boundary and effects a change in the contents of the system must be accounted for in an appropriate balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. It could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics.
Surroundings
The system is the part of the universe being studied, while the surroundings is the remainder of the universe that lies outside the boundaries of the system. It is also known as the environment or the reservoir. Depending on the type of system, it may interact with the system by exchanging mass, energy (including heat and work), momentum, electric charge, or other conserved properties. The environment is ignored in the analysis of the system, except in regards to these interactions.
Closed system
In a closed system, no mass may be transferred in or out of the system boundaries. The system always contains the same amount of matter, but (sensible) heat and (boundary) work can be exchanged across the boundary of the system. Whether a system can exchange heat, work, or both is dependent on the property of its boundary.
Adiabatic boundary – not allowing any heat exchange: A thermally isolated system
Rigid boundary – not allowing exchange of work: A mechanically isolated system
One example is fluid being compressed by a piston in a cylinder. Another example of a closed system is a bomb calorimeter, a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Electrical energy travels across the boundary to produce a spark between the electrodes and initiates combustion. Heat transfer occurs across the boundary after combustion but no mass transfer takes place either way.
The first law of thermodynamics for energy transfers for closed systems may be stated:
ΔU = Q − W
where U denotes the internal energy of the system, Q the heat added to the system, and W the work done by the system. For infinitesimal changes the first law for closed systems may be stated:
dU = δQ − δW
If the work is due to a volume expansion by dV at a pressure P then:
δW = P dV
For a quasi-reversible heat transfer, the second law of thermodynamics reads:
δQ = T dS
where T denotes the thermodynamic temperature and S the entropy of the system. With these relations the fundamental thermodynamic relation, used to compute changes in internal energy, is expressed as:
dU = T dS − P dV
For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. For systems undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically:
∑j aij Nj = bi⁰
where Nj denotes the number of j-type molecules, aij the number of atoms of element i in molecule j, and bi⁰ the total number of atoms of element i in the system, which remains constant, since the system is closed. There is one such equation for each element in the system.
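As a small numerical illustration of the first law as reconstructed above (the example itself, a reversible isothermal expansion of an ideal gas, and the numbers used are assumptions for illustration, not taken from the article): because the internal energy of an ideal gas depends only on temperature, ΔU = 0 and the heat absorbed equals the work done by the gas, W = nRT ln(V2/V1).

```python
import math

R = 8.314  # J mol^-1 K^-1, gas constant

def isothermal_expansion(n, T, V1, V2):
    """Reversible isothermal expansion of an ideal gas: returns (W, Q, dU) in joules."""
    W = n * R * T * math.log(V2 / V1)  # work done BY the gas
    dU = 0.0                           # internal energy of an ideal gas depends only on T
    Q = dU + W                         # first law: dU = Q - W  =>  Q = W here
    return W, Q, dU

# Assumed illustrative values: 1 mol of gas at 298 K, doubling its volume
W, Q, dU = isothermal_expansion(n=1.0, T=298.0, V1=1.0, V2=2.0)
print(f"W = {W:.1f} J, Q = {Q:.1f} J, dU = {dU:.1f} J")  # Q = W ≈ 1717 J
```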
Isolated system
An isolated system is more restrictive than a closed system as it does not interact with its surroundings in any way. Mass and energy remains constant within the system, and no energy or mass transfer takes place across the boundary. As time passes in an isolated system, internal differences in the system tend to even out and pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is in a state of thermodynamic equilibrium.
Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere. However, real systems may behave nearly as an isolated system for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena.
In the attempt to justify the postulate of entropy increase in the second law of thermodynamics, Boltzmann's H-theorem used equations which assumed that a system (for example, a gas) was isolated: that is, all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions. This inevitably led to Loschmidt's paradox. However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient, background thermal radiation, Boltzmann's assumption of molecular chaos can be justified.
The second law of thermodynamics for isolated systems states that the entropy of an isolated system not in equilibrium tends to increase over time, approaching maximum value at equilibrium. Overall, in an isolated system, the internal energy is constant and the entropy can never decrease. A closed system's entropy can decrease e.g. when heat is extracted from the system.
Isolated systems are not equivalent to closed systems. Closed systems cannot exchange matter with the surroundings, but can exchange energy. Isolated systems can exchange neither matter nor energy with their surroundings, and as such are only theoretical and do not exist in reality (except, possibly, the entire universe).
'Closed system' is often used in thermodynamics discussions when 'isolated system' would be correct – i.e. there is an assumption that energy does not enter or leave the system.
Selective transfer of matter
For a thermodynamic process, the precise physical properties of the walls and surroundings of the system are important, because they determine the possible processes.
An open system has one or several walls that allow transfer of matter. To account for the internal energy of the open system, this requires energy transfer terms in addition to those for heat and work. It also leads to the idea of the chemical potential.
A wall selectively permeable only to a pure substance can put the system in diffusive contact with a reservoir of that pure substance in the surroundings. Then a process is possible in which that pure substance is transferred between system and surroundings. Also, across that wall a contact equilibrium with respect to that substance is possible. By suitable thermodynamic operations, the pure substance reservoir can be dealt with as a closed system. Its internal energy and its entropy can be determined as functions of its temperature, pressure, and mole number.
A thermodynamic operation can render impermeable to matter all system walls other than the contact equilibrium wall for that substance. This allows the definition of an intensive state variable, with respect to a reference state of the surroundings, for that substance. The intensive variable is called the chemical potential; for component substance i it is usually denoted μi. The corresponding extensive variable can be the number of moles of the component substance in the system.
For a contact equilibrium across a wall permeable to a substance, the chemical potentials of the substance must be same on either side of the wall. This is part of the nature of thermodynamic equilibrium, and may be regarded as related to the zeroth law of thermodynamics.
Open system
In an open system, there is an exchange of energy and matter between the system and the surroundings. The presence of reactants in an open beaker is an example of an open system; here the boundary is an imaginary surface enclosing the beaker and reactants. The system is named closed if the boundary is impenetrable to substance but allows transit of energy in the form of heat, and isolated if there is no exchange of heat and substances. An open system cannot exist in the equilibrium state. To describe the deviation of the thermodynamic system from equilibrium, a set of internal variables ξi has been introduced in addition to the constitutive variables described above. The equilibrium state is considered to be stable, and the main property of the internal variables, as measures of non-equilibrium of the system, is their tendency to disappear; the local law of disappearance can be written as a relaxation equation for each internal variable:
dξi/dt = −(ξi − ξi⁰)/τi
where τi is the relaxation time of the corresponding variable. It is convenient to consider the initial value ξi⁰ equal to zero.
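Reading ξi⁰ as the reference value toward which each internal variable relaxes (an interpretive assumption) and setting it to zero as suggested above, the relaxation equation reduces to a simple exponential decay, which makes the stated tendency of the internal variables to disappear explicit:

```latex
\frac{d\xi_i}{dt} = -\frac{\xi_i}{\tau_i}
\quad\Longrightarrow\quad
\xi_i(t) = \xi_i(t_1)\, e^{-(t - t_1)/\tau_i} \;\to\; 0 \quad\text{for } t \gg \tau_i
```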
The specific contribution to the thermodynamics of open non-equilibrium systems was made by Ilya Prigogine, who investigated a system of chemically reacting substances. In this case the internal variables appear to be measures of incompleteness of chemical reactions, that is measures of how much the considered system with chemical reactions is out of equilibrium. The theory can be generalized, to consider any deviations from the equilibrium state, such as structure of the system, gradients of temperature, difference of concentrations of substances and so on, to say nothing of degrees of completeness of all chemical reactions, to be internal variables.
The increments of Gibbs free energy and entropy at constant temperature and pressure are determined as
The stationary states of the system exist due to the exchange of both thermal energy and a stream of particles. The sum of the last terms in the equations represents the total energy coming into the system with the stream of particles of substances, which can be positive or negative; the quantity μ is the chemical potential of the corresponding substance. The middle terms in equations (2) and (3) depict energy dissipation (entropy production) due to the relaxation of the internal variables ξi, while the quantities conjugate to them are the thermodynamic forces.
This approach to the open system allows describing the growth and development of living objects in thermodynamic terms.
See also
Dynamical system
Energy system
Isolated system
Mechanical system
Physical system
Quantum system
Thermodynamic cycle
Thermodynamic process
Two-state quantum system
GENERIC formalism
References
Sources
Carnot, Sadi (1824). Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance (in French). Paris: Bachelier.
Dobroborsky B.S. Machine safety and the human factor / Edited by Doctor of Technical Sciences, prof. S.A. Volkov. — St. Petersburg: SPbGASU, 2011. — pp. 33–35. — 114 p. — ISBN 978-5-9227-0276-8. (in Russian)
Thermodynamic systems
Equilibrium chemistry
Thermodynamic cycles
Thermodynamic processes
Heterogeneous catalysis
Heterogeneous catalysis is catalysis where the phase of catalysts differs from that of the reagents or products. The process contrasts with homogeneous catalysis where the reagents, products and catalyst exist in the same phase. Phase distinguishes between not only solid, liquid, and gas components, but also immiscible mixtures (e.g., oil and water), or anywhere an interface is present.
Heterogeneous catalysis typically involves solid phase catalysts and gas phase reactants. In this case, there is a cycle of molecular adsorption, reaction, and desorption occurring at the catalyst surface. Thermodynamics, mass transfer, and heat transfer influence the rate (kinetics) of reaction.
Heterogeneous catalysis is very important because it enables faster, large-scale production and the selective product formation. Approximately 35% of the world's GDP is influenced by catalysis. The production of 90% of chemicals (by volume) is assisted by solid catalysts. The chemical and energy industries rely heavily on heterogeneous catalysis. For example, the Haber–Bosch process uses metal-based catalysts in the synthesis of ammonia, an important component in fertilizer; 144 million tons of ammonia were produced in 2016.
Adsorption
Adsorption is an essential step in heterogeneous catalysis. Adsorption is the process by which a gas (or solution) phase molecule (the adsorbate) binds to solid (or liquid) surface atoms (the adsorbent). The reverse of adsorption is desorption, the adsorbate splitting from adsorbent. In a reaction facilitated by heterogeneous catalysis, the catalyst is the adsorbent and the reactants are the adsorbate.
Types of adsorption
Two types of adsorption are recognized: physisorption, weakly bound adsorption, and chemisorption, strongly bound adsorption. Many processes in heterogeneous catalysis lie between the two extremes. The Lennard-Jones model provides a basic framework for predicting molecular interactions as a function of atomic separation.
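For reference, the Lennard-Jones pair potential mentioned above is commonly written as follows, with ε the depth of the potential well and σ the separation at which the potential crosses zero (the formula is standard; the article itself does not state it):

```latex
V_{\mathrm{LJ}}(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right]
```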
Physisorption
In physisorption, a molecule becomes attracted to the surface atoms via van der Waals forces. These include dipole-dipole interactions, induced dipole interactions, and London dispersion forces. Note that no chemical bonds are formed between adsorbate and adsorbent, and their electronic states remain relatively unperturbed. Typical energies for physisorption are from 3 to 10 kcal/mol. In heterogeneous catalysis, when a reactant molecule physisorbs to a catalyst, it is commonly said to be in a precursor state, an intermediate energy state before chemisorption, a more strongly bound adsorption. From the precursor state, a molecule can either undergo chemisorption, desorption, or migration across the surface. The nature of the precursor state can influence the reaction kinetics.
Chemisorption
When a molecule approaches close enough to surface atoms such that their electron clouds overlap, chemisorption can occur. In chemisorption, the adsorbate and adsorbent share electrons signifying the formation of chemical bonds. Typical energies for chemisorption range from 20 to 100 kcal/mol. Two cases of chemisorption are:
Molecular adsorption: the adsorbate remains intact. An example is alkene binding by platinum.
Dissociative adsorption: one or more bonds break concomitantly with adsorption. In this case, the barrier to dissociation affects the rate of adsorption. An example of this is the binding of H2 to a metal catalyst, where the H-H bond is broken upon adsorption.
Surface reactions
Most metal surface reactions occur by chain propagation in which catalytic intermediates are cyclically produced and consumed. Two main mechanisms for surface reactions can be described for A + B → C.
Langmuir–Hinshelwood mechanism: The reactant molecules, A and B, both adsorb to the catalytic surface. While adsorbed to the surface, they combine to form product C, which then desorbs.
Eley–Rideal mechanism: One reactant molecule, A, adsorbs to the catalytic surface. Without adsorbing, B reacts with the adsorbed A to form C, which then desorbs from the surface.
Most heterogeneously catalyzed reactions are described by the Langmuir–Hinshelwood model.
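A minimal sketch of the rate expression that follows from the Langmuir–Hinshelwood picture for A + B → C, assuming competitive adsorption of A and B on identical sites, Langmuir coverages, and a rate-determining surface reaction (the symbols k, K_A, K_B and the partial pressures below are illustrative, not from the text):

```python
def langmuir_hinshelwood_rate(k, K_A, K_B, p_A, p_B):
    """Rate of A + B -> C with both reactants adsorbed on the same sites.

    r = k * K_A*p_A * K_B*p_B / (1 + K_A*p_A + K_B*p_B)**2
    where theta_A and theta_B are the Langmuir surface coverages.
    """
    denom = 1.0 + K_A * p_A + K_B * p_B
    theta_A = K_A * p_A / denom   # fractional coverage of A
    theta_B = K_B * p_B / denom   # fractional coverage of B
    return k * theta_A * theta_B

# Illustrative values: the rate goes through a maximum as p_A increases,
# because strongly adsorbing A eventually crowds B off the surface.
for p_A in (0.1, 1.0, 10.0, 100.0):
    print(p_A, langmuir_hinshelwood_rate(k=1.0, K_A=1.0, K_B=1.0, p_A=p_A, p_B=1.0))
```

The Eley–Rideal mechanism, by contrast, predicts a rate that increases monotonically with the pressure of the non-adsorbed reactant.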
In heterogeneous catalysis, reactants diffuse from the bulk fluid phase to adsorb to the catalyst surface. The adsorption site is not always an active catalyst site, so reactant molecules must migrate across the surface to an active site. At the active site, reactant molecules will react to form product molecule(s) by following a more energetically facile path through catalytic intermediates (see figure to the right). The product molecules then desorb from the surface and diffuse away. The catalyst itself remains intact and free to mediate further reactions. Transport phenomena such as heat and mass transfer, also play a role in the observed reaction rate.
Catalyst design
Catalysts are not active towards reactants across their entire surface; only specific locations possess catalytic activity, called active sites. The surface area of a solid catalyst has a strong influence on the number of available active sites. In industrial practice, solid catalysts are often porous to maximize surface area, commonly achieving 50–400 m2/g. Some mesoporous silicates, such as the MCM-41, have surface areas greater than 1000 m2/g. Porous materials are cost effective due to their high surface area-to-mass ratio and enhanced catalytic activity.
In many cases, a solid catalyst is dispersed on a supporting material to increase surface area (spread the number of active sites) and provide stability. Usually catalyst supports are inert, high melting point materials, but they can also be catalytic themselves. Most catalyst supports are porous (frequently carbon, silica, zeolite, or alumina-based) and chosen for their high surface area-to-mass ratio. For a given reaction, porous supports must be selected such that reactants and products can enter and exit the material.
Often, substances are intentionally added to the reaction feed or on the catalyst to influence catalytic activity, selectivity, and/or stability. These compounds are called promoters. For example, alumina (Al2O3) is added during ammonia synthesis to provide greater stability by slowing sintering processes on the Fe-catalyst.
The Sabatier principle can be considered one of the cornerstones of the modern theory of catalysis. It states that the surface-adsorbate interaction has to be of an optimal strength: not so weak that the surface is inert toward the reactants, and not so strong that the surface is poisoned and desorption of the products is prevented. This statement is a qualitative one. Usually the number of adsorbates and transition states associated with a chemical reaction is large, so the optimum has to be found in a many-dimensional space. Catalyst design in such a many-dimensional space is not a computationally viable task, and such an optimization process would be far from intuitive.

Scaling relations are used to decrease the dimensionality of the space of catalyst design. Such relations are correlations among adsorbate binding energies (or between adsorbate binding energies and transition states, also known as BEP relations) that are "similar enough", e.g., OH versus OOH scaling. Applying scaling relations to catalyst design problems greatly reduces the dimensionality of the space (sometimes to as little as 1 or 2). Micro-kinetic modeling based on such scaling relations can also be used to take into account the kinetics associated with adsorption, reaction and desorption of molecules under specific pressure or temperature conditions. Such modeling leads to the well-known volcano plots, in which the optimum qualitatively described by the Sabatier principle is referred to as the "top of the volcano". Scaling relations can be used not only to connect the energetics of radical surface-adsorbed groups (e.g., O*, OH*), but also to connect the energetics of closed-shell molecules to each other or to their counterpart radical adsorbates.

A recent challenge for researchers in the catalytic sciences is to "break" the scaling relations. The correlations manifested in the scaling relations confine the catalyst design space, preventing one from reaching the "top of the volcano". Breaking scaling relations can refer either to designing surfaces or motifs that do not follow a scaling relation, or to ones that follow a different scaling relation (than the usual relation for the associated adsorbates) in the right direction: one that brings the system closer to the top of the reactivity volcano. In addition to studying catalytic reactivity, scaling relations can be used to study and screen materials for selectivity toward a particular product. There are special combinations of binding energies that favor specific products over others. Sometimes a set of binding energies that could change the selectivity toward a specific product "scale" with each other, so to improve the selectivity one has to break some scaling relations; an example of this is the scaling between methane and methanol oxidative activation energies, which leads to a lack of selectivity in the direct conversion of methane to methanol.
Catalyst deactivation
Catalyst deactivation is defined as a loss in catalytic activity and/or selectivity over time.
Substances that decrease reaction rate are called poisons. Poisons chemisorb to the catalyst surface and reduce the number of available active sites for reactant molecules to bind to. Common poisons include Group V, VI, and VII elements (e.g. S, O, P, Cl), some toxic metals (e.g. As, Pb), and adsorbing species with multiple bonds (e.g. CO, unsaturated hydrocarbons). For example, sulfur disrupts the production of methanol by poisoning the Cu/ZnO catalyst. Substances that increase reaction rate are called promoters. For example, the presence of alkali metals in ammonia synthesis increases the rate of N2 dissociation.
The presence of poisons and promoters can alter the activation energy of the rate-limiting step and affect a catalyst's selectivity for the formation of certain products. Depending on the amount, a substance can be favorable or unfavorable for a chemical process. For example, in the production of ethylene, a small amount of chemisorbed chlorine will act as a promoter by improving Ag-catalyst selectivity towards ethylene over CO2, while too much chlorine will act as a poison.
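As a hedged back-of-the-envelope illustration of how a shift in the activation energy of the rate-limiting step translates into a change in rate, using the Arrhenius equation k = A·exp(−Ea/RT) with an unchanged pre-exponential factor (the 10 kJ/mol shift and the 500 K temperature are arbitrary illustrative numbers):

```python
import math

R = 8.314  # J mol^-1 K^-1

def rate_ratio(delta_Ea, T):
    """Factor by which the rate constant changes when Ea shifts by delta_Ea (J/mol),
    assuming the pre-exponential factor A is unchanged."""
    return math.exp(-delta_Ea / (R * T))

T = 500.0  # K, illustrative reaction temperature
print(rate_ratio(-10_000, T))  # promoter lowering Ea by 10 kJ/mol -> ~11x faster
print(rate_ratio(+10_000, T))  # poison raising Ea by 10 kJ/mol   -> ~0.09x (about 11x slower)
```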
Other mechanisms for catalyst deactivation include:
Sintering: when heated, dispersed catalytic metal particles can migrate across the support surface and form crystals. This results in a reduction of catalyst surface area.
Fouling: the deposition of materials from the fluid phase onto the solid phase catalyst and/or support surfaces. This results in active site and/or pore blockage.
Coking: the deposition of heavy, carbon-rich solids onto surfaces due to the decomposition of hydrocarbons.
Vapor-solid reactions: formation of an inactive surface layer and/or formation of a volatile compound that exits the reactor. This results in a loss of surface area and/or catalyst material.
Solid-state transformation: solid-state diffusion of catalyst support atoms to the surface followed by a reaction that forms an inactive phase. This results in a loss of catalyst surface area.
Erosion: continual attrition of catalyst material common in fluidized-bed reactors. This results in a loss of catalyst material.
In industry, catalyst deactivation costs billions every year due to process shutdown and catalyst replacement.
Industrial examples
In industry, many design variables must be considered including reactor and catalyst design across multiple scales ranging from the subnanometer to tens of meters. The conventional heterogeneous catalysis reactors include batch, continuous, and fluidized-bed reactors, while more recent setups include fixed-bed, microchannel, and multi-functional reactors. Other variables to consider are reactor dimensions, surface area, catalyst type, catalyst support, as well as reactor operating conditions such as temperature, pressure, and reactant concentrations.
Some large-scale industrial processes incorporating heterogeneous catalysts are listed below.
Other examples
Reduction of nitriles in the synthesis of phenethylamine with Raney nickel catalyst and hydrogen in ammonia:
The cracking, isomerisation, and reformation of hydrocarbons to form appropriate and useful blends of petrol.
In automobiles, catalytic converters are used to catalyze three main reactions:
The oxidation of carbon monoxide to carbon dioxide:
2CO(g) + O2(g) → 2CO2(g)
The reduction of nitrogen monoxide back to nitrogen:
2NO(g) + 2CO(g) → N2(g) + 2CO2(g)
The oxidation of hydrocarbons to water and carbon dioxide:
2 C6H6 + 15 O2 → 12 CO2 + 6 H2O
This process can occur with any hydrocarbon, but is most commonly performed with petrol or diesel.
Asymmetric heterogeneous catalysis facilitates the production of pure enantiomer compounds using chiral heterogeneous catalysts.
The majority of heterogeneous catalysts are based on metals or metal oxides; however, some chemical reactions can be catalyzed by carbon-based materials, e.g., oxidative dehydrogenations or selective oxidations.
Ethylbenzene + 1/2 O2 → Styrene + H2O
Acrolein + 1/2 O2 → Acrylic acid
Solid-Liquid and Liquid-Liquid Catalyzed Reactions
Although the majority of heterogeneous catalysts are solids, there are a few variations which are of practical value. For two immiscible solutions (liquids), one carries the catalyst while the other carries the reactant. This set up is the basis of biphasic catalysis as implemented in the industrial production of butyraldehyde by the hydroformylation of propylene.
See also
Heterogeneous gold catalysis
Nanomaterial-based catalysts
Platinum nanoparticles
Temperature-programmed reduction
Thermal desorption spectroscopy
References
External links
Catalysis
CHNOPS
CHNOPS and CHON are mnemonic acronyms for the most common elements in living organisms. "CHON" stands for carbon, hydrogen, oxygen, and nitrogen, which together make up more than 95 percent of the mass of biological systems. "CHNOPS" adds phosphorus and sulfur.
Description
Carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur are the six most important chemical elements whose covalent combinations make up most biological molecules on Earth. All of these elements are nonmetals.
In animals in general, the four elements—C, H, O, and N—compose about 96% of the weight, and major minerals (macrominerals) and minor minerals (also called trace elements) compose the remainder. To be organic, a compound must contain carbon and hydrogen. Carbohydrates and lipids are also major sources of energy in the body.
Sulfur is contained in the amino acids cysteine and methionine.
Phosphorus is contained in phospholipids, a class of lipids that are a major component of all cell membranes, as they can form lipid bilayers, which keep ions, proteins, and other molecules where they are needed for cell function, and prevent them from diffusing into areas where they should not be. Phosphate groups are also an essential component of the backbone of nucleic acids (the general name for DNA and RNA) and are required to form ATP – the main molecule used to power the cell in all living creatures.
Carbonaceous asteroids are rich in CHON elements.
These asteroids are the most common type, and frequently collide with Earth as meteorites. Such collisions were especially common early in Earth's history, and these impactors may have been crucial in the formation of the planet's oceans.
The simplest compounds to contain all of the CHON elements are isomers fulminic acid (HCNO), isofulminic acid (HONC), cyanic acid (HOCN) and isocyanic acid (HNCO), having one of each atom.
See also
Abundance of the chemical elements
Biochemistry
Bioinorganic chemistry
Carbon-based life
References
External links
"Impact of the Biosphere on the Earth", University of Texas at Dallas
Astrobiology
Biology and pharmacology of chemical elements
Mnemonic acronyms
Science mnemonics
Science fiction themes
Astrochemistry
Reagent
In chemistry, a reagent or analytical reagent is a substance or compound added to a system to cause a chemical reaction, or test if one occurs. The terms reactant and reagent are often used interchangeably, but reactant specifies a substance consumed in the course of a chemical reaction. Solvents, though involved in the reaction mechanism, are usually not called reactants. Similarly, catalysts are not consumed by the reaction, so they are not reactants. In biochemistry, especially in connection with enzyme-catalyzed reactions, the reactants are commonly called substrates.
Definitions
Organic chemistry
In organic chemistry, the term "reagent" denotes a chemical ingredient (a compound or mixture, typically of inorganic or small organic molecules) introduced to cause the desired transformation of an organic substance. Examples include the Collins reagent, Fenton's reagent, and Grignard reagents.
Analytical chemistry
In analytical chemistry, a reagent is a compound or mixture used to detect the presence or absence of another substance, e.g. by a color change, or to measure the concentration of a substance, e.g. by colorimetry. Examples include Fehling's reagent, Millon's reagent, and Tollens' reagent.
Commercial or laboratory preparations
In commercial or laboratory preparations, reagent-grade designates chemical substances meeting standards of purity that ensure the scientific precision and reliability of chemical analysis, chemical reactions or physical testing. Purity standards for reagents are set by organizations such as ASTM International or the American Chemical Society. For instance, reagent-quality water must have very low levels of impurities such as sodium and chloride ions, silica, and bacteria, as well as a very high electrical resistivity. Laboratory products which are less pure, but still useful and economical for undemanding work, may be designated as technical, practical, or crude grade to distinguish them from reagent versions.
Biology
In the field of biology, the biotechnology revolution in the 1980s grew from the development of reagents that could be used to identify and manipulate the chemical matter in and on cells. These reagents included antibodies (polyclonal and monoclonal), oligomers, all sorts of model organisms and immortalised cell lines, reagents and methods for molecular cloning and DNA replication, and many others.
Tool compounds
Tool compounds are an important class of reagent in biology. They are small molecules or biochemicals like siRNA or antibodies that are known to affect a given biomolecule—for example a drug target—but are unlikely to be useful as drugs themselves, and are often starting points in the drug discovery process.
However, many natural substances are hits in almost any assay in which they are tested, and therefore not useful as tool compounds. Medicinal chemists class them instead as pan-assay interference compounds. One example is curcumin.
See also
Limiting reagent
Common reagents
Product
Reagent bottle
Substrate
References
External links
Biological techniques and tools
Chemical reactions
Reagents for biochemistry
Biogeochemical cycle
A biogeochemical cycle, or more generally a cycle of matter, is the movement and transformation of chemical elements and compounds between living organisms, the atmosphere, and the Earth's crust. Major biogeochemical cycles include the carbon cycle, the nitrogen cycle and the water cycle. In each cycle, the chemical element or molecule is transformed and cycled by living organisms and through various geological forms and reservoirs, including the atmosphere, the soil and the oceans. It can be thought of as the pathway by which a chemical substance cycles (is turned over or moves through) the biotic compartment and the abiotic compartments of Earth. The biotic compartment is the biosphere and the abiotic compartments are the atmosphere, lithosphere and hydrosphere.
For example, in the carbon cycle, atmospheric carbon dioxide is absorbed by plants through photosynthesis, which converts it into organic compounds that are used by organisms for energy and growth. Carbon is then released back into the atmosphere through respiration and decomposition. Additionally, carbon is stored in fossil fuels and is released into the atmosphere through human activities such as burning fossil fuels. In the nitrogen cycle, atmospheric nitrogen gas is converted by plants into usable forms such as ammonia and nitrates through the process of nitrogen fixation. These compounds can be used by other organisms, and nitrogen is returned to the atmosphere through denitrification and other processes. In the water cycle, the universal solvent water evaporates from land and oceans to form clouds in the atmosphere, and then precipitates back to different parts of the planet. Precipitation can seep into the ground and become part of groundwater systems used by plants and other organisms, or can run off the surface to form lakes and rivers. Subterranean water can then seep into the ocean along with river discharges, rich with dissolved and particulate organic matter and other nutrients.
There are biogeochemical cycles for many other elements, such as for oxygen, hydrogen, phosphorus, calcium, iron, sulfur, mercury and selenium. There are also cycles for molecules, such as water and silica. In addition there are macroscopic cycles such as the rock cycle, and human-induced cycles for synthetic compounds such as for polychlorinated biphenyls (PCBs). In some cycles there are geological reservoirs where substances can remain or be sequestered for long periods of time.
Biogeochemical cycles involve the interaction of biological, geological, and chemical processes. Biological processes include the influence of microorganisms, which are critical drivers of biogeochemical cycling. Microorganisms have the ability to carry out wide ranges of metabolic processes essential for the cycling of nutrients and chemicals throughout global ecosystems. Without microorganisms many of these processes would not occur, with significant impact on the functioning of land and ocean ecosystems and the planet's biogeochemical cycles as a whole. Changes to cycles can impact human health. The cycles are interconnected and play important roles regulating climate, supporting the growth of plants, phytoplankton and other organisms, and maintaining the health of ecosystems generally. Human activities such as burning fossil fuels and using large amounts of fertilizer can disrupt cycles, contributing to climate change, pollution, and other environmental problems.
Overview
Energy flows directionally through ecosystems, entering as sunlight (or inorganic molecules for chemoautotrophs) and leaving as heat during the many transfers between trophic levels. However, the matter that makes up living organisms is conserved and recycled. The six most common elements associated with organic molecules — carbon, nitrogen, hydrogen, oxygen, phosphorus, and sulfur — take a variety of chemical forms and may exist for long periods in the atmosphere, on land, in water, or beneath the Earth's surface. Geologic processes, such as weathering, erosion, water drainage, and the subduction of the continental plates, all play a role in this recycling of materials. Because geology and chemistry have major roles in the study of this process, the recycling of inorganic matter between living organisms and their environment is called a biogeochemical cycle.
The six aforementioned elements are used by organisms in a variety of ways. Hydrogen and oxygen are found in water and organic molecules, both of which are essential to life. Carbon is found in all organic molecules, whereas nitrogen is an important component of nucleic acids and proteins. Phosphorus is used to make nucleic acids and the phospholipids that comprise biological membranes. Sulfur is critical to the three-dimensional shape of proteins. The cycling of these elements is interconnected. For example, the movement of water is critical for leaching sulfur and phosphorus into rivers which can then flow into oceans. Minerals cycle through the biosphere between the biotic and abiotic components and from one organism to another.
Ecological systems (ecosystems) have many biogeochemical cycles operating as a part of the system, for example, the water cycle, the carbon cycle, the nitrogen cycle, etc. All chemical elements occurring in organisms are part of biogeochemical cycles. In addition to being a part of living organisms, these chemical elements also cycle through abiotic factors of ecosystems such as water (hydrosphere), land (lithosphere), and/or the air (atmosphere).
The living factors of the planet can be referred to collectively as the biosphere. All the nutrients — such as carbon, nitrogen, oxygen, phosphorus, and sulfur — used in ecosystems by living organisms are a part of a closed system; therefore, these chemicals are recycled instead of being lost and replenished constantly such as in an open system.
The major parts of the biosphere are connected by the flow of chemical elements and compounds in biogeochemical cycles. In many of these cycles, the biota plays an important role. Matter from the Earth's interior is released by volcanoes. The atmosphere exchanges some compounds and elements rapidly with the biota and oceans. Exchanges of materials between rocks, soils, and the oceans are generally slower by comparison.
The flow of energy in an ecosystem is an open system; the Sun constantly gives the planet energy in the form of light while it is eventually used and lost in the form of heat throughout the trophic levels of a food web. Carbon is used to make carbohydrates, fats, and proteins, the major sources of food energy. These compounds are oxidized to release carbon dioxide, which can be captured by plants to make organic compounds. The chemical reaction is powered by the light energy of sunshine.
Sunlight is required to combine carbon with hydrogen and oxygen into an energy source, but ecosystems in the deep sea, where no sunlight can penetrate, obtain energy from sulfur. Hydrogen sulfide near hydrothermal vents can be utilized by organisms such as the giant tube worm. In the sulfur cycle, sulfur can be forever recycled as a source of energy. Energy can be released through the oxidation and reduction of sulfur compounds (e.g., oxidizing elemental sulfur to sulfite and then to sulfate).
Although the Earth constantly receives energy from the Sun, its chemical composition is essentially fixed, as the additional matter is only occasionally added by meteorites. Because this chemical composition is not replenished like energy, all processes that depend on these chemicals must be recycled. These cycles include both the living biosphere and the nonliving lithosphere, atmosphere, and hydrosphere.
Biogeochemical cycles can be contrasted with geochemical cycles. The latter deals only with crustal and subcrustal reservoirs even though some process from both overlap.
Compartments
Atmosphere
Hydrosphere
The global ocean covers more than 70% of the Earth's surface and is remarkably heterogeneous. Marine productive areas, and coastal ecosystems comprise a minor fraction of the ocean in terms of surface area, yet have an enormous impact on global biogeochemical cycles carried out by microbial communities, which represent 90% of the ocean's biomass. Work in recent years has largely focused on cycling of carbon and macronutrients such as nitrogen, phosphorus, and silicate: other important elements such as sulfur or trace elements have been less studied, reflecting associated technical and logistical issues. Increasingly, these marine areas, and the taxa that form their ecosystems, are subject to significant anthropogenic pressure, impacting marine life and recycling of energy and nutrients. A key example is that of cultural eutrophication, where agricultural runoff leads to nitrogen and phosphorus enrichment of coastal ecosystems, greatly increasing productivity resulting in algal blooms, deoxygenation of the water column and seabed, and increased greenhouse gas emissions, with direct local and global impacts on nitrogen and carbon cycles. However, the runoff of organic matter from the mainland to coastal ecosystems is just one of a series of pressing threats stressing microbial communities due to global change. Climate change has also resulted in changes in the cryosphere, as glaciers and permafrost melt, resulting in intensified marine stratification, while shifts of the redox-state in different biomes are rapidly reshaping microbial assemblages at an unprecedented rate.
Global change is, therefore, affecting key processes including primary productivity, CO2 and N2 fixation, organic matter respiration/remineralization, and the sinking and burial deposition of fixed CO2. In addition to this, oceans are experiencing an acidification process, with a change of ~0.1 pH units between the pre-industrial period and today, affecting carbonate/bicarbonate buffer chemistry. In turn, acidification has been reported to impact planktonic communities, principally through effects on calcifying taxa. There is also evidence for shifts in the production of key intermediary volatile products, some of which have marked greenhouse effects (e.g., N2O and CH4, reviewed by Breitburg in 2018): driven by the increase in global temperature, ocean stratification and deoxygenation, microbial processes in the so-called oxygen minimum zones or anoxic marine zones account for as much as 25 to 50% of nitrogen loss from the ocean to the atmosphere. Other products that are typically toxic for the marine nekton, including reduced sulfur species such as H2S, have a negative impact on marine resources like fisheries and coastal aquaculture. While global change has accelerated, there has been a parallel increase in awareness of the complexity of marine ecosystems, and especially of the fundamental role of microbes as drivers of ecosystem functioning.
Lithosphere
Biosphere
Microorganisms drive much of the biogeochemical cycling in the earth system.
Reservoirs
The chemicals are sometimes held for long periods of time in one place. This place is called a reservoir, which, for example, includes such things as coal deposits that are storing carbon for a long period of time. When chemicals are held for only short periods of time, they are being held in exchange pools. Examples of exchange pools include plants and animals.
Plants and animals utilize carbon to produce carbohydrates, fats, and proteins, which can then be used to build their internal structures or to obtain energy. Plants and animals temporarily use carbon in their systems and then release it back into the air or surrounding medium. Generally, reservoirs are abiotic factors whereas exchange pools are biotic factors. Carbon is held for a relatively short time in plants and animals in comparison to coal deposits. The amount of time that a chemical is held in one place is called its residence time or turnover time (also called the renewal time or exit age).
Box models
Box models are widely used to model biogeochemical systems. Box models are simplified versions of complex systems, reducing them to boxes (or storage reservoirs) for chemical materials, linked by material fluxes (flows). Simple box models have a small number of boxes with properties, such as volume, that do not change with time. The boxes are assumed to behave as if they were mixed homogeneously. These models are often used to derive analytical formulas describing the dynamics and steady-state abundance of the chemical species involved.
The diagram at the right shows a basic one-box model. The reservoir contains the amount of material M under consideration, as defined by chemical, physical or biological properties. The source Q is the flux of material into the reservoir, and the sink S is the flux of material out of the reservoir. The budget is the check and balance of the sources and sinks affecting material turnover in a reservoir. The reservoir is in a steady state if Q = S, that is, if the sources balance the sinks and there is no change over time.
The residence or turnover time is the average time material spends resident in the reservoir. If the reservoir is in a steady state, this is the same as the time it takes to fill or drain the reservoir. Thus, if τ is the turnover time, then τ = M/S. The equation describing the rate of change of content in a reservoir is dM/dt = Q − S.
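As a minimal illustration of the one-box formalism just described, the following Python sketch integrates dM/dt = Q − S with a sink proportional to the reservoir content and reports the turnover time τ = M/S at steady state; all numerical values are hypothetical, not taken from the text.

# One-box reservoir model: dM/dt = Q - S, with a first-order sink S = k*M.
# All numbers are illustrative assumptions.
Q = 10.0     # source flux, e.g. Pg per year
k = 0.05     # first-order sink rate constant, 1/year
M = 0.0      # initial reservoir content, Pg
dt = 0.1     # time step, years

for _ in range(20000):        # integrate long enough to approach steady state
    S = k * M                 # sink flux at the current content
    M += (Q - S) * dt         # forward-Euler update

S = k * M
print(f"steady-state content M ≈ {M:.1f} Pg (analytic Q/k = {Q / k:.1f})")
print(f"turnover time M/S ≈ {M / S:.1f} years (analytic 1/k = {1 / k:.1f})")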
When two or more reservoirs are connected, the material can be regarded as cycling between the reservoirs, and there can be predictable patterns to the cyclic flow. More complex multibox models are usually solved using numerical techniques.
The diagram on the left shows a simplified budget of ocean carbon flows. It is composed of three simple interconnected box models, one for the euphotic zone, one for the ocean interior or dark ocean, and one for ocean sediments. In the euphotic zone, net phytoplankton production is about 50 Pg C each year. About 10 Pg is exported to the ocean interior while the other 40 Pg is respired. Organic carbon degradation occurs as particles (marine snow) settle through the ocean interior. Only 2 Pg eventually arrives at the seafloor, while the other 8 Pg is respired in the dark ocean. In sediments, the time scale available for degradation increases by orders of magnitude with the result that 90% of the organic carbon delivered is degraded and only 0.2 Pg C yr−1 is eventually buried and transferred from the biosphere to the geosphere.
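The quoted fluxes can be checked for internal consistency with a few lines of arithmetic; this sketch simply restates the numbers from the paragraph above and confirms that each box balances.

# Flux bookkeeping for the three-box ocean carbon budget (all values in Pg C per year).
production = 50.0                  # net phytoplankton production in the euphotic zone
export_to_interior = 10.0          # organic carbon exported below the euphotic zone
respired_euphotic = production - export_to_interior           # 40
reaches_seafloor = 2.0             # particle flux arriving at the sediments
respired_dark_ocean = export_to_interior - reaches_seafloor   # 8
buried = reaches_seafloor * (1 - 0.90)                        # 90% degraded in sediments

print(f"respired in euphotic zone: {respired_euphotic} Pg C/yr")
print(f"respired in dark ocean:    {respired_dark_ocean} Pg C/yr")
print(f"buried in sediments:       {buried:.1f} Pg C/yr")     # ~0.2, matching the text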
The diagram on the right shows a more complex model with many interacting boxes. Reservoir masses here represent carbon stocks, measured in Pg C. Carbon exchange fluxes, measured in Pg C yr−1, occur between the atmosphere and its two major sinks, the land and the ocean. The black numbers and arrows indicate the reservoir mass and exchange fluxes estimated for the year 1750, just before the Industrial Revolution. The red arrows (and associated numbers) indicate the annual flux changes due to anthropogenic activities, averaged over the 2000–2009 time period. They represent how the carbon cycle has changed since 1750. Red numbers in the reservoirs represent the cumulative changes in anthropogenic carbon since the start of the Industrial Period, 1750–2011.
Fast and slow cycles
There are fast and slow biogeochemical cycles. Fast cycles operate in the biosphere and slow cycles operate in rocks. Fast or biological cycles can complete within years, moving substances from the atmosphere to the biosphere, then back to the atmosphere. Slow or geological cycles can take millions of years to complete, moving substances through the Earth's crust between rocks, soil, ocean and atmosphere.
As an example, the fast carbon cycle is illustrated in the diagram below on the left. This cycle involves relatively short-term biogeochemical processes between the environment and living organisms in the biosphere. It includes movements of carbon between the atmosphere and terrestrial and marine ecosystems, as well as soils and seafloor sediments. The fast cycle includes annual cycles involving photosynthesis and decadal cycles involving vegetative growth and decomposition. The reactions of the fast carbon cycle to human activities will determine many of the more immediate impacts of climate change.
The slow cycle is illustrated in the diagram above on the right. It involves medium to long-term geochemical processes belonging to the rock cycle. The exchange between the ocean and atmosphere can take centuries, and the weathering of rocks can take millions of years. Carbon in the ocean precipitates to the ocean floor where it can form sedimentary rock and be subducted into the Earth's mantle. Mountain building processes result in the return of this geologic carbon to the Earth's surface. There the rocks are weathered and carbon is returned to the atmosphere by degassing and to the ocean by rivers. Other geologic carbon returns to the ocean through the hydrothermal emission of calcium ions. In a given year between 10 and 100 million tonnes of carbon moves around this slow cycle. This includes volcanoes returning geologic carbon directly to the atmosphere in the form of carbon dioxide. However, this is less than one percent of the carbon dioxide put into the atmosphere by burning fossil fuels.
Deep cycles
The terrestrial subsurface is the largest reservoir of carbon on Earth, containing 14–135 Pg of carbon and 2–19% of all biomass. Microorganisms drive organic and inorganic compound transformations in this environment and thereby control biogeochemical cycles. Current knowledge of the microbial ecology of the subsurface is primarily based on 16S ribosomal RNA (rRNA) gene sequences. Recent estimates show that <8% of 16S rRNA sequences in public databases derive from subsurface organisms and only a small fraction of those are represented by genomes or isolates. Thus, there is remarkably little reliable information about microbial metabolism in the subsurface. Further, little is known about how organisms in subsurface ecosystems are metabolically interconnected. Some cultivation-based studies of syntrophic consortia and small-scale metagenomic analyses of natural communities suggest that organisms are linked via metabolic handoffs: the transfer of redox reaction products of one organism to another. However, no complex environments have been dissected completely enough to resolve the metabolic interaction networks that underpin them. This restricts the ability of biogeochemical models to capture key aspects of the carbon and other nutrient cycles. New approaches such as genome-resolved metagenomics, which can yield a comprehensive set of draft and even complete genomes for organisms without the requirement for laboratory isolation, have the potential to provide this critical level of understanding of biogeochemical processes.
Some examples
Some of the more well-known biogeochemical cycles include the carbon, nitrogen, oxygen, phosphorus, sulfur, and water cycles, as well as the rock cycle.
Many biogeochemical cycles are currently being studied for the first time. Climate change and human impacts are drastically changing the speed, intensity, and balance of these relatively unknown cycles, which include:
the mercury cycle, and
the human-caused cycle of PCBs.
Biogeochemical cycles always involve active equilibrium states: a balance in the cycling of the element between compartments. However, overall balance may involve compartments distributed on a global scale.
As biogeochemical cycles describe the movements of substances on the entire globe, the study of these is inherently multidisciplinary. The carbon cycle may be related to research in ecology and atmospheric sciences. Biochemical dynamics would also be related to the fields of geology and pedology.
See also
Carbonate–silicate cycle
Ecological recycling
Great Acceleration
Hydrogen cycle
Redox gradient
References
Further reading
Schink, Bernhard. "Microbes: Masters of the Global Element Cycles". In Metals, Microbes and Minerals: The Biogeochemical Side of Life. Berlin: Walter de Gruyter. pp. 33–58 (xiv + 341 pp. total). doi:10.1515/9783110589771-002.
Biogeography
Biosphere
Geochemistry
Quantitative analysis (chemistry)
In analytical chemistry, quantitative analysis is the determination of the absolute or relative abundance (often expressed as a concentration) of one, several or all particular substance(s) present in a sample. It relates to the determination of the percentage of constituents in any given sample.
Methods
Once the presence of certain substances in a sample is known, the study of their absolute or relative abundance can help in determining specific properties. Knowing the composition of a sample is very important, and several methods have been developed to make it possible, such as gravimetric and volumetric analysis. Gravimetric analysis yields more accurate data about the composition of a sample than volumetric analysis, but it also takes more time to perform in the laboratory. Volumetric analysis, on the other hand, takes less time and can produce satisfactory results. Volumetric analysis can be a simple titration based on a neutralization reaction, but it can also be based on a precipitation or complex-forming reaction, or on a redox reaction. Each method in quantitative analysis has a general specification: in neutralization reactions, for example, the reaction occurs between an acid and a base, which yields a salt and water, hence the name neutralization. In precipitation reactions the standard solution is in most cases silver nitrate, which is used as a reagent to react with the ions present in the sample and to form a highly insoluble precipitate; such precipitation methods are often referred to simply as argentometry. The situation is similar for the two other methods: complex-forming titration is a reaction between metal ions and a standard solution, which in most cases is EDTA (ethylenediaminetetraacetic acid), while redox titration is carried out between an oxidizing agent and a reducing agent. There are further methods, such as the Liebig, Dumas, Kjeldahl, and Carius methods, for the estimation of organic compounds. A worked titration calculation is sketched below.
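As a worked illustration of the arithmetic behind a simple volumetric determination, the following Python sketch computes the concentration of an acid sample titrated with a standard base; the reagents, stoichiometric ratio and volumes are hypothetical examples, not values from the text.

# Acid-base titration calculation: moles of titrant at the end point, scaled by the
# stoichiometric ratio, give the moles of analyte in the measured sample volume.
c_titrant = 0.100      # NaOH concentration, mol/L (illustrative)
v_titrant = 25.40e-3   # titrant volume delivered at the end point, L
v_analyte = 20.00e-3   # volume of the acid sample, L
ratio = 1.0            # mol analyte per mol titrant (1:1 for HCl/NaOH)

moles_titrant = c_titrant * v_titrant
c_analyte = moles_titrant * ratio / v_analyte
print(f"analyte concentration ≈ {c_analyte:.4f} mol/L")   # ≈ 0.1270 mol/L here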
For example, quantitative analysis performed by mass spectrometry on biological samples can determine, by the relative abundance ratio of specific proteins, indications of certain diseases, like cancer.
Quantitative vs. qualitative
The term "quantitative analysis" is often used in comparison (or contrast) with "qualitative analysis", which seeks information about the identity or form of substance present. For instance, a chemist might be given an unknown solid sample. They will use "qualitative" techniques (perhaps NMR or IR spectroscopy) to identify the compounds present, and then quantitative techniques to determine the amount of each compound in the sample. Careful procedures for recognizing the presence of different metal ions have been developed, although they have largely been replaced by modern instruments; these are collectively known as qualitative inorganic analysis. Similar tests for identifying organic compounds (by testing for different functional groups) are also known.
Many techniques can be used for either qualitative or quantitative measurements. For instance, suppose an indicator solution changes color in the presence of a metal ion. It could be used as a qualitative test: does the indicator solution change color when a drop of sample is added? It could also be used as a quantitative test, by studying the color of the indicator solution with different concentrations of the metal ion. (This would probably be done using ultraviolet-visible spectroscopy.)
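A quantitative read-out of the kind described above is usually obtained from a calibration curve. This minimal sketch fits a straight line to standards and interpolates an unknown, in the spirit of the Beer–Lambert law; the concentrations and absorbances are made-up illustrative values.

# Least-squares calibration line (absorbance vs. concentration) and interpolation of an unknown.
concentrations = [0.0, 2.0, 4.0, 6.0, 8.0]              # standards, e.g. mg/L
absorbances    = [0.002, 0.101, 0.199, 0.305, 0.398]    # measured absorbances (illustrative)

n = len(concentrations)
mean_x = sum(concentrations) / n
mean_y = sum(absorbances) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(concentrations, absorbances)) \
        / sum((x - mean_x) ** 2 for x in concentrations)
intercept = mean_y - slope * mean_x

unknown_absorbance = 0.250
unknown_concentration = (unknown_absorbance - intercept) / slope
print(f"calibration: A = {slope:.4f} * c + {intercept:.4f}")
print(f"unknown ≈ {unknown_concentration:.2f} mg/L")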
See also
Microanalysis
Isotope dilution
References
Analytical chemistry
Assay
An assay is an investigative (analytic) procedure in laboratory medicine, mining, pharmacology, environmental biology and molecular biology for qualitatively assessing or quantitatively measuring the presence, amount, or functional activity of a target entity. The measured entity is often called the analyte, the measurand, or the target of the assay. The analyte can be a drug, biochemical substance, chemical element or compound, or cell in an organism or organic sample. An assay usually aims to measure an analyte's intensive property and express it in the relevant measurement unit (e.g. molarity, density, functional activity in enzyme international units, degree of effect in comparison to a standard, etc.).
If the assay involves exogenous reactants (the reagents), then their quantities are kept fixed (or in excess) so that the quantity and quality of the target are the only limiting factors. The difference in the assay outcome is used to deduce the unknown quality or quantity of the target in question. Some assays (e.g., biochemical assays) may be similar to chemical analysis and titration. However, assays typically involve biological material or phenomena that are intrinsically more complex in composition or behavior, or both. Thus, reading of an assay may be noisy and involve greater difficulties in interpretation than an accurate chemical titration. On the other hand, older generation qualitative assays, especially bioassays, may be much more gross and less quantitative (e.g., counting death or dysfunction of an organism or cells in a population, or some descriptive change in some body part of a group of animals).
Assays have become a routine part of modern medical, environmental, pharmaceutical, and forensic technology. Other businesses may also employ them at the industrial, curbside, or field levels. Assays in high commercial demand have been well investigated in research and development sectors of professional industries. They have also undergone generations of development and sophistication. In some cases, they are protected by intellectual property regulations such as patents granted for inventions. Such industrial-scale assays are often performed in well-equipped laboratories and with automated organization of the procedure, from ordering an assay to pre-analytic sample processing (sample collection, necessary manipulations e.g. spinning for separation, aliquoting if necessary, storage, retrieval, pipetting, aspiration, etc.). Analytes are generally tested in high-throughput autoanalyzers, and the results are verified and automatically returned to ordering service providers and end-users. These are made possible through the use of an advanced laboratory informatics system that interfaces with multiple computer terminals with end-users, central servers, the physical autoanalyzer instruments, and other automata.
Etymology
According to Etymology Online, the verb assay means "to try, endeavor, strive, test the quality of"; from Anglo-French assaier, from assai (noun), from Old French essai, "trial". Thus the noun assay means "trial, test of quality, test of character" (from mid-14th century), from Anglo-French assai; and its meaning "analysis" is from the late 14th century.
For assay of currency coins this literally meant analysis of the purity of the gold or silver (or whatever the precious component) that represented the true value of the coin. This might have translated later (possibly after the 14th century) into a broader usage of "analysis", e.g., in pharmacology, analysis for an important component of a target inside a mixture—such as the active ingredient of a drug inside the inert excipients in a formulation that previously was measured only grossly by its observable action on an organism (e.g., a lethal dose or inhibitory dose).
General steps
An assay (analysis) is never an isolated process, as it must be accompanied with pre- and post-analytic procedures. Both the communication order (the request to perform an assay plus related information) and the handling of the specimen itself (the collecting, documenting, transporting, and processing done before beginning the assay) are pre-analytic steps. Similarly, after the assay is completed the results must be documented, verified and communicated—the post-analytic steps. As with any multi-step information handling and transmission system, the variation and errors in reporting final results entail not only those intrinsic to the assay itself but also those occurring in the pre-analytic and post-analytic procedures.
While the analytic steps of the assay itself receive much attention, it is the links of the chain that receive less attention, namely the pre-analytic and post-analytic procedures, that typically accumulate the most errors; e.g., pre-analytic steps in medical laboratory assays may contribute 32–75% of all lab errors.
Assays can be very diverse, but generally involve the following general steps:
Sample processing and manipulation in order to selectively present the target in a discernible or measurable form to a discrimination/identification/detection system. It might involve a simple centrifugal separation, washing, filtration, or capture by some form of selective binding, or it may even involve modifying the target, e.g. epitope retrieval in immunological assays or cutting the target into pieces, e.g. in mass spectrometry. Generally, multiple separate steps are done before an assay and are called pre-analytic processing; however, some of the manipulations may be an inseparable part of the assay itself and will thus not be considered pre-analytic.
Target-specific discrimination/identification principle: to discriminate from background (noise) of similar components and specifically identify a particular target component ("analyte") in a biological material by its specific attributes. (e.g. in a PCR assay a specific oligonucleotide primer identifies the target by base pairing based on the specific nucleotide sequence unique to the target).
Signal (or target) amplification system: The presence and quantity of that analyte is converted into a detectable signal generally involving some method of signal amplification, so that it can be easily discriminated from noise and measured - e.g. in a PCR assay among a mixture of DNA sequences only the specific target is amplified into millions of copies by a DNA polymerase enzyme so that it can be discerned as a more prominent component compared to any other potential components. Sometimes the concentration of the analyte is too large and in that case the assay may involve sample dilution or some sort of signal diminution system which is a negative amplification.
Signal detection (and interpretation) system: A system of deciphering the amplified signal into an interpretable output that can be quantitative or qualitative. It can be visual or manual very crude methods or can be very sophisticated electronic digital or analog detectors.
Signal enhancement and noise filtering may be done at any or all of the steps above. Since the more downstream a step/process during an assay, the higher the chance of carrying over noise from the previous process and amplifying it, multiple steps in a sophisticated assay might involve various means of signal-specific sharpening/enhancement arrangements and noise reduction or filtering arrangements. These may simply be in the form of a narrow band-pass optical filter, or a blocking reagent in a binding reaction that prevents nonspecific binding or a quenching reagent in a fluorescence detection system that prevents "autofluorescence" of background objects.
Assay types based on the nature of the assay process
Time and number of measurements taken
Depending on whether it looks at a single time point or at timed readings taken at multiple time points, an assay may be:
An end point assay, in which a single measurement is performed after a fixed incubation period; or
A kinetic assay, in which measurements are performed multiple times over a fixed time interval. Kinetic assay results may be visualized numerically (for example, as a slope parameter representing the rate of signal change over time), or graphically (for example, as a plot of the signal measured at each time point). For kinetic assays, both the magnitude and shape of the measured response over time provide important information.
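A minimal sketch of how a kinetic trace is reduced to a slope parameter (the time points and signal values are illustrative, not real instrument data; Python 3.10+ is assumed for statistics.linear_regression):

# Reduce a kinetic assay read-out (signal vs. time) to a single rate estimate.
from statistics import linear_regression

times   = [0, 30, 60, 90, 120, 150]               # seconds
signals = [0.05, 0.14, 0.22, 0.31, 0.40, 0.48]    # e.g. absorbance units

slope, intercept = linear_regression(times, signals)
print(f"estimated rate ≈ {slope * 60:.3f} signal units per minute")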
A high throughput assay can be either an endpoint or a kinetic assay usually done on an automated platform in 96-, 384- or 1536-well microplate formats (High Throughput Screening). Such assays are able to test large number of compounds or analytes or make functional biological readouts in response to a stimuli and/or compounds being tested.
Number of analytes detected
Depending on how many targets or analytes are being measured:
Usual assays are simple or single-target assays, which is the default unless an assay is described as multiplex.
Multiplex assays are used to simultaneously measure the presence, concentration, activity, or quality of multiple analytes in a single test. The advent of multiplexing enabled rapid, efficient sample testing in many fields, including immunology, cytochemistry, genetics/genomics, pharmacokinetics, and toxicology.
Result type
Depending on the quality of the result produced, assays may be classified into:
Qualitative assays, i.e. assays which generally give just a pass or fail, or positive or negative or some such sort of only small number of qualitative gradation rather than an exact quantity.
Semi-quantitative assays, i.e. assays that give the read-out in an approximate fashion rather than an exact number for the quantity of the substance. Generally they have a few more gradations than just two outcomes, positive or negative, e.g. scoring on a scale of 1+ to 4+ as used for blood grouping tests based on RBC agglutination in response to grouping reagents (antibody against blood group antigens).
Quantitative assays, i.e. assays that give an accurate and exact numeric measure of the amount of a substance in a sample. An example of such an assay, used in coagulation testing laboratories for the most common inherited bleeding disorder (von Willebrand disease), is the VWF antigen assay, in which the amount of VWF present in a blood sample is measured by an immunoassay.
Functional assays, i.e. assays that try to quantify the functioning of an active substance rather than just its quantity. The functional counterpart of the VWF antigen assay is the ristocetin cofactor assay, which measures the functional activity of the VWF present in a patient's plasma by adding exogenous formalin-fixed platelets and gradually increasing quantities of a drug named ristocetin while measuring agglutination of the fixed platelets. A similar assay, used for a different purpose, is ristocetin-induced platelet aggregation (RIPA), which tests the response of a patient's endogenous live platelets to ristocetin (exogenous) and VWF (usually endogenous).
Sample type and method
Depending on the general substrate on which the assay principle is applied:
Bioassay: when the response is biological activity of live objects. Examples include
in vivo, whole organism (e.g. mouse or other subject injected with a drug)
ex vivo body part (e.g. leg of a frog)
ex vivo organ (e.g. heart of a dog)
ex vivo part of an organ (e.g. a segment of an intestine).
tissue (e.g. limulus lysate)
cell (e.g. platelets)
Ligand binding assay when a ligand (usually a small molecule) binds a receptor (usually a large protein).
Immunoassay when the response is an antigen antibody binding type reaction.
Signal amplification
Depending on the nature of the signal amplification system assays may be of numerous types, to name a few:
Enzyme assay: enzymes may be assayed via their repeated catalytic turnover of a large number of substrate molecules, when the loss of substrate or the formation of product has a measurable attribute such as color, absorbance at a particular wavelength, light emission, electrochemiluminescence, or electrical/redox activity.
Light detection systems that may use amplification e.g. by a photodiode or a photomultiplier tube or a cooled charge-coupled device.
Radioisotope-labeled substrates, as used in radioimmunoassays and equilibrium dialysis assays, can be detected by amplification in gamma counters, on X-ray plates, or with a phosphorimager
Polymerase Chain Reaction Assays that amplify a DNA (or RNA) target rather than the signal
Combination methods: assays may utilize a combination of the above and other amplification methods to improve sensitivity, e.g. enzyme-linked immunoassay (EIA) or enzyme-linked immunosorbent assay (ELISA).
Detection method or technology
Depending on the nature of the Detection system assays can be based on:
Colony forming or virtual colony count: e.g. by multiplying bacteria or proliferating cells.
Photometry / spectrophotometry: the absorbance of light of a specific wavelength, while passing through a fixed path length of a cuvette of liquid test sample, is measured and compared with a blank and with standards containing graded amounts of the target compound. If the measured light is of a specific visible wavelength the method may be called colorimetry; alternatively, it may involve illumination at a specific wavelength, e.g. by a laser, and emission of fluorescent signals at another specific wavelength that are detected via narrow wavelength-specific optical filters.
Transmittance of light may be used to measure, e.g., the clearing of a liquid's opacity created by suspended particles, as the particles clump together during a platelet agglutination reaction.
Turbidimetry when the opacity of straight-transmitted light passing through a liquid sample is measured by detectors placed straight across the light source.
Nephelometry where a measurement of the amount of light scattering that occurs when a beam of light is passed through the solution is used to determine size and/or concentration and/or size distribution of particles in the sample.
Reflectometry: the color of light reflected from a (usually dry) sample or reactant is assessed, e.g. the automated readings of urine dipstick strip assays.
Viscoelastic measurements e.g. viscometry, elastography (e.g. thromboelastography)
Counting assays: e.g. optic Flow cytometric cell or particle counters, or coulter/impedance principle based cell counters
Imaging assays, that involve image analysis manually or by software:
Cytometry: the size statistics of cells are assessed by an image processor.
Electric detection e.g. involving amperometry, Voltammetry, coulometry may be used directly or indirectly for many types of quantitative measurements.
Other physical property based assays may use
Osmometer
Viscometer
Ion Selective electrodes
Syndromic testing
Assay types based on the targets being measured
DNA
Assays for studying interactions of proteins with DNA include:
DNase footprinting assay
Filter binding assay
Gel shift assay
Protein
Bicinchoninic acid assay (BCA assay)
Bradford protein assay
Lowry protein assay
Secretion assay
RNA
Nuclear run-on
Ribosome profiling
Cell counting, viability, proliferation or cytotoxicity assays
A cell-counting assay may determine the number of living cells, the number of dead cells, or the ratio of one cell type to another, such as enumerating and typing red versus different types of white blood cells. This is measured by different physical methods (light transmission, electric current change), but other methods use biochemical probes of cell structure or physiology (stains). Another application is to monitor cell culture (assays of cell proliferation or cytotoxicity).
A cytotoxicity assay measures how toxic a chemical compound is to cells.
MTT assay
Cell Counting Kit-8 (WST-8 based cell viability assay)
SRB (Sulforhodamine B) assay
CellTiter-Glo® Luminescent Cell Viability Assay
Cell counting instruments and methods: CASY cell counting technology, Coulter counter, Electric cell-substrate impedance sensing
Cell viability assays: resazurin method, ATP test, Ethidium homodimer assay (detect dead or dying cells), Bacteriological water analysis, Clonogenic assays, ...
Environmental or food contaminants
Bisphenol F
Aquatic toxicity tests
Surfactants
An MBAS assay indicates anionic surfactants in water with a bluing reaction.
Other cell assays
Many cell assays have been developed to assess specific parameters or response of cells (biomarkers, cell physiology). Techniques used to study cells include :
reporter assays using e.g. luciferase, calcium signaling assays using coelenterazine, CFSE or calcein
Immunostaining of cells on slides by Microscopy (ImmunoHistoChemistry or Fluorescence), on microplates by photometry including the ELISpot (and its variant FluoroSpot) to enumerate B-Cells or antigen-specific cells, in solution by Flow cytometry
Molecular biology techniques such as DNA microarrays, in situ hybridization, combined to PCR, Computational genomics, and Transfection; Cell fractionation or Immunoprecipitation
Migration assays, Chemotaxis assay
Secretion assays
Apoptosis assays such as the DNA laddering assay, the Nicoletti assay, caspase activity assays, and Annexin V staining
Chemosensitivity assay measures the number of tumor cells that are killed by a cancer drug
Tetramer assay detect the presence of antigen specific T-cells
Gentamicin protection assay or survival assay or invasion assay to assess ability of pathogens (bacteria) to invade eukaryotic cells
Metastasis Assay
Petrochemistry
Crude oil assay
Virology
The HPCE-based viral titer assay uses a proprietary, high-performance capillary electrophoresis system to determine baculovirus titer.
The Trofile assay is used to determine HIV tropism.
The viral plaque assay is to calculate the number of viruses present in a sample. In this technique the number of viral plaques formed by a viral inoculum is counted, from which the actual virus concentration can be determined.
Cellular secretions
A wide range of cellular secretions (say, a specific antibody or cytokine) can be detected using the ELISA technique. The number of cells which secrete those particular substances can be determined using a related technique, the ELISPOT assay.
Drugs
Testing for Illegal Drugs
Radioligand binding assay
Quality
When multiple assays measure the same target, their results and utility may or may not be comparable, depending on the nature of the assays and their methodology, reliability, etc. Such comparisons are possible through the study of general quality attributes of the assays, e.g. principles of measurement (including identification, amplification and detection), dynamic range of detection (usually the range of linearity of the standard curve), analytic sensitivity, functional sensitivity, analytic specificity, positive and negative predictive values, turnaround time (the time taken to finish a whole cycle from the pre-analytic steps to the end of the last post-analytic step, such as report dispatch/transmission), and throughput (the number of assays done per unit time, usually expressed per hour). Organizations or laboratories that perform assays for professional purposes, e.g. medical diagnosis and prognostics, environmental analysis, forensic proceedings, or pharmaceutical research and development, must undergo well-regulated quality assurance procedures, including method validation, regular calibration, analytical quality control, proficiency testing, test accreditation and test licensing, and must document appropriate certifications from the relevant regulating bodies in order to establish the reliability of their assays, to remain legally acceptable and accountable for the quality of the assay results, and to convince customers to use their assays commercially/professionally.
List of BioAssay databases
Bioactivity databases
Bioactivity databases correlate structures or other chemical information to bioactivity results taken from bioassays in literature, patents, and screening programs.
Protocol databases
Protocol databases correlate results from bioassays to their metadata about experimental conditions and protocol designs.
See also
Analytical chemistry
MELISA
Multiplex (assay)
Pharmaceutical chemistry
Titration
References
External links
This includes a detailed, technical explanation of contemporaneous metallic ore assay techniques.
Biochemistry
Laboratory techniques
Titration
Phytochemistry
Phytochemistry is the study of phytochemicals, which are chemicals derived from plants. Phytochemists strive to describe the structures of the large number of secondary metabolites found in plants, the functions of these compounds in human and plant biology, and the biosynthesis of these compounds. Plants synthesize phytochemicals for many reasons, including to protect themselves against insect attacks and plant diseases. The compounds found in plants are of many kinds, but most can be grouped into four major biosynthetic classes: alkaloids, phenylpropanoids, polyketides, and terpenoids.
Phytochemistry can be considered a subfield of botany or chemistry. Activities can be led in botanical gardens or in the wild with the aid of ethnobotany. Phytochemical studies directed toward human (i.e. drug discovery) use may fall under the discipline of pharmacognosy, whereas phytochemical studies focused on the ecological functions and evolution of phytochemicals likely fall under the discipline of chemical ecology. Phytochemistry also has relevance to the field of plant physiology.
Techniques
Techniques commonly used in the field of phytochemistry are extraction, isolation, and structural elucidation (MS,1D and 2D NMR) of natural products, as well as various chromatography techniques (MPLC, HPLC, and LC-MS).
Phytochemicals
Many plants produce chemical compounds for defence against herbivores. The major classes of pharmacologically active phytochemicals are described below, with examples of medicinal plants that contain them. Human settlements are often surrounded by weeds containing phytochemicals, such as nettle, dandelion and chickweed.
Many phytochemicals, including curcumin, epigallocatechin gallate, genistein, and resveratrol are pan-assay interference compounds and are not useful in drug discovery.
Alkaloids
Alkaloids are bitter-tasting chemicals, widespread in nature, and often toxic. There are several classes with different modes of action as drugs, both recreational and pharmaceutical. Medicines of different classes include atropine, scopolamine, and hyoscyamine (all from nightshade), the traditional medicine berberine (from plants such as Berberis and Mahonia), caffeine (Coffea), cocaine (Coca), ephedrine (Ephedra), morphine (opium poppy), nicotine (tobacco), reserpine (Rauvolfia serpentina), quinidine and quinine (Cinchona), vincamine (Vinca minor), and vincristine (Catharanthus roseus).
Glycosides
Anthraquinone glycosides are found in senna, rhubarb, and Aloe.
The cardiac glycosides are phytochemicals from plants including foxglove and lily of the valley. They include digoxin and digitoxin which act as diuretics.
Polyphenols
Polyphenols of several classes are widespread in plants, including anthocyanins, phytoestrogens, and tannins. Polyphenols are secondary metabolites produced by almost every part of plants, including fruits, flowers, leaves and bark.
Terpenes
Terpenes and terpenoids of many kinds are found in resinous plants such as the conifers. They are aromatic and serve to repel herbivores. Their scent makes them useful in essential oils, whether for perfumes such as rose and lavender, or for aromatherapy. Some have had medicinal uses: thymol is an antiseptic and was once used as a vermifuge (anti-worm medicine).
Genetics
In contrast to bacteria and fungi, most plant metabolic pathways are not grouped into biosynthetic gene clusters, but instead are scattered as individual genes. Some exceptions have been discovered: steroidal glycoalkaloids in Solanum, polyketides in Pooideae, benzoxazinoids in Zea mays, triterpenes in Avena sativa, the Cucurbitaceae, and Arabidopsis, and momilactone diterpenes in Oryza sativa.
References
Phytochemicals
Biochemistry
Chemistry
Botany
Herbalism
Branches of botany
Pharmacognosy
Equilibrium constant
The equilibrium constant of a chemical reaction is the value of its reaction quotient at chemical equilibrium, a state approached by a dynamic chemical system after sufficient time has elapsed at which its composition has no measurable tendency towards further change. For a given set of reaction conditions, the equilibrium constant is independent of the initial analytical concentrations of the reactant and product species in the mixture. Thus, given the initial composition of a system, known equilibrium constant values can be used to determine the composition of the system at equilibrium. However, reaction parameters like temperature, solvent, and ionic strength may all influence the value of the equilibrium constant.
A knowledge of equilibrium constants is essential for the understanding of many chemical systems, as well as the biochemical processes such as oxygen transport by hemoglobin in blood and acid–base homeostasis in the human body.
Stability constants, formation constants, binding constants, association constants and dissociation constants are all types of equilibrium constants.
Basic definitions and properties
For a system undergoing a reversible reaction described by the general chemical equation
α A + β B <=> σ S + τ T,
a thermodynamic equilibrium constant, denoted by K, is defined to be the value of the reaction quotient Qt when forward and reverse reactions occur at the same rate. At chemical equilibrium, the chemical composition of the mixture does not change with time, and the Gibbs free energy change for the reaction is zero. If the composition of a mixture at equilibrium is changed by addition of some reagent, a new equilibrium position will be reached, given enough time. An equilibrium constant is related to the composition of the mixture at equilibrium by
K = {S}^σ {T}^τ / ({A}^α {B}^β),
where {X} denotes the thermodynamic activity of reagent X at equilibrium, [X] the numerical value of the corresponding concentration in moles per liter, and γ the corresponding activity coefficient. If X is a gas, instead of [X] the numerical value of the partial pressure in bar is used. If it can be assumed that the quotient of activity coefficients, Γ, is constant over a range of experimental conditions, such as pH, then an equilibrium constant can be derived as a quotient of concentrations.
An equilibrium constant is related to the standard Gibbs free energy change of reaction by ΔG° = −RT ln K,
where R is the universal gas constant, T is the absolute temperature (in kelvins), and ln is the natural logarithm. This expression implies that K must be a pure number and cannot have a dimension, since logarithms can only be taken of pure numbers; the quotient of activities that defines it must also be a pure number. On the other hand, the reaction quotient at equilibrium expressed in terms of concentrations,
[S]^σ [T]^τ / ([A]^α [B]^β),
does have the dimension of concentration raised to some power (see § Dimensionality, below). Such reaction quotients are often referred to, in the biochemical literature, as equilibrium constants.
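A short numerical sketch of the relation ΔG° = −RT ln K given above; the gas constant and temperature are standard values, while the K value is an arbitrary illustration.

import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # absolute temperature, K
K = 1.0e5      # dimensionless equilibrium constant (illustrative)

delta_G = -R * T * math.log(K)     # standard Gibbs free energy change, J/mol
print(f"delta_G ≈ {delta_G / 1000:.1f} kJ/mol")   # ≈ -28.5 kJ/mol for this K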
For an equilibrium mixture of gases, an equilibrium constant can be defined in terms of partial pressure or fugacity.
An equilibrium constant is related to the forward and backward rate constants, kf and kr, of the reactions involved in reaching equilibrium: K = kf / kr.
Types of equilibrium constants
Cumulative and stepwise formation constants
A cumulative or overall constant, given the symbol β, is the constant for the formation of a complex from reagents. For example, the cumulative constant for the formation of ML2 is given by
M + 2 L <=> ML2; [ML2] = β12[M][L]2
The stepwise constant, K, for the formation of the same complex from ML and L is given by
ML + L <=> ML2; [ML2] = K[ML][L] = Kβ11[M][L]2
It follows that
β12 = Kβ11
A cumulative constant can always be expressed as the product of stepwise constants. There is no agreed notation for stepwise constants, though a symbol such as K is sometimes found in the literature. It is best always to define each stability constant by reference to an equilibrium expression.
Competition method
A particular use of a stepwise constant is in the determination of stability constant values outside the normal range for a given method. For example, EDTA complexes of many metals are outside the range for the potentiometric method. The stability constants for those complexes were determined by competition with a weaker ligand.
ML + L′ <=> ML′ + L
The formation constant of [Pd(CN)4]2− was determined by the competition method.
Association and dissociation constants
In organic chemistry and biochemistry it is customary to use pKa values for acid dissociation equilibria.
pKa = −log10 Kdiss, where log denotes a logarithm to base 10 or common logarithm, and Kdiss is a stepwise acid dissociation constant. For bases, the base association constant, pKb, is used. For any given acid or base the two constants are related by pKa + pKb = pKw, so pKa can always be used in calculations.
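A minimal numerical sketch of these relations; pKw ≈ 14.0 at 25 °C, and the pKa value is an arbitrary example close to that of acetic acid.

# Convert between pKa, pKb and Ka using pKa + pKb = pKw.
pKw = 14.0            # ionic product of water at 25 °C (approximate)
pKa = 4.76            # illustrative acid dissociation pK
pKb = pKw - pKa       # corresponding constant of the conjugate base
Ka = 10 ** (-pKa)     # stepwise dissociation constant itself

print(f"pKb = {pKb:.2f}, Ka = {Ka:.2e}")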
On the other hand, stability constants for metal complexes, and binding constants for host–guest complexes are generally expressed as association constants. When considering equilibria such as
M + HL <=> ML + H
it is customary to use association constants for both ML and HL. Also, in generalized computer programs dealing with equilibrium constants it is general practice to use cumulative constants rather than stepwise constants and to omit ionic charges from equilibrium expressions. For example, if NTA, nitrilotriacetic acid, N(CH2CO2H)3 is designated as H3L and forms complexes ML and MHL with a metal ion M, the following expressions would apply for the dissociation constants.
The cumulative association constants can be expressed as
Note how the subscripts define the stoichiometry of the equilibrium product.
Micro-constants
When two or more sites in an asymmetrical molecule may be involved in an equilibrium reaction there is more than one possible equilibrium constant. For example, the molecule L-DOPA has two non-equivalent hydroxyl groups which may be deprotonated. Denoting L-DOPA as LH2, the following diagram shows all the species that may be formed (X = ).
The concentration of the species LH is equal to the sum of the concentrations of the two micro-species with the same chemical formula, labelled L1H and L2H. The constant K2 is for a reaction with these two micro-species as products, so that [LH] = [L1H] + [L2H] appears in the numerator, and it follows that this macro-constant is equal to the sum of the two micro-constants for the component reactions.
K2 = k21 + k22
However, the constant K1 is for a reaction with these two micro-species as reactants, and [LH] = [L1H] + [L2H] in the denominator, so that in this case
1/K1 = 1/k11 + 1/k12,
and therefore K1 = k11k12 / (k11 + k12).
Thus, in this example there are four micro-constants whose values are subject to two constraints; in consequence, only the two macro-constant values, for K1 and K2 can be derived from experimental data.
Micro-constant values can, in principle, be determined using a spectroscopic technique, such as infrared spectroscopy, where each micro-species gives a different signal. Methods which have been used to estimate micro-constant values include
Chemical: blocking one of the sites, for example by methylation of a hydroxyl group, followed by determination of the equilibrium constant of the related molecule, from which the micro-constant value for the "parent" molecule may be estimated.
Mathematical: applying numerical procedures to 13C NMR data.
Although the value of a micro-constant cannot be determined from experimental data, site occupancy, which is proportional to the micro-constant value, can be very important for biological activity. Therefore, various methods have been developed for estimating micro-constant values. For example, the isomerization constant for L-DOPA has been estimated to have a value of 0.9, so the micro-species L1H and L2H have almost equal concentrations at all pH values.
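The macro/micro relations quoted above, K2 = k21 + k22 and 1/K1 = 1/k11 + 1/k12, can be checked numerically; the micro-constant values in this sketch are arbitrary illustrations, not measured data.

# Combine assumed micro-constants into the two observable macro-constants.
k11, k12 = 2.0e-9, 1.8e-9     # micro-constants entering K1 (illustrative)
k21, k22 = 5.0e-11, 4.5e-11   # micro-constants entering K2 (illustrative)

K1 = k11 * k12 / (k11 + k12)  # from 1/K1 = 1/k11 + 1/k12
K2 = k21 + k22                # from K2 = k21 + k22
print(f"K1 = {K1:.2e}")
print(f"K2 = {K2:.2e}")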
pH considerations (Brønsted constants)
pH is defined in terms of the activity of the hydrogen ion
pH = −log10 {H+}
In the approximation of ideal behaviour, activity is replaced by concentration. When pH is measured by means of a glass electrode, a mixed equilibrium constant, also known as a Brønsted constant, may result.
HL <=> L + H;
It all depends on whether the electrode is calibrated by reference to solutions of known activity or known concentration. In the latter case the equilibrium constant would be a concentration quotient. If the electrode is calibrated in terms of known hydrogen ion concentrations it would be better to write p[H] rather than pH, but this suggestion is not generally adopted.
Hydrolysis constants
In aqueous solution the concentration of the hydroxide ion is related to the concentration of the hydrogen ion by
KW = [H][OH]
[OH] = KW [H]^−1
The first step in metal ion hydrolysis can be expressed in two different ways: either M + OH <=> M(OH), with K = [M(OH)] / ([M][OH]), or M + H2O <=> M(OH) + H, with β* = [M(OH)][H] / [M].
It follows that β* = K·KW. Hydrolysis constants are usually reported in the β* form and therefore often have values much less than 1. For example, if K = 10^4 and KW = 10^−14, then β* = 10^4 × 10^−14 = 10^−10. In general, when the hydrolysis product contains n hydroxide groups, log β* = log K + n log KW, where K is the corresponding cumulative hydroxide-binding constant.
Conditional constants
Conditional constants, also known as apparent constants, are concentration quotients which are not true equilibrium constants but can be derived from them. A very common instance is where pH is fixed at a particular value. For example, in the case of iron(III) interacting with EDTA, a conditional constant could be defined by
This conditional constant will vary with pH. It has a maximum at a certain pH. That is the pH where the ligand sequesters the metal most effectively.
In biochemistry equilibrium constants are often measured at a pH fixed by means of a buffer solution. Such constants are, by definition, conditional and different values may be obtained when using different buffers.
Gas-phase equilibria
For equilibria in a gas phase, fugacity, f, is used in place of activity. However, fugacity has the dimension of pressure, so it must be divided by a standard pressure, usually 1 bar, in order to produce the dimensionless quantity f/p°. An equilibrium constant is expressed in terms of this dimensionless quantity. For example, for the equilibrium 2 NO2 <=> N2O4, K = (f(N2O4)/p°) / (f(NO2)/p°)^2.
Fugacity is related to partial pressure, p, by a dimensionless fugacity coefficient ϕ: f = ϕp. Thus, for the example, K = (ϕ(N2O4) p(N2O4)/p°) / (ϕ(NO2) p(NO2)/p°)^2.
Usually the standard pressure is omitted from such expressions. Expressions for equilibrium constants in the gas phase then resemble the expression for solution equilibria with fugacity coefficient in place of activity coefficient and partial pressure in place of concentration.
Thermodynamic basis for equilibrium constant expressions
Thermodynamic equilibrium is characterized by the free energy for the whole (closed) system being a minimum. For systems at constant temperature and pressure the Gibbs free energy is minimum. The slope of the reaction free energy with respect to the extent of reaction, ξ, is zero when the free energy is at its minimum value.
The free energy change, dGr, can be expressed as a weighted sum of change in amount times the chemical potential, the partial molar free energy of the species. The chemical potential, μi, of the ith species in a chemical reaction is the partial derivative of the free energy with respect to the number of moles of that species, Ni
A general chemical equilibrium can be written as
where nj are the stoichiometric coefficients of the reactants in the equilibrium equation, and mj are the coefficients of the products. At equilibrium
The chemical potential, μi, of the ith species can be calculated in terms of its activity, ai.
μ is the standard chemical potential of the species, R is the gas constant and T is the temperature. Setting the sum for the reactants j to be equal to the sum for the products, k, so that δGr(Eq) = 0
Rearranging the terms,
This relates the standard Gibbs free energy change, ΔGo to an equilibrium constant, K, the reaction quotient of activity values at equilibrium.
Equivalence of thermodynamic and kinetic expressions for equilibrium constants
At equilibrium the rate of the forward reaction is equal to the backward reaction rate. A simple reaction, such as ester hydrolysis
AB + H2O <=> AH + B(OH)
has a forward reaction rate given by kf[AB][H2O] and a backward reaction rate given by kr[AH][B(OH)].
According to Guldberg and Waage, equilibrium is attained when the forward and backward reaction rates are equal to each other. In these circumstances, an equilibrium constant is defined to be equal to the ratio of the forward and backward reaction rate constants
K = kf / kr = [AH][B(OH)] / ([AB][H2O]).
The concentration of water may be taken to be constant, resulting in the simpler expression
Kc = [AH][B(OH)] / [AB].
This particular concentration quotient, Kc, has the dimension of concentration, but the thermodynamic equilibrium constant, K, is always dimensionless.
Unknown activity coefficient values
It is very rare for activity coefficient values to have been determined experimentally for a system at equilibrium. There are three options for dealing with the situation where activity coefficient values are not known from experimental measurements.
Use calculated activity coefficients, together with concentrations of reactants. For equilibria in solution estimates of the activity coefficients of charged species can be obtained using Debye–Hückel theory, an extended version, or SIT theory. For uncharged species, the activity coefficient γ0 mostly follows a "salting-out" model: log10 γ0 = bI where I stands for ionic strength.
Assume that the activity coefficients are all equal to 1. This is acceptable when all concentrations are very low.
For equilibria in solution use a medium of high ionic strength. In effect this redefines the standard state as referring to the medium. Activity coefficients in the standard state are, by definition, equal to 1. The value of an equilibrium constant determined in this manner is dependent on the ionic strength. When published constants refer to an ionic strength other than the one required for a particular application, they may be adjusted by means of specific ion theory (SIT) and other theories.
Dimensionality
An equilibrium constant is related to the standard Gibbs free energy change of reaction, ΔG°, by the expression ΔG° = −RT ln K.
Therefore, K must be a dimensionless number from which a logarithm can be derived. In the case of a simple equilibrium
A + B <=> AB,
the thermodynamic equilibrium constant is defined in terms of the activities, {AB}, {A} and {B}, of the species in equilibrium with each other: K = {AB} / ({A}{B}).
Now, each activity term can be expressed as a product of a concentration and a corresponding activity coefficient, γ. Therefore, K = ([AB] / ([A][B])) × (γAB / (γA γB)).
When Γ, the quotient of activity coefficients γAB / (γA γB), is set equal to 1, we get K = [AB] / ([A][B]).
K then appears to have the dimension of 1/concentration. This is what usually happens in practice when an equilibrium constant is calculated as a quotient of concentration values. This can be avoided by dividing each concentration by its standard-state value (usually mol/L or bar), which is standard practice in chemistry.
The assumption underlying this practice is that the quotient of activities is constant under the conditions in which the equilibrium constant value is determined. These conditions are usually achieved by keeping the reaction temperature constant and by using a medium of relatively high ionic strength as the solvent. It is not unusual, particularly in texts relating to biochemical equilibria, to see an equilibrium constant value quoted with a dimension. The justification for this practice is that the concentration scale used may be either mol dm−3 or mmol dm−3, so that the concentration unit has to be stated in order to avoid there being any ambiguity.
Note. When the concentration values are measured on the mole fraction scale all concentrations and activity coefficients are dimensionless quantities.
In general equilibria between two reagents can be expressed as
p A + q B <=> ApBq,
in which case the equilibrium constant is defined, in terms of numerical concentration values, as K = [ApBq] / ([A]^p [B]^q).
The apparent dimension of this K value is concentration^(1−p−q); this may be written as M^(1−p−q) or mM^(1−p−q), where the symbol M signifies a molar concentration. The apparent dimension of a dissociation constant is the reciprocal of the apparent dimension of the corresponding association constant, and vice versa.
When discussing the thermodynamics of chemical equilibria it is necessary to take dimensionality into account. There are two possible approaches.
Set the dimension of Γ to be the reciprocal of the dimension of the concentration quotient. This is almost universal practice in the field of stability constant determinations. The "equilibrium constant", K, is then dimensionless. It will be a function of the ionic strength of the medium used for the determination. Setting the numerical value of Γ to be 1 is equivalent to re-defining the standard states.
Replace each concentration term [X] by the dimensionless quotient [X]/c°, where c° is the concentration of reagent X in its standard state (usually 1 mol/L or 1 bar). By definition the numerical value of c° is 1, so the numerical value of the quotient is unchanged while its dimension is removed.
In both approaches the numerical value of the stability constant is unchanged. The first is more useful for practical purposes; in fact, the unit of the concentration quotient is often attached to a published stability constant value in the biochemical literature. The second approach is consistent with the standard exposition of Debye–Hückel theory, where such dimensionless quotients are taken to be pure numbers.
Water as both reactant and solvent
For reactions in aqueous solution, such as an acid dissociation reaction
AH + H2O <=> A− + H3O+
the concentration of water may be taken as being constant and the formation of the hydronium ion is implicit.
AH <=> A− + H+
Water concentration is omitted from expressions defining equilibrium constants, except when solutions are very concentrated.
K = [A−][H+] / [AH] (K defined as a dissociation constant)
Similar considerations apply to metal ion hydrolysis reactions.
Enthalpy and entropy: temperature dependence
If both the equilibrium constant, K, and the standard enthalpy change, ΔH°, for a reaction have been determined experimentally, the standard entropy change for the reaction is easily derived. Since ΔG° = −RT ln K and ΔG° = ΔH° − TΔS°, it follows that ΔS° = ΔH°/T + R ln K.
To a first approximation the standard enthalpy change is independent of temperature. Using this approximation, definite integration of the van 't Hoff equation, d(ln K)/dT = ΔH° / (RT^2),
gives ln K(T2) = ln K(T1) − (ΔH°/R)(1/T2 − 1/T1).
This equation can be used to calculate the value of log K at a temperature, T2, knowing the value at temperature T1.
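A minimal sketch of this extrapolation; the K and ΔH° values are illustrative, and ΔH° is assumed independent of temperature as stated above.

import math

R = 8.314                 # gas constant, J/(mol*K)
K1, T1 = 2.0e4, 298.15    # equilibrium constant measured at T1 (illustrative)
delta_H = -40.0e3         # standard enthalpy change, J/mol (exothermic, illustrative)
T2 = 310.15               # temperature at which K is wanted, K

ln_K2 = math.log(K1) - (delta_H / R) * (1.0 / T2 - 1.0 / T1)
K2 = math.exp(ln_K2)
print(f"K at {T2:.2f} K ≈ {K2:.2e}")   # smaller than K1, as expected for an exothermic reaction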
The van 't Hoff equation also shows that, for an exothermic reaction, when temperature increases K decreases and when temperature decreases K increases, in accordance with Le Chatelier's principle. The reverse applies when the reaction is endothermic.
When K has been determined at more than two temperatures, a straight line fitting procedure may be applied to a plot of against to obtain a value for . Error propagation theory can be used to show that, with this procedure, the error on the calculated value is much greater than the error on individual log K values. Consequently, K needs to be determined to high precision when using this method. For example, with a silver ion-selective electrode each log K value was determined with a precision of ca. 0.001 and the method was applied successfully.
Standard thermodynamic arguments can be used to show that, more generally, enthalpy will change with temperature according to (∂ΔH°/∂T)p = ΔCp,
where Cp is the heat capacity at constant pressure and ΔCp is its change over the reaction.
A more complex formulation
The calculation of K at a particular temperature from a known K at another given temperature can be approached as follows if standard thermodynamic properties are available. The effect of temperature on the equilibrium constant is equivalent to the effect of temperature on Gibbs energy because ln K = −ΔrG° / (RT),
where ΔrG° is the reaction standard Gibbs energy, which is the sum of the standard Gibbs energies of the reaction products minus the sum of standard Gibbs energies of reactants.
Here, the term "standard" denotes the ideal behaviour (i.e., an infinite dilution) and a hypothetical standard concentration (typically 1 mol/kg). It does not imply any particular temperature or pressure because, although contrary to IUPAC recommendation, it is more convenient when describing aqueous systems over wide temperature and pressure ranges.
The standard Gibbs energy (for each species or for the entire reaction) can be represented (from the basic definitions) as:
In the above equation, the effect of temperature on Gibbs energy (and thus on the equilibrium constant) is ascribed entirely to heat capacity. To evaluate the integrals in this equation, the form of the dependence of heat capacity on temperature needs to be known.
If the standard molar heat capacity C can be approximated by some analytic function of temperature (e.g. the Shomate equation), then the integrals involved in calculating other parameters may be solved to yield analytic expressions for them. For example, using approximations of the following forms:
For pure substances (solids, gas, liquid):
For ionic species at :
then the integrals can be evaluated and the following final form is obtained:
The constants A, B, C, a, b and the absolute entropy, S̆, required for evaluation of C(T), as well as the values of G298 K and S298 K for many species are tabulated in the literature.
Pressure dependence
The pressure dependence of the equilibrium constant is usually weak in the range of pressures normally encountered in industry, and therefore, it is usually neglected in practice. This is true for condensed reactant/products (i.e., when reactants and products are solids or liquid) as well as gaseous ones.
For a gaseous-reaction example, one may consider the well-studied reaction of hydrogen with nitrogen to produce ammonia:
N2 + 3 H2 <=> 2 NH3
If the pressure is increased by the addition of an inert gas, then neither the composition at equilibrium nor the equilibrium constant are appreciably affected (because the partial pressures remain constant, assuming an ideal-gas behaviour of all gases involved). However, the composition at equilibrium will depend appreciably on pressure when:
the pressure is changed by compression or expansion of the gaseous reacting system, and
the reaction results in the change of the number of moles of gas in the system.
In the example reaction above, the number of moles changes from 4 to 2, and an increase of pressure by system compression will result in appreciably more ammonia in the equilibrium mixture. In the general case of a gaseous reaction:
α A + β B ⇌ σ S + τ T
the change of mixture composition with pressure can be quantified using
Kp = (pS^σ · pT^τ) / (pA^α · pB^β) = (XS^σ · XT^τ) / (XA^α · XB^β) · P^(σ+τ−α−β) = KX · P^(σ+τ−α−β)
where the p denote the partial pressures and the X the mole fractions of the components, P is the total system pressure, Kp is the equilibrium constant expressed in terms of partial pressures, and KX is the equilibrium constant expressed in terms of mole fractions.
The above change in composition is in accordance with Le Chatelier's principle and does not involve any change of the equilibrium constant with the total system pressure. Indeed, for ideal-gas reactions Kp is independent of pressure.
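The following small sketch illustrates the ideal-gas relation KX = Kp·P^(−Δn) for the ammonia example (Δn = 2 − 4 = −2), using a hypothetical Kp value; it shows KX, and hence the equilibrium proportion of ammonia, rising with total pressure while Kp itself stays fixed.

```python
# Ideal-gas relation K_X = K_p * P**(-(σ+τ-α-β)) for α A + β B ⇌ σ S + τ T.
# For N2 + 3 H2 ⇌ 2 NH3 the change in moles of gas is Δn = 2 - 4 = -2.
Kp = 1.0e-5   # hypothetical pressure-based equilibrium constant (bar units)
dn = -2

for P in (1.0, 10.0, 100.0, 300.0):   # total pressure, bar
    Kx = Kp * P**(-dn)                 # = Kp * P**2 for this reaction
    print(f"P = {P:6.1f} bar   K_X = {Kx:.3e}")
```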
In a condensed phase, the pressure dependence of the equilibrium constant is associated with the reaction volume. For reaction:
α A + β B ⇌ σ S + τ T
the reaction volume is
ΔV̄ = σ V̄S + τ V̄T − α V̄A − β V̄B
where V̄ denotes the partial molar volume of a reactant or a product.
For the above reaction, one can expect the change of the reaction equilibrium constant (based either on the mole-fraction or the molal-concentration scale) with pressure at constant temperature to be
(∂ ln K / ∂P)T = −ΔV̄ / (RT)
The matter is complicated by the fact that the partial molar volume is itself dependent on pressure.
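As a rough numerical illustration of this relation, the sketch below evaluates exp(−ΔV̄·ΔP/RT) for a hypothetical reaction volume of −20 cm³/mol, under the simplifying assumption (noted above to be imperfect) that ΔV̄ does not itself change with pressure.

```python
import math

R, T = 8.314, 298.15
dV = -20e-6            # hypothetical reaction volume, m^3/mol (-20 cm^3/mol)

# Assuming ΔV̄ is pressure-independent:
# ln K(P) - ln K(P0) = -ΔV̄ (P - P0) / (R T)
P0, P = 1.0e5, 1.0e8   # 1 bar and 1000 bar, in Pa
ratio = math.exp(-dV * (P - P0) / (R * T))
print(f"K(1000 bar) / K(1 bar) ≈ {ratio:.2f}")   # ≈ 2.2 for these numbers
```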
Effect of isotopic substitution
Isotopic substitution can lead to changes in the values of equilibrium constants, especially if hydrogen is replaced by deuterium (or tritium). This equilibrium isotope effect is analogous to the kinetic isotope effect on rate constants and is primarily due to the change in the zero-point vibrational energy of H–X bonds that accompanies the change in mass upon isotopic substitution. The zero-point energy is inversely proportional to the square root of the mass of the vibrating hydrogen atom and will therefore be smaller for a D–X bond than for an H–X bond.
An example is a hydrogen atom abstraction reaction R' + H–R ⇌ R'–H + R with equilibrium constant KH, where R' and R are organic radicals such that R' forms a stronger bond to hydrogen than does R. The decrease in zero-point energy due to deuterium substitution will then be more important for R'–H than for R–H, and R'–D will be stabilized more than R–D, so that the equilibrium constant KD for R' + D–R ⇌ R'–D + R is greater than KH. This is summarized in the rule "the heavier atom favors the stronger bond".
Similar effects occur in solution for acid dissociation constants (Ka), which describe the transfer of H+ or D+ from a weak aqueous acid to a solvent molecule: HA + H2O ⇌ H3O+ + A− or DA + D2O ⇌ D3O+ + A−. The deuterated acid is studied in heavy water, since if it were dissolved in ordinary water the deuterium would rapidly exchange with hydrogen in the solvent.
The product species H3O+ (or D3O+) is a stronger acid than the solute acid, so that it dissociates more easily, and its H–O (or D–O) bond is weaker than the H–A (or D–A) bond of the solute acid. The decrease in zero-point energy due to isotopic substitution is therefore less important in D3O+ than in DA so that KD < KH, and the deuterated acid in D2O is weaker than the non-deuterated acid in H2O. In many cases the difference of logarithmic constants pKD – pKH is about 0.6, so that the pD corresponding to 50% dissociation of the deuterated acid is about 0.6 units higher than the pH for 50% dissociation of the non-deuterated acid.
For similar reasons, the self-ionization of heavy water is less extensive than that of ordinary water at the same temperature.
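A quick arithmetic check of the quoted shift: a difference pKD − pKH of about 0.6 corresponds to a ratio KH/KD = 10^0.6 ≈ 4, i.e. the deuterated acid is roughly four times weaker.

```python
dpK = 0.6                         # typical pK_D - pK_H quoted above
ratio = 10 ** dpK                 # K_H / K_D, since pK = -log10(K)
print(f"K_H/K_D ≈ {ratio:.1f}")   # ≈ 4
```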
See also
Determination of equilibrium constants
Stability constants of complexes
Equilibrium fractionation
References
Data sources
IUPAC SC-Database A comprehensive database of published data on equilibrium constants of metal complexes and ligands
NIST Standard Reference Database 46 : Critically selected stability constants of metal complexes
Inorganic and organic acids and bases pKa data in water and DMSO
NASA Glenn Thermodynamic Database webpage with links to (self-consistent) temperature-dependent specific heat, enthalpy, and entropy for elements and molecules
Equilibrium chemistry
Dimensionless numbers of chemistry
Reverse engineering
Reverse engineering (also known as backwards engineering or back engineering) is a process or method through which one attempts to understand through deductive reasoning how a previously made device, process, system, or piece of software accomplishes a task with very little (if any) insight into exactly how it does so. Depending on the system under consideration and the technologies employed, the knowledge gained during reverse engineering can help with repurposing obsolete objects, doing security analysis, or learning how something works.
Although the process is specific to the object on which it is being performed, all reverse engineering processes consist of three basic steps: information extraction, modeling, and review. Information extraction is the practice of gathering all relevant information for performing the operation. Modeling is the practice of combining the gathered information into an abstract model, which can be used as a guide for designing the new object or system. Review is the testing of the model to ensure the validity of the chosen abstract. Reverse engineering is applicable in the fields of computer engineering, mechanical engineering, design, electronic engineering, software engineering, chemical engineering, and systems biology.
Overview
There are many reasons for performing reverse engineering in various fields. Reverse engineering has its origins in the analysis of hardware for commercial or military advantage. However, the reverse engineering process may not always be concerned with creating a copy or changing the artifact in some way. It may be used as part of an analysis to deduce design features from products with little or no additional knowledge about the procedures involved in their original production.
In some cases, the goal of the reverse engineering process can simply be a redocumentation of legacy systems. Even when the reverse-engineered product is that of a competitor, the goal may not be to copy it but to perform competitor analysis. Reverse engineering may also be used to create interoperable products; despite some narrowly tailored United States and European Union legislation, the legality of using specific reverse engineering techniques for that purpose has been hotly contested in courts worldwide for more than two decades.
Software reverse engineering can help to improve the understanding of the underlying source code for the maintenance and improvement of the software; relevant information can be extracted to inform decisions in software development, and graphical representations of the code can provide alternate views of the source code, which can help to detect and fix a software bug or vulnerability. As software develops, its design information and improvements are often lost over time, but that lost information can usually be recovered with reverse engineering. The process can also help to cut down the time required to understand the source code, thus reducing the overall cost of the software development. Reverse engineering can also help to detect and eliminate malicious code written into the software, with better code detectors. Reversing source code can be used to find alternate uses of that code, such as detecting its unauthorized replication where it was not intended to be used, or revealing how a competitor's product was built. That process is commonly used for "cracking" software and media to remove their copy protection, or to create a possibly improved copy or even a knockoff, which is usually the goal of a competitor or a hacker.
Malware developers often use reverse engineering techniques to find vulnerabilities in an operating system in order to build a computer virus that can exploit them. Reverse engineering is also used in cryptanalysis to find vulnerabilities in substitution ciphers, symmetric-key algorithms, or public-key cryptography.
There are other uses of reverse engineering:
Interfacing. Reverse engineering can be used when a system is required to interface with another system and it must be established how the two systems will negotiate. Such requirements typically exist for interoperability.
Military or commercial espionage. Learning about an enemy's or competitor's latest research by stealing or capturing a prototype and dismantling it may result in the development of a similar product or a better countermeasure against it.
Obsolescence. Integrated circuits are often designed on proprietary systems and built on production lines, which become obsolete in only a few years. When systems using those parts can no longer be maintained since the parts are no longer made, the only way to incorporate the functionality into new technology is to reverse-engineer the existing chip and then to redesign it using newer tools by using the understanding gained as a guide. Another obsolescence originated problem that can be solved by reverse engineering is the need to support (maintenance and supply for continuous operation) existing legacy devices that are no longer supported by their original equipment manufacturer. The problem is particularly critical in military operations.
Product security analysis. This examines how a product works by determining the specifications of its components, estimating costs, and identifying potential patent infringement. Also part of product security analysis is acquiring sensitive data by disassembling and analyzing the design of a system component. Another intent may be to remove copy protection or to circumvent access restrictions.
Competitive technical intelligence. That is to understand what one's competitor is actually doing, rather than what it says that it is doing.
Saving money. Finding out what a piece of electronics can do may spare a user from purchasing a separate product.
Repurposing. Obsolete objects are then reused in a different-but-useful manner.
Design. Production and design companies have applied reverse engineering to practical, craft-based manufacturing processes. Companies can work on "historical" manufacturing collections through 3D scanning, 3D re-modeling, and re-design. In 2013, the Italian manufacturers Baldi and Savio Firmino, together with the University of Florence, optimized their innovation, design, and production processes in this way.
Common uses
Machines
As computer-aided design (CAD) has become more popular, reverse engineering has become a viable method to create a 3D virtual model of an existing physical part for use in 3D CAD, CAM, CAE, or other software. The reverse-engineering process involves measuring an object and then reconstructing it as a 3D model. The physical object can be measured using 3D scanning technologies like CMMs, laser scanners, structured light digitizers, or industrial CT scanning (computed tomography). The measured data alone, usually represented as a point cloud, lacks topological information and design intent. The former may be recovered by converting the point cloud to a triangular-faced mesh. Reverse engineering aims to go beyond producing such a mesh and to recover the design intent in terms of simple analytical surfaces where appropriate (planes, cylinders, etc.) as well as possibly NURBS surfaces to produce a boundary-representation CAD model. Recovery of such a model allows a design to be modified to meet new requirements, a manufacturing plan to be generated, etc.
Hybrid modeling is a commonly used term when NURBS and parametric modeling are implemented together. Using a combination of geometric and freeform surfaces can provide a powerful method of 3D modeling. Areas of freeform data can be combined with exact geometric surfaces to create a hybrid model. A typical example of this would be the reverse engineering of a cylinder head, which includes freeform cast features, such as water jackets and high-tolerance machined areas.
Reverse engineering is also used by businesses to bring existing physical geometry into digital product development environments, to make a digital 3D record of their own products, or to assess competitors' products. It is used to analyze how a product works, what it does, what components it has; estimate costs; identify potential patent infringement; etc.
Value engineering, a related activity that is also used by businesses, involves deconstructing and analyzing products. However, the objective is to find opportunities for cost-cutting.
Printed circuit boards
Reverse engineering of printed circuit boards involves recreating fabrication data for a particular circuit board. This is done primarily to identify a design and to learn its functional and structural characteristics. It also allows for the discovery of the design principles behind a product, especially if this design information is not easily available.
Outdated PCBs are often subject to reverse engineering, especially when they perform highly critical functions such as powering machinery or other electronic components. Reverse engineering these old parts can allow the reconstruction of a PCB that performs some crucial task, help in finding alternatives that provide the same function, or support upgrading the old PCB.
Reverse engineering PCBs largely follows the same series of steps. First, images are created by drawing, scanning, or taking photographs of the PCB. Then, these images are ported to suitable reverse engineering software in order to create a rudimentary design for the new PCB. The quality of image that is necessary for suitable reverse engineering is proportional to the complexity of the PCB itself. More complicated PCBs require well-lit photographs on dark backgrounds, while fairly simple PCBs can be recreated with just basic dimensioning. Each layer of the PCB is carefully recreated in the software with the intent of producing a final design as close as possible to the initial one. Then, the schematics for the circuit are finally generated using an appropriate tool.
Software
In 1990, the Institute of Electrical and Electronics Engineers (IEEE) defined (software) reverse engineering (SRE) as "the process of analyzing a subject system to identify the system's components and their interrelationships and to create representations of the system in another form or at a higher level of abstraction", in which the "subject system" is the end product of software development. Reverse engineering is a process of examination only, and the software system under consideration is not modified, which would otherwise be re-engineering or restructuring. Reverse engineering can be performed from any stage of the product cycle, not necessarily from the functional end product.
There are two components in reverse engineering: redocumentation and design recovery. Redocumentation is the creation of new representation of the computer code so that it is easier to understand. Meanwhile, design recovery is the use of deduction or reasoning from general knowledge or personal experience of the product to understand the product's functionality fully. It can also be seen as "going backwards through the development cycle". In this model, the output of the implementation phase (in source code form) is reverse-engineered back to the analysis phase, in an inversion of the traditional waterfall model. Another term for this technique is program comprehension. The Working Conference on Reverse Engineering (WCRE) has been held yearly to explore and expand the techniques of reverse engineering. Computer-aided software engineering (CASE) and automated code generation have contributed greatly in the field of reverse engineering.
Software anti-tamper technology like obfuscation is used to deter both reverse engineering and re-engineering of proprietary software and software-powered systems. In practice, two main types of reverse engineering emerge. In the first case, source code is already available for the software, but higher-level aspects of the program, which are perhaps poorly documented or documented but no longer valid, are discovered. In the second case, there is no source code available for the software, and any efforts towards discovering one possible source code for the software are regarded as reverse engineering. The second usage of the term is more familiar to most people. Reverse engineering of software can make use of the clean room design technique to avoid copyright infringement.
On a related note, black box testing in software engineering has a lot in common with reverse engineering. The tester usually has the API but has the goal of finding bugs and undocumented features by bashing the product from outside.
Other purposes of reverse engineering include security auditing, removal of copy protection ("cracking"), circumvention of access restrictions often present in consumer electronics, customization of embedded systems (such as engine management systems), in-house repairs or retrofits, enabling of additional features on low-cost "crippled" hardware (such as some graphics card chip-sets), or even mere satisfaction of curiosity.
Binary software
Binary reverse engineering is performed if source code for a software is unavailable. This process is sometimes termed reverse code engineering, or RCE. For example, decompilation of binaries for the Java platform can be accomplished by using Jad. One famous case of reverse engineering was the first non-IBM implementation of the PC BIOS, which launched the historic IBM PC compatible industry that has been the overwhelmingly-dominant computer hardware platform for many years. Reverse engineering of software is protected in the US by the fair use exception in copyright law. The Samba software, which allows systems that do not run Microsoft Windows systems to share files with systems that run it, is a classic example of software reverse engineering since the Samba project had to reverse-engineer unpublished information about how Windows file sharing worked so that non-Windows computers could emulate it. The Wine project does the same thing for the Windows API, and OpenOffice.org is one party doing that for the Microsoft Office file formats. The ReactOS project is even more ambitious in its goals by striving to provide binary (ABI and API) compatibility with the current Windows operating systems of the NT branch, which allows software and drivers written for Windows to run on a clean-room reverse-engineered free software (GPL) counterpart. WindowsSCOPE allows for reverse-engineering the full contents of a Windows system's live memory including a binary-level, graphical reverse engineering of all running processes.
Another classic, if not well-known, example is that in 1987 Bell Laboratories reverse-engineered the Mac OS System 4.1, originally running on the Apple Macintosh SE, so that it could run it on RISC machines of their own.
Binary software techniques
Reverse engineering of software can be accomplished by various methods.
The three main groups of software reverse engineering are
Analysis through observation of information exchange, most prevalent in protocol reverse engineering, which involves using bus analyzers and packet sniffers, such as for accessing a computer bus or computer network connection and revealing the traffic data thereon. Bus or network behavior can then be analyzed to produce a standalone implementation that mimics that behavior. That is especially useful for reverse engineering device drivers. Sometimes, reverse engineering on embedded systems is greatly assisted by tools deliberately introduced by the manufacturer, such as JTAG ports or other debugging means. In Microsoft Windows, low-level debuggers such as SoftICE are popular.
Disassembly using a disassembler, meaning the raw machine language of the program is read and understood in its own terms, only with the aid of machine-language mnemonics. It works on any computer program but can take quite some time, especially for those who are not used to machine code. The Interactive Disassembler is a particularly popular tool. (A minimal bytecode-level sketch of this idea appears after this list.)
Decompilation using a decompiler, a process that tries, with varying results, to recreate the source code in some high-level language for a program only available in machine code or bytecode.
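Disassembling native machine code requires a dedicated disassembler such as the tools mentioned above, but the underlying idea can be illustrated in a self-contained way at the Python bytecode level with the standard-library dis module, which prints an offset, an opcode mnemonic and an argument for each instruction, much as a machine-code disassembler produces a mnemonic listing for a native binary. This is only an analogy, not machine-code disassembly, and the sample function is invented for the example.

```python
import dis

def checksum(data: bytes) -> int:
    """Toy function whose compiled bytecode we will inspect."""
    total = 0
    for b in data:
        total = (total + b) & 0xFF
    return total

# Print the bytecode disassembly: one line per instruction, showing its
# offset, mnemonic (e.g. LOAD_FAST) and argument.
dis.dis(checksum)
```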
Software classification
Software classification is the process of identifying similarities between different software binaries (such as two different versions of the same binary) used to detect code relations between software samples. The task was traditionally done manually for several reasons (such as patch analysis for vulnerability detection and copyright infringement), but it can now be done somewhat automatically for large numbers of samples.
This method is used mostly for long and thorough reverse engineering tasks (complete analysis of a complex algorithm or a big piece of software). In general, statistical classification is considered to be a hard problem, which is also true for software classification, so there are few solutions or tools that handle this task well.
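One very crude, illustrative way to score the similarity of two binaries is to compare their normalized byte histograms, as in the sketch below (plain Python; the file names are hypothetical). Real classification tools work on much richer features such as disassembled functions and control-flow graphs, but the sketch conveys the basic idea of turning each sample into a signature and comparing signatures.

```python
import math
from collections import Counter

def byte_histogram(path):
    """Normalized frequency of each byte value in a file."""
    counts = Counter()
    with open(path, "rb") as f:
        while chunk := f.read(65536):
            counts.update(chunk)
    total = sum(counts.values())
    return {b: c / total for b, c in counts.items()}

def cosine_similarity(h1, h2):
    keys = set(h1) | set(h2)
    dot = sum(h1.get(k, 0.0) * h2.get(k, 0.0) for k in keys)
    n1 = math.sqrt(sum(v * v for v in h1.values()))
    n2 = math.sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2)

# Hypothetical sample paths: two versions of the same program should score
# much closer to 1.0 than two unrelated binaries.
score = cosine_similarity(byte_histogram("app_v1.bin"), byte_histogram("app_v2.bin"))
print(f"byte-histogram similarity: {score:.3f}")
```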
Source code
A number of UML tools refer to the process of importing and analysing source code to generate UML diagrams as "reverse engineering". See List of UML tools.
Although UML is one approach to providing "reverse engineering", more recent advances in international standards activities have resulted in the development of the Knowledge Discovery Metamodel (KDM). The standard delivers an ontology for the intermediate (or abstracted) representation of programming language constructs and their interrelationships. An Object Management Group standard (on its way to becoming an ISO standard as well), KDM has started to take hold in industry with the development of tools and analysis environments that can deliver the extraction and analysis of source, binary, and byte code. For source code analysis, KDM's granular standards architecture enables the extraction of software system flows (data, control, and call maps), architectures, and business layer knowledge (rules, terms, and process). The standard enables the use of a common data format (XMI), enabling the correlation of the various layers of system knowledge for either detailed analysis (such as root cause or impact) or derived analysis (such as business process extraction). Although efforts to represent language constructs can be never-ending because of the number of languages, the continuous evolution of software languages, and the development of new languages, the standard does allow for the use of extensions to support the broad language set as well as evolution. KDM is compatible with UML, BPMN, RDF, and other standards, enabling migration into other environments and thus leveraging system knowledge for efforts such as software system transformation and enterprise business layer analysis.
Protocols
Protocols are sets of rules that describe message formats and how messages are exchanged: the protocol state machine. Accordingly, the problem of protocol reverse-engineering can be partitioned into two subproblems: message format and state-machine reverse-engineering.
The message formats have traditionally been reverse-engineered by a tedious manual process, which involved analysis of how protocol implementations process messages, but recent research has proposed a number of automatic solutions. Typically, the automatic approaches either group observed messages into clusters by using various clustering analyses, or they emulate the protocol implementation while tracing the message processing.
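The sketch below gives a deliberately simplified picture of the clustering family of approaches: captured messages (hypothetical byte strings here) are reduced to sets of byte 2-grams and greedily grouped by Jaccard similarity, so that messages sharing a common header or field layout end up in the same cluster. Real systems use far more sophisticated sequence alignment and field inference, so this is only a conceptual illustration.

```python
def ngrams(msg: bytes, n: int = 2) -> set:
    """Set of byte n-grams appearing in a message."""
    return {msg[i:i + n] for i in range(len(msg) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cluster(messages, threshold=0.4):
    """Greedy single-pass clustering of captured messages by 2-gram overlap."""
    clusters = []
    for msg in messages:
        sig = ngrams(msg)
        for c in clusters:
            if jaccard(sig, c["sig"]) >= threshold:
                c["members"].append(msg)
                c["sig"] |= sig          # grow the cluster signature
                break
        else:
            clusters.append({"sig": sig, "members": [msg]})
    return clusters

# Hypothetical captured traffic: two message types with different headers
# end up in two separate clusters.
captured = [
    b"LOGIN user=alice token=1234",
    b"LOGIN user=bob token=9999",
    b"DATA seq=0001 payload=xxxx",
    b"DATA seq=0002 payload=yyyy",
]
for i, c in enumerate(cluster(captured)):
    print(f"cluster {i}: {c['members']}")
```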
There has been less work on reverse-engineering of state-machines of protocols. In general, the protocol state-machines can be learned either through a process of offline learning, which passively observes communication and attempts to build the most general state-machine accepting all observed sequences of messages, or through online learning, which allows interactive generation of probing sequences of messages and listening to responses to those probing sequences. In general, offline learning of small state-machines is known to be NP-complete, but online learning can be done in polynomial time. An automatic offline approach has been demonstrated by Comparetti et al. and an online approach by Cho et al.
Other components of typical protocols, like encryption and hash functions, can be reverse-engineered automatically as well. Typically, the automatic approaches trace the execution of protocol implementations and try to detect buffers in memory holding unencrypted packets.
Integrated circuits/smart cards
Reverse engineering is an invasive and destructive form of analyzing a smart card. The attacker uses chemicals to etch away layer after layer of the smart card and takes pictures with a scanning electron microscope (SEM). That technique can reveal the complete hardware and software part of the smart card. The major problem for the attacker is to bring everything into the right order to find out how everything works. The makers of the card try to hide keys and operations by mixing up memory positions, such as by bus scrambling.
In some cases, it is even possible to attach a probe to measure voltages while the smart card is still operational. The makers of the card employ sensors to detect and prevent that attack. That attack is not very common because it requires both a large investment in effort and special equipment that is generally available only to large chip manufacturers. Furthermore, the payoff from this attack is low since other security techniques are often used such as shadow accounts. It is still uncertain whether attacks against chip-and-PIN cards to replicate encryption data and then to crack PINs would provide a cost-effective attack on multifactor authentication.
Full reverse engineering proceeds in several major steps.
The first step after images have been taken with a SEM is stitching the images together, which is necessary because each layer cannot be captured by a single shot. A SEM needs to sweep across the area of the circuit and take several hundred images to cover the entire layer. Image stitching takes as input several hundred pictures and outputs a single properly-overlapped picture of the complete layer.
Next, the stitched layers need to be aligned because the sample, after etching, cannot be put into the exact same position relative to the SEM each time. Therefore, the stitched versions will not overlap in the correct fashion, as on the real circuit. Usually, three corresponding points are selected, and a transformation applied on the basis of that.
To extract the circuit structure, the aligned, stitched images need to be segmented, which highlights the important circuitry and separates it from the uninteresting background and insulating materials.
Finally, the wires can be traced from one layer to the next, and the netlist of the circuit, which contains all of the circuit's information, can be reconstructed.
Military applications
Reverse engineering is often used by people to copy other nations' technologies, devices, or information that have been obtained by regular troops in the field or by intelligence operations. It was often used during the Second World War and the Cold War. Here are well-known examples from the Second World War and later:
Jerry can: British and American forces in WW2 noticed that the Germans had gasoline cans with an excellent design. They reverse-engineered copies of those cans, which became popularly known as "Jerry cans".
Nakajima G5N: In 1939, the U.S. Douglas Aircraft Company sold its DC-4E airliner prototype to Imperial Japanese Airways, which was secretly acting as a front for the Imperial Japanese Navy, which wanted a long-range strategic bomber but had been hindered by the Japanese aircraft industry's inexperience with heavy long-range aircraft. The DC-4E was transferred to the Nakajima Aircraft Company and dismantled for study; as a cover story, the Japanese press reported that it had crashed in Tokyo Bay. The wings, engines, and landing gear of the G5N were copied directly from the DC-4E.
Panzerschreck: The Germans captured an American bazooka during the Second World War and reverse engineered it to create the larger Panzerschreck.
Tupolev Tu-4: In 1944, three American B-29 bombers on missions over Japan were forced to land in the Soviet Union. The Soviets, who did not have a similar strategic bomber, decided to copy the B-29. Within three years, they had developed the Tu-4, a nearly-perfect copy.
SCR-584 radar: copied by the Soviet Union after the Second World War, it is known for a few modifications - СЦР-584, Бинокль-Д.
V-2 rocket: Technical documents for the V-2 and related technologies were captured by the Western Allies at the end of the war. The Americans focused their reverse engineering efforts via Operation Paperclip, which led to the development of the PGM-11 Redstone rocket. The Soviets used captured German engineers to reproduce technical documents and plans and worked from captured hardware to make their clone of the rocket, the R-1. Thus began the postwar Soviet rocket program, which led to the R-7 and the beginning of the space race.
K-13/R-3S missile (NATO reporting name AA-2 Atoll), a Soviet reverse-engineered copy of the AIM-9 Sidewinder, was made possible after a Taiwanese (ROCAF) AIM-9B hit a Chinese PLA MiG-17 without exploding in September 1958. The missile became lodged within the airframe, and the pilot returned to base with what Soviet scientists would describe as a university course in missile development.
Toophan missile: In May 1975, negotiations between Iran and Hughes Missile Systems on co-production of the BGM-71 TOW and Maverick missiles stalled over disagreements in the pricing structure, the subsequent 1979 revolution ending all plans for such co-production. Iran was later successful in reverse-engineering the missile and now produces its own copy, the Toophan.
China has reverse-engineered many examples of Western and Russian hardware, from fighter aircraft to missiles and HMMWV cars, such as the MiG-15, 17, 19, and 21 (which became the J-2, 5, 6, and 7) and the Su-33 (which became the J-15).
During the Second World War, Polish and British cryptographers studied captured German "Enigma" message encryption machines for weaknesses. Their operation was then simulated on electromechanical devices, "bombes", which tried all the possible scrambler settings of the "Enigma" machines that helped the breaking of coded messages that had been sent by the Germans.
Also during the Second World War, British scientists analyzed and defeated a series of increasingly-sophisticated radio navigation systems used by the Luftwaffe to perform guided bombing missions at night. The British countermeasures to the system were so effective that in some cases, German aircraft were led by signals to land at RAF bases since they believed that they had returned to German territory.
Gene networks
Reverse engineering concepts have been applied to biology as well, specifically to the task of understanding the structure and function of gene regulatory networks. They regulate almost every aspect of biological behavior and allow cells to carry out physiological processes and responses to perturbations. Understanding the structure and the dynamic behavior of gene networks is therefore one of the paramount challenges of systems biology, with immediate practical repercussions in several applications that are beyond basic research.
There are several methods for reverse engineering gene regulatory networks by using molecular biology and data science methods. They have been generally divided into six classes:
Coexpression methods are based on the notion that if two genes exhibit a similar expression profile, they may be related although no causation can be simply inferred from coexpression.
Sequence motif methods analyze gene promoters to find specific transcription factor binding domains. If a transcription factor is predicted to bind a promoter of a specific gene, a regulatory connection can be hypothesized.
Chromatin ImmunoPrecipitation (ChIP) methods investigate the genome-wide profile of DNA binding of chosen transcription factors to infer their downstream gene networks.
Orthology methods transfer gene network knowledge from one species to another.
Literature methods implement text mining and manual research to identify putative or experimentally-proven gene network connections.
Transcriptional complexes methods leverage information on protein-protein interactions between transcription factors, thus extending the concept of gene networks to include transcriptional regulatory complexes.
Often, gene network reliability is tested by genetic perturbation experiments followed by dynamic modelling, based on the principle that removing one network node has predictable effects on the functioning of the remaining nodes of the network.
Applications of the reverse engineering of gene networks range from understanding mechanisms of plant physiology to the highlighting of new targets for anticancer therapy.
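As a minimal illustration of the first class listed above (coexpression), the sketch below builds a toy expression matrix in Python/NumPy (synthetic data, hypothetical gene names), computes pairwise Pearson correlations across samples, and keeps an edge wherever the absolute correlation exceeds a cutoff. As stressed above, such edges suggest relatedness but say nothing about causation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expression matrix: rows = genes, columns = samples.
genes = ["geneA", "geneB", "geneC", "geneD"]
base = rng.normal(size=20)
expr = np.vstack([
    base + 0.1 * rng.normal(size=20),    # geneA
    base + 0.1 * rng.normal(size=20),    # geneB (co-expressed with A)
    rng.normal(size=20),                 # geneC (unrelated)
    -base + 0.1 * rng.normal(size=20),   # geneD (anti-correlated with A)
])

corr = np.corrcoef(expr)                 # pairwise Pearson correlation

# Keep an undirected edge wherever |r| exceeds a chosen cutoff.
cutoff = 0.8
for i in range(len(genes)):
    for j in range(i + 1, len(genes)):
        if abs(corr[i, j]) > cutoff:
            print(f"{genes[i]} -- {genes[j]}   r = {corr[i, j]:+.2f}")
```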
Overlap with patent law
Reverse engineering applies primarily to gaining understanding of a process or artifact in which the manner of its construction, use, or internal processes has not been made clear by its creator.
Patented items do not of themselves have to be reverse-engineered to be studied, for the essence of a patent is that inventors provide a detailed public disclosure themselves, and in return receive legal protection of the invention that is involved. However, an item produced under one or more patents could also include other technology that is not patented and not disclosed. Indeed, one common motivation of reverse engineering is to determine whether a competitor's product contains patent infringement or copyright infringement.
Legality
United States
In the United States, even if an artifact or process is protected by trade secrets, reverse-engineering the artifact or process is often lawful if it has been legitimately obtained.
Reverse engineering of computer software often falls under both contract law (as a breach of contract) and any other relevant laws. That is because most end-user license agreements specifically prohibit it, and US courts have ruled that if such terms are present, they override the copyright law that expressly permits it (see Bowers v. Baystate Technologies). According to Section 103(f) of the Digital Millennium Copyright Act (17 U.S.C. § 1201 (f)), a person in legal possession of a program may reverse-engineer and circumvent its protection if that is necessary to achieve "interoperability", a term that broadly covers other devices and programs that can interact with it, make use of it, and use and transfer data to and from it in useful ways. A limited exemption exists that allows the knowledge thus gained to be shared and used for interoperability purposes.
European Union
EU Directive 2009/24 on the legal protection of computer programs, which superseded an earlier (1991) directive, governs reverse engineering in the European Union.
See also
Antikythera mechanism
Backward induction
Benchmarking
Bus analyzer
Chonda
Clone (computing)
Clean room design
CMM
Code morphing
Connectix Virtual Game Station
Counterfeiting
Cryptanalysis
Decompile
Deformulation
Digital Millennium Copyright Act (DMCA)
Disassembler
Dongle
Forensic engineering
Industrial CT scanning
Interactive Disassembler
Knowledge Discovery Metamodel
Laser scanner
List of production topics
Listeroid Engines
Logic analyzer
Paycheck
Repurposing
Reverse architecture
Round-trip engineering
Retrodiction
Sega v. Accolade
Software archaeology
Software cracking
Structured light digitizer
Value engineering
AI-assisted reverse engineering
Notes
References
Sources
Elvidge, Julia, "Using Reverse Engineering to Discover Patent Infringement," Chipworks, Sept. 2010. Online: http://www.photonics.com/Article.aspx?AID=44063
Hausi A. Müller and Holger M. Kienle, "A Small Primer on Software Reverse Engineering," Technical Report, University of Victoria, 17 pages, March 2009. Online: http://holgerkienle.wikispaces.com/file/view/MK-UVic-09.pdf
Heines, Henry, "Determining Infringement by X-Ray Diffraction," Chemical Engineering Process, Jan. 1999 (example of reverse engineering used to detect IP infringement)
Samuelson, Pamela and Scotchmer, Suzanne, "The Law and Economics of Reverse Engineering," 111 Yale L.J. 1575 (2002). Online: http://people.ischool.berkeley.edu/~pam/papers/l&e%20reveng3.pdf
Schulman, Andrew, "Hiding in Plain Sight: Using Reverse Engineering to Uncover Software Patent Infringement," Intellectual Property Today, Nov. 2010. Online: http://www.iptoday.com/issues/2010/11/hiding-in-plain-sight-using-reverse-engineering-to-uncover-software-patent-infringement.asp
Schulman, Andrew, "Open to Inspection: Using Reverse Engineering to Uncover Software Prior Art," New Matter (Calif. State Bar IP Section), Summer 2011 (Part 1); Fall 2011 (Part 2). Online: http://www.SoftwareLitigationConsulting.com
Computer security
Espionage
Patent law
Industrial engineering
Technical intelligence
Technological races
NP-complete problems
Organic synthesis
Organic synthesis is a branch of chemical synthesis concerned with the construction of organic compounds. Organic compounds are molecules built around covalently linked carbon atoms, most commonly bonded to hydrogen and often to other elements such as oxygen and nitrogen. Within the general subject of organic synthesis, there are many different types of synthetic routes that can be completed, including total synthesis, stereoselective synthesis, automated synthesis, and many more. Additionally, in understanding organic synthesis it is necessary to be familiar with the methodology, techniques, and applications of the subject.
Total synthesis
A total synthesis refers to the complete chemical synthesis of molecules from simple, natural precursors. Total synthesis is accomplished either via a linear or convergent approach. In a linear synthesis—often adequate for simple structures—several steps are performed sequentially until the molecule is complete; the chemical compounds made in each step are called synthetic intermediates. Most often, each step in a synthesis is a separate reaction taking place to modify the starting materials. For more complex molecules, a convergent synthetic approach may be better suited. This type of reaction scheme involves the individual preparations of several key intermediates, which are then combined to form the desired product.
Robert Burns Woodward, who received the 1965 Nobel Prize for Chemistry for several total syntheses including his synthesis of strychnine, is regarded as the grandfather of modern organic synthesis. Some latter-day examples of syntheses include Wender's, Holton's, Nicolaou's, and Danishefsky's total syntheses of the anti-cancer drug paclitaxel (trade name Taxol).
Methodology and applications
Before beginning any organic synthesis, it is important to understand the chemical reactions, reagents, and conditions required in each step to guarantee successful product formation. When determining optimal reaction conditions for a given synthesis, the goal is to produce an adequate yield of pure product with as few steps as possible. When deciding conditions for a reaction, the literature can offer examples of previous reaction conditions that can be repeated, or a new synthetic route can be developed and tested. For practical, industrial applications additional reaction conditions must be considered to include the safety of both the researchers and the environment, as well as product purity.
Synthetic techniques
Organic synthesis often requires many steps to separate and purify products. Depending on the physical state of the product to be isolated, different techniques are required. For liquid products, a very common separation technique is liquid–liquid extraction, and for solid products, filtration (gravity or vacuum) can be used.
Liquid–liquid extraction
Liquid–liquid extraction uses the density and polarity of the product and solvents to perform a separation. Based on the concept of "like dissolves like", non-polar compounds are more soluble in non-polar solvents, and polar compounds are more soluble in polar solvents. By using this concept, the relative solubility of compounds can be exploited by adding immiscible solvents to the same flask and partitioning the product into the solvent of the most similar polarity. Solvent immiscibility is of major importance, as it allows for the formation of two layers in the flask, one layer containing the side-reaction material and one containing the product. As a result of the differing densities of the layers, the product-containing layer can be isolated and the other layer removed.
Heated reactions and reflux condensers
Many reactions require heat to increase reaction speed. However, in many situations increased heat can cause the solvent to boil uncontrollably, which negatively affects the reaction and can reduce product yield. To address this issue, reflux condensers can be fitted to the reaction glassware. A reflux condenser is a piece of glassware with a water jacket that has an inlet and an outlet, so that cooling water runs through it against gravity. This flow of water cools any escaping vapour and condenses it back into the reaction flask to continue reacting and to ensure that no material is lost. The use of reflux condensers is an important technique within organic syntheses and is utilized in reflux steps as well as recrystallization steps.
When used for refluxing a solution, the reflux condenser is fitted and closely observed. Reflux occurs when condensate can be seen dripping back into the reaction flask from the condenser, at a rate of roughly one drop every second or every few seconds.
For recrystallization, the product-containing solution is equipped with a condenser and brought to reflux again. Reflux is complete when the product-containing solution is clear. Once clear, the reaction is taken off heat and allowed to cool which will cause the product to re-precipitate, yielding a purer product.
Gravity and vacuum filtration
Solid products can be separated from a reaction mixture using filtration techniques. To obtain solid products a vacuum filtration apparatus can be used.
Vacuum filtration uses suction to pull liquid through a Büchner funnel equipped with filter paper, which catches the desired solid product. This process removes any unwanted solution in the reaction mixture by pulling it into the filtration flask and leaving the desired product to collect on the filter paper.
Liquid products can also be separated from solids by using gravity filtration. In this separatory method, filter paper is folded into a funnel and placed on top of a reaction flask. The reaction mixture is then poured through the filter paper, at a rate such that the total volume of liquid in the funnel does not exceed the volume of the funnel. This method allows for the product to be separated from other reaction components by the force of gravity, instead of a vacuum.
Stereoselective synthesis
Most complex natural products are chiral, and the bioactivity of chiral molecules varies with the enantiomer. Some total syntheses target racemic mixtures, which are mixtures of both possible enantiomers. A single enantiomer can then be selected via enantiomeric resolution.
As chemistry has developed, methods of stereoselective catalysis and kinetic resolution have been introduced whereby reactions can be directed, producing only one enantiomer rather than a racemic mixture. Early examples include stereoselective hydrogenations (e.g., as reported by William Knowles and Ryōji Noyori) and functional group modifications such as the asymmetric epoxidation by Barry Sharpless; for these advancements in stereochemical preference, these chemists were awarded the Nobel Prize in Chemistry in 2001. Such preferential stereochemical reactions give chemists a much more diverse choice of enantiomerically pure materials.
Using techniques developed by Robert B. Woodward, paired with advancements in synthetic methodology, chemists have been able to synthesize stereochemically complex molecules selectively and without racemization. Stereocontrol allows target molecules to be synthesized as pure enantiomers (i.e., without need for resolution). Such techniques are referred to as stereoselective synthesis.
Synthesis design
Many synthetic procedures are developed from a retrosynthetic framework, a type of synthetic design developed by Elias James Corey, for which he won the Nobel Prize in Chemistry in 1990. In this approach, the synthesis is planned backwards from the product, obeying standard rules of chemical reactivity. Each step breaks down the parent structure into achievable components, which are shown via graphical schemes with retrosynthetic arrows (drawn as ⇒, which in effect means "is made from"). Retrosynthesis allows for the visualization of desired synthetic designs.
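As a loose illustration of how a retrosynthetic analysis can be recorded, the sketch below represents each disconnection ("⇒", read as "is made from") as a node in a tree whose children are the required precursors. The molecule names are placeholders rather than a validated route, and the class name is invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Disconnection:
    """One retrosynthetic step: a target and the precursors it 'is made from'."""
    target: str
    precursors: list["Disconnection"] = field(default_factory=list)

    def show(self, depth: int = 0) -> None:
        # Print the tree top-down, from the final product to purchasable materials.
        print("  " * depth + self.target)
        for p in self.precursors:
            p.show(depth + 1)

# Placeholder analysis: a target ketone traced back to simple starting materials.
route = Disconnection("target ketone", [
    Disconnection("secondary alcohol", [
        Disconnection("aldehyde"),
        Disconnection("Grignard reagent", [Disconnection("alkyl halide")]),
    ]),
])
route.show()
```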
Automated organic synthesis
A recent development within organic synthesis is automated synthesis. To conduct organic synthesis without human involvement, researchers are adapting existing synthetic methods and techniques to create entirely automated synthetic processes using organic synthesis software. This type of synthesis is advantageous as synthetic automation can increase yield with continual "flowing" reactions. In flow chemistry, substrates are continually fed into the reaction to produce a higher yield. Previously, this type of reaction was reserved for large-scale industrial chemistry but has recently transitioned to bench-scale chemistry to improve the efficiency of reactions on a smaller scale.
SRI International, a nonprofit research institute, is currently integrating automated synthesis into its work. SRI International has developed Autosyn, an automated multi-step chemical synthesizer that can synthesize many FDA-approved small-molecule drugs. This synthesizer demonstrates the versatility of substrates and the capacity to potentially expand the type of research conducted on novel drug molecules without human intervention.
Automated chemistry and the automated synthesizers used demonstrate a potential direction for synthetic chemistry in the future.
Characterization
Characterization is essential to organic synthesis. Characterization refers to the measurement of the chemical and physical properties of a given compound and comes in many forms. Examples of common characterization methods include nuclear magnetic resonance (NMR) spectroscopy, mass spectrometry, Fourier-transform infrared spectroscopy (FTIR), and melting point analysis. Each of these techniques allows a chemist to obtain structural information about a newly synthesized organic compound. Depending on the nature of the product, the characterization method used can vary.
Relevance
Organic synthesis is an important chemical process that is integral to many scientific fields. Examples of fields beyond chemistry that require organic synthesis include the medical and pharmaceutical industries, among many others. Organic processes allow for the industrial-scale production of pharmaceutical products. An example of such a synthesis is that of ibuprofen, which can be synthesized through a series of reactions including reduction, treatment with HCl, formation of a Grignard reagent, and carboxylation.
In the synthesis of ibuprofen proposed by Kjonaas et al., the starting material, p-isobutylacetophenone, is reduced with sodium borohydride (NaBH4) to form an alcohol. The resulting intermediate is treated with HCl, which replaces the hydroxyl group with chlorine. The chloride is then reacted with magnesium turnings to form a Grignard reagent. This Grignard reagent is carboxylated, and the resulting product is worked up to give ibuprofen.
This synthetic route is just one of many medically and industrially relevant reactions that have been developed and continue to be used.
See also
Automated synthesis
Electrosynthesis
Methods in Organic Synthesis (journal)
Organic Syntheses (journal)
References
Further reading
External links
The Organic Synthesis Archive
Chemical synthesis database
https://web.archive.org/web/20070927231356/http://www.webreactions.net/search.html
https://www.organic-chemistry.org/synthesis/
Prof. Hans Reich's collection of natural product syntheses
Chemical synthesis semantic wiki
Free energy principle
The free energy principle is a theoretical framework suggesting that the brain reduces surprise or uncertainty by making predictions based on internal models and updating them using sensory input. It highlights the brain's objective of aligning its internal model with the external world to enhance prediction accuracy. This principle integrates Bayesian inference with active inference, where actions are guided by predictions and sensory feedback refines them. It has wide-ranging implications for comprehending brain function, perception, and action.
Overview
In biophysics and cognitive science, the free energy principle is a mathematical principle describing a formal account of the representational capacities of physical systems: that is, why things that exist look as if they track properties of the systems to which they are coupled.
It establishes that the dynamics of physical systems minimise a quantity known as surprisal (which is the negative log probability of some outcome); or equivalently, its variational upper bound, called free energy. The principle is used especially in Bayesian approaches to brain function, but also some approaches to artificial intelligence; it is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception-action loops in neuroscience.
The free energy principle models the behaviour of systems that are distinct from, but coupled to, another system (e.g., an embedding environment), where the degrees of freedom that implement the interface between the two systems are known as a Markov blanket. More formally, the free energy principle says that if a system has a "particular partition" (i.e., into particles, with their Markov blankets), then subsets of that system will track the statistical structure of other subsets (which are known as internal and external states or paths of a system).
The free energy principle is based on the Bayesian idea of the brain as an “inference engine.” Under the free energy principle, systems pursue paths of least surprise, or equivalently, minimize the difference between predictions based on their model of the world and their sensations and associated perceptions. This difference is quantified by variational free energy and is minimized by continuous correction of the system's model of the world, or by making the world more like the system's predictions. By actively changing the world to make it closer to the expected state, systems can also minimize their free energy. Friston assumes this to be the principle of all biological reaction. Friston also believes his principle applies to mental disorders as well as to artificial intelligence. AI implementations based on the active inference principle have shown advantages over other methods.
The free energy principle is a mathematical principle of information physics: much like the principle of maximum entropy or the principle of least action, it is true on mathematical grounds. To attempt to falsify the free energy principle is a category mistake, akin to trying to falsify calculus by making empirical observations. (One cannot invalidate a mathematical theory in this way; instead, one would need to derive a formal contradiction from the theory.) In a 2018 interview, Friston explained what it entails for the free energy principle to not be subject to falsification: "I think it is useful to make a fundamental distinction at this point—that we can appeal to later. The distinction is between a state and process theory; i.e., the difference between a normative principle that things may or may not conform to, and a process theory or hypothesis about how that principle is realized. Under this distinction, the free energy principle stands in stark distinction to things like predictive coding and the Bayesian brain hypothesis. This is because the free energy principle is what it is — a principle. Like Hamilton's principle of stationary action, it cannot be falsified. It cannot be disproven. In fact, there’s not much you can do with it, unless you ask whether measurable systems conform to the principle. On the other hand, hypotheses that the brain performs some form of Bayesian inference or predictive coding are what they are—hypotheses. These hypotheses may or may not be supported by empirical evidence." There are many examples of these hypotheses being supported by empirical evidence.
Background
The notion that self-organising biological systems – like a cell or brain – can be understood as minimising variational free energy is based upon Helmholtz’s work on unconscious inference and subsequent treatments in psychology and machine learning. Variational free energy is a function of observations and a probability density over their hidden causes. This variational density is defined in relation to a probabilistic model that generates predicted observations from hypothesized causes. In this setting, free energy provides an approximation to Bayesian model evidence. Therefore, its minimisation can be seen as a Bayesian inference process. When a system actively makes observations to minimise free energy, it implicitly performs active inference and maximises the evidence for its model of the world.
However, free energy is also an upper bound on the self-information of outcomes, where the long-term average of surprise is entropy. This means that if a system acts to minimise free energy, it will implicitly place an upper bound on the entropy of the outcomes – or sensory states – it samples.
Relationship to other theories
Active inference is closely related to the good regulator theorem and related accounts of self-organisation, such as self-assembly, pattern formation, autopoiesis and practopoiesis. It addresses the themes considered in cybernetics, synergetics and embodied cognition. Because free energy can be expressed as the expected energy of observations under the variational density minus its entropy, it is also related to the maximum entropy principle. Finally, because the time average of energy is action, the principle of minimum variational free energy is a principle of least action. Active inference allowing for scale invariance has also been applied to other theories and domains. For instance, it has been applied to sociology, linguistics and communication, semiotics, and epidemiology among others.
Negative free energy is formally equivalent to the evidence lower bound, which is commonly used in machine learning to train generative models, such as variational autoencoders.
Action and perception
Active inference applies the techniques of approximate Bayesian inference to infer the causes of sensory data from a 'generative' model of how that data is caused and then uses these inferences to guide action.
Bayes' rule characterizes the probabilistically optimal inversion of such a causal model, but applying it is typically computationally intractable, leading to the use of approximate methods.
In active inference, the leading class of such approximate methods are variational methods, for both practical and theoretical reasons: practical, as they often lead to simple inference procedures; and theoretical, because they are related to fundamental physical principles, as discussed above.
These variational methods proceed by minimizing an upper bound on the divergence between the Bayes-optimal inference (or 'posterior') and its approximation according to the method.
This upper bound is known as the free energy, and we can accordingly characterize perception as the minimization of the free energy with respect to inbound sensory information, and action as the minimization of the same free energy with respect to outbound action information.
This holistic dual optimization is characteristic of active inference, and the free energy principle is the hypothesis that all systems which perceive and act can be characterized in this way.
In order to exemplify the mechanics of active inference via the free energy principle, a generative model must be specified, and this typically involves a collection of probability density functions which together characterize the causal model.
One such specification is as follows.
The system is modelled as inhabiting a state space X, in the sense that its states form the points of this space.
The state space is then factorized according to X = Ψ × S × A × R, where Ψ is the space of 'external' states that are 'hidden' from the agent (in the sense of not being directly perceived or accessible), S is the space of sensory states that are directly perceived by the agent, A is the space of the agent's possible actions, and R is a space of 'internal' states that are private to the agent.
Note that in the following the states ψ, s, a and μ are functions of (continuous) time t. The generative model is the specification of the following density functions:
A sensory model, p_S, often written as p_S(s | ψ, a), characterizing the likelihood of sensory data given external states and actions;
a stochastic model of the environmental dynamics, p_Ψ, often written p_Ψ(ψ_t | ψ_{t−1}, a), characterizing how the external states are expected by the agent to evolve over time, given the agent's actions;
an action model, p_A, written p_A(a | μ, s), characterizing how the agent's actions depend upon its internal states and sensory data; and
an internal model, p_R, written p_R(μ | s), characterizing how the agent's internal states depend upon its sensory data.
These density functions determine the factors of a "joint model", which represents the complete specification of the generative model, and which can be written as
p(ψ_t, s, a, μ | ψ_{t−1}) = p_S(s | ψ_t, a) p_Ψ(ψ_t | ψ_{t−1}, a) p_A(a | μ, s) p_R(μ | s).
Bayes' rule then determines the "posterior density" p(ψ_t | s, a, μ, ψ_{t−1}), which expresses a probabilistically optimal belief about the external state given the preceding state and the agent's actions, sensory signals, and internal states.
Since computing p(ψ_t | s, a, μ, ψ_{t−1}) is computationally intractable, the free energy principle asserts the existence of a "variational density" q(ψ_t | μ), where the internal states μ parameterise q, which is an approximation to p(ψ_t | s, a, μ, ψ_{t−1}).
One then defines the free energy as
F(μ, a; s) = E_{q(ψ_t | μ)}[−log p(ψ_t, s, a, μ | ψ_{t−1})] − H[q(ψ_t | μ)]
and defines action and perception as the joint optimization problem
μ*, a* = argmin_{μ, a} F(μ, a; s),
where the internal states μ are typically taken to encode the parameters of the 'variational' density q and hence the agent's "best guess" about the posterior belief over Ψ.
Note that the free energy is also an upper bound on a measure of the agent's (marginal, or average) sensory surprise, and hence free energy minimization is often motivated by the minimization of surprise.
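To make the optimization concrete, the following minimal sketch evaluates the free energy of a discrete two-state generative model; the two hidden states, the prior, the likelihood and the grid search over q are all assumptions made for illustration and are not part of the formulation above. Minimising F over the variational density recovers the exact Bayesian posterior, and the minimum equals surprise, −log p(s).

```python
# A toy "perception" step: free energy minimisation for a two-state world.
import math

prior = {0: 0.7, 1: 0.3}          # p(psi): assumed prior over hidden states
likelihood = {0: 0.2, 1: 0.9}     # p(s=1 | psi): assumed sensory model

def free_energy(q1, s_observed=1):
    """F(q) = E_q[log q(psi) - log p(psi, s)] for q = (1 - q1, q1)."""
    F = 0.0
    for psi, q in ((0, 1 - q1), (1, q1)):
        if q == 0:
            continue              # 0 * log 0 contributes nothing
        p_s_given_psi = likelihood[psi] if s_observed == 1 else 1 - likelihood[psi]
        F += q * (math.log(q) - math.log(prior[psi] * p_s_given_psi))
    return F

# Exact Bayesian posterior and model evidence for the observation s = 1.
evidence = prior[0] * likelihood[0] + prior[1] * likelihood[1]
posterior1 = prior[1] * likelihood[1] / evidence

# A grid search over q1 stands in for the optimisation of the variational density.
best_q1 = min((i / 1000 for i in range(1001)), key=free_energy)

print("posterior p(psi=1|s):", posterior1)
print("argmin_q F          :", best_q1)                   # ~ the posterior
print("min F vs surprise   :", free_energy(best_q1), -math.log(evidence))
```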
Free energy minimisation
Free energy minimisation and self-organisation
Free energy minimisation has been proposed as a hallmark of self-organising systems when cast as random dynamical systems. This formulation rests on a Markov blanket (comprising action and sensory states) that separates internal and external states. If internal states and action minimise free energy, then they place an upper bound on the entropy of sensory states:
lim_{T→∞} (1/T) ∫_0^T F(s(t), μ(t)) dt ≥ lim_{T→∞} (1/T) ∫_0^T −log p(s(t)) dt = H[p(s)]
This is because – under ergodic assumptions – the long-term average of surprise is entropy. This bound resists a natural tendency to disorder – of the sort associated with the second law of thermodynamics and the fluctuation theorem. However, formulating a unifying principle for the life sciences in terms of concepts from statistical physics, such as random dynamical system, non-equilibrium steady state and ergodicity, places substantial constraints on the theoretical and empirical study of biological systems with the risk of obscuring all features that make biological systems interesting kinds of self-organizing systems.
Free energy minimisation and Bayesian inference
All Bayesian inference can be cast in terms of free energy minimisation. When free energy is minimised with respect to internal states, the Kullback–Leibler divergence between the variational and posterior density over hidden states is minimised. This corresponds to approximate Bayesian inference – when the form of the variational density is fixed – and exact Bayesian inference otherwise. Free energy minimisation therefore provides a generic description of Bayesian inference and filtering (e.g., Kalman filtering). It is also used in Bayesian model selection, where free energy can be usefully decomposed into complexity and accuracy:
F = D_KL[q(ψ | μ) || p(ψ)] − E_{q(ψ | μ)}[log p(s | ψ)] = complexity − accuracy
Models with minimum free energy provide an accurate explanation of data, under complexity costs (c.f., Occam's razor and more formal treatments of computational costs). Here, complexity is the divergence between the variational density and prior beliefs about hidden states (i.e., the effective degrees of freedom used to explain the data).
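A quick numerical check of this decomposition, with arbitrary toy numbers (the three-state prior, likelihood and variational density below are assumptions chosen only for illustration): the energy-minus-entropy and complexity-minus-accuracy forms of the free energy agree.

```python
# Check that F = E_q[-log p(s, psi)] - H[q] = KL[q || p(psi)] - E_q[log p(s | psi)].
import math

p_prior = [0.6, 0.3, 0.1]     # p(psi): assumed prior over three hidden states
p_lik   = [0.1, 0.5, 0.8]     # p(s | psi) for the observed s (assumed)
q       = [0.2, 0.5, 0.3]     # an arbitrary variational density

energy     = -sum(qi * math.log(pp * pl) for qi, pp, pl in zip(q, p_prior, p_lik))
entropy    = -sum(qi * math.log(qi) for qi in q)
complexity = sum(qi * math.log(qi / pp) for qi, pp in zip(q, p_prior))
accuracy   = sum(qi * math.log(pl) for qi, pl in zip(q, p_lik))

print(energy - entropy)       # both lines print the same free energy
print(complexity - accuracy)
```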
Free energy minimisation and thermodynamics
Variational free energy is an information-theoretic functional and is distinct from thermodynamic (Helmholtz) free energy. However, the complexity term of variational free energy shares the same fixed point as Helmholtz free energy (under the assumption the system is thermodynamically closed but not isolated). This is because if sensory perturbations are suspended (for a suitably long period of time), complexity is minimised (because accuracy can be neglected). At this point, the system is at equilibrium and internal states minimise Helmholtz free energy, by the principle of minimum energy.
Free energy minimisation and information theory
Free energy minimisation is equivalent to maximising the mutual information between sensory states and internal states that parameterise the variational density (for a fixed entropy variational density). This relates free energy minimization to the principle of minimum redundancy.
Free energy minimisation in neuroscience
Free energy minimisation provides a useful way to formulate normative (Bayes optimal) models of neuronal inference and learning under uncertainty and therefore subscribes to the Bayesian brain hypothesis. The neuronal processes described by free energy minimisation depend on the nature of the hidden states, which can comprise time-dependent variables, time-invariant parameters and the precision (inverse variance or temperature) of random fluctuations. Minimising free energy with respect to variables, parameters, and precision corresponds to inference, learning, and the encoding of uncertainty, respectively.
Perceptual inference and categorisation
Free energy minimisation formalises the notion of unconscious inference in perception and provides a normative (Bayesian) theory of neuronal processing. The associated process theory of neuronal dynamics is based on minimising free energy through gradient descent. This corresponds to generalised Bayesian filtering (where x̃ denotes a variable x in generalised coordinates of motion and D is a derivative matrix operator):
dμ̃/dt = Dμ̃ − ∂F(s̃, μ̃)/∂μ̃
Usually, the generative models that define free energy are non-linear and hierarchical (like cortical hierarchies in the brain). Special cases of generalised filtering include Kalman filtering, which is formally equivalent to predictive coding – a popular metaphor for message passing in the brain. Under hierarchical models, predictive coding involves the recurrent exchange of ascending (bottom-up) prediction errors and descending (top-down) predictions that is consistent with the anatomy and physiology of sensory and motor systems.
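As a rough illustration of predictive coding as gradient descent on free energy, the sketch below uses a single-level linear-Gaussian model with made-up variances and a single observation (all numerical values are assumptions): the belief is updated by precision-weighted prediction errors and settles on the exact posterior mean.

```python
# Predictive coding on a single-level linear-Gaussian model (illustrative values).
# Generative model: mu ~ N(prior_mean, prior_var),  s ~ N(mu, obs_var).

prior_mean, prior_var = 0.0, 1.0
obs_var = 0.5
s = 2.0                      # observed sensory datum

mu = prior_mean              # belief (posterior mode) to be updated
lr = 0.05                    # integration step for the gradient descent
for _ in range(2000):
    eps_sensory = (s - mu) / obs_var             # precision-weighted sensory prediction error
    eps_prior   = (mu - prior_mean) / prior_var  # precision-weighted prior prediction error
    mu += lr * (eps_sensory - eps_prior)         # dF/dmu = eps_prior - eps_sensory

# For a linear-Gaussian model the fixed point is the exact posterior mean.
exact = (s / obs_var + prior_mean / prior_var) / (1 / obs_var + 1 / prior_var)
print(mu, exact)             # both ~ 1.333
```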
Perceptual learning and memory
In predictive coding, optimising model parameters through a gradient descent on the time integral of free energy (free action) reduces to associative or Hebbian plasticity and is associated with synaptic plasticity in the brain.
Perceptual precision, attention and salience
Optimizing the precision parameters corresponds to optimizing the gain of prediction errors (c.f., Kalman gain). In neuronally plausible implementations of predictive coding, this corresponds to optimizing the excitability of superficial pyramidal cells and has been interpreted in terms of attentional gain.
With regard to the top-down vs. bottom-up controversy, which has been addressed as a major open problem of attention, a computational model has succeeded in illustrating the circular nature of the interplay between top-down and bottom-up mechanisms. Using an established emergent model of attention, namely SAIM, the authors proposed a model called PE-SAIM, which, in contrast to the standard version, approaches selective attention from a top-down position. The model takes into account the transmission of prediction errors to the same level or a level above, in order to minimise the energy function that indicates the difference between the data and its cause, or, in other words, between the generative model and the posterior. To increase validity, they also incorporated neural competition between stimuli into their model. A notable feature of this model is that the free energy function is reformulated purely in terms of prediction errors during task performance: the total energy function of the neural networks is expressed in terms of the prediction error between the generative model (prior) and the posterior, which changes over time.
Comparing the two models reveals a notable similarity between their respective results while also highlighting a remarkable discrepancy: in the standard version of the SAIM, the model's focus is mainly upon the excitatory connections, whereas in the PE-SAIM the inhibitory connections are leveraged to make the inference. The model has also proved able to predict EEG and fMRI data drawn from human experiments with high precision. In the same vein, Yahya et al. also applied the free energy principle to propose a computational model for template matching in covert selective visual attention that mostly relies on SAIM. According to this study, the total free energy of the whole state-space is obtained by inserting top-down signals into the original neural networks, from which a dynamical system comprising both feed-forward and backward prediction errors is derived.
Active inference
When gradient descent is applied to action a, motor control can be understood in terms of classical reflex arcs that are engaged by descending (corticospinal) predictions. This provides a formalism that generalizes the equilibrium point solution – to the degrees of freedom problem – to movement trajectories.
Active inference and optimal control
Active inference is related to optimal control by replacing value or cost-to-go functions with prior beliefs about state transitions or flow. This exploits the close connection between Bayesian filtering and the solution to the Bellman equation. However, active inference starts with (priors over) flow f = Γ·∇V + ∇×W, specified in terms of scalar V(x) and vector W(x) value functions of state space (c.f., the Helmholtz decomposition). Here, Γ is the amplitude of random fluctuations, and the cost function is defined in terms of the flow and the scalar value function. The priors over flow induce a prior over states that is the solution to the appropriate forward Kolmogorov equations. In contrast, optimal control optimises the flow, given a cost function, under the assumption that ∇×W = 0 (i.e., the flow is curl free or has detailed balance). Usually, this entails solving backward Kolmogorov equations.
Active inference and optimal decision (game) theory
Optimal decision problems (usually formulated as partially observable Markov decision processes) are treated within active inference by absorbing utility functions into prior beliefs. In this setting, states that have a high utility (low cost) are states an agent expects to occupy. By equipping the generative model with hidden states that model control, policies (control sequences) that minimise variational free energy lead to high utility states.
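The following sketch illustrates, under heavily simplified assumptions (a two-state, two-outcome world with invented probabilities and two hand-written policies), how absorbing utility into a prior preference over outcomes lets policies be scored by an expected free energy of the usual risk-plus-ambiguity form and selected by a softmax over its negative.

```python
# Expected free energy for two candidate policies (all numbers are assumptions).
import math

A = [[0.9, 0.1],          # p(o | s=0): state 0 mostly produces outcome 0
     [0.2, 0.8]]          # p(o | s=1): state 1 mostly produces outcome 1
C = [0.1, 0.9]            # prior preference over outcomes (utility as a prior)
predicted_states = {      # q(s | policy): where each policy is expected to lead
    "stay": [0.9, 0.1],
    "move": [0.1, 0.9],
}

def expected_free_energy(q_s):
    q_o = [sum(q_s[s] * A[s][o] for s in range(2)) for o in range(2)]
    risk = sum(q_o[o] * math.log(q_o[o] / C[o]) for o in range(2))       # KL to preferences
    ambiguity = sum(q_s[s] * -sum(A[s][o] * math.log(A[s][o]) for o in range(2))
                    for s in range(2))                                    # expected outcome entropy
    return risk + ambiguity

G = {pi: expected_free_energy(q_s) for pi, q_s in predicted_states.items()}
Z = sum(math.exp(-g) for g in G.values())
policy_posterior = {pi: math.exp(-g) / Z for pi, g in G.items()}
print(G)                   # "move" has the lower expected free energy...
print(policy_posterior)    # ...and therefore the higher posterior probability
```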
Neurobiologically, neuromodulators such as dopamine are considered to report the precision of prediction errors by modulating the gain of principal cells encoding prediction error. This is closely related to – but formally distinct from – the role of dopamine in reporting prediction errors per se and related computational accounts.
Active inference and cognitive neuroscience
Active inference has been used to address a range of issues in cognitive neuroscience, brain function and neuropsychiatry, including action observation, mirror neurons, saccades and visual search, eye movements, sleep, illusions, attention, action selection, consciousness, hysteria and psychosis. Explanations of action in active inference often depend on the idea that the brain has 'stubborn predictions' that it cannot update, leading to actions that cause these predictions to come true.
See also
Constructal law - Law of design evolution in nature, animate and inanimate
References
External links
Behavioral and Brain Sciences (by Andy Clark)
Biological systems
Systems theory
Computational neuroscience
Mathematical and theoretical biology
Developmental biology
Developmental biology is the study of the process by which animals and plants grow and develop. Developmental biology also encompasses the biology of regeneration, asexual reproduction, metamorphosis, and the growth and differentiation of stem cells in the adult organism.
Perspectives
The main processes involved in the embryonic development of animals are: tissue patterning (via regional specification and patterned cell differentiation); tissue growth; and tissue morphogenesis.
Regional specification refers to the processes that create the spatial patterns in a ball or sheet of initially similar cells. This generally involves the action of cytoplasmic determinants, located within parts of the fertilized egg, and of inductive signals emitted from signaling centers in the embryo. The early stages of regional specification do not generate functional differentiated cells, but cell populations committed to developing to a specific region or part of the organism. These are defined by the expression of specific combinations of transcription factors.
Cell differentiation relates specifically to the formation of functional cell types such as nerve, muscle, secretory epithelia, etc. Differentiated cells contain large amounts of specific proteins associated with cell function.
Morphogenesis relates to the formation of a three-dimensional shape. It mainly involves the orchestrated movements of cell sheets and of individual cells. Morphogenesis is important for creating the three germ layers of the early embryo (ectoderm, mesoderm, and endoderm) and for building up complex structures during organ development.
Tissue growth involves both an overall increase in tissue size, and also the differential growth of parts (allometry) which contributes to morphogenesis. Growth mostly occurs through cell proliferation but also through changes in cell size or the deposition of extracellular materials.
The development of plants involves similar processes to that of animals. However, plant cells are mostly immotile so morphogenesis is achieved by differential growth, without cell movements. Also, the inductive signals and the genes involved are different from those that control animal development.
Generative biology
Generative biology is the generative science that explores the dynamics guiding the development and evolution of biological morphological form.
Developmental processes
Cell differentiation
Cell differentiation is the process whereby different functional cell types arise in development. For example, neurons, muscle fibers and hepatocytes (liver cells) are well known types of differentiated cells. Differentiated cells usually produce large amounts of a few proteins that are required for their specific function and this gives them the characteristic appearance that enables them to be recognized under the light microscope. The genes encoding these proteins are highly active. Typically their chromatin structure is very open, allowing access for the transcription enzymes, and specific transcription factors bind to regulatory sequences in the DNA in order to activate gene expression. For example, NeuroD is a key transcription factor for neuronal differentiation, myogenin for muscle differentiation, and HNF4 for hepatocyte differentiation.
Cell differentiation is usually the final stage of development, preceded by several states of commitment which are not visibly differentiated. A single tissue, formed from a single type of progenitor cell or stem cell, often consists of several differentiated cell types. Control of their formation involves a process of lateral inhibition, based on the properties of the Notch signaling pathway. For example, in the neural plate of the embryo this system operates to generate a population of neuronal precursor cells in which NeuroD is highly expressed.
Regeneration
Regeneration indicates the ability to regrow a missing part. This is very prevalent amongst plants, which show continuous growth, and also among colonial animals such as hydroids and ascidians. But most interest among developmental biologists has been shown in the regeneration of parts in free-living animals. In particular, four models have been the subject of much investigation. Two of these have the ability to regenerate whole bodies: Hydra, which can regenerate any part of the polyp from a small fragment, and planarian worms, which can usually regenerate both heads and tails. Both of these examples have continuous cell turnover fed by stem cells and, in planaria at least, some of the stem cells have been shown to be pluripotent. The other two models show only distal regeneration of appendages. These are the insect appendages, usually the legs of hemimetabolous insects such as the cricket, and the limbs of urodele amphibians. Considerable information is now available about amphibian limb regeneration and it is known that each cell type regenerates itself, except for connective tissues, where there is considerable interconversion between cartilage, dermis and tendons. In terms of the pattern of structures, this is controlled by a re-activation of signals active in the embryo.
There is still debate about the old question of whether regeneration is a "pristine" or an "adaptive" property. If the former is the case, with improved knowledge, we might expect to be able to improve regenerative ability in humans. If the latter, then each instance of regeneration is presumed to have arisen by natural selection in circumstances particular to the species, so no general rules would be expected.
Embryonic development of animals
The sperm and egg fuse in the process of fertilization to form a fertilized egg, or zygote. This undergoes a period of divisions to form a ball or sheet of similar cells called a blastula or blastoderm. These cell divisions are usually rapid with no growth so the daughter cells are half the size of the mother cell and the whole embryo stays about the same size. They are called cleavage divisions.
Mouse epiblast primordial germ cells undergo extensive epigenetic reprogramming. This process involves genome-wide DNA demethylation, chromatin reorganization and epigenetic imprint erasure leading to totipotency. DNA demethylation is carried out by a process that utilizes the DNA base excision repair pathway.
Morphogenetic movements convert the cell mass into a three layered structure consisting of multicellular sheets called ectoderm, mesoderm and endoderm. These sheets are known as germ layers. This is the process of gastrulation. During cleavage and gastrulation the first regional specification events occur. In addition to the formation of the three germ layers themselves, these often generate extraembryonic structures, such as the mammalian placenta, needed for support and nutrition of the embryo, and also establish differences of commitment along the anteroposterior axis (head, trunk and tail).
Regional specification is initiated by the presence of cytoplasmic determinants in one part of the zygote. The cells that contain the determinant become a signaling center and emit an inducing factor. Because the inducing factor is produced in one place, diffuses away, and decays, it forms a concentration gradient, high near the source cells and low further away. The remaining cells of the embryo, which do not contain the determinant, are competent to respond to different concentrations by upregulating specific developmental control genes. This results in a series of zones becoming set up, arranged at progressively greater distance from the signaling center. In each zone a different combination of developmental control genes is upregulated. These genes encode transcription factors which upregulate new combinations of gene activity in each region. Among other functions, these transcription factors control expression of genes conferring specific adhesive and motility properties on the cells in which they are active. Because of these different morphogenetic properties, the cells of each germ layer move to form sheets such that the ectoderm ends up on the outside, mesoderm in the middle, and endoderm on the inside.
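The threshold logic described above is often pictured as a "French flag" response to a morphogen gradient. The sketch below is schematic only: the exponential profile, the decay length, the thresholds and the gene names are arbitrary illustrative choices, not measurements from any embryo.

```python
# Schematic regional specification by a diffusing, decaying inducing factor.
from math import exp

C0 = 10.0            # concentration at the signalling centre (arbitrary units)
decay_length = 20.0  # length constant of the steady-state gradient (cell diameters)
thresholds = {"gene_A": 5.0, "gene_B": 1.5}   # hypothetical response thresholds

def concentration(x):
    """Steady-state exponential profile from a localised source with decay."""
    return C0 * exp(-x / decay_length)

def zone(x):
    c = concentration(x)
    if c >= thresholds["gene_A"]:
        return "zone 1 (gene_A and gene_B on)"
    if c >= thresholds["gene_B"]:
        return "zone 2 (gene_B on)"
    return "zone 3 (neither on)"

for x in (0, 10, 20, 40, 60):          # distance from the signalling centre
    print(f"x = {x:2d}  C = {concentration(x):5.2f}  -> {zone(x)}")
```

Cells at increasing distance from the source therefore fall into discrete zones, each upregulating a different combination of developmental control genes.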
Morphogenetic movements not only change the shape and structure of the embryo, but by bringing cell sheets into new spatial relationships they also make possible new phases of signaling and response between them. In addition, the first morphogenetic movements of embryogenesis, such as gastrulation, epiboly and twisting, directly activate pathways involved in endomesoderm specification through mechanotransduction processes. This property has been suggested to be evolutionarily inherited from endomesoderm specification that was mechanically stimulated by hydrodynamic flow in the marine environment of the first animal organisms (the first metazoans). Twisting along the body axis with a left-handed chirality is found in all chordates (including vertebrates) and is addressed by the axial twist theory.
Growth in embryos is mostly autonomous. For each territory of cells the growth rate is controlled by the combination of genes that are active. Free-living embryos do not grow in mass as they have no external food supply. But embryos fed by a placenta or extraembryonic yolk supply can grow very fast, and changes to relative growth rate between parts in these organisms help to produce the final overall anatomy.
The whole process needs to be coordinated in time and how this is controlled is not understood. There may be a master clock able to communicate with all parts of the embryo that controls the course of events, or timing may depend simply on local causal sequences of events.
Metamorphosis
Developmental processes are very evident during the process of metamorphosis. This occurs in various types of animal. Well-known examples are seen in frogs, which usually hatch as tadpoles and metamorphose into adult frogs, and certain insects, which hatch as larvae and are then remodeled to the adult form during a pupal stage.
All the developmental processes listed above occur during metamorphosis. Examples that have been especially well studied include tail loss and other changes in the tadpole of the frog Xenopus, and the biology of the imaginal discs, which generate the adult body parts of the fly Drosophila melanogaster.
Plant development
Plant development is the process by which structures originate and mature as a plant grows. It is studied in plant anatomy and plant physiology as well as plant morphology.
Plants constantly produce new tissues and structures throughout their life from meristems located at the tips of organs, or between mature tissues. Thus, a living plant always has embryonic tissues. By contrast, an animal embryo will very early produce all of the body parts that it will ever have in its life. When the animal is born (or hatches from its egg), it has all its body parts and from that point will only grow larger and more mature.
The properties of organization seen in a plant are emergent properties which are more than the sum of the individual parts. "The assembly of these tissues and functions into an integrated multicellular organism yields not only the characteristics of the separate parts and processes but also quite a new set of characteristics which would not have been predictable on the basis of examination of the separate parts."
Growth
A vascular plant begins from a single celled zygote, formed by fertilisation of an egg cell by a sperm cell. From that point, it begins to divide to form a plant embryo through the process of embryogenesis. As this happens, the resulting cells will organize so that one end becomes the first root, while the other end forms the tip of the shoot. In seed plants, the embryo will develop one or more "seed leaves" (cotyledons). By the end of embryogenesis, the young plant will have all the parts necessary to begin its life.
Once the embryo germinates from its seed or parent plant, it begins to produce additional organs (leaves, stems, and roots) through the process of organogenesis. New roots grow from root meristems located at the tip of the root, and new stems and leaves grow from shoot meristems located at the tip of the shoot. Branching occurs when small clumps of cells left behind by the meristem, and which have not yet undergone cellular differentiation to form a specialized tissue, begin to grow as the tip of a new root or shoot. Growth from any such meristem at the tip of a root or shoot is termed primary growth and results in the lengthening of that root or shoot. Secondary growth results in widening of a root or shoot from divisions of cells in a cambium.
In addition to growth by cell division, a plant may grow through cell elongation. This occurs when individual cells or groups of cells grow longer. Not all plant cells will grow to the same length. When cells on one side of a stem grow longer and faster than cells on the other side, the stem will bend to the side of the slower growing cells as a result. This directional growth can occur via a plant's response to a particular stimulus, such as light (phototropism), gravity (gravitropism), water (hydrotropism), and physical contact (thigmotropism).
Plant growth and development are mediated by specific plant hormones and plant growth regulators (PGRs) (Ross et al. 1983). Endogenous hormone levels are influenced by plant age, cold hardiness, dormancy, and other metabolic conditions; photoperiod, drought, temperature, and other external environmental conditions; and exogenous sources of PGRs, e.g., externally applied and of rhizospheric origin.
Morphological variation
Plants exhibit natural variation in their form and structure. While all organisms vary from individual to individual, plants exhibit an additional type of variation. Within a single individual, parts are repeated which may differ in form and structure from other similar parts. This variation is most easily seen in the leaves of a plant, though other organs such as stems and flowers may show similar variation. There are three primary causes of this variation: positional effects, environmental effects, and juvenility.
Evolution of plant morphology
Transcription factors and transcriptional regulatory networks play key roles in plant morphogenesis and its evolution. During the colonization of land by plants, many novel transcription factor families emerged and were preferentially wired into the networks of multicellular development, reproduction, and organ development, contributing to the more complex morphogenesis of land plants.
Most land plants share a common ancestor, a multicellular alga. An example of the evolution of plant morphology is seen in charophytes. Studies have shown that charophytes have traits that are homologous to those of land plants. There are two main theories of the evolution of plant morphology: the homologous theory and the antithetic theory. The commonly accepted theory for the evolution of plant morphology is the antithetic theory. The antithetic theory states that the multiple mitotic divisions that take place before meiosis cause the development of the sporophyte. The sporophyte then develops as an independent organism.
Developmental model organisms
Much of developmental biology research in recent decades has focused on the use of a small number of model organisms. It has turned out that there is much conservation of developmental mechanisms across the animal kingdom. In early development different vertebrate species all use essentially the same inductive signals and the same genes encoding regional identity. Even invertebrates use a similar repertoire of signals and genes although the body parts formed are significantly different. Model organisms each have some particular experimental advantages which have enabled them to become popular among researchers. In one sense they are "models" for the whole animal kingdom, and in another sense they are "models" for human development, which is difficult to study directly for both ethical and practical reasons. Model organisms have been most useful for elucidating the broad nature of developmental mechanisms. The more detail is sought, the more they differ from each other and from humans.
Plants
Thale cress (Arabidopsis thaliana)
Vertebrates
Frog: Xenopus (X. laevis and X. tropicalis). Good embryo supply. Especially suitable for microsurgery.
Zebrafish: Danio rerio. Good embryo supply. Well developed genetics.
Chicken: Gallus gallus. Early stages similar to mammal, but microsurgery easier. Low cost.
Mouse: Mus musculus. A mammal with well developed genetics.
Invertebrates
Fruit fly: Drosophila melanogaster. Good embryo supply. Well developed genetics.
Nematode: Caenorhabditis elegans. Good embryo supply. Well developed genetics. Low cost.
Unicellular
Algae: Chlamydomonas
Yeast: Saccharomyces
Others
Also popular for some purposes have been sea urchins and ascidians. For studies of regeneration urodele amphibians such as the axolotl Ambystoma mexicanum are used, and also planarian worms such as Schmidtea mediterranea. Organoids have also been demonstrated as an efficient model for development. Plant development has focused on the thale cress Arabidopsis thaliana as a model organism.
See also
References
Further reading
External links
Society for Developmental Biology
Collaborative resources
Developmental Biology - 10th edition
Essential Developmental Biology 3rd edition
Embryo Project Encyclopedia
Philosophy of biology
Enzyme
Enzymes are proteins that act as biological catalysts by accelerating chemical reactions. The molecules upon which enzymes may act are called substrates, and the enzyme converts the substrates into different molecules known as products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. Metabolic pathways depend upon enzymes to catalyze individual steps. The study of enzymes is called enzymology and the field of pseudoenzyme analysis recognizes that during evolution, some enzymes have lost the ability to carry out biological catalysis, which is often reflected in their amino acid sequences and unusual 'pseudocatalytic' properties.
Enzymes are known to catalyze more than 5,000 biochemical reaction types.
Other biocatalysts are catalytic RNA molecules, also called ribozymes. They are sometimes described as a type of enzyme rather than being like an enzyme, but even in the decades since ribozymes' discovery in 1980–1982, the word enzyme alone often means the protein type specifically (as is used in this article).
An enzyme's specificity comes from its unique three-dimensional structure.
Like all catalysts, enzymes increase the reaction rate by lowering its activation energy. Some enzymes can make their conversion of substrate to product occur many millions of times faster. An extreme example is orotidine 5'-phosphate decarboxylase, which allows a reaction that would otherwise take millions of years to occur in milliseconds. Chemically, enzymes are like any catalyst and are not consumed in chemical reactions, nor do they alter the equilibrium of a reaction. Enzymes differ from most other catalysts by being much more specific. Enzyme activity can be affected by other molecules: inhibitors are molecules that decrease enzyme activity, and activators are molecules that increase activity. Many therapeutic drugs and poisons are enzyme inhibitors. An enzyme's activity decreases markedly outside its optimal temperature and pH, and many enzymes are (permanently) denatured when exposed to excessive heat, losing their structure and catalytic properties.
Some enzymes are used commercially, for example, in the synthesis of antibiotics. Some household products use enzymes to speed up chemical reactions: enzymes in biological washing powders break down protein, starch or fat stains on clothes, and enzymes in meat tenderizer break down proteins into smaller molecules, making the meat easier to chew.
Etymology and history
By the late 17th and early 18th centuries, the digestion of meat by stomach secretions and the conversion of starch to sugars by plant extracts and saliva were known but the mechanisms by which these occurred had not been identified.
French chemist Anselme Payen was the first to discover an enzyme, diastase, in 1833. A few decades later, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that this fermentation was caused by a vital force contained within the yeast cells called "ferments", which were thought to function only within living organisms. He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells."
In 1877, German physiologist Wilhelm Kühne (1837–1900) first used the term enzyme, which comes from the Greek ἔνζυμον ('leavened' or 'in yeast'), to describe this process. The word enzyme was used later to refer to nonliving substances such as pepsin, and the word ferment was used to refer to chemical activity produced by living organisms.
Eduard Buchner submitted his first paper on the study of yeast extracts in 1897. In a series of experiments at the University of Berlin, he found that sugar was fermented by yeast extracts even when there were no living yeast cells in the mixture. He named the enzyme that brought about the fermentation of sucrose "zymase". In 1907, he received the Nobel Prize in Chemistry for "his discovery of cell-free fermentation". Following Buchner's example, enzymes are usually named according to the reaction they carry out: the suffix -ase is combined with the name of the substrate (e.g., lactase is the enzyme that cleaves lactose) or to the type of reaction (e.g., DNA polymerase forms DNA polymers).
The biochemical identity of enzymes was still unknown in the early 1900s. Many scientists observed that enzymatic activity was associated with proteins, but others (such as Nobel laureate Richard Willstätter) argued that proteins were merely carriers for the true enzymes and that proteins per se were incapable of catalysis. In 1926, James B. Sumner showed that the enzyme urease was a pure protein and crystallized it; he did likewise for the enzyme catalase in 1937. The conclusion that pure proteins can be enzymes was definitively demonstrated by John Howard Northrop and Wendell Meredith Stanley, who worked on the digestive enzymes pepsin (1930), trypsin and chymotrypsin. These three scientists were awarded the 1946 Nobel Prize in Chemistry.
The discovery that enzymes could be crystallized eventually allowed their structures to be solved by x-ray crystallography. This was first done for lysozyme, an enzyme found in tears, saliva and egg whites that digests the coating of some bacteria; the structure was solved by a group led by David Chilton Phillips and published in 1965. This high-resolution structure of lysozyme marked the beginning of the field of structural biology and the effort to understand how enzymes work at an atomic level of detail.
Classification and nomenclature
Enzymes can be classified by two main criteria: either amino acid sequence similarity (and thus evolutionary relationship) or enzymatic activity.
Enzyme activity. An enzyme's name is often derived from its substrate or the chemical reaction it catalyzes, with the word ending in -ase. Examples are lactase, alcohol dehydrogenase and DNA polymerase. Different enzymes that catalyze the same chemical reaction are called isozymes.
The International Union of Biochemistry and Molecular Biology have developed a nomenclature for enzymes, the EC numbers (for "Enzyme Commission"). Each enzyme is described by "EC" followed by a sequence of four numbers which represent the hierarchy of enzymatic activity (from very general to very specific). That is, the first number broadly classifies the enzyme based on its mechanism while the other digits add more and more specificity.
The top-level classification is:
EC 1, Oxidoreductases: catalyze oxidation/reduction reactions
EC 2, Transferases: transfer a functional group (e.g. a methyl or phosphate group)
EC 3, Hydrolases: catalyze the hydrolysis of various bonds
EC 4, Lyases: cleave various bonds by means other than hydrolysis and oxidation
EC 5, Isomerases: catalyze isomerization changes within a single molecule
EC 6, Ligases: join two molecules with covalent bonds.
EC 7, Translocases: catalyze the movement of ions or molecules across membranes, or their separation within membranes.
These sections are subdivided by other features such as the substrate, products, and chemical mechanism. An enzyme is fully specified by four numerical designations. For example, hexokinase (EC 2.7.1.1) is a transferase (EC 2) that adds a phosphate group (EC 2.7) to a hexose sugar, a molecule containing an alcohol group (EC 2.7.1).
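As a small illustration of how the four-part code is read, the sketch below (the helper function and its name are hypothetical; the examples are this article's hexokinase plus lysozyme) uses the first digit to select the top-level class listed above, with each further digit narrowing the specification.

```python
# Reading an EC number: first digit = top-level class, later digits add specificity.
EC_TOP_LEVEL = {
    1: "Oxidoreductases", 2: "Transferases", 3: "Hydrolases",
    4: "Lyases", 5: "Isomerases", 6: "Ligases", 7: "Translocases",
}

def describe_ec(ec: str) -> str:
    digits = ec.removeprefix("EC ").split(".")
    top = EC_TOP_LEVEL[int(digits[0])]
    return (f"{ec}: {top}; subclass EC {'.'.join(digits[:2])}, "
            f"sub-subclass EC {'.'.join(digits[:3])}, serial number {digits[3]}")

print(describe_ec("EC 2.7.1.1"))   # hexokinase, as in the example above
print(describe_ec("EC 3.2.1.17"))  # lysozyme
```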
Sequence similarity. EC categories do not reflect sequence similarity. For instance, two ligases of the same EC number that catalyze exactly the same reaction can have completely different sequences. Independent of their function, enzymes, like any other proteins, have been classified by their sequence similarity into numerous families. These families have been documented in dozens of different protein and protein family databases such as Pfam.
Non-homologous isofunctional enzymes. Unrelated enzymes that have the same enzymatic activity have been called non-homologous isofunctional enzymes. Horizontal gene transfer may spread these genes to unrelated species, especially bacteria where they can replace endogenous genes of the same function, leading to non-homologous gene displacement.
Structure
Enzymes are generally globular proteins, acting alone or in larger complexes. The sequence of the amino acids specifies the structure which in turn determines the catalytic activity of the enzyme. Although structure determines function, a novel enzymatic activity cannot yet be predicted from structure alone. Enzyme structures unfold (denature) when heated or exposed to chemical denaturants and this disruption to the structure typically causes a loss of activity. Enzyme denaturation is normally linked to temperatures above a species' normal level; as a result, enzymes from bacteria living in volcanic environments such as hot springs are prized by industrial users for their ability to function at high temperatures, allowing enzyme-catalysed reactions to be operated at a very high rate.
Enzymes are usually much larger than their substrates. Sizes range from just 62 amino acid residues, for the monomer of 4-oxalocrotonate tautomerase, to over 2,500 residues in the animal fatty acid synthase. Only a small portion of their structure (around 2–4 amino acids) is directly involved in catalysis: the catalytic site. This catalytic site is located next to one or more binding sites where residues orient the substrates. The catalytic site and binding site together compose the enzyme's active site. The remaining majority of the enzyme structure serves to maintain the precise orientation and dynamics of the active site.
In some enzymes, no amino acids are directly involved in catalysis; instead, the enzyme contains sites to bind and orient catalytic cofactors. Enzyme structures may also contain allosteric sites where the binding of a small molecule causes a conformational change that increases or decreases activity.
A small number of RNA-based biological catalysts called ribozymes exist, which again can act alone or in complex with proteins. The most common of these is the ribosome which is a complex of protein and catalytic RNA components.
Mechanism
Substrate binding
Enzymes must bind their substrates before they can catalyse any chemical reaction. Enzymes are usually very specific as to what substrates they bind and then the chemical reaction catalysed. Specificity is achieved by binding pockets with complementary shape, charge and hydrophilic/hydrophobic characteristics to the substrates. Enzymes can therefore distinguish between very similar substrate molecules to be chemoselective, regioselective and stereospecific.
Some of the enzymes showing the highest specificity and accuracy are involved in the copying and expression of the genome. Some of these enzymes have "proof-reading" mechanisms. Here, an enzyme such as DNA polymerase catalyzes a reaction in a first step and then checks that the product is correct in a second step. This two-step process results in average error rates of less than 1 error in 100 million reactions in high-fidelity mammalian polymerases. Similar proofreading mechanisms are also found in RNA polymerase, aminoacyl tRNA synthetases and ribosomes.
Conversely, some enzymes display enzyme promiscuity, having broad specificity and acting on a range of different physiologically relevant substrates. Many enzymes possess small side activities which arose fortuitously (i.e. neutrally), which may be the starting point for the evolutionary selection of a new function.
"Lock and key" model
To explain the observed specificity of enzymes, in 1894 Emil Fischer proposed that both the enzyme and the substrate possess specific complementary geometric shapes that fit exactly into one another. This is often referred to as "the lock and key" model. This early model explains enzyme specificity, but fails to explain the stabilization of the transition state that enzymes achieve.
Induced fit model
In 1958, Daniel Koshland suggested a modification to the lock and key model: since enzymes are rather flexible structures, the active site is continuously reshaped by interactions with the substrate as the substrate interacts with the enzyme. As a result, the substrate does not simply bind to a rigid active site; the amino acid side-chains that make up the active site are molded into the precise positions that enable the enzyme to perform its catalytic function. In some cases, such as glycosidases, the substrate molecule also changes shape slightly as it enters the active site. The active site continues to change until the substrate is completely bound, at which point the final shape and charge distribution is determined.
Induced fit may enhance the fidelity of molecular recognition in the presence of competition and noise via the conformational proofreading mechanism.
Catalysis
Enzymes can accelerate reactions in several ways, all of which lower the activation energy (ΔG‡, Gibbs free energy)
By stabilizing the transition state:
Creating an environment with a charge distribution complementary to that of the transition state to lower its energy
By providing an alternative reaction pathway:
Temporarily reacting with the substrate, forming a covalent intermediate to provide a lower energy transition state
By destabilizing the substrate ground state:
Distorting bound substrate(s) into their transition state form to reduce the energy required to reach the transition state
By orienting the substrates into a productive arrangement to reduce the reaction entropy change (the contribution of this mechanism to catalysis is relatively small)
Enzymes may use several of these mechanisms simultaneously. For example, proteases such as trypsin perform covalent catalysis using a catalytic triad, stabilize charge build-up on the transition states using an oxyanion hole, complete hydrolysis using an oriented water substrate.
Dynamics
Enzymes are not rigid, static structures; instead they have complex internal dynamic motions – that is, movements of parts of the enzyme's structure such as individual amino acid residues, groups of residues forming a protein loop or unit of secondary structure, or even an entire protein domain. These motions give rise to a conformational ensemble of slightly different structures that interconvert with one another at equilibrium. Different states within this ensemble may be associated with different aspects of an enzyme's function. For example, different conformations of the enzyme dihydrofolate reductase are associated with the substrate binding, catalysis, cofactor release, and product release steps of the catalytic cycle, consistent with catalytic resonance theory.
Substrate presentation
Substrate presentation is a process where the enzyme is sequestered away from its substrate. Enzymes can be sequestered to the plasma membrane away from a substrate in the nucleus or cytosol. Or within the membrane, an enzyme can be sequestered into lipid rafts away from its substrate in the disordered region. When the enzyme is released it mixes with its substrate. Alternatively, the enzyme can be sequestered near its substrate to activate the enzyme. For example, the enzyme can be soluble and upon activation bind to a lipid in the plasma membrane and then act upon molecules in the plasma membrane.
Allosteric modulation
Allosteric sites are pockets on the enzyme, distinct from the active site, that bind to molecules in the cellular environment. These molecules then cause a change in the conformation or dynamics of the enzyme that is transduced to the active site and thus affects the reaction rate of the enzyme. In this way, allosteric interactions can either inhibit or activate enzymes. Allosteric interactions with metabolites upstream or downstream in an enzyme's metabolic pathway cause feedback regulation, altering the activity of the enzyme according to the flux through the rest of the pathway.
Cofactors
Some enzymes do not need additional components to show full activity. Others require non-protein molecules called cofactors to be bound for activity. Cofactors can be either inorganic (e.g., metal ions and iron–sulfur clusters) or organic compounds (e.g., flavin and heme). These cofactors serve many purposes; for instance, metal ions can help in stabilizing nucleophilic species within the active site. Organic cofactors can be either coenzymes, which are released from the enzyme's active site during the reaction, or prosthetic groups, which are tightly bound to an enzyme. Organic prosthetic groups can be covalently bound (e.g., biotin in enzymes such as pyruvate carboxylase).
An example of an enzyme that contains a cofactor is carbonic anhydrase, which uses a zinc cofactor bound as part of its active site. These tightly bound ions or molecules are usually found in the active site and are involved in catalysis. For example, flavin and heme cofactors are often involved in redox reactions.
Enzymes that require a cofactor but do not have one bound are called apoenzymes or apoproteins. An enzyme together with the cofactor(s) required for activity is called a holoenzyme (or haloenzyme). The term holoenzyme can also be applied to enzymes that contain multiple protein subunits, such as the DNA polymerases; here the holoenzyme is the complete complex containing all the subunits needed for activity.
Coenzymes
Coenzymes are small organic molecules that can be loosely or tightly bound to an enzyme. Coenzymes transport chemical groups from one enzyme to another. Examples include NADH, NADPH and adenosine triphosphate (ATP). Some coenzymes, such as flavin mononucleotide (FMN), flavin adenine dinucleotide (FAD), thiamine pyrophosphate (TPP), and tetrahydrofolate (THF), are derived from vitamins. These coenzymes cannot be synthesized by the body de novo and closely related compounds (vitamins) must be acquired from the diet. The chemical groups carried include:
the hydride ion (H−), carried by NAD+ or NADP+
the phosphate group, carried by adenosine triphosphate
the acetyl group, carried by coenzyme A
formyl, methenyl or methyl groups, carried by folic acid and
the methyl group, carried by S-adenosylmethionine
Since coenzymes are chemically changed as a consequence of enzyme action, it is useful to consider coenzymes to be a special class of substrates, or second substrates, which are common to many different enzymes. For example, about 1000 enzymes are known to use the coenzyme NADH.
Coenzymes are usually continuously regenerated and their concentrations maintained at a steady level inside the cell. For example, NADPH is regenerated through the pentose phosphate pathway and S-adenosylmethionine by methionine adenosyltransferase. This continuous regeneration means that small amounts of coenzymes can be used very intensively. For example, the human body turns over its own weight in ATP each day.
Thermodynamics
As with all catalysts, enzymes do not alter the position of the chemical equilibrium of the reaction. In the presence of an enzyme, the reaction runs in the same direction as it would without the enzyme, just more quickly. For example, carbonic anhydrase catalyzes its reaction in either direction depending on the concentration of its reactants:
CO2 + H2O ⇄ H2CO3
The rate of a reaction is dependent on the activation energy needed to form the transition state which then decays into products. Enzymes increase reaction rates by lowering the energy of the transition state. First, binding forms a low energy enzyme-substrate complex (ES). Second, the enzyme stabilises the transition state such that it requires less energy to achieve compared to the uncatalyzed reaction (ES‡). Finally the enzyme-product complex (EP) dissociates to release the products.
Enzymes can couple two or more reactions, so that a thermodynamically favorable reaction can be used to "drive" a thermodynamically unfavourable one so that the combined energy of the products is lower than the substrates. For example, the hydrolysis of ATP is often used to drive other chemical reactions.
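A worked example of such coupling, using approximate textbook standard free energy changes (rounded values; the exact numbers depend on conditions): phosphorylating glucose is unfavourable on its own, but coupling it to ATP hydrolysis, as hexokinase does, makes the overall reaction favourable.

```python
# Reaction coupling with approximate standard free energies (kJ/mol).
dG_phosphorylation = +13.8   # glucose + Pi -> glucose-6-phosphate   (unfavourable)
dG_atp_hydrolysis  = -30.5   # ATP + H2O -> ADP + Pi                 (favourable)

# Coupling the two gives the overall hexokinase reaction:
dG_coupled = dG_phosphorylation + dG_atp_hydrolysis
print(f"glucose + ATP -> glucose-6-phosphate + ADP: {dG_coupled:+.1f} kJ/mol")
# about -16.7 kJ/mol, so the combined reaction proceeds in the forward direction
```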
Kinetics
Enzyme kinetics is the investigation of how enzymes bind substrates and turn them into products. The rate data used in kinetic analyses are commonly obtained from enzyme assays. In 1913 Leonor Michaelis and Maud Leonora Menten proposed a quantitative theory of enzyme kinetics, which is referred to as Michaelis–Menten kinetics. The major contribution of Michaelis and Menten was to think of enzyme reactions in two stages. In the first, the substrate binds reversibly to the enzyme, forming the enzyme-substrate complex. This is sometimes called the Michaelis–Menten complex in their honor. The enzyme then catalyzes the chemical step in the reaction and releases the product. This work was further developed by G. E. Briggs and J. B. S. Haldane, who derived kinetic equations that are still widely used today.
Enzyme rates depend on solution conditions and substrate concentration. To find the maximum speed of an enzymatic reaction, the substrate concentration is increased until a constant rate of product formation is seen. This is shown in the saturation curve on the right. Saturation happens because, as substrate concentration increases, more and more of the free enzyme is converted into the substrate-bound ES complex. At the maximum reaction rate (Vmax) of the enzyme, all the enzyme active sites are bound to substrate, and the amount of ES complex is the same as the total amount of enzyme.
Vmax is only one of several important kinetic parameters. The amount of substrate needed to achieve a given rate of reaction is also important. This is given by the Michaelis–Menten constant (Km), which is the substrate concentration required for an enzyme to reach one-half its maximum reaction rate; generally, each enzyme has a characteristic KM for a given substrate. Another useful constant is kcat, also called the turnover number, which is the number of substrate molecules handled by one active site per second.
The efficiency of an enzyme can be expressed in terms of kcat/Km. This is also called the specificity constant and incorporates the rate constants for all steps in the reaction up to and including the first irreversible step. Because the specificity constant reflects both affinity and catalytic ability, it is useful for comparing different enzymes against each other, or the same enzyme with different substrates. The theoretical maximum for the specificity constant is called the diffusion limit and is about 10^8 to 10^9 M−1 s−1. At this point every collision of the enzyme with its substrate will result in catalysis, and the rate of product formation is not limited by the reaction rate but by the diffusion rate. Enzymes with this property are called catalytically perfect or kinetically perfect. Examples of such enzymes are triose-phosphate isomerase, carbonic anhydrase, acetylcholinesterase, catalase, fumarase, β-lactamase, and superoxide dismutase. The turnover of such enzymes can reach several million reactions per second. But most enzymes are far from perfect: the average values of kcat/Km and kcat are about 10^5 M−1 s−1 and 10 s−1, respectively.
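A minimal sketch of the rate law described above, with illustrative (not enzyme-specific) values of Vmax and Km, showing the half-maximal rate at [S] = Km and saturation at high substrate concentration:

```python
# Michaelis-Menten rate law with assumed, illustrative kinetic constants.
def michaelis_menten(s, vmax, km):
    """Initial rate v = Vmax [S] / (Km + [S])."""
    return vmax * s / (km + s)

vmax = 100.0   # maximum rate (arbitrary units, proportional to kcat * total enzyme)
km = 2.0       # substrate concentration giving half-maximal rate (same units as [S])

for s in (0.5, 2.0, 10.0, 200.0):
    print(f"[S] = {s:6.1f}   v = {michaelis_menten(s, vmax, km):6.1f}")
# At [S] = Km the rate is Vmax/2; at high [S] the enzyme saturates near Vmax.
```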
Michaelis–Menten kinetics relies on the law of mass action, which is derived from the assumptions of free diffusion and thermodynamically driven random collision. Many biochemical or cellular processes deviate significantly from these conditions, because of macromolecular crowding and constrained molecular movement. More recent, complex extensions of the model attempt to correct for these effects.
Inhibition
Enzyme reaction rates can be decreased by various types of enzyme inhibitors.
Types of inhibition
Competitive
A competitive inhibitor and substrate cannot bind to the enzyme at the same time. Often competitive inhibitors strongly resemble the real substrate of the enzyme. For example, the drug methotrexate is a competitive inhibitor of the enzyme dihydrofolate reductase, which catalyzes the reduction of dihydrofolate to tetrahydrofolate. The similarity between the structures of dihydrofolate and this drug is shown in the accompanying figure. This type of inhibition can be overcome with high substrate concentration. In some cases, the inhibitor can bind to a site other than the binding-site of the usual substrate and exert an allosteric effect to change the shape of the usual binding-site.
Non-competitive
A non-competitive inhibitor binds to a site other than where the substrate binds. The substrate still binds with its usual affinity and hence Km remains the same. However the inhibitor reduces the catalytic efficiency of the enzyme so that Vmax is reduced. In contrast to competitive inhibition, non-competitive inhibition cannot be overcome with high substrate concentration.
Uncompetitive
An uncompetitive inhibitor cannot bind to the free enzyme, only to the enzyme-substrate complex; hence, these types of inhibitors are most effective at high substrate concentration. In the presence of the inhibitor, the enzyme-substrate complex is inactive. This type of inhibition is rare.
Mixed
A mixed inhibitor binds to an allosteric site and the binding of the substrate and the inhibitor affect each other. The enzyme's function is reduced but not eliminated when bound to the inhibitor. This type of inhibitor does not follow the Michaelis–Menten equation.
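To contrast the reversible modes above, the sketch below (illustrative constants only; the single-factor treatment is a simplification) shows how each mode changes the apparent Michaelis–Menten parameters, and why only competitive inhibition is overcome at very high substrate concentration.

```python
# Apparent kinetic parameters under different reversible inhibition modes
# (all constants are assumed, illustrative values).
def rate(s, vmax=100.0, km=2.0, i=0.0, ki=1.0, mode="none"):
    alpha = 1 + i / ki                      # factor set by inhibitor concentration
    if mode == "competitive":               # apparent Km rises, Vmax unchanged
        return vmax * s / (alpha * km + s)
    if mode == "noncompetitive":            # apparent Vmax falls, Km unchanged
        return (vmax / alpha) * s / (km + s)
    if mode == "uncompetitive":             # both apparent Vmax and Km fall
        return (vmax / alpha) * s / (km / alpha + s)
    return vmax * s / (km + s)              # uninhibited reference

for mode in ("none", "competitive", "noncompetitive", "uncompetitive"):
    low, high = rate(1.0, i=2.0, mode=mode), rate(1000.0, i=2.0, mode=mode)
    print(f"{mode:15s} v([S]=1) = {low:5.1f}   v([S]>>Km) = {high:5.1f}")
# Only the competitive case approaches the uninhibited Vmax at very high [S].
```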
Irreversible
An irreversible inhibitor permanently inactivates the enzyme, usually by forming a covalent bond to the protein. Penicillin and aspirin are common drugs that act in this manner.
Functions of inhibitors
In many organisms, inhibitors may act as part of a feedback mechanism. If an enzyme produces too much of one substance in the organism, that substance may act as an inhibitor for the enzyme at the beginning of the pathway that produces it, causing production of the substance to slow down or stop when there is sufficient amount. This is a form of negative feedback. Major metabolic pathways such as the citric acid cycle make use of this mechanism.
Since inhibitors modulate the function of enzymes they are often used as drugs. Many such drugs are reversible competitive inhibitors that resemble the enzyme's native substrate, similar to methotrexate above; other well-known examples include statins used to treat high cholesterol, and protease inhibitors used to treat retroviral infections such as HIV. A common example of an irreversible inhibitor that is used as a drug is aspirin, which inhibits the COX-1 and COX-2 enzymes that produce the inflammation messenger prostaglandin. Other enzyme inhibitors are poisons. For example, the poison cyanide is an irreversible enzyme inhibitor that combines with the copper and iron in the active site of the enzyme cytochrome c oxidase and blocks cellular respiration.
Factors affecting enzyme activity
As enzymes are made up of proteins, their actions are sensitive to changes in many physico-chemical factors such as pH, temperature and substrate concentration.
Each enzyme has a characteristic pH optimum at which its activity is greatest; for example, digestive enzymes that act in the acidic stomach have low pH optima, whereas those that act in the intestine work best at near-neutral or mildly alkaline pH.
Biological function
Enzymes serve a wide variety of functions inside living organisms. They are indispensable for signal transduction and cell regulation, often via kinases and phosphatases. They also generate movement, with myosin hydrolyzing adenosine triphosphate (ATP) to generate muscle contraction, and also transport cargo around the cell as part of the cytoskeleton. Other ATPases in the cell membrane are ion pumps involved in active transport. Enzymes are also involved in more exotic functions, such as luciferase generating light in fireflies. Viruses can also contain enzymes for infecting cells, such as the HIV integrase and reverse transcriptase, or for viral release from cells, like the influenza virus neuraminidase.
An important function of enzymes is in the digestive systems of animals. Enzymes such as amylases and proteases break down large molecules (starch or proteins, respectively) into smaller ones, so they can be absorbed by the intestines. Starch molecules, for example, are too large to be absorbed from the intestine, but enzymes hydrolyze the starch chains into smaller molecules such as maltose and eventually glucose, which can then be absorbed. Different enzymes digest different food substances. In ruminants, which have herbivorous diets, microorganisms in the gut produce another enzyme, cellulase, to break down the cellulose cell walls of plant fiber.
Metabolism
Several enzymes can work together in a specific order, creating metabolic pathways. In a metabolic pathway, one enzyme takes the product of another enzyme as a substrate. After the catalytic reaction, the product is then passed on to another enzyme. Sometimes more than one enzyme can catalyze the same reaction in parallel; this can allow more complex regulation: with, for example, a low constant activity provided by one enzyme but an inducible high activity from a second enzyme.
Enzymes determine what steps occur in these pathways. Without enzymes, metabolism would neither progress through the same steps nor be regulated to serve the needs of the cell. Most central metabolic pathways are regulated at a few key steps, typically through enzymes whose activity involves the hydrolysis of ATP. Because this reaction releases so much energy, other reactions that are thermodynamically unfavorable can be coupled to ATP hydrolysis, driving the overall series of linked metabolic reactions.
Control of activity
There are five main ways that enzyme activity is controlled in the cell.
Regulation
Enzymes can be either activated or inhibited by other molecules. For example, the end product(s) of a metabolic pathway are often inhibitors for one of the first enzymes of the pathway (usually the first irreversible step, called the committed step), thus regulating the amount of end product made by the pathway. Such a regulatory mechanism is called a negative feedback mechanism, because the amount of the end product produced is regulated by its own concentration. A negative feedback mechanism can effectively adjust the rate of synthesis of intermediate metabolites according to the demands of the cell. This helps with the effective allocation of materials and energy economy, and it prevents the excess manufacture of end products. Like other homeostatic devices, the control of enzymatic action helps to maintain a stable internal environment in living organisms.
Post-translational modification
Examples of post-translational modification include phosphorylation, myristoylation and glycosylation. For example, in the response to insulin, the phosphorylation of multiple enzymes, including glycogen synthase, helps control the synthesis or degradation of glycogen and allows the cell to respond to changes in blood sugar. Another example of post-translational modification is the cleavage of the polypeptide chain. Chymotrypsin, a digestive protease, is produced in inactive form as chymotrypsinogen in the pancreas and transported in this form to the small intestine, where it is activated. This stops the enzyme from digesting the pancreas or other tissues before it enters the gut. This type of inactive precursor to an enzyme is known as a zymogen or proenzyme.
Quantity
Enzyme production (transcription and translation of enzyme genes) can be enhanced or diminished by a cell in response to changes in the cell's environment. This form of gene regulation is called enzyme induction. For example, bacteria may become resistant to antibiotics such as penicillin because enzymes called beta-lactamases are induced that hydrolyse the crucial beta-lactam ring within the penicillin molecule. Another example comes from enzymes in the liver called cytochrome P450 oxidases, which are important in drug metabolism. Induction or inhibition of these enzymes can cause drug interactions. Enzyme levels can also be regulated by changing the rate of enzyme degradation. The opposite of enzyme induction is enzyme repression.
Subcellular distribution
Enzymes can be compartmentalized, with different metabolic pathways occurring in different cellular compartments. For example, fatty acids are synthesized by one set of enzymes in the cytosol, endoplasmic reticulum and Golgi and used by a different set of enzymes as a source of energy in the mitochondrion, through β-oxidation. In addition, trafficking of the enzyme to different compartments may change the degree of protonation (e.g., the neutral cytoplasm and the acidic lysosome) or oxidative state (e.g., oxidizing periplasm or reducing cytoplasm) which in turn affects enzyme activity. In contrast to partitioning into membrane bound organelles, enzyme subcellular localisation may also be altered through polymerisation of enzymes into macromolecular cytoplasmic filaments.
Organ specialization
In multicellular eukaryotes, cells in different organs and tissues have different patterns of gene expression and therefore have different sets of enzymes (known as isozymes) available for metabolic reactions. This provides a mechanism for regulating the overall metabolism of the organism. For example, hexokinase, the first enzyme in the glycolysis pathway, has a specialized form called glucokinase expressed in the liver and pancreas that has a lower affinity for glucose yet is more sensitive to glucose concentration. This enzyme is involved in sensing blood sugar and regulating insulin production.
Involvement in disease
Since the tight control of enzyme activity is essential for homeostasis, any malfunction (mutation, overproduction, underproduction or deletion) of a single critical enzyme can lead to a genetic disease. The malfunction of just one type of enzyme out of the thousands of types present in the human body can be fatal. An example of a fatal genetic disease due to enzyme insufficiency is Tay–Sachs disease, in which patients lack the enzyme hexosaminidase.
One example of enzyme deficiency is the most common type of phenylketonuria. Many different single amino acid mutations in the enzyme phenylalanine hydroxylase, which catalyzes the first step in the degradation of phenylalanine, result in build-up of phenylalanine and related products. Some mutations are in the active site, directly disrupting binding and catalysis, but many are far from the active site and reduce activity by destabilising the protein structure, or affecting correct oligomerisation. This can lead to intellectual disability if the disease is untreated. Another example is pseudocholinesterase deficiency, in which the body's ability to break down choline ester drugs is impaired.
Oral administration of enzymes can be used to treat some functional enzyme deficiencies, such as pancreatic insufficiency and lactose intolerance.
Another way enzyme malfunctions can cause disease comes from germline mutations in genes coding for DNA repair enzymes. Defects in these enzymes cause cancer because cells are less able to repair mutations in their genomes. This causes a slow accumulation of mutations and results in the development of cancers. An example of such a hereditary cancer syndrome is xeroderma pigmentosum, which causes the development of skin cancers in response to even minimal exposure to ultraviolet light.
Evolution
Similar to any other protein, enzymes change over time through mutations and sequence divergence. Given their central role in metabolism, enzyme evolution plays a critical role in adaptation. A key question is therefore whether and how enzymes can change their enzymatic activities as they evolve. It is generally accepted that many new enzyme activities have evolved through gene duplication and mutation of the duplicate copies, although evolution can also happen without duplication. One example of an enzyme that has changed its activity is the ancestor of methionyl aminopeptidase (MAP) and creatine amidinohydrolase (creatinase), which are clearly homologous but catalyze very different reactions (MAP removes the amino-terminal methionine in new proteins while creatinase hydrolyses creatine to sarcosine and urea). In addition, MAP is metal-ion dependent while creatinase is not, hence this property was also lost over time. Small changes of enzymatic activity are extremely common among enzymes. In particular, substrate binding specificity (see above) can easily and quickly change with single amino acid changes in their substrate binding pockets. This is frequently seen in the main enzyme classes such as kinases.
Artificial (in vitro) evolution is now commonly used to modify enzyme activity or specificity for industrial applications (see below).
Industrial applications
Enzymes are used in the chemical industry and other industrial applications when extremely specific catalysts are required. Enzymes in general are limited in the number of reactions they have evolved to catalyze and also by their lack of stability in organic solvents and at high temperatures. As a consequence, protein engineering is an active area of research and involves attempts to create new enzymes with novel properties, either through rational design or in vitro evolution. These efforts have begun to be successful, and a few enzymes have now been designed "from scratch" to catalyze reactions that do not occur in nature.
See also
Industrial enzymes
List of enzymes
Molecular machine
Enzyme databases
BRENDA
ExPASy
IntEnz
KEGG
MetaCyc
References
Further reading
General
A biochemistry textbook available free online through NCBI Bookshelf.
Etymology and history
A history of early enzymology.
Enzyme structure and mechanism
Kinetics and inhibition
External links
Biomolecules
Catalysis
Metabolism
Process chemicals
Enzyme catalysis
Enzyme catalysis is the increase in the rate of a process by an "enzyme", a biological molecule. Most enzymes are proteins, and most such processes are chemical reactions. Within the enzyme, catalysis generally occurs at a localized site, called the active site.
Most enzymes are made predominantly of proteins, either a single protein chain or many such chains in a multi-subunit complex. Enzymes often also incorporate non-protein components, such as metal ions or specialized organic molecules known as cofactors (e.g. adenosine triphosphate). Many cofactors are vitamins, and their role as vitamins is directly linked to their use in the catalysis of biological processes within metabolism. Catalysis of biochemical reactions in the cell is vital since many but not all metabolically essential reactions have very low rates when uncatalysed. One driver of protein evolution is the optimization of such catalytic activities, although only the most crucial enzymes operate near catalytic efficiency limits, and many enzymes are far from optimal. Important factors in enzyme catalysis include general acid and base catalysis, orbital steering, entropic restriction, orientation effects (i.e. lock and key catalysis), as well as motional effects involving protein dynamics.
Mechanisms of enzyme catalysis vary, but are all similar in principle to other types of chemical catalysis in that the crucial factor is a reduction of energy barrier(s) separating the reactants (or substrates) from the products. The reduction of activation energy (Ea) increases the fraction of reactant molecules that can overcome this barrier and form the product. An important principle is that since they only reduce energy barriers between products and reactants, enzymes always catalyze reactions in both directions, and cannot drive a reaction forward or affect the equilibrium position – only the speed with which it is achieved. As with other catalysts, the enzyme is not consumed or changed by the reaction (as a substrate is) but is recycled such that a single enzyme performs many rounds of catalysis.
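The relation between barrier lowering and rate acceleration can be made quantitative with the Arrhenius factor k ∝ exp(−Ea/RT): a catalyst that lowers the barrier by ΔEa speeds the reaction by roughly exp(ΔEa/RT). The barrier reductions in the sketch below are arbitrary example values, not measured quantities.

```python
# Illustration of how lowering the activation energy Ea raises the rate,
# using the Arrhenius factor k ~ exp(-Ea / RT). Barrier values are arbitrary.
import math

R = 8.314        # gas constant, J/(mol*K)
T = 298.0        # temperature, K

def rate_enhancement(delta_ea_kj):
    """Fold increase in rate for a barrier lowered by delta_ea_kj kJ/mol."""
    return math.exp(delta_ea_kj * 1000 / (R * T))

for d in [10, 20, 40]:   # kJ/mol of barrier reduction (hypothetical)
    print(f"Barrier lowered by {d:2d} kJ/mol -> rate increased ~{rate_enhancement(d):.1e}-fold")
```

A reduction of about 40 kJ/mol at room temperature already corresponds to a rate enhancement of roughly ten million-fold.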
Enzymes are often highly specific and act on only certain substrates. Some enzymes are absolutely specific meaning that they act on only one substrate, while others show group specificity and can act on similar but not identical chemical groups such as the peptide bond in different molecules. Many enzymes have stereochemical specificity and act on one stereoisomer but not another.
Induced fit
The classic model for the enzyme-substrate interaction is the induced fit model. This model proposes that the initial interaction between enzyme and substrate is relatively weak, but that these weak interactions rapidly induce conformational changes in the enzyme that strengthen binding.
The advantages of the induced fit mechanism arise due to the stabilizing effect of strong enzyme binding. There are two different mechanisms of substrate binding: uniform binding, which has strong substrate binding, and differential binding, which has strong transition state binding. The stabilizing effect of uniform binding increases both substrate and transition state binding affinity, while differential binding increases only transition state binding affinity. Both are used by enzymes and have been evolutionarily chosen to minimize the activation energy of the reaction. Enzymes that are saturated, that is, that bind their substrate with high affinity, require differential binding to reduce the energy of activation, whereas enzymes that bind their substrate only weakly may use either differential or uniform binding.
These effects have led to most proteins using the differential binding mechanism to reduce the energy of activation, so most substrates have high affinity for the enzyme while in the transition state. Differential binding is carried out by the induced fit mechanism – the substrate first binds weakly, then the enzyme changes conformation increasing the affinity to the transition state and stabilizing it, so reducing the activation energy to reach it.
It is important to clarify, however, that the induced fit concept cannot be used to rationalize catalysis. That is, the chemical catalysis is defined as the reduction of Ea‡ (when the system is already in the ES‡) relative to Ea‡ in the uncatalyzed reaction in water (without the enzyme). The induced fit only suggests that the barrier is lower in the closed form of the enzyme but does not tell us what the reason for the barrier reduction is.
Induced fit may be beneficial to the fidelity of molecular recognition in the presence of competition and noise via the conformational proofreading mechanism.
Mechanisms of an alternative reaction route
These conformational changes also bring catalytic residues in the active site close to the chemical bonds in the substrate that will be altered in the reaction. After binding takes place, one or more mechanisms of catalysis lower the energy of the reaction's transition state by providing an alternative chemical pathway for the reaction. There are six possible mechanisms of "over the barrier" catalysis as well as a "through the barrier" mechanism:
Proximity and orientation
Enzyme-substrate interactions align the reactive chemical groups and hold them close together in an optimal geometry, which increases the rate of the reaction. This reduces the entropy of the reactants and thus makes addition or transfer reactions less unfavorable, since part of the overall entropy loss that occurs when two reactants become a single product has already been paid on binding. However, this is a general effect and is also seen in reactions other than addition or transfer, where it occurs due to an increase in the "effective concentration" of the reagents. This is understood when considering how increases in concentration lead to increases in reaction rate: essentially, when the reactants are more concentrated, they collide more often and so react more often. In enzyme catalysis, the binding of the reagents to the enzyme restricts the conformational space of the reactants, holding them in the 'proper orientation' and close to each other, so that they collide more frequently, and with the correct geometry, to facilitate the desired reaction. The "effective concentration" is the concentration the reactant would have to be, free in solution, to experience the same collisional frequency. Often such theoretical effective concentrations are unphysical and impossible to realize in reality – which is a testament to the great catalytic power of many enzymes, with massive rate increases over the uncatalyzed state.
However, the situation might be more complex, since modern computational studies have established that traditional examples of proximity effects cannot be related directly to enzyme entropic effects. Also, the original entropic proposal has been found to largely overestimate the contribution of orientation entropy to catalysis.
Proton donors or acceptors
Proton donors and acceptors, i.e. acids and bases, may donate and accept protons in order to stabilize developing charges in the transition state. This is related to the overall principle of catalysis, that of reducing energy barriers, since in general transition states are high energy states, and by stabilizing them this high energy is reduced, lowering the barrier. A key feature of enzyme catalysis over many non-biological catalysts is that both acid and base catalysis can be combined in the same reaction. In many abiotic systems, acids (large [H+]) or bases (large concentrations of H+ sinks, or species with electron pairs) can increase the rate of the reaction; but of course the environment can only have one overall pH (a measure of acidity or basicity (alkalinity)). However, since enzymes are large molecules, they can position both acid groups and basic groups in their active site to interact with their substrates, and employ both modes independently of the bulk pH.
Often general acid or base catalysis is employed to activate nucleophile and/or electrophile groups, or to stabilize leaving groups. Many amino acids with acidic or basic groups are thus employed in the active site, such as glutamic and aspartic acid, histidine, cysteine, tyrosine, lysine and arginine, as well as serine and threonine. In addition, the peptide backbone, with carbonyl and amide N groups, is often employed. Cysteine and histidine are very commonly involved, since they both have a pKa close to neutral pH and can therefore both accept and donate protons.
Many reaction mechanisms involving acid/base catalysis assume a substantially altered pKa. This alteration of pKa is possible through the local environment of the residue.
pKa can also be influenced significantly by the surrounding environment, to the extent that residues which are basic in solution may act as proton donors, and vice versa.
The modification of the pKas is purely part of the electrostatic mechanism. The catalytic effect in the classic serine protease example is mainly associated with the reduction of the pKa of the oxyanion and the increase in the pKa of the histidine, while the proton transfer from the serine to the histidine is not catalyzed significantly, since it is not the rate-determining barrier. In this mechanism, the histidine conjugate acid acts as a general acid catalyst for the subsequent loss of the amine from a tetrahedral intermediate. Evidence supporting this proposed mechanism (Figure 4 in Ref. 13) has, however, been controverted.
Electrostatic catalysis
Stabilization of charged transition states can also be achieved by residues in the active site forming ionic bonds (or partial ionic charge interactions) with the intermediate. These bonds can either come from acidic or basic side chains found on amino acids such as lysine, arginine, aspartic acid or glutamic acid, or come from metal cofactors such as zinc. Metal ions are particularly effective and can reduce the pKa of water enough to make it an effective nucleophile.
Systematic computer simulation studies established that electrostatic effects give, by far, the largest contribution to catalysis. They can increase the rate of reaction by a factor of up to 10^7. In particular, it has been found that the enzyme provides an environment which is more polar than water, and that the ionic transition states are stabilized by fixed dipoles. This is very different from transition state stabilization in water, where the water molecules must pay with "reorganization energy" in order to stabilize ionic and charged states. Thus, the catalysis is associated with the fact that the enzyme polar groups are preorganized.
The magnitude of the electrostatic field exerted by an enzyme's active site has been shown to be highly correlated with the enzyme's catalytic rate enhancement.
Binding of substrate usually excludes water from the active site, thereby lowering the local dielectric constant to that of an organic solvent. This strengthens the electrostatic interactions between the charged/polar substrates and the active sites. In addition, studies have shown that the charge distributions about the active sites are arranged so as to stabilize the transition states of the catalyzed reactions. In several enzymes, these charge distributions apparently serve to guide polar substrates toward their binding sites so that the rates of these enzymatic reactions are greater than their apparent diffusion-controlled limits.
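The notion of an enzymatic rate approaching its diffusion-controlled limit can be expressed through the specificity constant kcat/Km, which for diffusion-limited ("catalytically perfect") enzymes lies around 10^8–10^9 M⁻¹s⁻¹. The kinetic constants in the sketch below are hypothetical examples, not values from the text.

```python
# Comparing a specificity constant kcat/Km with the approximate diffusion
# limit of ~1e8-1e9 M^-1 s^-1. The kinetic values below are hypothetical.
kcat = 1.0e4     # turnover number, s^-1 (hypothetical)
km = 1.0e-5      # Michaelis constant, M (hypothetical)

specificity = kcat / km
print(f"kcat/Km = {specificity:.1e} M^-1 s^-1")
if specificity >= 1e8:
    print("This lies in the diffusion-limited regime (~1e8-1e9 M^-1 s^-1).")
else:
    print("This lies well below the diffusion limit.")
```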
Covalent catalysis
Covalent catalysis involves the substrate forming a transient covalent bond with residues in the enzyme active site or with a cofactor. This adds an additional covalent intermediate to the reaction, and helps to reduce the energy of later transition states of the reaction. The covalent bond must, at a later stage in the reaction, be broken to regenerate the enzyme. This mechanism is utilised by the catalytic triad of enzymes such as proteases like chymotrypsin and trypsin, where an acyl-enzyme intermediate is formed. An alternative mechanism is Schiff base formation using the free amine from a lysine residue, as seen in the enzyme aldolase during glycolysis.
Some enzymes utilize non-amino acid cofactors such as pyridoxal phosphate (PLP) or thiamine pyrophosphate (TPP) to form covalent intermediates with reactant molecules. Such covalent intermediates function to reduce the energy of later transition states, similar to how covalent intermediates formed with active site amino acid residues allow stabilization, but the capabilities of cofactors allow enzymes to carry out reactions that amino acid residues alone could not. Enzymes utilizing such cofactors include the PLP-dependent enzyme aspartate transaminase and the TPP-dependent enzyme pyruvate dehydrogenase.
Rather than lowering the activation energy for a reaction pathway, covalent catalysis provides an alternative pathway for the reaction (via the covalent intermediate) and so is distinct from true catalysis. For example, the energetics of the covalent bond to the serine molecule in chymotrypsin should be compared to the well-understood covalent bond to the nucleophile in the uncatalyzed solution reaction. A true proposal of a covalent catalysis (where the barrier is lower than the corresponding barrier in solution) would require, for example, a partial covalent bond to the transition state by an enzyme group (e.g., a very strong hydrogen bond), and such effects do not contribute significantly to catalysis.
Metal ion catalysis
A metal ion in the active site participates in catalysis by coordinating charge stabilization and shielding. Because of a metal's positive charge, only negative charges can be stabilized through metal ions. However, metal ions are advantageous in biological catalysis because they are not affected by changes in pH. Metal ions can also act to ionize water by acting as a Lewis acid. Metal ions may also be agents of oxidation and reduction.
Bond strain
This is the principal effect of induced fit binding, where the affinity of the enzyme to the transition state is greater than to the substrate itself. This induces structural rearrangements which strain substrate bonds into a position closer to the conformation of the transition state, so lowering the energy difference between the substrate and transition state and helping catalyze the reaction.
However, the strain effect is, in fact, a ground state destabilization effect, rather than a transition state stabilization effect. Furthermore, enzymes are very flexible and cannot apply a large strain effect.
In addition to bond strain in the substrate, bond strain may also be induced within the enzyme itself to activate residues in the active site.
Quantum tunneling
These traditional "over the barrier" mechanisms have been challenged in some cases by models and observations of "through the barrier" mechanisms (quantum tunneling). Some enzymes operate with kinetics which are faster than what would be predicted by the classical ΔG‡. In "through the barrier" models, a proton or an electron can tunnel through activation barriers. Quantum tunneling for protons has been observed in tryptamine oxidation by aromatic amine dehydrogenase.
Quantum tunneling does not appear to provide a major catalytic advantage, since the tunneling contributions are similar in the catalyzed and the uncatalyzed reactions in solution. However, the tunneling contribution (typically enhancing rate constants by a factor of ~1000 compared to the rate of reaction for the classical 'over the barrier' route) is likely crucial to the viability of biological organisms. This emphasizes the general importance of tunneling reactions in biology.
In 1971-1972 the first quantum-mechanical model of enzyme catalysis was formulated.
Active enzyme
The binding energy of the enzyme-substrate complex cannot be considered as an external energy which is necessary for the substrate activation. The enzyme of high energy content may firstly transfer some specific energetic group X1 from the catalytic site of the enzyme to the final place of the first bound reactant, then another group X2 from the second bound reactant (or from the second group of the single reactant) must be transferred to the active site to finish substrate conversion to product and enzyme regeneration.
We can present the whole enzymatic reaction as two coupled reactions:
It may be seen from the reaction that the group X1 of the active enzyme appears in the product due to the possibility of an exchange reaction inside the enzyme, which avoids both electrostatic inhibition and repulsion of atoms. So we represent the active enzyme as a powerful reactant of the enzymatic reaction. The reaction shows incomplete conversion of the substrate because its group X2 remains inside the enzyme. This idea had formerly been proposed on the basis of hypothetical extremely high enzymatic conversions (the catalytically perfect enzyme).
The crucial point for the verification of the present approach is that the catalyst must be a complex of the enzyme with the transfer group of the reaction. This chemical aspect is supported by the well-studied mechanisms of several enzymatic reactions. Consider the reaction of peptide bond hydrolysis catalyzed by the pure protein α-chymotrypsin (an enzyme acting without a cofactor), which is a well-studied member of the serine protease family.
We present the experimental results for this reaction as two chemical steps:
where S1 is a polypeptide, P1 and P2 are products. The first chemical step includes the formation of a covalent acyl-enzyme intermediate. The second step is the deacylation step. It is important to note that the group H+, initially found on the enzyme, but not in water, appears in the product before the step of hydrolysis, therefore it may be considered as an additional group of the enzymatic reaction.
Thus, the reaction shows that the enzyme acts as a powerful reactant of the reaction. According to the proposed concept, the H transport from the enzyme promotes the first reactant conversion, breakdown of the first initial chemical bond (between groups P1 and P2). The step of hydrolysis leads to a breakdown of the second chemical bond and regeneration of the enzyme.
The proposed chemical mechanism does not depend on the concentration of the substrates or products in the medium. However, a shift in their concentration mainly causes free energy changes in the first and final steps of the reaction, due to changes in the free energy content of every molecule, whether S or P, in aqueous solution.
This approach is in accordance with the following mechanism of muscle contraction. The final step of ATP hydrolysis in skeletal muscle is the product release caused by the association of myosin heads with actin. The closing of the actin-binding cleft during the association reaction is structurally coupled with the opening of the nucleotide-binding pocket on the myosin active site.
Notably, the final steps of ATP hydrolysis include the fast release of phosphate and the slow release of ADP.
The release of a phosphate anion from bound ADP anion into water solution may be considered as an exergonic reaction because the phosphate anion has low molecular mass.
Thus, we arrive at the conclusion that the primary release of the inorganic phosphate H2PO4− leads to transformation of a significant part of the free energy of ATP hydrolysis into the kinetic energy of the solvated phosphate, producing active streaming. This assumption of a local mechano-chemical transduction is in accord with Tirosh's mechanism of muscle contraction, where the muscle force derives from an integrated action of active streaming created by ATP hydrolysis.
Examples of catalytic mechanisms
In reality, most enzyme mechanisms involve a combination of several different types of catalysis.
Triose phosphate isomerase
Triose phosphate isomerase catalyses the reversible interconversion of the two triose phosphate isomers dihydroxyacetone phosphate and D-glyceraldehyde 3-phosphate.
Trypsin
Trypsin is a serine protease that cleaves protein substrates after lysine or arginine residues using a catalytic triad to perform covalent catalysis, and an oxyanion hole to stabilise charge-buildup on the transition states.
Aldolase
Aldolase catalyses the breakdown of fructose 1,6-bisphosphate (F-1,6-BP) into glyceraldehyde 3-phosphate and dihydroxyacetone phosphate (DHAP).
Enzyme diffusivity
The advent of single-molecule studies in the 2010s led to the observation that the movement of untethered enzymes increases with increasing substrate concentration and increasing reaction enthalpy. Subsequent observations suggest that this increase in diffusivity is driven by transient displacement of the enzyme's center of mass, resulting in a "recoil effect that propels the enzyme".
Reaction similarity
Similarity between enzymatic reactions (EC) can be calculated by using bond changes, reaction centres or substructure metrics (EC-BLAST).
See also
Catalytic triad
Enzyme assay
Enzyme inhibitor
Enzyme kinetics
Enzyme promiscuity
Protein dynamics
Pseudoenzymes, whose ubiquity despite their catalytic inactivity suggests omic implications
Quantum tunnelling
The Proteolysis Map
Time resolved crystallography
References
Further reading
External links
Electromagnetism
In physics, electromagnetism is an interaction that occurs between particles with electric charge via electromagnetic fields. The electromagnetic force is one of the four fundamental forces of nature. It is the dominant force in the interactions of atoms and molecules. Electromagnetism can be thought of as a combination of electrostatics and magnetism, which are distinct but closely intertwined phenomena. Electromagnetic forces occur between any two charged particles. Electric forces cause an attraction between particles with opposite charges and repulsion between particles with the same charge, while magnetism is an interaction that occurs between charged particles in relative motion. These two forces are described in terms of electromagnetic fields. Macroscopic charged objects are described in terms of Coulomb's law for electricity and Ampère's force law for magnetism; the Lorentz force describes microscopic charged particles.
The electromagnetic force is responsible for many of the chemical and physical phenomena observed in daily life. The electrostatic attraction between atomic nuclei and their electrons holds atoms together. Electric forces also allow different atoms to combine into molecules, including the macromolecules such as proteins that form the basis of life. Meanwhile, magnetic interactions between the spin and angular momentum magnetic moments of electrons also play a role in chemical reactivity; such relationships are studied in spin chemistry. Electromagnetism also plays several crucial roles in modern technology: electrical energy production, transformation and distribution; light, heat, and sound production and detection; fiber optic and wireless communication; sensors; computation; electrolysis; electroplating; and mechanical motors and actuators.
Electromagnetism has been studied since ancient times. Many ancient civilizations, including the Greeks and the Mayans, created wide-ranging theories to explain lightning, static electricity, and the attraction between magnetized pieces of iron ore. However, it was not until the late 18th century that scientists began to develop a mathematical basis for understanding the nature of electromagnetic interactions. In the 18th and 19th centuries, prominent scientists and mathematicians such as Coulomb, Gauss and Faraday developed namesake laws which helped to explain the formation and interaction of electromagnetic fields. This process culminated in the 1860s with the discovery of Maxwell's equations, a set of four partial differential equations which provide a complete description of classical electromagnetic fields. Maxwell's equations provided a sound mathematical basis for the relationships between electricity and magnetism that scientists had been exploring for centuries, and predicted the existence of self-sustaining electromagnetic waves. Maxwell postulated that such waves make up visible light, which was later shown to be true. Gamma-rays, x-rays, ultraviolet, visible, infrared radiation, microwaves and radio waves were all determined to be electromagnetic radiation differing only in their range of frequencies.
In the modern era, scientists continue to refine the theory of electromagnetism to account for the effects of modern physics, including quantum mechanics and relativity. The theoretical implications of electromagnetism, particularly the requirement that observations remain consistent when viewed from various moving frames of reference (relativistic electromagnetism) and the establishment of the speed of light based on properties of the medium of propagation (permeability and permittivity), helped inspire Einstein's theory of special relativity in 1905. Quantum electrodynamics (QED) modifies Maxwell's equations to be consistent with the quantized nature of matter. In QED, changes in the electromagnetic field are expressed in terms of discrete excitations, particles known as photons, the quanta of light.
History
Ancient world
Investigation into electromagnetic phenomena began about 5,000 years ago. There is evidence that the ancient Chinese, Mayan, and potentially even Egyptian civilizations knew that the naturally magnetic mineral magnetite had attractive properties, and many incorporated it into their art and architecture. Ancient people were also aware of lightning and static electricity, although they had no idea of the mechanisms behind these phenomena. The Greek philosopher Thales of Miletus discovered around 600 B.C.E. that amber could acquire an electric charge when it was rubbed with cloth, which allowed it to pick up light objects such as pieces of straw. Thales also experimented with the ability of magnetic rocks to attract one another, and hypothesized that this phenomenon might be connected to the attractive power of amber, foreshadowing the deep connections between electricity and magnetism that would be discovered over 2,000 years later. Despite all this investigation, ancient civilizations had no understanding of the mathematical basis of electromagnetism, and often analyzed its impacts through the lens of religion rather than science (lightning, for instance, was considered to be a creation of the gods in many cultures).
19th century
Electricity and magnetism were originally considered to be two separate forces. This view changed with the publication of James Clerk Maxwell's 1873 A Treatise on Electricity and Magnetism in which the interactions of positive and negative charges were shown to be mediated by one force. There are four main effects resulting from these interactions, all of which have been clearly demonstrated by experiments:
Electric charges attract or repel one another with a force inversely proportional to the square of the distance between them: opposite charges attract, like charges repel.
Magnetic poles (or states of polarization at individual points) attract or repel one another in a manner similar to positive and negative charges and always exist as pairs: every north pole is yoked to a south pole.
An electric current inside a wire creates a corresponding circumferential magnetic field outside the wire. Its direction (clockwise or counter-clockwise) depends on the direction of the current in the wire.
A current is induced in a loop of wire when it is moved toward or away from a magnetic field, or a magnet is moved towards or away from it; the direction of current depends on that of the movement.
In April 1820, Hans Christian Ørsted observed that an electrical current in a wire caused a nearby compass needle to move. At the time of discovery, Ørsted did not suggest any satisfactory explanation of the phenomenon, nor did he try to represent the phenomenon in a mathematical framework. However, three months later he began more intensive investigations. Soon thereafter he published his findings, proving that an electric current produces a magnetic field as it flows through a wire. The CGS unit of magnetic induction (oersted) is named in honor of his contributions to the field of electromagnetism.
His findings resulted in intensive research throughout the scientific community in electrodynamics. They influenced French physicist André-Marie Ampère's developments of a single mathematical form to represent the magnetic forces between current-carrying conductors. Ørsted's discovery also represented a major step toward a unified concept of energy.
This unification, which was observed by Michael Faraday, extended by James Clerk Maxwell, and partially reformulated by Oliver Heaviside and Heinrich Hertz, is one of the key accomplishments of 19th-century mathematical physics. It has had far-reaching consequences, one of which was the understanding of the nature of light. Unlike what was proposed by the electromagnetic theory of that time, light and other electromagnetic waves are at present seen as taking the form of quantized, self-propagating oscillatory electromagnetic field disturbances called photons. Different frequencies of oscillation give rise to the different forms of electromagnetic radiation, from radio waves at the lowest frequencies, to visible light at intermediate frequencies, to gamma rays at the highest frequencies.
Ørsted was not the only person to examine the relationship between electricity and magnetism. In 1802, Gian Domenico Romagnosi, an Italian legal scholar, deflected a magnetic needle using a Voltaic pile. The factual setup of the experiment is not completely clear, nor is it certain whether current flowed across the needle. An account of the discovery was published in 1802 in an Italian newspaper, but it was largely overlooked by the contemporary scientific community, because Romagnosi seemingly did not belong to this community.
An earlier (1735), and often neglected, connection between electricity and magnetism was reported by a Dr. Cookson. The account stated:A tradesman at Wakefield in Yorkshire, having put up a great number of knives and forks in a large box ... and having placed the box in the corner of a large room, there happened a sudden storm of thunder, lightning, &c. ... The owner emptying the box on a counter where some nails lay, the persons who took up the knives, that lay on the nails, observed that the knives took up the nails. On this the whole number was tried, and found to do the same, and that, to such a degree as to take up large nails, packing needles, and other iron things of considerable weight ... E. T. Whittaker suggested in 1910 that this particular event was responsible for lightning to be "credited with the power of magnetizing steel; and it was doubtless this which led Franklin in 1751 to attempt to magnetize a sewing-needle by means of the discharge of Leyden jars."
A fundamental force
The electromagnetic force is the second strongest of the four known fundamental forces and has unlimited range.
All other forces, known as non-fundamental forces (e.g., friction, contact forces), are derived from the four fundamental forces. At high energy, the weak force and electromagnetic force are unified as a single interaction called the electroweak interaction.
Most of the forces involved in interactions between atoms are explained by electromagnetic forces between electrically charged atomic nuclei and electrons. The electromagnetic force is also involved in all forms of chemical phenomena.
Electromagnetism explains how materials carry momentum despite being composed of individual particles and empty space. The forces we experience when "pushing" or "pulling" ordinary material objects result from intermolecular forces between individual molecules in our bodies and in the objects.
The effective forces generated by the momentum of electrons' movement is a necessary part of understanding atomic and intermolecular interactions. As electrons move between interacting atoms, they carry momentum with them. As a collection of electrons becomes more confined, their minimum momentum necessarily increases due to the Pauli exclusion principle. The behavior of matter at the molecular scale, including its density, is determined by the balance between the electromagnetic force and the force generated by the exchange of momentum carried by the electrons themselves.
Classical electrodynamics
In 1600, William Gilbert proposed, in his De Magnete, that electricity and magnetism, while both capable of causing attraction and repulsion of objects, were distinct effects. Mariners had noticed that lightning strikes had the ability to disturb a compass needle. The link between lightning and electricity was not confirmed until Benjamin Franklin's proposed experiments in 1752 were conducted on 10 May 1752 by Thomas-François Dalibard of France, using an iron rod instead of a kite, and he successfully extracted electrical sparks from a cloud.
One of the first to discover and publish a link between human-made electric current and magnetism was Gian Romagnosi, who in 1802 noticed that connecting a wire across a voltaic pile deflected a nearby compass needle. However, the effect did not become widely known until 1820, when Ørsted performed a similar experiment. Ørsted's work influenced Ampère to conduct further experiments, which eventually gave rise to a new area of physics: electrodynamics. By determining a force law for the interaction between elements of electric current, Ampère placed the subject on a solid mathematical foundation.
A theory of electromagnetism, known as classical electromagnetism, was developed by several physicists during the period between 1820 and 1873, when James Clerk Maxwell's treatise was published, which unified previous developments into a single theory, proposing that light was an electromagnetic wave propagating in the luminiferous ether. In classical electromagnetism, the behavior of the electromagnetic field is described by a set of equations known as Maxwell's equations, and the electromagnetic force is given by the Lorentz force law.
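For reference, the microscopic Maxwell equations in vacuum (SI units), together with the Lorentz force law mentioned above, take the following standard form:

```latex
\begin{aligned}
\nabla \cdot \mathbf{E} &= \frac{\rho}{\varepsilon_0}, &
\nabla \cdot \mathbf{B} &= 0, \\
\nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}, &
\nabla \times \mathbf{B} &= \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t},
\end{aligned}
\qquad
\mathbf{F} = q \left( \mathbf{E} + \mathbf{v} \times \mathbf{B} \right).
```

Here ρ and J are the charge and current densities, and ε0 and μ0 are the permittivity and permeability of free space.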
One of the peculiarities of classical electromagnetism is that it is difficult to reconcile with classical mechanics, but it is compatible with special relativity. According to Maxwell's equations, the speed of light in vacuum is a universal constant that is dependent only on the electrical permittivity and magnetic permeability of free space. This violates Galilean invariance, a long-standing cornerstone of classical mechanics. One way to reconcile the two theories (electromagnetism and classical mechanics) is to assume the existence of a luminiferous aether through which the light propagates. However, subsequent experimental efforts failed to detect the presence of the aether. After important contributions of Hendrik Lorentz and Henri Poincaré, in 1905, Albert Einstein solved the problem with the introduction of special relativity, which replaced classical kinematics with a new theory of kinematics compatible with classical electromagnetism. (For more information, see History of special relativity.)
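The statement that the speed of light depends only on the permittivity and permeability of free space can be checked numerically from c = 1/√(μ0 ε0); the short sketch below uses the standard SI values of the two constants.

```python
# The speed of light in vacuum follows from c = 1 / sqrt(mu_0 * epsilon_0).
# Standard SI values of the two constants are used.
import math

mu_0 = 1.25663706212e-6       # vacuum permeability, H/m
epsilon_0 = 8.8541878128e-12  # vacuum permittivity, F/m

c = 1 / math.sqrt(mu_0 * epsilon_0)
print(f"c = {c:.6e} m/s")     # ~2.998e8 m/s
```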
In addition, relativity theory implies that in moving frames of reference, a magnetic field transforms to a field with a nonzero electric component and conversely, a moving electric field transforms to a nonzero magnetic component, thus firmly showing that the phenomena are two sides of the same coin. Hence the term "electromagnetism". (For more information, see Classical electromagnetism and special relativity and Covariant formulation of classical electromagnetism.)
Today few problems in electromagnetism remain unsolved. These include: the lack of magnetic monopoles, Abraham–Minkowski controversy, the location in space of the electromagnetic field energy, and the mechanism by which some organisms can sense electric and magnetic fields.
Extension to nonlinear phenomena
The Maxwell equations are linear, in that a change in the sources (the charges and currents) results in a proportional change of the fields. Nonlinear dynamics can occur when electromagnetic fields couple to matter that follows nonlinear dynamical laws. This is studied, for example, in the subject of magnetohydrodynamics, which combines Maxwell theory with the Navier–Stokes equations. Another branch of electromagnetism dealing with nonlinearity is nonlinear optics.
Quantities and units
Here is a list of common units related to electromagnetism:
ampere (electric current, SI unit)
coulomb (electric charge)
farad (capacitance)
henry (inductance)
ohm (resistance)
siemens (conductance)
tesla (magnetic flux density)
volt (electric potential)
watt (power)
weber (magnetic flux)
In the electromagnetic CGS system, electric current is a fundamental quantity defined via Ampère's law and takes the permeability as a dimensionless quantity (relative permeability) whose value in vacuum is unity. As a consequence, the square of the speed of light appears explicitly in some of the equations interrelating quantities in this system.
Formulas for physical laws of electromagnetism (such as Maxwell's equations) need to be adjusted depending on what system of units one uses. This is because there is no one-to-one correspondence between electromagnetic units in SI and those in CGS, as is the case for mechanical units. Furthermore, within CGS, there are several plausible choices of electromagnetic units, leading to different unit "sub-systems", including Gaussian, "ESU", "EMU", and Heaviside–Lorentz. Among these choices, Gaussian units are the most common today, and in fact the phrase "CGS units" is often used to refer specifically to CGS-Gaussian units.
Applications
The study of electromagnetism informs electric circuits, magnetic circuits, and semiconductor devices' construction.
See also
Abraham–Lorentz force
Aeromagnetic surveys
Computational electromagnetics
Double-slit experiment
Electrodynamic droplet deformation
Electromagnet
Electromagnetic induction
Electromagnetic wave equation
Electromagnetic scattering
Electromechanics
Geophysics
Introduction to electromagnetism
Magnetostatics
Magnetoquasistatic field
Optics
Relativistic electromagnetism
Wheeler–Feynman absorber theory
References
Further reading
Web sources
Textbooks
General coverage
External links
Magnetic Field Strength Converter
Electromagnetic Force – from Eric Weisstein's World of Physics
Fundamental interactions
Gel electrophoresis
Gel electrophoresis is a method for separation and analysis of biomacromolecules (DNA, RNA, proteins, etc.) and their fragments, based on their size and charge. It is used in clinical chemistry to separate proteins by charge or size (IEF agarose, essentially size independent) and in biochemistry and molecular biology to separate a mixed population of DNA and RNA fragments by length, to estimate the size of DNA and RNA fragments or to separate proteins by charge.
Nucleic acid molecules are separated by applying an electric field to move the negatively charged molecules through a matrix of agarose or other substances. Shorter molecules move faster and migrate farther than longer ones because shorter molecules migrate more easily through the pores of the gel. This phenomenon is called sieving. Proteins are separated by the charge in agarose because the pores of the gel are too large to sieve proteins. Gel electrophoresis can also be used for the separation of nanoparticles.
Gel electrophoresis uses a gel as an anticonvective medium or sieving medium during electrophoresis, the movement of a charged particle in an electric current. Gels suppress the thermal convection caused by the application of the electric field, and can also act as a sieving medium, slowing the passage of molecules; gels can also simply serve to maintain the finished separation so that a post electrophoresis stain can be applied. DNA gel electrophoresis is usually performed for analytical purposes, often after amplification of DNA via polymerase chain reaction (PCR), but may be used as a preparative technique prior to use of other methods such as mass spectrometry, RFLP, PCR, cloning, DNA sequencing, or Southern blotting for further characterization.
Physical basis
Electrophoresis is a process that enables the sorting of molecules based on charge, size, or shape. Using an electric field, molecules (such as DNA) can be made to move through a gel made of agarose or polyacrylamide. The electric field consists of a negative charge at one end which pushes the molecules through the gel, and a positive charge at the other end that pulls the molecules through the gel. The molecules being sorted are dispensed into a well in the gel material. The gel is placed in an electrophoresis chamber, which is then connected to a power source. When the electric field is applied, the larger molecules move more slowly through the gel while the smaller molecules move faster. The different sized molecules form distinct bands on the gel.
The term "gel" in this instance refers to the matrix used to contain, then separate the target molecules. In most cases, the gel is a crosslinked polymer whose composition and porosity are chosen based on the specific weight and composition of the target to be analyzed. When separating proteins or small nucleic acids (DNA, RNA, or oligonucleotides) the gel is usually composed of different concentrations of acrylamide and a cross-linker, producing different sized mesh networks of polyacrylamide. When separating larger nucleic acids (greater than a few hundred bases), the preferred matrix is purified agarose. In both cases, the gel forms a solid, yet porous matrix. Acrylamide, in contrast to polyacrylamide, is a neurotoxin and must be handled using appropriate safety precautions to avoid poisoning. Agarose is composed of long unbranched chains of uncharged carbohydrates without cross-links resulting in a gel with large pores allowing for the separation of macromolecules and macromolecular complexes.
Electrophoresis refers to the electromotive force (EMF) that is used to move the molecules through the gel matrix. By placing the molecules in wells in the gel and applying an electric field, the molecules will move through the matrix at different rates, determined largely by their mass when the charge-to-mass ratio (Z) of all species is uniform. However, when charges are not all uniform the electrical field generated by the electrophoresis procedure will cause the molecules to migrate differentially according to charge. Species that are net positively charged will migrate towards the cathode which is negatively charged (because this is an electrolytic rather than galvanic cell), whereas species that are net negatively charged will migrate towards the positively charged anode. Mass remains a factor in the speed with which these non-uniformly charged molecules migrate through the matrix toward their respective electrodes.
If several samples have been loaded into adjacent wells in the gel, they will run parallel in individual lanes. Depending on the number of different molecules, each lane shows the separation of the components from the original mixture as one or more distinct bands, one band per component. Incomplete separation of the components can lead to overlapping bands, or indistinguishable smears representing multiple unresolved components. Bands in different lanes that end up at the same distance from the top contain molecules that passed through the gel at the same speed, which usually means they are approximately the same size. There are molecular weight size markers available that contain a mixture of molecules of known sizes. If such a marker was run on one lane in the gel parallel to the unknown samples, the bands observed can be compared to those of the unknown to determine their size. The distance a band travels is approximately inversely proportional to the logarithm of the size of the molecule (alternatively, this can be stated as the distance traveled is inversely proportional to the log of samples's molecular weight).
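Because the distance travelled varies approximately linearly with the logarithm of fragment size (decreasing as size increases), a standard curve fitted to the marker lane can be used to estimate the size of an unknown band. The ladder sizes and migration distances in the sketch below are hypothetical illustrations, not measurements.

```python
# Estimating DNA fragment size from migration distance using a marker lane.
# Assumes distance is approximately linear in log10(size); all numbers are
# hypothetical illustrations, not measurements from the text.
import math

# (fragment size in bp, migration distance in mm) for a hypothetical ladder
ladder = [(10000, 12.0), (5000, 18.5), (2000, 27.0), (1000, 33.5), (500, 40.0)]

# Least-squares fit: distance = a * log10(size) + b
xs = [math.log10(size) for size, _ in ladder]
ys = [d for _, d in ladder]
n = len(ladder)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def estimate_size(distance_mm):
    """Invert the standard curve to estimate fragment size in bp."""
    return 10 ** ((distance_mm - b) / a)

print(f"Band at 30 mm is roughly {estimate_size(30.0):.0f} bp")
```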
There are limits to electrophoretic techniques. Since passing a current through a gel causes heating, gels may melt during electrophoresis. Electrophoresis is performed in buffer solutions to reduce pH changes due to the electric field, which is important because the charge of DNA and RNA depends on pH, but running for too long can exhaust the buffering capacity of the solution. There are also limitations in determining the molecular weight by SDS-PAGE, especially when trying to find the MW of an unknown protein. Certain biological variables are difficult or impossible to minimize and can affect electrophoretic migration. Such factors include protein structure, post-translational modifications, and amino acid composition. For example, tropomyosin is an acidic protein that migrates abnormally on SDS-PAGE gels. This is because the acidic residues are repelled by the negatively charged SDS, leading to an inaccurate mass-to-charge ratio and migration. Further, different preparations of genetic material may not migrate consistently with each other, for morphological or other reasons.
Types of gel
The types of gel most typically used are agarose and polyacrylamide gels. Each type of gel is well-suited to different types and sizes of the analyte. Polyacrylamide gels are usually used for proteins and have very high resolving power for small fragments of DNA (5-500 bp). Agarose gels, on the other hand, have lower resolving power for DNA but have a greater range of separation, and are therefore used for DNA fragments of usually 50–20,000 bp in size, but the resolution of over 6 Mb is possible with pulsed field gel electrophoresis (PFGE). Polyacrylamide gels are run in a vertical configuration while agarose gels are typically run horizontally in a submarine mode. They also differ in their casting methodology, as agarose sets thermally, while polyacrylamide forms in a chemical polymerization reaction.
Agarose
Agarose gels are made from the natural polysaccharide polymers extracted from seaweed.
Agarose gels are easily cast and handled compared to other matrices because the gel setting is a physical rather than chemical change. Samples are also easily recovered. After the experiment is finished, the resulting gel can be stored in a plastic bag in a refrigerator.
Agarose gels do not have a uniform pore size, but are optimal for electrophoresis of proteins that are larger than 200 kDa. Agarose gel electrophoresis can also be used for the separation of DNA fragments ranging from 50 base pairs to several megabases (millions of bases), the largest of which require specialized apparatus. The distance between DNA bands of different lengths is influenced by the percent agarose in the gel, with higher percentages requiring longer run times, sometimes days. Instead, high-percentage agarose gels should be run with pulsed field electrophoresis (PFE) or field inversion electrophoresis.
"Most agarose gels are made with between 0.7% (good separation or resolution of large 5–10kb DNA fragments) and 2% (good resolution for small 0.2–1kb fragments) agarose dissolved in electrophoresis buffer. Up to 3% can be used for separating very tiny fragments but a vertical polyacrylamide gel is more appropriate in this case. Low percentage gels are very weak and may break when you try to lift them. High percentage gels are often brittle and do not set evenly. 1% gels are common for many applications."
Polyacrylamide
Polyacrylamide gel electrophoresis (PAGE) is used for separating proteins ranging in size from 5 to 2,000 kDa due to the uniform pore size provided by the polyacrylamide gel. Pore size is controlled by modulating the concentrations of acrylamide and bis-acrylamide powder used in creating the gel. Care must be taken when creating this type of gel, as acrylamide is a potent neurotoxin in its liquid and powdered forms.
Traditional DNA sequencing techniques such as the Maxam-Gilbert or Sanger methods used polyacrylamide gels to separate DNA fragments differing by a single base pair in length so the sequence could be read. Most modern DNA separation methods now use agarose gels, except for particularly small DNA fragments. Polyacrylamide gel electrophoresis is currently most often used in immunology and protein analysis, where it separates different proteins or isoforms of the same protein into distinct bands. These can be transferred onto a nitrocellulose or PVDF membrane to be probed with antibodies and corresponding markers, such as in a western blot.
Typically, resolving gels are made at 6%, 8%, 10%, 12% or 15% acrylamide. A stacking gel (5%) is poured on top of the resolving gel and a gel comb (which forms the wells and defines the lanes where proteins, sample buffer, and ladders will be placed) is inserted. The percentage chosen depends on the size of the protein to be identified or probed in the sample: the smaller the expected molecular weight, the higher the percentage that should be used. Changes in the buffer system of the gel can help to further resolve proteins of very small sizes.
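The choice of percentage amounts to a rule of thumb. The sketch below encodes one illustrative mapping from target protein size to resolving-gel percentage; the cut-offs are assumptions for demonstration only, not a standard, and real protocols vary with the buffer system:

```python
def suggest_resolving_gel(protein_kda):
    """Return an illustrative resolving-gel percentage for a target protein size.

    The size ranges below are rough, assumed rules of thumb, not a universal
    standard; the actual choice depends on the buffer system and the separation needed.
    """
    if protein_kda < 20:
        return 15
    elif protein_kda < 50:
        return 12
    elif protein_kda < 80:
        return 10
    elif protein_kda < 150:
        return 8
    else:
        return 6

print(suggest_resolving_gel(30))   # 12 (% acrylamide)
print(suggest_resolving_gel(200))  # 6
```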
Starch
Partially hydrolysed potato starch makes another non-toxic medium for protein electrophoresis. The gels are slightly more opaque than acrylamide or agarose. Non-denatured proteins can be separated according to charge and size. They are visualised using Naphthol Black or Amido Black staining. Typical starch gel concentrations are 5% to 10%.
Gel conditions
Denaturing
Denaturing gels are run under conditions that disrupt the natural structure of the analyte, causing it to unfold into a linear chain. Thus, the mobility of each macromolecule depends only on its linear length and its mass-to-charge ratio. The secondary, tertiary, and quaternary levels of biomolecular structure are disrupted, leaving only the primary structure to be analyzed.
Nucleic acids are often denatured by including urea in the buffer, while proteins are denatured using sodium dodecyl sulfate, usually as part of the SDS-PAGE process. For full denaturation of proteins, it is also necessary to reduce the covalent disulfide bonds that stabilize their tertiary and quaternary structure, a method called reducing PAGE. Reducing conditions are usually maintained by the addition of beta-mercaptoethanol or dithiothreitol. For a general analysis of protein samples, reducing PAGE is the most common form of protein electrophoresis.
Denaturing conditions are necessary for proper estimation of the molecular weight of RNA. RNA is able to form more intramolecular interactions than DNA, which may change its electrophoretic mobility. Urea, DMSO and glyoxal are the most often used denaturing agents to disrupt RNA structure. Originally, highly toxic methylmercury hydroxide was often used in denaturing RNA electrophoresis, and it may still be the method of choice for some samples.
Denaturing gel electrophoresis is used in the DNA and RNA banding pattern-based methods temperature gradient gel electrophoresis (TGGE) and denaturing gradient gel electrophoresis (DGGE).
Native
Native gels are run in non-denaturing conditions so that the analyte's natural structure is maintained. This allows the physical size of the folded or assembled complex to affect the mobility, allowing for analysis of all four levels of the biomolecular structure. For biological samples, detergents are used only to the extent that they are necessary to lyse lipid membranes in the cell. Complexes remain—for the most part—associated and folded as they would be in the cell. One downside, however, is that complexes may not separate cleanly or predictably, as it is difficult to predict how the molecule's shape and size will affect its mobility. Addressing and solving this problem is a major aim of preparative native PAGE.
Unlike denaturing methods, native gel electrophoresis does not use a charged denaturing agent. The molecules being separated (usually proteins or nucleic acids) therefore differ not only in molecular mass and intrinsic charge, but also in cross-sectional area, and thus experience different electrophoretic forces depending on the shape of the overall structure. For proteins, since they remain in the native state, they may be visualized not only by general protein staining reagents but also by specific enzyme-linked staining.
A specific example of an application of native gel electrophoresis is checking for enzymatic activity to verify the presence of an enzyme in the sample during protein purification. For example, for the protein alkaline phosphatase, the staining solution is a mixture of 4-chloro-2-methylbenzenediazonium salt with 3-phospho-2-naphthoic acid-2'-4'-dimethyl aniline in Tris buffer. This stain is sold commercially as a kit for staining gels. If the protein is present, the reaction proceeds in the following order: it starts with the de-phosphorylation of 3-phospho-2-naphthoic acid-2'-4'-dimethyl aniline by alkaline phosphatase (water is needed for the reaction). The phosphate group is released and replaced by an alcohol group from water. The electrophile 4-chloro-2-methylbenzenediazonium (Fast Red TR diazonium salt) displaces the alcohol group, forming the final product, a red azo dye. As its name implies, this is the final visible red product of the reaction. In undergraduate protein-purification experiments, the gel is usually run next to commercially purified samples to visualize the results and conclude whether or not the purification was successful.
Native gel electrophoresis is typically used in proteomics and metallomics. However, native PAGE is also used to scan genes (DNA) for unknown mutations as in single-strand conformation polymorphism.
Buffers
Buffers in gel electrophoresis are used to provide ions that carry a current and to maintain the pH at a relatively constant value.
These buffers contain plenty of ions, which is necessary for the passage of electricity through them. Liquids such as distilled water or benzene contain few ions and are therefore not suitable for electrophoresis. A number of buffers are used for electrophoresis; the most common for nucleic acids are Tris/Acetate/EDTA (TAE) and Tris/Borate/EDTA (TBE). Many other buffers have been proposed, e.g. lithium borate (LB), which is rarely used based on PubMed citations, isoelectric histidine, and pK-matched Good's buffers; in most cases the purported rationale is lower current (less heat) and/or matched ion mobilities, which leads to longer buffer life. Borate is problematic: it can polymerize or interact with cis diols such as those found in RNA. TAE has the lowest buffering capacity but provides the best resolution for larger DNA; this means a lower voltage and more time, but a better product. LB is relatively new and is ineffective in resolving fragments larger than 5 kbp; however, with its low conductivity, a much higher voltage can be used (up to 35 V/cm), which means a shorter analysis time for routine electrophoresis. A size difference as small as one base pair can be resolved in a 3% agarose gel with an extremely low conductivity medium (1 mM lithium borate).
Most SDS-PAGE protein separations are performed using a "discontinuous" (or DISC) buffer system that significantly enhances the sharpness of the bands within the gel. During electrophoresis in a discontinuous gel system, an ion gradient is formed in the early stage of electrophoresis that causes all of the proteins to focus into a single sharp band in a process called isotachophoresis. Separation of the proteins by size is achieved in the lower, "resolving" region of the gel. The resolving gel typically has a much smaller pore size, which leads to a sieving effect that then determines the electrophoretic mobility of the proteins.
Visualization
After the electrophoresis is complete, the molecules in the gel can be stained to make them visible. DNA may be visualized using ethidium bromide which, when intercalated into DNA, fluoresces under ultraviolet light, while protein may be visualised using silver stain or Coomassie brilliant blue dye. Other methods may also be used to visualize the separation of the mixture's components on the gel. If the molecules to be separated contain radioactivity, for example in a DNA sequencing gel, an autoradiogram can be recorded of the gel. Photographs can be taken of gels, often using a Gel Doc system.
Downstream processing
After separation, an additional separation method may then be used, such as isoelectric focusing or SDS-PAGE. The gel will then be physically cut, and the protein complexes extracted from each portion separately. Each extract may then be analysed, such as by peptide mass fingerprinting or de novo peptide sequencing after in-gel digestion. This can provide a great deal of information about the identities of the proteins in a complex.
Applications
Estimation of the size of DNA molecules following restriction enzyme digestion, e.g. in restriction mapping of cloned DNA.
Analysis of PCR products, e.g. in molecular genetic diagnosis or genetic fingerprinting
Separation of restricted genomic DNA prior to Southern transfer, or of RNA prior to Northern transfer.
Gel electrophoresis is used in forensics, molecular biology, genetics, microbiology and biochemistry. The results can be analyzed quantitatively by visualizing the gel with UV light and a gel imaging device. The image is recorded with a computer-operated camera, and the intensity of the band or spot of interest is measured and compared against standards or markers loaded on the same gel. The measurement and analysis are mostly done with specialized software.
Depending on the type of analysis being performed, other techniques are often implemented in conjunction with the results of gel electrophoresis, providing a wide range of field-specific applications.
Nucleic acids
In the case of nucleic acids, the direction of migration, from negative to positive electrodes, is due to the naturally occurring negative charge carried by their sugar-phosphate backbone.
Double-stranded DNA fragments naturally behave as long rods, so their migration through the gel is relative to their size or, for cyclic fragments, their radius of gyration. Circular DNA such as plasmids, however, may show multiple bands: the speed of migration may depend on whether the plasmid is relaxed or supercoiled. Single-stranded DNA or RNA tends to fold up into molecules with complex shapes and migrate through the gel in a complicated manner based on their tertiary structure. Therefore, agents that disrupt the hydrogen bonds, such as sodium hydroxide or formamide, are used to denature the nucleic acids and cause them to behave as long rods again.
Gel electrophoresis of large DNA or RNA is usually done by agarose gel electrophoresis. See the "chain termination method" page for an example of a polyacrylamide DNA sequencing gel. Characterization through ligand interaction of nucleic acids or fragments may be performed by mobility shift affinity electrophoresis.
Electrophoresis of RNA samples can be used to check for genomic DNA contamination and also for RNA degradation. RNA from eukaryotic organisms shows distinct bands of 28S and 18S rRNA, the 28S band being approximately twice as intense as the 18S band. Degraded RNA has less sharply defined bands, has a smeared appearance, and its intensity ratio is less than 2:1.
Proteins
Proteins, unlike nucleic acids, can have varying charges and complex shapes; therefore they may not migrate through the polyacrylamide gel at similar rates, or at all, when an electric field is applied to the sample. Proteins are therefore usually denatured in the presence of a detergent such as sodium dodecyl sulfate (SDS) that coats the proteins with a negative charge. Generally, the amount of SDS bound is proportional to the size of the protein (usually about 1.4 g of SDS per gram of protein), so the resulting denatured proteins have an overall negative charge and all the proteins have a similar charge-to-mass ratio. Since denatured proteins act like long rods instead of having a complex tertiary shape, the rate at which the SDS-coated proteins migrate in the gel is determined only by their size and not by their charge or shape.
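A minimal sketch of the arithmetic behind the roughly constant charge-to-mass ratio, assuming only the cited 1.4 g of SDS per gram of protein:

```python
def sds_bound_mg(protein_mg, ratio_g_per_g=1.4):
    """Approximate mass of SDS bound by a denatured protein (about 1.4 g SDS per g protein)."""
    return protein_mg * ratio_g_per_g

# Because binding scales with protein mass, the negative charge contributed by SDS also
# scales with mass, so the charge-to-mass ratio is roughly the same for all coated proteins.
print(sds_bound_mg(0.05))  # about 0.07 mg of SDS bound to 50 micrograms of protein
```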
Proteins are usually analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE), by native gel electrophoresis, by preparative native gel electrophoresis (QPNC-PAGE), or by 2-D electrophoresis.
Characterization through ligand interaction may be performed by electroblotting or by affinity electrophoresis in agarose or by capillary electrophoresis as for estimation of binding constants and determination of structural features like glycan content through lectin binding.
Nanoparticles
A novel application of gel electrophoresis is the separation or characterization of metal or metal oxide nanoparticles (e.g. Au, Ag, ZnO, SiO2) with respect to their size, shape, or surface chemistry. The aim is to obtain a more homogeneous sample (e.g. a narrower particle size distribution), which can then be used in further products or processes (e.g. self-assembly processes). For the separation of nanoparticles within a gel, the key parameter is the ratio of the particle size to the mesh size, for which two migration mechanisms have been identified: the unrestricted mechanism, where the particle size is much smaller than the mesh size, and the restricted mechanism, where the particle size is comparable to the mesh size.
History
1930s – first reports of the use of sucrose for gel electrophoresis; moving-boundary electrophoresis (Tiselius)
1950 – introduction of "zone electrophoresis" (Tiselius); paper electrophoresis
1955 – introduction of starch gels, mediocre separation (Smithies)
1959 – introduction of acrylamide gels, allowing accurate control of parameters such as pore size and stability (Raymond and Weintraub); discontinuous electrophoresis (Ornstein and Davis)
1965 – introduction of free-flow electrophoresis (Hannig)
1966 – first use of agar gels
1969 – introduction of denaturing agents especially SDS separation of protein subunit (Weber and Osborn)
1970 – Lämmli separated 28 components of T4 phage using a stacking gel and SDS
1972 – agarose gels with ethidium bromide stain
1975 – 2-dimensional gels (O’Farrell); isoelectric focusing, then SDS gel electrophoresis
1977 – sequencing gels (Sanger)
1981 – introduction of capillary electrophoresis (Jorgenson and Lukacs)
1984 – pulsed-field gel electrophoresis enables separation of large DNA molecules (Schwartz and Cantor)
2004 – introduction of a standardized polymerization time for acrylamide gel solutions to optimize gel properties, in particular gel stability (Kastenholz)
A 1959 book on electrophoresis by Milan Bier cites references from the 1800s. However, Oliver Smithies made significant contributions. Bier states: "The method of Smithies ... is finding wide application because of its unique separatory power." Taken in context, Bier clearly implies that Smithies' method is an improvement.
See also
History of electrophoresis
Electrophoretic mobility shift assay
Gel extraction
Isoelectric focusing
Pulsed field gel electrophoresis
Nonlinear frictiophoresis
Two-dimensional gel electrophoresis
SDD-AGE
QPNC-PAGE
Zymography
Fast parallel proteolysis
Free-flow electrophoresis
References
External links
Biotechniques Laboratory electrophoresis demonstration, from the University of Utah's Genetic Science Learning Center
Discontinuous native protein gel electrophoresis
Drinking straw electrophoresis
How to run a DNA or RNA gel
Animation of gel analysis of DNA restriction
Step by step photos of running a gel and extracting DNA
A typical method from wikiversity
Protein methods
Molecular biology
Laboratory techniques
Electrophoresis
Polymerase chain reaction
Acetyl group

In organic chemistry, an acetyl group is a functional group denoted by the chemical formula −COCH3 and the structure −C(=O)−CH3. It is sometimes represented by the symbol Ac (not to be confused with the element actinium). In IUPAC nomenclature, an acetyl group is called an ethanoyl group.
An acetyl group contains a methyl group that is single-bonded to a carbonyl, making it an acyl group. The carbonyl center of an acyl radical has one non-bonded electron with which it forms a chemical bond to the remainder (denoted with the letter R) of the molecule.
The acetyl moiety is a component of many organic compounds, including acetic acid, the neurotransmitter acetylcholine, acetyl-CoA, acetylcysteine, acetaminophen (also known as paracetamol), and acetylsalicylic acid (also known as aspirin).
Acetylation
The process of adding an acetyl group into a molecule is called acetylation. An example of an acetylation reaction is the conversion of glycine to N-acetylglycine:
H2NCH2CO2H + (CH3CO)2O -> CH3C(O)NHCH2CO2H + CH3CO2H
In biology
Enzymes which perform acetylation on proteins or other biomolecules are known as acetyltransferases. In biological organisms, acetyl groups are commonly transferred from acetyl-CoA to other organic molecules. Acetyl-CoA is an intermediate in the biological synthesis and in the breakdown of many organic molecules. Acetyl-CoA is also created during the second stage of cellular respiration (pyruvate decarboxylation) by the action of pyruvate dehydrogenase on pyruvic acid.
Proteins are often modified via acetylation, for various purposes. For example, acetylation of histones by histone acetyltransferases (HATs) results in an expansion of local chromatin structure, allowing transcription to occur by enabling RNA polymerase to access DNA. However, removal of the acetyl group by histone deacetylases (HDACs) condenses the local chromatin structure, thereby preventing transcription.
In synthetic organic and pharmaceutical chemistry
Acetylation can be achieved by chemists using a variety of methods, most commonly with the use of acetic anhydride or acetyl chloride, often in the presence of a tertiary or aromatic amine base.
Pharmacology
Acetylated organic molecules exhibit increased ability to cross the selectively permeable blood–brain barrier. Acetylation helps a given drug reach the brain more quickly, making the drug's effects more intense and increasing the effectiveness of a given dose. The acetyl group in acetylsalicylic acid (aspirin) enhances its effectiveness relative to the natural anti-inflammatory agent salicylic acid. In similar manner, acetylation converts the natural painkiller morphine into the far more potent heroin (diacetylmorphine).
There is some evidence that acetyl-L-carnitine may be more effective for some applications than L-carnitine. Acetylation of resveratrol holds promise as one of the first anti-radiation medicines for human populations.
Etymology
The term "acetyl" was coined by the German chemist Justus von Liebig in 1839 CE to describe what he incorrectly believed to be the radical of acetic acid (the main component of vinegar, aside from water), which is now known as the vinyl group (coined in 1851 CE); "acetyl" is derived from the Latin acētum, meaning "vinegar." When it was shown that Liebig's theory was wrong and acetic acid had a different radical, his name was carried over to the correct one, but the name of acetylene (coined in 1860 CE) was retained.
See also
Acetaldehyde
Acetoxy group
Histone acetylation and deacetylation
Polyoxymethylene plastic, a.k.a. acetal resin, a thermoplastic
References
Acyl groups
CRC Handbook of Chemistry and Physics

The CRC Handbook of Chemistry and Physics is a comprehensive one-volume reference resource for science research. First published in 1914, it is currently in its 104th edition, published in 2023. It is known colloquially among chemists as the "Rubber Bible", as CRC originally stood for "Chemical Rubber Company".
As late as the 1962–1963 edition (3604 pages), the Handbook contained myriad information for every branch of science and engineering. Sections in that edition include: Mathematics, Properties and Physical Constants, Chemical Tables, Properties of Matter, Heat, Hygrometric and Barometric Tables, Sound, Quantities and Units, and Miscellaneous. Mathematical Tables from Handbook of Chemistry and Physics was originally published as a supplement to the handbook up to the 9th edition (1952); afterwards, the 10th edition (1956) was published separately as CRC Standard Mathematical Tables. Earlier editions included sections such as "Antidotes of Poisons", "Rules for Naming Organic Compounds", "Surface Tension of Fused Salts", "Percent Composition of Anti-Freeze Solutions", "Spark-gap Voltages", "Greek Alphabet", "Musical Scales", "Pigments and Dyes", "Comparison of Tons and Pounds", "Twist Drill and Steel Wire Gauges" and "Properties of the Earth's Atmosphere at Elevations up to 160 Kilometers". Later editions focus almost exclusively on chemistry and physics topics and eliminated much of the more "common" information.
CRC Press is a leading publisher of engineering handbooks, references, and textbooks across virtually all scientific disciplines.
Contents by edition
7th edition
Mathematical Tables
General Chemical Tables
Properties of Matter
Heat
Hygrometric and Barometric Tables
Sound
Electricity and Magnetism
Light
Miscellaneous Tables
Definitions and Formulae
Laboratory Arts and Recipes
Photographic Formulae
Measures and Units
Wire Tables
Apparatus Lists
Problems
Index
22nd–44th editions
Section A: Mathematical Tables
Section B: Properties and Physical Constants
Section C: General Chemical Tables/Specific Gravity and Properties of Matter
Section D: Heat and Hygrometry/Sound/Electricity and Magnetism/Light
Section E: Quantities and Units/Miscellaneous
Index
45th–70th editions
Section A: Mathematical Tables
Section B: Elements and Inorganic Compounds
Section C: Organic Compounds
Section D: General Chemical
Section E: General Physical Constants
Section F: Miscellaneous
Index
71st–102nd editions
Section 1: Basic Constants, Units, and Conversion Factors
Section 2: Symbols, Terminology, and Nomenclature
Section 3: Physical Constants of Organic Compounds
Section 4: Properties of the Elements and Inorganic Compounds
Section 5: Thermochemistry, Electrochemistry, and Kinetics (or Thermo, Electro & Solution Chemistry)
Section 6: Fluid Properties
Section 7: Biochemistry
Section 8: Analytical Chemistry
Section 9: Molecular Structure and Spectroscopy
Section 10: Atomic, Molecular, and Optical Physics
Section 11: Nuclear and Particle Physics
Section 12: Properties of Solids
Section 13: Polymer Properties
Section 14: Geophysics, Astronomy, and Acoustics
Section 15: Practical Laboratory Data
Section 16: Health and Safety Information
Appendix A: Mathematical Tables
Appendix B: CAS Registry Numbers and Molecular Formulas of Inorganic Substances (72nd–75th)
Appendix C: Sources of Physical and Chemical Data (83rd–)
Index
See also
CRC Standard Mathematical Tables
References
External links
PDF copy of the 8th edition, published in 1920
Handbook of Chemistry and Physics online
Tables Relocated or Removed from CRC Handbook of Chemistry and Physics, 71st through 87th Editions
Handbooks and manuals
Chemistry books
Physics books
Encyclopedias of science
CRC Press books
1914 non-fiction books
Receptor (biochemistry)

In biochemistry and pharmacology, receptors are chemical structures, composed of protein, that receive and transduce signals that may be integrated into biological systems. These signals are typically chemical messengers which bind to a receptor and produce physiological responses such as change in the electrical activity of a cell. For example, GABA, an inhibitory neurotransmitter, inhibits electrical activity of neurons by binding to GABA receptors. There are three main ways the action of the receptor can be classified: relay of signal, amplification, or integration. Relaying sends the signal onward, amplification increases the effect of a single ligand, and integration allows the signal to be incorporated into another biochemical pathway.
Receptor proteins can be classified by their location. Cell surface receptors, also known as transmembrane receptors, include ligand-gated ion channels, G protein-coupled receptors, and enzyme-linked hormone receptors. Intracellular receptors are those found inside the cell, and include cytoplasmic receptors and nuclear receptors. A molecule that binds to a receptor is called a ligand and can be a protein, peptide (short protein), or another small molecule, such as a neurotransmitter, hormone, pharmaceutical drug, toxin, calcium ion or parts of the outside of a virus or microbe. An endogenously produced substance that binds to a particular receptor is referred to as its endogenous ligand. E.g. the endogenous ligand for the nicotinic acetylcholine receptor is acetylcholine, but it can also be activated by nicotine and blocked by curare. Receptors of a particular type are linked to specific cellular biochemical pathways that correspond to the signal. While numerous receptors are found in most cells, each receptor will only bind with ligands of a particular structure. This has been analogously compared to how locks will only accept specifically shaped keys. When a ligand binds to a corresponding receptor, it activates or inhibits the receptor's associated biochemical pathway, which may also be highly specialised.
Receptor proteins can be also classified by the property of the ligands. Such classifications include chemoreceptors, mechanoreceptors, gravitropic receptors, photoreceptors, magnetoreceptors and gasoreceptors.
Structure
The structures of receptors are very diverse and include the following major categories, among others:
Type 1: Ligand-gated ion channels (ionotropic receptors) – These receptors are typically the targets of fast neurotransmitters such as acetylcholine (nicotinic) and GABA; activation of these receptors results in changes in ion movement across a membrane. They have a heteromeric structure in that each subunit consists of the extracellular ligand-binding domain and a transmembrane domain which includes four transmembrane alpha helices. The ligand-binding cavities are located at the interface between the subunits.
Type 2: G protein-coupled receptors (metabotropic receptors) – This is the largest family of receptors and includes the receptors for several hormones and slow transmitters e.g. dopamine, metabotropic glutamate. They are composed of seven transmembrane alpha helices. The loops connecting the alpha helices form extracellular and intracellular domains. The binding-site for larger peptide ligands is usually located in the extracellular domain whereas the binding site for smaller non-peptide ligands is often located between the seven alpha helices and one extracellular loop. The aforementioned receptors are coupled to different intracellular effector systems via G proteins. G proteins are heterotrimers made up of 3 subunits: α (alpha), β (beta), and γ (gamma). In the inactive state, the three subunits associate together and the α-subunit binds GDP. G protein activation causes a conformational change, which leads to the exchange of GDP for GTP. GTP-binding to the α-subunit causes dissociation of the β- and γ-subunits. Furthermore, G proteins are divided into four main classes based on the primary sequence of their α-subunit: Gs, Gi, Gq and G12.
Type 3: Kinase-linked and related receptors (see "Receptor tyrosine kinase" and "Enzyme-linked receptor") – They are composed of an extracellular domain containing the ligand binding site and an intracellular domain, often with enzymatic-function, linked by a single transmembrane alpha helix. The insulin receptor is an example.
Type 4: Nuclear receptors – While they are called nuclear receptors, they are actually located in the cytoplasm and migrate to the nucleus after binding with their ligands. They are composed of a C-terminal ligand-binding region, a core DNA-binding domain (DBD) and an N-terminal domain that contains the AF1(activation function 1) region. The core region has two zinc fingers that are responsible for recognizing the DNA sequences specific to this receptor. The N terminus interacts with other cellular transcription factors in a ligand-independent manner; and, depending on these interactions, it can modify the binding/activity of the receptor. Steroid and thyroid-hormone receptors are examples of such receptors.
Membrane receptors may be isolated from cell membranes by complex extraction procedures using solvents, detergents, and/or affinity purification.
The structures and actions of receptors may be studied by using biophysical methods such as X-ray crystallography, NMR, circular dichroism, and dual polarisation interferometry. Computer simulations of the dynamic behavior of receptors have been used to gain understanding of their mechanisms of action.
Binding and activation
Ligand binding is an equilibrium process. Ligands bind to receptors and dissociate from them according to the law of mass action, expressed in the following equation for a ligand L and receptor R:
L + R ⇌ LR, with equilibrium dissociation constant Kd = [L][R] / [LR].
The brackets around chemical species denote their concentrations.
One measure of how well a molecule fits a receptor is its binding affinity, which is inversely related to the dissociation constant Kd. A good fit corresponds with high affinity and low Kd. The final biological response (e.g. second messenger cascade, muscle-contraction), is only achieved after a significant number of receptors are activated.
Affinity is a measure of the tendency of a ligand to bind to its receptor. Efficacy is a measure of the ability of the bound ligand to activate its receptor.
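Under the mass-action equilibrium above, the fraction of receptors occupied at a given ligand concentration is [L] / ([L] + Kd). The short sketch below evaluates this for two illustrative, assumed Kd values:

```python
def fractional_occupancy(ligand_conc, kd):
    """Fraction of receptors bound at equilibrium, [L] / ([L] + Kd), from the law of mass action."""
    return ligand_conc / (ligand_conc + kd)

# Illustrative numbers: a high-affinity ligand (Kd = 1 nM) vs a low-affinity one (Kd = 1000 nM),
# both at a ligand concentration of 10 nM.
for kd_nM in (1.0, 1000.0):
    print(f"Kd = {kd_nM:g} nM -> occupancy = {fractional_occupancy(10.0, kd_nM):.2f}")
# Kd = 1 nM    -> occupancy = 0.91
# Kd = 1000 nM -> occupancy = 0.01
```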
Agonists versus antagonists
Not every ligand that binds to a receptor also activates that receptor. The following classes of ligands exist:
(Full) agonists are able to activate the receptor and result in a strong biological response. The natural endogenous ligand with the greatest efficacy for a given receptor is by definition a full agonist (100% efficacy).
Partial agonists do not activate receptors with maximal efficacy, even with maximal binding, causing partial responses compared to those of full agonists (efficacy between 0 and 100%).
Antagonists bind to receptors but do not activate them. This results in a receptor blockade, inhibiting the binding of agonists and inverse agonists. Receptor antagonists can be competitive (or reversible), and compete with the agonist for the receptor, or they can be irreversible antagonists that form covalent bonds (or extremely high affinity non-covalent bonds) with the receptor and completely block it. The proton pump inhibitor omeprazole is an example of an irreversible antagonist. The effects of irreversible antagonism can only be reversed by synthesis of new receptors.
Inverse agonists reduce the activity of receptors by inhibiting their constitutive activity (negative efficacy).
Allosteric modulators: They do not bind to the agonist-binding site of the receptor but instead on specific allosteric binding sites, through which they modify the effect of the agonist. For example, benzodiazepines (BZDs) bind to the BZD site on the GABAA receptor and potentiate the effect of endogenous GABA.
Note that the idea of receptor agonism and antagonism only refers to the interaction between receptors and ligands and not to their biological effects.
Constitutive activity
A receptor which is capable of producing a biological response in the absence of a bound ligand is said to display "constitutive activity". The constitutive activity of a receptor may be blocked by an inverse agonist. The anti-obesity drugs rimonabant and taranabant are inverse agonists at the cannabinoid CB1 receptor and though they produced significant weight loss, both were withdrawn owing to a high incidence of depression and anxiety, which are believed to relate to the inhibition of the constitutive activity of the cannabinoid receptor.
The GABAA receptor has constitutive activity and conducts some basal current in the absence of an agonist. This allows beta carboline to act as an inverse agonist and reduce the current below basal levels.
Mutations in receptors that result in increased constitutive activity underlie some inherited diseases, such as precocious puberty (due to mutations in luteinizing hormone receptors) and hyperthyroidism (due to mutations in thyroid-stimulating hormone receptors).
Theories of drug-receptor interaction
Occupation
Early forms of the receptor theory of pharmacology stated that a drug's effect is directly proportional to the number of receptors that are occupied. Furthermore, a drug effect ceases as a drug-receptor complex dissociates.
Ariëns & Stephenson introduced the terms "affinity" & "efficacy" to describe the action of ligands bound to receptors.
Affinity: The ability of a drug to combine with a receptor to create a drug-receptor complex.
Efficacy: The ability of drug to initiate a response after the formation of drug-receptor complex.
Rate
In contrast to the accepted Occupation Theory, Rate Theory proposes that the activation of receptors is directly proportional to the total number of encounters of a drug with its receptors per unit time. Pharmacological activity is directly proportional to the rates of dissociation and association, not the number of receptors occupied:
Agonist: A drug with a fast association and a fast dissociation.
Partial-agonist: A drug with an intermediate association and an intermediate dissociation.
Antagonist: A drug with a fast association & slow dissociation
Induced-fit
As a drug approaches a receptor, the receptor alters the conformation of its binding site to produce the drug-receptor complex.
Spare Receptors
In some receptor systems (e.g. acetylcholine at the neuromuscular junction in smooth muscle), agonists are able to elicit maximal response at very low levels of receptor occupancy (<1%). Thus, that system has spare receptors or a receptor reserve. This arrangement produces an economy of neurotransmitter production and release.
Receptor regulation
Cells can increase (upregulate) or decrease (downregulate) the number of receptors to a given hormone or neurotransmitter to alter their sensitivity to different molecules. This is a locally acting feedback mechanism.
Change in the receptor conformation such that binding of the agonist does not activate the receptor. This is seen with ion channel receptors.
Uncoupling of the receptor effector molecules is seen with G protein-coupled receptors.
Receptor sequestration (internalization), e.g. in the case of hormone receptors.
Examples and ligands
The ligands for receptors are as diverse as their receptors. GPCRs (7TMs) are a particularly vast family, with at least 810 members. There are also LGICs for at least a dozen endogenous ligands, and many more receptors possible through different subunit compositions. Some common examples of ligands and receptors include:
Ion channels and G protein coupled receptors
Some example ionotropic (LGIC) and metabotropic (specifically, GPCRs) receptors are shown in the table below. The chief neurotransmitters are glutamate and GABA; other neurotransmitters are neuromodulatory. This list is by no means exhaustive.
Enzyme linked receptors
Enzyme linked receptors include Receptor tyrosine kinases (RTKs), serine/threonine-specific protein kinase, as in bone morphogenetic protein and guanylate cyclase, as in atrial natriuretic factor receptor. Of the RTKs, 20 classes have been identified, with 58 different RTKs as members. Some examples are shown below:
Intracellular Receptors
Receptors may be classed based on their mechanism or on their position in the cell. 4 examples of intracellular LGIC are shown below:
Role in health and disease
In genetic disorders
Many genetic disorders involve hereditary defects in receptor genes. Often, it is hard to determine whether the receptor is nonfunctional or the hormone is produced at decreased level; this gives rise to the "pseudo-hypo-" group of endocrine disorders, where there appears to be a decreased hormonal level while in fact it is the receptor that is not responding sufficiently to the hormone.
In the immune system
The main receptors in the immune system are pattern recognition receptors (PRRs), toll-like receptors (TLRs), killer activated and killer inhibitor receptors (KARs and KIRs), complement receptors, Fc receptors, B cell receptors and T cell receptors.
See also
Ki Database
Ion channel linked receptors
Neuropsychopharmacology
Schild regression for ligand receptor inhibition
Signal transduction
Stem cell marker
List of MeSH codes (D12.776)
Receptor theory
Notes
References
External links
IUPHAR GPCR Database and Ion Channels Compendium
Human plasma membrane receptome
Cell biology
Cell signaling
Membrane biology
Ordinary differential equation

In mathematics, an ordinary differential equation (ODE) is a differential equation (DE) dependent on only a single independent variable. As with other DE, its unknown(s) consists of one (or more) function(s) and involves the derivatives of those functions. The term "ordinary" is used in contrast with partial differential equations (PDEs), which may involve more than one independent variable, and, less commonly, in contrast with stochastic differential equations (SDEs), where the progression is random.
Differential equations
A linear differential equation is a differential equation that is defined by a linear polynomial in the unknown function and its derivatives, that is, an equation of the form
a_0(x) y + a_1(x) y' + a_2(x) y'' + ... + a_n(x) y^(n) + b(x) = 0,
where a_0(x), ..., a_n(x) and b(x) are arbitrary differentiable functions that do not need to be linear, and y', ..., y^(n) are the successive derivatives of the unknown function y of the variable x.
Among ordinary differential equations, linear differential equations play a prominent role for several reasons. Most elementary and special functions that are encountered in physics and applied mathematics are solutions of linear differential equations (see Holonomic function). When physical phenomena are modeled with non-linear equations, they are generally approximated by linear differential equations for an easier solution. The few non-linear ODEs that can be solved explicitly are generally solved by transforming the equation into an equivalent linear ODE (see, for example Riccati equation).
Some ODEs can be solved explicitly in terms of known functions and integrals. When that is not possible, the equation for computing the Taylor series of the solutions may be useful. For applied problems, numerical methods for ordinary differential equations can supply an approximation of the solution.
Background
Ordinary differential equations (ODEs) arise in many contexts of mathematics and social and natural sciences. Mathematical descriptions of change use differentials and derivatives. Various differentials, derivatives, and functions become related via equations, such that a differential equation is a result that describes dynamically changing phenomena, evolution, and variation. Often, quantities are defined as the rate of change of other quantities (for example, derivatives of displacement with respect to time), or gradients of quantities, which is how they enter differential equations.
Specific mathematical fields include geometry and analytical mechanics. Scientific fields include much of physics and astronomy (celestial mechanics), meteorology (weather modeling), chemistry (reaction rates), biology (infectious diseases, genetic variation), ecology and population modeling (population competition), economics (stock trends, interest rates and the market equilibrium price changes).
Many mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert, and Euler.
A simple example is Newton's second law of motion: the relationship between the displacement x and the time t of an object under the force F is given by the differential equation
m d^2x(t)/dt^2 = F(x(t)),
which constrains the motion of a particle of constant mass m. In general, F is a function of the position x(t) of the particle at time t. The unknown function x(t) appears on both sides of the differential equation, and is indicated in the notation F(x(t)).
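As an illustration of how such an equation is handled numerically, the sketch below integrates Newton's second law for an assumed linear restoring force F(x) = -kx (a harmonic oscillator) using SciPy; the mass and spring constant are arbitrary example values:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 4.0           # assumed mass and spring constant for the example force F(x) = -k*x

def rhs(t, state):
    x, v = state           # rewrite m*x'' = F(x) as the first-order system x' = v, v' = F(x)/m
    return [v, -k * x / m]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)
t = np.linspace(0, 10, 5)
print(sol.sol(t)[0])       # x(t); for these parameters it stays close to cos(2*t)
```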
Definitions
In what follows, y is a dependent variable representing an unknown function of the independent variable x. The notation for differentiation varies depending upon the author and upon which notation is most useful for the task at hand. In this context, the Leibniz's notation is more useful for differentiation and integration, whereas Lagrange's notation is more useful for representing higher-order derivatives compactly, and Newton's notation is often used in physics for representing derivatives of low order with respect to time.
General definition
Given F, a function of x, y, and derivatives of y, an equation of the form
y^(n) = F(x, y, y', ..., y^(n-1))
is called an explicit ordinary differential equation of order n.
More generally, an implicit ordinary differential equation of order n takes the form:
F(x, y, y', y'', ..., y^(n)) = 0
There are further classifications, notably autonomous equations, in which F does not depend explicitly on x, and linear equations, of the form described above.
System of ODEs
A number of coupled differential equations form a system of equations. If y is a vector whose elements are functions, y(x) = [y1(x), y2(x), ..., ym(x)], and F is a vector-valued function of y and its derivatives, then
y^(n) = F(x, y, y', ..., y^(n-1))
is an explicit system of ordinary differential equations of order n and dimension m. In column vector form, the components satisfy y_i^(n) = f_i(x, y, y', ..., y^(n-1)) for i = 1, ..., m.
These are not necessarily linear. The implicit analogue is:
F(x, y, y', ..., y^(n)) = 0
where 0 = (0, 0, ..., 0) is the zero vector.
For a system of the form , some sources also require that the Jacobian matrix be non-singular in order to call this an implicit ODE [system]; an implicit ODE system satisfying this Jacobian non-singularity condition can be transformed into an explicit ODE system. In the same sources, implicit ODE systems with a singular Jacobian are termed differential algebraic equations (DAEs). This distinction is not merely one of terminology; DAEs have fundamentally different characteristics and are generally more involved to solve than (nonsingular) ODE systems. Presumably for additional derivatives, the Hessian matrix and so forth are also assumed non-singular according to this scheme, although note that any ODE of order greater than one can be (and usually is) rewritten as system of ODEs of first order, which makes the Jacobian singularity criterion sufficient for this taxonomy to be comprehensive at all orders.
The behavior of a system of ODEs can be visualized through the use of a phase portrait.
Solutions
Given a differential equation
F(x, y, y', ..., y^(n)) = 0,
a function u: I ⊂ R → R, where I is an interval, is called a solution or integral curve for F, if u is n-times differentiable on I, and
F(x, u, u', ..., u^(n)) = 0 for all x in I.
Given two solutions u: J ⊂ R → R and v: I ⊂ R → R, u is called an extension of v if I ⊂ J and
u(x) = v(x) for all x in I.
A solution that has no extension is called a maximal solution. A solution defined on all of R is called a global solution.
A general solution of an nth-order equation is a solution containing n arbitrary independent constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill set initial conditions or boundary conditions. A singular solution is a solution that cannot be obtained by assigning definite values to the arbitrary constants in the general solution.
In the context of linear ODE, the terminology particular solution can also refer to any solution of the ODE (not necessarily satisfying the initial conditions), which is then added to the homogeneous solution (a general solution of the homogeneous ODE), which then forms a general solution of the original ODE. This is the terminology used in the guessing method section in this article, and is frequently used when discussing the method of undetermined coefficients and variation of parameters.
Solutions of finite duration
For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that from its own dynamics, the system will reach the value zero at an ending time and stays there in zero forever after. These finite-duration solutions can't be analytical functions on the whole real line, and because they will be non-Lipschitz functions at their ending time, they are not included in the uniqueness theorem of solutions of Lipschitz differential equations.
As an example, the equation
y' = -sgn(y) sqrt(|y|), y(0) = 1,
admits the finite-duration solution
y(t) = (1/4) (1 - t/2 + |1 - t/2|)^2,
which equals (1 - t/2)^2 for t ≤ 2 and is identically zero for t ≥ 2.
Theories
Singular solutions
The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century has it received special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (from 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.
Reduction to quadratures
The primitive attempt in dealing with differential equations had in view a reduction to quadratures. As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the nth degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that complex differential equations require complex numbers. Hence, analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter, the real question was no longer whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and, if so, what are the characteristic properties.
Fuchsian theory
Two memoirs by Fuchs inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869. His method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve that remains unchanged under a rational transformation, Clebsch proposed to classify the transcendent functions defined by differential equations according to the invariant properties of the corresponding surfaces f = 0 under rational one-to-one transformations.
Lie's theory
From 1870, Sophus Lie's work put the theory of differential equations on a better foundation. He showed that the integration theories of the older mathematicians can, using Lie groups, be referred to a common source, and that ordinary differential equations that admit the same infinitesimal transformations present comparable integration difficulties. He also emphasized the subject of transformations of contact.
Lie's group theory of differential equations has two principal merits: (1) it unifies the many ad hoc methods known for solving differential equations, and (2) it provides powerful new ways to find solutions. The theory has applications to both ordinary and partial differential equations.
A general solution approach uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory). Continuous group theory, Lie algebras, and differential geometry are used to understand the structure of linear and non-linear (partial) differential equations for generating integrable equations, to find its Lax pairs, recursion operators, Bäcklund transform, and finally finding exact analytic solutions to DE.
Symmetry methods have been applied to differential equations that arise in mathematics, physics, engineering, and other disciplines.
Sturm–Liouville theory
Sturm–Liouville theory is a theory of a special type of second-order linear ordinary differential equation. Their solutions are based on eigenvalues and corresponding eigenfunctions of linear operators defined via second-order homogeneous linear equations. The problems are identified as Sturm–Liouville problems (SLP) and are named after J. C. F. Sturm and J. Liouville, who studied them in the mid-1800s. SLPs have an infinite number of eigenvalues, and the corresponding eigenfunctions form a complete, orthogonal set, which makes orthogonal expansions possible. This is a key idea in applied mathematics, physics, and engineering. SLPs are also useful in the analysis of certain partial differential equations.
Existence and uniqueness of solutions
There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs both locally and globally. The two main theorems are
{| class="wikitable"
|-
! Theorem
! Assumption
! Conclusion
|-
|Peano existence theorem
||F continuous
||local existence only
|-
|Picard–Lindelöf theorem
||F Lipschitz continuous
||local existence and uniqueness
|-
|}
In their basic form both of these theorems only guarantee local results, though the latter can be extended to give a global result, for example, if the conditions of Grönwall's inequality are met.
Also, uniqueness theorems like the Lipschitz one above do not apply to DAE systems, which may have multiple solutions stemming from their (non-linear) algebraic part alone.
Local existence and uniqueness theorem simplified
The theorem can be stated simply as follows. For the equation and initial value problem
y' = F(x, y), y(x_0) = y_0,
if F and ∂F/∂y are continuous in a closed rectangle
R = [x_0 - a, x_0 + a] × [y_0 - b, y_0 + b]
in the x-y plane, where a and b are real (symbolically: a, b ∈ R), × denotes the Cartesian product, and square brackets denote closed intervals, then there is an interval
I = [x_0 - h, x_0 + h] ⊂ [x_0 - a, x_0 + a]
for some h ∈ R where the solution to the above equation and initial value problem can be found. That is, there is a solution and it is unique. Since there is no restriction on F to be linear, this applies to non-linear equations that take the form F(x, y), and it can also be applied to systems of equations.
Global uniqueness and maximum domain of solution
When the hypotheses of the Picard–Lindelöf theorem are satisfied, then local existence and uniqueness can be extended to a global result. More precisely:
For each initial condition (x0, y0) there exists a unique maximum (possibly infinite) open interval
I_max = (x_-, x_+), with x_- < x_0 < x_+ and x_± ∈ R ∪ {±∞},
such that any solution that satisfies this initial condition is a restriction of the solution that satisfies this initial condition with domain I_max.
In the case that x_+ < ∞ (that is, the maximal interval is bounded above), there are exactly two possibilities:
explosion in finite time: |y(x)| → ∞ as x → x_+;
leaves domain of definition: (x, y(x)) approaches the boundary ∂Ω as x → x_+;
where Ω is the open set in which F is defined, and ∂Ω is its boundary.
Note that the maximum domain of the solution
is always an interval (to have uniqueness)
may be smaller than the whole real line R
may depend on the specific choice of (x0, y0).
Example.
y' = y^2, y(x_0) = y_0.
This means that F(x, y) = y^2, which is C^1 and therefore locally Lipschitz continuous, satisfying the Picard–Lindelöf theorem.
Even in such a simple setting, the maximum domain of solution cannot be all of R, since for y_0 ≠ 0 the solution is
y(x) = y_0 / (1 - y_0 (x - x_0)),
which has maximum domain
(-∞, x_0 + 1/y_0) if y_0 > 0, and (x_0 + 1/y_0, +∞) if y_0 < 0.
This shows clearly that the maximum interval may depend on the initial conditions. The domain of y could be taken as being R \ {x_0 + 1/y_0}, but this would lead to a domain that is not an interval, so that the side opposite to the initial condition would be disconnected from the initial condition, and therefore not uniquely determined by it.
The maximum domain is not R because
|y(x)| → ∞ as x approaches x_0 + 1/y_0,
which is one of the two possible cases according to the above theorem (explosion in finite time).
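The finite maximum domain can also be seen numerically. The sketch below integrates the special case y' = y^2 with y(0) = 1 (exact solution 1/(1 - x), which blows up at x = 1); an adaptive solver typically stalls just below the blow-up point rather than reaching the requested endpoint:

```python
from scipy.integrate import solve_ivp

# y' = y**2 with y(0) = 1 has the exact solution y(x) = 1/(1 - x), so its
# maximal interval of existence is (-inf, 1): |y| explodes as x approaches 1.
sol = solve_ivp(lambda x, y: y**2, (0.0, 2.0), [1.0], rtol=1e-8, atol=1e-10)

# The integrator cannot step past the singularity; it typically stops just below x = 1.
print(sol.success, sol.t[-1], sol.y[0, -1])
```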
Reduction of order
Differential equations are usually easier to solve if the order of the equation can be reduced.
Reduction to a first-order system
Any explicit differential equation of order n,
y^(n) = F(x, y, y', y'', ..., y^(n-1)),
can be written as a system of n first-order differential equations by defining a new family of unknown functions
y_i = y^(i-1), for i = 1, 2, ..., n.
The n-dimensional system of first-order coupled differential equations is then
y_1' = y_2, y_2' = y_3, ..., y_(n-1)' = y_n, y_n' = F(x, y_1, y_2, ..., y_n);
more compactly in vector notation:
y' = G(x, y),
where y = (y_1, ..., y_n) is the vector of unknowns and G maps (x, y_1, ..., y_n) to (y_2, ..., y_n, F(x, y_1, ..., y_n)).
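A small sketch of this reduction in code, for an assumed illustrative third-order equation y''' = -y; the helper builds the first-order vector field used by standard numerical solvers:

```python
def reduce_to_first_order(F, n):
    """Turn y^(n) = F(x, y, y', ..., y^(n-1)) into a first-order system y' = G(x, y).

    The returned function maps (x, [y1, ..., yn]) to [y2, ..., yn, F(x, y1, ..., yn)],
    which is the vector form accepted by standard numerical ODE solvers.
    """
    def system(x, y):
        return list(y[1:]) + [F(x, *y)]
    return system

# Illustrative third-order equation: y''' = -y, i.e. F(x, y, y', y'') = -y.
system = reduce_to_first_order(lambda x, y, dy, d2y: -y, 3)
print(system(0.0, [1.0, 0.0, 0.0]))  # [0.0, 0.0, -1.0]
```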
Summary of exact solutions
Some differential equations have solutions that can be written in an exact and closed form. Several important classes are given here.
In the table below, P(x), Q(x), P(y), Q(y), and M(x, y), N(x, y) are any integrable functions of x or y as indicated; b and c are given real constants; and C_1, C_2, ... are arbitrary constants (complex in general). The differential equations are in their equivalent and alternative forms that lead to the solution through integration.
In the integral solutions, λ and ε are dummy variables of integration (the continuum analogues of indices in summation), and the notation ∫^x F(λ) dλ just means to integrate F(λ) with respect to λ, then after the integration substitute λ = x, without adding constants (explicitly stated).
Separable equations
General first-order equations
General second-order equations
Linear to the nth order equations
The guessing method
When all other methods for solving an ODE fail, or in the cases where we have some intuition about what the solution to a DE might look like, it is sometimes possible to solve a DE simply by guessing the solution and validating that it is correct. To use this method, we simply guess a solution to the differential equation, and then plug the solution into the differential equation to validate whether it satisfies the equation. If it does, then we have a particular solution to the DE; otherwise, we start over again and try another guess. For instance, we could guess that the solution to a DE has the form y = A e^(αt), since this is a very common solution that physically behaves in a sinusoidal way.
In the case of a first-order ODE that is non-homogeneous, we need to first find a solution to the homogeneous portion of the DE, otherwise known as the associated homogeneous equation, and then find a solution to the entire non-homogeneous equation by guessing. Finally, we add both of these solutions together to obtain the general solution to the ODE, that is:
y = y_h + y_p,
where y_h is the general solution of the associated homogeneous equation and y_p is the guessed particular solution.
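For a concrete illustrative case, the sketch below uses SymPy to solve the non-homogeneous first-order equation y' + 2y = cos(t); the returned general solution is the sum of the homogeneous part C1·e^(-2t) and a particular solution:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Non-homogeneous first-order linear ODE: y' + 2*y = cos(t).
ode = sp.Eq(y(t).diff(t) + 2 * y(t), sp.cos(t))
solution = sp.dsolve(ode, y(t))

# dsolve returns the general solution: the homogeneous part C1*exp(-2*t)
# plus one particular solution, here 2*cos(t)/5 + sin(t)/5.
print(solution)
```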
Software for ODE solving
Maxima, an open-source computer algebra system.
COPASI, a free (Artistic License 2.0) software package for the integration and analysis of ODEs.
MATLAB, a technical computing application (MATrix LABoratory)
GNU Octave, a high-level language, primarily intended for numerical computations.
Scilab, an open source application for numerical computation.
Maple, a proprietary application for symbolic calculations.
Mathematica, a proprietary application primarily intended for symbolic calculations.
SymPy, a Python package that can solve ODEs symbolically
Julia (programming language), a high-level language primarily intended for numerical computations.
SageMath, an open-source application that uses a Python-like syntax with a wide range of capabilities spanning several branches of mathematics.
SciPy, a Python package that includes an ODE integration module.
Chebfun, an open-source package, written in MATLAB, for computing with functions to 15-digit accuracy.
GNU R, an open source computational environment primarily intended for statistics, which includes packages for ODE solving.
See also
Boundary value problem
Examples of differential equations
Laplace transform applied to differential equations
List of dynamical systems and differential equations topics
Matrix differential equation
Method of undetermined coefficients
Recurrence relation
Notes
References
Polyanin, A. D. and V. F. Zaitsev, Handbook of Exact Solutions for Ordinary Differential Equations (2nd edition), Chapman & Hall/CRC Press, Boca Raton, 2003.
Bibliography
W. Johnson, A Treatise on Ordinary and Partial Differential Equations, John Wiley and Sons, 1913, in University of Michigan Historical Math Collection
Witold Hurewicz, Lectures on Ordinary Differential Equations, Dover Publications,
A. D. Polyanin, V. F. Zaitsev, and A. Moussiaux, Handbook of First Order Partial Differential Equations, Taylor & Francis, London, 2002.
D. Zwillinger, Handbook of Differential Equations (3rd edition), Academic Press, Boston, 1997.
External links
EqWorld: The World of Mathematical Equations, containing a list of ordinary differential equations with their solutions.
Online Notes / Differential Equations by Paul Dawkins, Lamar University.
Differential Equations, S.O.S. Mathematics.
A primer on analytical solution of differential equations from the Holistic Numerical Methods Institute, University of South Florida.
Ordinary Differential Equations and Dynamical Systems lecture notes by Gerald Teschl.
Notes on Diffy Qs: Differential Equations for Engineers An introductory textbook on differential equations by Jiri Lebl of UIUC.
Modeling with ODEs using Scilab A tutorial on how to model a physical system described by ODE using Scilab standard programming language by Openeering team.
Solving an ordinary differential equation in Wolfram|Alpha
Differential calculus
Voltammetry
Voltammetry is a category of electroanalytical methods used in analytical chemistry and various industrial processes. In voltammetry, information about an analyte is obtained by measuring the current as the potential is varied. The analytical data for a voltammetric experiment comes in the form of a voltammogram, which plots the current produced by the analyte versus the potential of the working electrode.
Theory
Voltammetry is the study of current as a function of applied potential. Voltammetric methods involve electrochemical cells, and investigate the reactions occurring at electrode/electrolyte interfaces. The reactivity of analytes in these half-cells is used to determine their concentration. It is considered a dynamic electrochemical method as the applied potential is varied over time and the corresponding changes in current are measured. Most experiments control the potential (volts) of an electrode in contact with the analyte while measuring the resulting current (amperes).
Electrochemical cells
Electrochemical cells are used in voltammetric experiments to drive the redox reaction of the analyte. Like other electrochemical cells, two half-cells are required, one to facilitate reduction and the other oxidation. The cell consists of an analyte solution, an ionic electrolyte, and two or three electrodes, with oxidation and reduction reactions occurring at the electrode/electrolyte interfaces. As a species is oxidized, the electrons produced pass through an external electric circuit and generate a current, acting as an electron source for reduction. The generated currents are Faradaic currents, which follow Faraday's law. As Faraday's law states that the number of moles of a substance, m, produced or consumed during an electrode process is proportional to the electric charge passed through the electrode, the faradaic currents allow analyte concentrations to be determined. Whether the analyte is reduced or oxidized depends on the analyte, but its reaction always occurs at the working/indicator electrode. Therefore, the working electrode potential varies as a function of the analyte concentration. A second auxiliary electrode completes the electric circuit. A third reference electrode provides a constant, baseline potential reading for the other two electrode potentials to be compared to.
Three electrode system
Voltammetry experiments investigate the half-cell reactivity of an analyte.
The potential is varied, either step by step or continuously, and the resulting current is measured as the dependent variable; the curves I = f(E) obtained in this way are called voltammograms.
The opposite arrangement, i.e., amperometry, is also possible but not common.
The shape of the curves depends on the speed of the potential variation (the nature of the driving force) and on whether the solution is stirred or quiescent (mass transfer).
To conduct such an experiment, at least two electrodes are required. The working electrode, which makes contact with the analyte, must apply the desired potential in a controlled way and facilitate the transfer of charge to and from the analyte. A second electrode acts as the other half of the cell. This second electrode must have a known potential to gauge the potential of the working electrode from; furthermore it must balance the charge added or removed by the working electrode. While this is a viable setup, it has a number of shortcomings. Most significantly, it is extremely difficult for an electrode to maintain a constant potential while passing current to counter redox events at the working electrode.
To solve this problem, the roles of supplying electrons and providing a reference potential are divided between two separate electrodes. The reference electrode is a half cell with a known reduction potential. Its only role is to act as reference for measuring and controlling the working electrode's potential and it does not pass any current. The auxiliary electrode passes the current required to balance the observed current at the working electrode. To achieve this current, the auxiliary will often swing to extreme potentials at the edges of the solvent window, where it oxidizes or reduces the solvent or supporting electrolyte. These electrodes, the working, reference, and auxiliary make up the modern three-electrode system.
There are many systems which have more electrodes, but their design principles are similar to the three-electrode system. For example, the rotating ring-disk electrode has two distinct and separate working electrodes, a disk, and a ring, which can be used to scan or hold potentials independently of each other. Both of these electrodes are balanced by a single reference and auxiliary combination for an overall four-electrode design. More complicated experiments may add working electrodes, reference, or auxiliary electrodes as required.
In practice it can be important to have a working electrode with known dimensions and surface characteristics. As a result, it is common to clean and polish working electrodes regularly. The auxiliary electrode can be almost anything as long as it doesn't react with the bulk of the analyte solution and conducts well. A common voltammetric method, polarography, uses mercury as the working electrode (for example the dropping mercury electrode, DME, or the hanging mercury drop electrode, HMDE) and also as the auxiliary electrode. The reference is the most complex of the three electrodes; there are a variety of standards used. For non-aqueous work, IUPAC recommends the use of the ferrocene/ferrocenium couple as an internal standard. In most voltammetry experiments, a bulk electrolyte (also known as a supporting electrolyte) is used to minimize solution resistance. It is possible to run an experiment without a bulk electrolyte, but the added resistance greatly reduces the accuracy of the results. With room temperature ionic liquids, the solvent can act as the electrolyte.
Voltammograms
A voltammogram is a graph that measures the current of an electrochemical cell as a function of the potential applied. This graph is used to determine the concentration and the standard potential of the analyte. To determine the concentration, values such as the limiting or peak current are read from the graph and applied to various mathematical models. After determining the concentration, the applied standard potential can be identified using the Nernst equation.
There are three main shapes for voltammograms, determined largely by the behavior of the diffusion layer. If the analyte is continuously stirred, the diffusion layer keeps a constant width and the voltammogram reaches a constant current: the current rises from the background residual current to the limiting current (il). If the mixture is not stirred, the width of the diffusion layer grows with time and the voltammogram instead shows a maximum peak current (ip), identified as the highest point on the graph. The third common shape records the change in current rather than the current itself; a maximum is still observed, but it represents the maximum change in current (ip).
Mathematical models
To determine analyte concentrations, mathematical models are required to link the applied potential and current measured over time. The Nernst equation relates electrochemical cell potential to the concentration ratio of the reduced and oxidized species in a logarithmic relationship. The Nernst equation is as follows:

E = E0 - (RT / zF) ln Q
Where:
E: Reduction potential
E0: Standard potential
R: Universal gas constant
T: Temperature in kelvin
z: Ion charge (moles of electrons)
F: Faraday constant
Q: Reaction quotient
This equation describes how the changes in applied potential will alter the concentration ratio. However, the Nernst equation is limited, as it is modeled without a time component and voltammetric experiments vary applied potential as a function of time. Other mathematical models, primarily the Butler-Volmer equation, the Tafel equation, and Fick's law address the time dependence.
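A minimal numerical sketch of the Nernst equation (the constants and the example couple below are assumed for illustration only):

```python
# Minimal illustrative sketch: evaluating the Nernst equation
#   E = E0 - (R*T)/(z*F) * ln(Q)
import math

R = 8.314462618    # universal gas constant, J/(mol*K)
F = 96485.33212    # Faraday constant, C/mol

def nernst_potential(E0, z, Q, T=298.15):
    """Reduction potential E (V) from the standard potential E0 (V), the number
    of electrons z, the reaction quotient Q, and the temperature T (K)."""
    return E0 - (R * T) / (z * F) * math.log(Q)

# Example (assumed values): a one-electron couple with E0 = 0.771 V and Q = 10
print(nernst_potential(E0=0.771, z=1, Q=10.0))   # about 0.712 V
```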
The Butler–Volmer equation relates concentration, potential, and current as a function of time. It describes the non-linear relationship between the electrode and electrolyte voltage difference and the electrical current. It helps make predictions about how the forward and backward redox reactions affect potential and influence the reactivity of the cell. This function includes a rate constant which accounts for the kinetics of the reaction. A compact version of the Butler-Volmer equation is as follows:

j = j0 { exp[ αa z F η / (RT) ] - exp[ -αc z F η / (RT) ] }
Where:
j: electrode current density, A/m2 (defined as j = I/S)
j0: exchange current density, A/m2
E: electrode potential, V
Eeq: equilibrium potential, V
T: absolute temperature, K
z: number of electrons involved in the electrode reaction
F: Faraday constant
R: universal gas constant
αc: so-called cathodic charge transfer coefficient, dimensionless
αa: so-called anodic charge transfer coefficient, dimensionless
η: activation overpotential (defined as η = E - Eeq).
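A minimal sketch of this expression (the parameter values below are assumed for illustration only):

```python
# Minimal illustrative sketch: the compact Butler-Volmer expression
#   j = j0 * ( exp(alpha_a*z*F*eta/(R*T)) - exp(-alpha_c*z*F*eta/(R*T)) )
# with eta = E - Eeq the activation overpotential.
import math

R = 8.314462618    # universal gas constant, J/(mol*K)
F = 96485.33212    # Faraday constant, C/mol

def butler_volmer(j0, eta, z=1, alpha_a=0.5, alpha_c=0.5, T=298.15):
    """Electrode current density j (A/m^2) from the exchange current density j0
    (A/m^2) and the overpotential eta (V); the other arguments are example values."""
    f = z * F / (R * T)
    return j0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))

# At small overpotentials the two exponentials nearly cancel; at large ones a
# single exponential dominates (the Tafel regime discussed next).
for eta in (0.01, 0.05, 0.20):
    print(eta, butler_volmer(j0=1.0, eta=eta))
```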
At high overpotentials, the Butler–Volmer equation simplifies to the Tafel equation. The Tafel equation relates the electrochemical currents to the overpotential exponentially, and is used to calculate the reaction rate. The overpotential is calculated at each electrode separately, and related to the voltammogram data to determine reaction rates. The Tafel equation for a single electrode is:

i = i0 exp( ±η / A )
Where:
the plus sign under the exponent refers to an anodic reaction, and a minus sign to a cathodic reaction
η: overpotential, V
A: "Tafel slope", V
i: current density, A/m2
i0: "exchange current density", A/m2.
As the redox species are oxidized and reduced at the electrodes, material accumulates at the electrode/electrolyte interface. Material accumulation creates a concentration gradient between the interface and the bulk solution. Fick's laws of diffusion are used to relate the diffusion of oxidized and reduced species to the faradaic current used to describe redox processes. Fick's first law is most commonly written in terms of moles, and is as follows:

J = -D (dφ/dx)
Where:
J: diffusion flux (in amount of substance per unit area per unit time)
D: diffusion coefficient or diffusivity. (in area per unit time)
φ: concentration (in amount of substance per unit volume)
x: position (in length)
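A minimal numerical sketch of this relation (all values below are assumed, order-of-magnitude numbers):

```python
# Minimal illustrative sketch: estimating the diffusion flux from Fick's first
# law, J = -D * d(phi)/dx, with a simple finite-difference approximation of the
# concentration gradient. All numbers are assumed example values.
D = 1.0e-9          # diffusion coefficient, m^2/s (typical for a small ion in water)
phi_surface = 0.0   # concentration at the electrode surface, mol/m^3
phi_bulk = 1.0      # bulk concentration, mol/m^3
delta = 1.0e-5      # thickness of the diffusion layer, m

gradient = (phi_bulk - phi_surface) / delta   # d(phi)/dx, mol/m^4
J = -D * gradient                             # flux toward the electrode, mol/(m^2 s)
print(J)                                      # -1e-04
```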
Types of voltammetry
History
The beginning of voltammetry was facilitated by the discovery of polarography in 1922 by the Nobel Prize–winning Czech chemist Jaroslav Heyrovský. Early voltammetric techniques had many problems, limiting their viability for everyday use in analytical chemistry. In polarography, these problems included the fact that mercury is oxidized at potentials more positive than +0.2 V, making it harder to analyze results for analytes in the positive region of potential. Another problem was the residual current obtained from the charging of the large capacitance of the electrode surface. When Heyrovský first recorded the dependence of the current flowing through the dropping mercury electrode on the applied potential in 1922, he took point-by-point measurements and plotted a current-voltage curve. This is considered to have been the first polarogram. To facilitate this process, he constructed, with M. Shikata, what is now known as a polarograph, which enabled him to record the same curve photographically in a matter of hours. He recognized the importance of potential and its control, as well as the opportunities offered by measuring the limiting currents. He was also an important part of the introduction of the dropping mercury electrode as a modern-day tool.
In 1942, the English electrochemist Archie Hickling (University of Leicester) built the first three-electrode potentiostat, which was an advancement for the field of electrochemistry. He used this potentiostat to control the voltage of an electrode. Meanwhile, in the late 1940s, the American biophysicist Kenneth Stewart Cole invented an electronic circuit which he called a voltage clamp. The voltage clamp was used to analyze the ionic conduction in nerves.
The 1960s and 1970s saw many advances in the theory, instrumentation, and the introduction of computer aided and controlled systems. Modern polarographic and voltammetric methods on mercury electrodes came about in three sections.
The first section includes the development of the mercury electrodes. The following electrodes were produced: dropping mercury electrode, mercury steaming electrode, hanging mercury drop electrode, static mercury drop electrode, mercury film electrode, mercury amalgam electrodes, mercury microelectrodes, chemically modified mercury electrodes, controlled growth mercury electrodes, and contractible mercury drop electrodes.
There was also an advancement of the measuring techniques used. These measuring techniques include: classical DC polarography, oscillopolarography, Kaloussek's switcher, AC polarography, tast polarography, normal pulse polarography, differential pulse polarography, square-wave voltammetry, cyclic voltammetry, anodic stripping voltammetry, convolution techniques, and elimination methods.
Lastly, there was also an advancement of preconcentration techniques that produced an increase in the sensitivity of the mercury electrodes. This came about through the development of anodic stripping voltammetry, cathodic stripping voltammetry and adsorptive stripping voltammetry.
These advancements improved sensitivity and created new analytical methods, which prompted the industry to respond with the production of cheaper potentiostats, electrodes, and cells that could be used effectively in routine analytical work.
Applications
Voltammetric sensors
A number of voltammetric systems are produced commercially for the determination of species that are of interest in industry and research. These devices are sometimes called electrodes but are actually complete voltammetric cells, which are better referred to as sensors. These sensors can be employed for the analysis of organic and inorganic analytes in various matrices.
The oxygen electrode
The determination of dissolved oxygen in a variety of aqueous environments, such as sea water, blood, sewage, effluents from chemical plants, and soils is of tremendous importance to industry, biomedical and environmental research, and clinical medicine. One of the most common and convenient methods for making such measurements is with the Clark oxygen sensor, which was patented by L.C. Clark, Jr. in 1956.
See also
Current–voltage characteristic
Neopolarogram
References
Further reading
External links
http://new.ametek.com/content-manager/files/PAR/App%20Note%20E-4%20-%20Electrochemical%20Analysis%20Techniques1.pdf
Electroanalytical methods
Biochemist
Biochemists are scientists who are trained in biochemistry. They study chemical processes and chemical transformations in living organisms. Biochemists study DNA, proteins and cell parts. The word "biochemist" is a portmanteau of "biological chemist."
Biochemists also research how certain chemical reactions happen in cells and tissues and observe and record the effects of products in food additives and medicines.
Biochemist researchers focus on planning and conducting research experiments, mainly for developing new products, updating existing products and analyzing said products. It is also the responsibility of a biochemist to present their research findings and create grant proposals to obtain funds for future research.
Biochemists study aspects of the immune system, the expressions of genes, isolating, analyzing, and synthesizing different products, mutations that lead to cancers, and manage laboratory teams and monitor laboratory work. Biochemists also have to have the capabilities of designing and building laboratory equipment and devise new methods of producing correct results for products.
The most common industry role is the development of biochemical products and processes. Identifying substances' chemical and physical properties in biological systems is of great importance, and can be carried out by doing various types of analysis. Biochemists must also prepare technical reports after collecting, analyzing and summarizing the information and trends found.
In biochemistry, researchers often break down complicated biological systems into their component parts. They study the effects of foods, drugs, allergens and other substances on living tissues; they research molecular biology, the study of life at the molecular level and the study of genes and gene expression; and they study chemical reactions in metabolism, growth, reproduction, and heredity, and apply techniques drawn from biotechnology and genetic engineering to help them in their research. About 75% work in either basic or applied research; those in applied research take basic research and employ it for the benefit of medicine, agriculture, veterinary science, environmental science, and manufacturing. Each of these fields allows specialization; for example, clinical biochemists can work in hospital laboratories to understand and treat diseases, and industrial biochemists can be involved in analytical research work, such as checking the purity of food and beverages.
Biochemists in the field of agriculture research the interactions between herbicides with plants. They examine the relationships of compounds, determining their ability to inhibit growth, and evaluate the toxicological effects surrounding life.
Biochemists also prepare pharmaceutical compounds for commercial distribution.
Modern biochemistry is considered a sub-discipline of the biological sciences, due to its increased reliance on, and training in accord with, modern molecular biology. Historically, even before the term biochemist was formally recognized, initial studies were performed by those trained in basic chemistry, but also by those trained as physicians.
Training
Some of the job skills and abilities that one needs to attain to be successful in this field of work include science, mathematics, reading comprehension, writing, and critical thinking. These skills are critical because of the nature of the experimental techniques of the occupation. One will also need to convey trends found in research in written and oral forms.
A degree in biochemistry or a related science such as chemistry is the minimum requirement for any work in this field. This is sufficient for a position as a technical assistant in industry or in academic settings. A Ph.D. (or equivalent) is generally required to pursue or direct independent research. To advance further in commercial environments, one may need to acquire skills in management.
Biochemists must pass a qualifying exam or a preliminary exam to continue their studies when pursuing a Ph.D. in biochemistry.
Biochemistry requires an understanding of organic and inorganic chemistry. All types of chemistry are required, with emphasis on biochemistry, organic chemistry and physical chemistry. Basic classes in biology, including microbiology, molecular biology, molecular genetics, cell biology, and genomics, are focused on. Some instruction in experimental techniques and quantification is also part of most curricula.
In the private industries for businesses, it is imperative to possess strong business management skills as well as communication skills. Biochemists must also be familiar with regulatory rules and management techniques.
Because medicine relies on most of the basic principles of biochemistry, early contemporary physicians were informally qualified to perform research on their own, mainly in this field (and today also in the related biomedical sciences).
Employment
Biochemists are typically employed in the life sciences, where they work in the pharmaceutical or biotechnology industry in a research role. They are also employed in academic institutes, where in addition to pursuing their research, they may also be involved with teaching undergraduates, training graduate students, and collaborating with post-doctoral fellows.
The U.S. Bureau of Labor Statistics (BLS) estimates that jobs for biochemists, whose employment statistics are combined with those of biophysicists, would increase by 31% between 2004 and 2014 because of the demand in medical research and development of new drugs and products, and the preservation of the environment.
Because of a biochemist's background in both biology and chemistry, they may also be employed in the medical, industrial, governmental, and environmental fields. Slightly more than half of all biological scientists are employed by federal, state, and local governments. The field of medicine includes nutrition, genetics, biophysics, and pharmacology; industry includes beverage and food technology, toxicology, and vaccine production; while the governmental and environmental fields include forensic science, wildlife management, marine biology, and viticulture.
The average income of a biochemist was $82,150 in 2017, with reported salaries ranging from about $44,640 to $153,810. The Federal Government in 2005 reported the average salaries in different fields associated with biochemistry and being a biochemist. General biological scientists in nonsupervisory, supervisory, and managerial positions earned an average salary of $69,908; microbiologists, $80,798; ecologists, $72,021; physiologists, $93,208; geneticists, $85,170; zoologists, $101,601; and botanists, $62,207.
See also
List of biochemists
References
External links
Biochemist Career Profile
Science occupations
Chemical biology
Chemical biology is a scientific discipline between the fields of chemistry and biology. The discipline involves the application of chemical techniques, analysis, and often small molecules produced through synthetic chemistry, to the study and manipulation of biological systems. Although often confused with biochemistry, which studies the chemistry of biomolecules and regulation of biochemical pathways within and between cells, chemical biology remains distinct by focusing on the application of chemical tools to address biological questions.
History
Although considered a relatively new scientific field, the term "chemical biology" has been in use since the early 20th century, and has roots in scientific discovery from the early 19th century. The term 'chemical biology' can be traced back to an early appearance in a book published by Alonzo E. Taylor in 1907 titled "On Fermentation", and was subsequently used in John B. Leathes' 1930 article titled "The Harveian Oration on The Birth of Chemical Biology". However, it is unclear when the term was first used.
Friedrich Wöhler's 1828 synthesis of urea is an early example of the application of synthetic chemistry to advance biology. It showed that biological compounds could be synthesized with inorganic starting materials and weakened the previous notion of vitalism, or that a 'living' source was required to produce organic compounds. Wöhler's work is often considered to be instrumental in the development of organic chemistry and natural product synthesis, both of which play a large part in modern chemical biology.
Friedrich Miescher's work during the late 19th century investigating the cellular contents of human leukocytes led to the discovery of 'nuclein', which would later be renamed DNA. After isolating the nuclein from the nucleus of leukocytes through protease digestion, Miescher used chemical techniques such as elemental analysis and solubility tests to determine the composition of nuclein. This work would lay the foundations for Watson and Crick's discovery of the double-helix structure of DNA.
The rising interest in chemical biology has led to several journals dedicated to the field. Nature Chemical Biology, created in 2005, and ACS Chemical Biology, created in 2006, are two of the most well-known journals in this field, with impact factors of 14.8 and 4.0 respectively.
Nobel laureates in chemical biology
Research areas
Glycobiology
Glycobiology is the study of the structure and function of carbohydrates. While DNA, RNA, and proteins are encoded at the genetic level, carbohydrates are not encoded directly from the genome, and thus require different tools for their study. By applying chemical principles to glycobiology, novel methods for analyzing and synthesizing carbohydrates can be developed. For example, cells can be supplied with synthetic variants of natural sugars to probe their function. Carolyn Bertozzi's research group has developed methods for site-specifically reacting molecules at the surface of cells via synthetic sugars.
Combinatorial chemistry
Combinatorial chemistry involves simultaneously synthesizing a large number of related compounds for high-throughput analysis. Chemical biologists are able to use principles from combinatorial chemistry in synthesizing active drug compounds and maximizing screening efficiency. Similarly, these principles can be used in areas of agriculture and food research, specifically in the syntheses of unnatural products and in generating novel enzyme inhibitors.
Peptide synthesis
Chemical synthesis of proteins is a valuable tool in chemical biology as it allows for the introduction of non-natural amino acids as well as residue-specific incorporation of "posttranslational modifications" such as phosphorylation, glycosylation, acetylation, and even ubiquitination. These properties are valuable for chemical biologists as non-natural amino acids can be used to probe and alter the functionality of proteins, while post-translational modifications are widely known to regulate the structure and activity of proteins. Although strictly biological techniques have been developed to achieve these ends, the chemical synthesis of peptides often has a lower technical and practical barrier to obtaining small amounts of the desired protein.
To make protein-sized polypeptide chains with the small peptide fragments made by synthesis, chemical biologists can use the process of native chemical ligation. Native chemical ligation involves the coupling of a C-terminal thioester and an N-terminal cysteine residue, ultimately resulting in formation of a "native" amide bond. Other strategies that have been used for the ligation of peptide fragments using the acyl transfer chemistry first introduced with native chemical ligation include expressed protein ligation, sulfurization/desulfurization techniques, and use of removable thiol auxiliaries.
Enrichment techniques for proteomics
Chemical biologists work to improve proteomics through the development of enrichment strategies, chemical affinity tags, and new probes. Samples for proteomics often contain many peptide sequences and the sequence of interest may be highly represented or of low abundance, which creates a barrier for their detection. Chemical biology methods can reduce sample complexity by selective enrichment using affinity chromatography. This involves targeting a peptide with a distinguishing feature like a biotin label or a post translational modification. Methods have been developed that include the use of antibodies, lectins to capture glycoproteins, and immobilized metal ions to capture phosphorylated peptides and enzyme substrates to capture select enzymes.
Enzyme probes
To investigate enzymatic activity as opposed to total protein, activity-based reagents have been developed to label the enzymatically active form of proteins (see Activity-based proteomics). For example, serine hydrolase- and cysteine protease-inhibitors have been converted to suicide inhibitors. This strategy enhances the ability to selectively analyze low abundance constituents through direct targeting. Enzyme activity can also be monitored through converted substrate. Identification of enzyme substrates is a problem of significant difficulty in proteomics and is vital to the understanding of signal transduction pathways in cells. A method that has been developed uses "analog-sensitive" kinases to label substrates using an unnatural ATP analog, facilitating visualization and identification through a unique handle.
Employing biology
Many research programs are also focused on employing natural biomolecules to perform biological tasks or to support a new chemical method. In this regard, chemical biology researchers have shown that DNA can serve as a template for synthetic chemistry, self-assembling proteins can serve as a structural scaffold for new materials, and RNA can be evolved in vitro to produce new catalytic function. Additionally, heterobifunctional (two-sided) synthetic small molecules such as dimerizers or PROTACs bring two proteins together inside cells, which can synthetically induce important new biological functions such as targeted protein degradation.
Directed evolution
A primary goal of protein engineering is the design of novel peptides or proteins with a desired structure and chemical activity. Because our knowledge of the relationship between primary sequence, structure, and function of proteins is limited, rational design of new proteins with engineered activities is extremely challenging. In directed evolution, repeated cycles of genetic diversification followed by a screening or selection process, can be used to mimic natural selection in the laboratory to design new proteins with a desired activity.
Several methods exist for creating large libraries of sequence variants. Among the most widely used are subjecting DNA to UV radiation or chemical mutagens, error-prone PCR, degenerate codons, or recombination. Once a large library of variants is created, selection or screening techniques are used to find mutants with a desired attribute. Common selection/screening techniques include FACS, mRNA display, phage display, and in vitro compartmentalization. Once useful variants are found, their DNA sequence is amplified and subjected to further rounds of diversification and selection.
The development of directed evolution methods was honored in 2018 with the awarding of the Nobel Prize in Chemistry to Frances Arnold for evolution of enzymes, and George Smith and Gregory Winter for phage display.
Bioorthogonal reactions
Successful labeling of a molecule of interest requires specific functionalization of that molecule to react chemospecifically with an optical probe. For a labeling experiment to be considered robust, that functionalization must minimally perturb the system. Unfortunately, these requirements are often hard to meet. Many of the reactions normally available to organic chemists in the laboratory are unavailable in living systems. Water- and redox- sensitive reactions would not proceed, reagents prone to nucleophilic attack would offer no chemospecificity, and any reactions with large kinetic barriers would not find enough energy in the relatively low-heat environment of a living cell. Thus, chemists have recently developed a panel of bioorthogonal chemistry that proceed chemospecifically, despite the milieu of distracting reactive materials in vivo.
The coupling of a probe to a molecule of interest must occur within a reasonably short time frame; therefore, the kinetics of the coupling reaction should be highly favorable. Click chemistry is well suited to fill this niche, since click reactions are rapid, spontaneous, selective, and high-yielding. Unfortunately, the most famous "click reaction," a [3+2] cycloaddition between an azide and an acyclic alkyne, is copper-catalyzed, posing a serious problem for use in vivo due to copper's toxicity. To bypass the necessity for a catalyst, Carolyn R. Bertozzi's lab introduced inherent strain into the alkyne species by using a cyclic alkyne. In particular, cyclooctyne reacts with azido-molecules with distinctive vigor.
Discovery of biomolecules through metagenomics
The advances in modern sequencing technologies in the late 1990s allowed scientists to investigate DNA of communities of organisms in their natural environments ("eDNA"), without culturing individual species in the lab. This metagenomic approach enabled scientists to study a wide selection of organisms that were previously not characterized due in part to an incompetent growth condition. Sources of eDNA include soils, ocean, subsurface, hot springs, hydrothermal vents, polar ice caps, hypersaline habitats, and extreme pH environments. Of the many applications of metagenomics, researchers such as Jo Handelsman, Jon Clardy, and Robert M. Goodman, explored metagenomic approaches toward the discovery of biologically active molecules such as antibiotics.
Functional or homology screening strategies have been used to identify genes that produce small bioactive molecules. Functional metagenomic studies are designed to search for specific phenotypes that are associated with molecules with specific characteristics. Homology metagenomic studies, on the other hand, are designed to examine genes to identify conserved sequences that are previously associated with the expression of biologically active molecules.
Functional metagenomic studies enable the discovery of novel genes that encode biologically active molecules. These assays include top agar overlay assays where antibiotics generate zones of growth inhibition against test microbes, and pH assays that can screen for pH change due to newly synthesized molecules using pH indicator on an agar plate. Substrate-induced gene expression screening (SIGEX), a method to screen for the expression of genes that are induced by chemical compounds, has also been used to search for genes with specific functions. Homology-based metagenomic studies have led to a fast discovery of genes that have homologous sequences as the previously known genes that are responsible for the biosynthesis of biologically active molecules. As soon as the genes are sequenced, scientists can compare thousands of bacterial genomes simultaneously. The advantage over functional metagenomic assays is that homology metagenomic studies do not require a host organism system to express the metagenomes, thus this method can potentially save the time spent on analyzing nonfunctional genomes. These also led to the discovery of several novel proteins and small molecules. In addition, an in silico examination from the Global Ocean Metagenomic Survey found 20 new lantibiotic cyclases.
Kinases
Posttranslational modification of proteins with phosphate groups by kinases is a key regulatory step throughout all biological systems. Phosphorylation events, either phosphorylation by protein kinases or dephosphorylation by phosphatases, result in protein activation or deactivation. These events have an impact on the regulation of physiological pathways, which makes the ability to dissect and study these pathways integral to understanding the details of cellular processes. There exist a number of challenges—namely the sheer size of the phosphoproteome, the fleeting nature of phosphorylation events and related physical limitations of classical biological and biochemical techniques—that have limited the advancement of knowledge in this area.
Through the use of small molecule modulators of protein kinases, chemical biologists have gained a better understanding of the effects of protein phosphorylation. For example, nonselective and selective kinase inhibitors, such as a class of pyridinylimidazole compounds are potent inhibitors useful in the dissection of MAP kinase signaling pathways. These pyridinylimidazole compounds function by targeting the ATP binding pocket. Although this approach, as well as related approaches, with slight modifications, has proven effective in a number of cases, these compounds lack adequate specificity for more general applications. Another class of compounds, mechanism-based inhibitors, combines knowledge of the kinase enzymology with previously utilized inhibition motifs. For example, a "bisubstrate analog" inhibits kinase action by binding both the conserved ATP binding pocket and a protein/peptide recognition site on the specific kinase. Research groups also utilized ATP analogs as chemical probes to study kinases and identify their substrates.
The development of novel chemical means of incorporating phosphomimetic amino acids into proteins has provided important insight into the effects of phosphorylation events. Phosphorylation events have typically been studied by mutating an identified phosphorylation site (serine, threonine or tyrosine) to an amino acid, such as alanine, that cannot be phosphorylated. However, these techniques come with limitations, and chemical biologists have developed improved ways of investigating protein phosphorylation. By installing phospho-serine, phospho-threonine or analogous phosphonate mimics into native proteins, researchers are able to perform in vivo studies to investigate the effects of phosphorylation by extending the amount of time a phosphorylation event occurs while minimizing the often-unfavorable effects of mutations. Expressed protein ligation has proven to be a successful technique for synthetically producing proteins that contain phosphomimetic molecules at either terminus. In addition, researchers have used unnatural amino acid mutagenesis at targeted sites within a peptide sequence.
Advances in chemical biology have also improved upon classical techniques of imaging kinase action. For example, the development of peptide biosensors—peptides containing incorporated fluorophores improved temporal resolution of in vitro binding assays. One of the most useful techniques to study kinase action is Fluorescence Resonance Energy Transfer (FRET). To utilize FRET for phosphorylation studies, fluorescent proteins are coupled to both a phosphoamino acid binding domain and a peptide that can be phosphorylated. Upon phosphorylation or dephosphorylation of a substrate peptide, a conformational change occurs that results in a change in fluorescence. FRET has also been used in tandem with Fluorescence Lifetime Imaging Microscopy (FLIM) or fluorescently conjugated antibodies and flow cytometry to provide quantitative results with excellent temporal and spatial resolution.
Biological fluorescence
Chemical biologists often study the functions of biological macromolecules using fluorescence techniques. The advantage of fluorescence versus other techniques resides in its high sensitivity, non-invasiveness, safe detection, and ability to modulate the fluorescence signal. In recent years, the development of green fluorescent protein (GFP) by Roger Y. Tsien and others, hybrid systems and quantum dots have enabled assessing protein location and function more precisely. Three main types of fluorophores are used: small organic dyes, green fluorescent proteins, and quantum dots. Small organic dyes usually are less than 1 kDa, and have been modified to increase photostability and brightness, and reduce self-quenching. Quantum dots have very sharp wavelengths, high molar absorptivity and quantum yield. Neither organic dyes nor quantum dots have the ability to recognize the protein of interest without the aid of antibodies, hence they must rely on immunolabeling. Fluorescent proteins are genetically encoded and can be fused to the protein of interest. Another genetic tagging technique is the tetracysteine biarsenical system, which requires modification of the targeted sequence to include four cysteines, which bind membrane-permeable biarsenical molecules, the green and the red dyes "FlAsH" and "ReAsH", with picomolar affinity. Both fluorescent proteins and biarsenical tetracysteine can be expressed in live cells, but present major limitations in ectopic expression and might cause a loss of function.
Fluorescent techniques have been used to assess a number of protein dynamics including protein tracking, conformational changes, protein–protein interactions, protein synthesis and turnover, and enzyme activity, among others. Three general approaches for measuring protein net redistribution and diffusion are single-particle tracking, correlation spectroscopy and photomarking methods. In single-particle tracking, the individual molecule must be both bright and sparse enough to be tracked from one video to the other. Correlation spectroscopy analyzes the intensity fluctuations resulting from migration of fluorescent objects into and out of a small volume at the focus of a laser. In photomarking, a fluorescent protein can be dequenched in a subcellular area with the use of intense local illumination and the fate of the marked molecule can be imaged directly. Michalet and coworkers used quantum dots for single-particle tracking using biotin-quantum dots in HeLa cells. One of the best ways to detect conformational changes in proteins is to label the protein of interest with two fluorophores within close proximity. FRET will respond to internal conformational changes result from reorientation of one fluorophore with respect to the other. One can also use fluorescence to visualize enzyme activity, typically by using a quenched activity-based proteomics (qABP). Covalent binding of a qABP to the active site of the targeted enzyme will provide direct evidence concerning if the enzyme is responsible for the signal upon release of the quencher and regain of fluorescence.
Education in chemical biology
Undergraduate education
Despite an increase in biological research within chemistry departments, attempts at integrating chemical biology into undergraduate curricula are lacking. For example, although the American Chemical Society (ACS) requires the foundational courses of a chemistry bachelor's degree to include biochemistry, no other biology-related chemistry course is required.
Although a chemical biology course is often not required for an undergraduate degree in Chemistry, many universities now provide introductory chemical biology courses for their undergraduate students. The University of British Columbia, for example, offers a fourth-year course in synthetic chemical biology.
See also
Chemoproteomics
Chemical genetics
Chemogenomics
References
Further reading
Journals
ACS Chemical Biology – The new Chemical Biology journal from the American Chemical Society.
Bioorganic & Medicinal Chemistry – The Tetrahedron Journal for Research at the Interface of Chemistry and Biology
ChemBioChem – A European Journal of Chemical Biology
Chemical Biology – A point of access to chemical biology news and research from across RSC Publishing
Cell Chemical Biology – An interdisciplinary journal that publishes papers of exceptional interest in all areas at the interface between chemistry and biology. chembiol.com
Journal of Chemical Biology – A new journal publishing novel work and reviews at the interface between biology and the physical sciences, published by Springer. link
Journal of the Royal Society Interface – A cross-disciplinary publication promoting research at the interface between the physical and life sciences
Molecular BioSystems – Chemical biology journal with a particular focus on the interface between chemistry and the -omic sciences and systems biology.
Nature Chemical Biology – A monthly multidisciplinary journal providing an international forum for the timely publication of significant new research at the interface between chemistry and biology.
Wiley Encyclopedia of Chemical Biology link
Chemistry
Branches of biology
Induced stem cells
Biomagnification
Biomagnification, also known as bioamplification or biological magnification, is the increase in concentration of a substance, e.g., a pesticide, in the tissues of organisms at successively higher levels in a food chain. This increase can occur as a result of:
Persistence – where the substance cannot be broken down by environmental processes.
Food chain energetics – where the substance's concentration increases progressively as it moves up a food chain.
Low or non-existent rate of internal degradation or excretion of the substance – mainly due to water-insolubility.
Biological magnification often refers to the process whereby substances such as pesticides or heavy metals work their way into lakes, rivers and the ocean, and then move up the food chain in progressively greater concentrations as they are incorporated into the diet of aquatic organisms such as zooplankton, which in turn are eaten perhaps by fish, which then may be eaten by bigger fish, large birds, animals, or humans. The substances become increasingly concentrated in tissues or internal organs as they move up the chain. Bioaccumulants are substances that increase in concentration in living organisms as they take in contaminated air, water, or food because the substances are very slowly metabolized or excreted.
Processes
Although sometimes used interchangeably with "bioaccumulation", an important distinction is drawn between the two, and with bioconcentration.
Bioaccumulation occurs within a trophic level, and is the increase in the concentration of a substance in certain tissues of organisms' bodies due to absorption from food and the environment.
Bioconcentration is defined as occurring when uptake from the water is greater than excretion.
Thus, bioconcentration and bioaccumulation occur within an organism, and biomagnification occurs across trophic (food chain) levels.
Biodilution is also a process that occurs to all trophic levels in an aquatic environment; it is the opposite of biomagnification, thus when a pollutant gets smaller in concentration as it progresses up a food web.
Many chemicals that bioaccumulate are highly soluble in fats (lipophilic) and insoluble in water (hydrophobic).
For example, though mercury is only present in small amounts in seawater, it is absorbed by algae (generally as methylmercury). Methylmercury is one of the most harmful mercury molecules. It is efficiently absorbed, but only very slowly excreted by organisms. Bioaccumulation and bioconcentration result in buildup in the adipose tissue of successive trophic levels: zooplankton, small nekton, larger fish, etc. Anything which eats these fish also consumes the higher level of mercury the fish have accumulated. This process explains why predatory fish such as swordfish and sharks or birds like osprey and eagles have higher concentrations of mercury in their tissue than could be accounted for by direct exposure alone. For example, herring contains mercury at approximately 0.01 parts per million (ppm) and shark contains mercury at greater than 1 ppm.
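A minimal sketch of the arithmetic implied by the concentrations quoted above (the specific numbers are taken only from the herring and shark example):

```python
# Minimal illustrative sketch: the biomagnification factor implied by mercury at
# ~0.01 ppm in herring versus >1 ppm in shark.
def biomagnification_factor(predator_ppm, prey_ppm):
    """Ratio of contaminant concentration in a predator to that in its prey."""
    return predator_ppm / prey_ppm

print(biomagnification_factor(predator_ppm=1.0, prey_ppm=0.01))   # 100.0
```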
DDT is a pesticide known to biomagnify, which is one of the most significant reasons it was deemed harmful to the environment by the EPA and other organizations. DDT is one of the least soluble chemicals known and accumulates progressively in adipose tissue, and as the fat is consumed by predators, the amounts of DDT biomagnify. A well known example of the harmful effects of DDT biomagnification is the significant decline in North American populations of predatory birds such as bald eagles and peregrine falcons due to DDT caused eggshell thinning in the 1950s. DDT is now a banned substance in many parts of the world.
Current status
In a review of a large number of studies, Suedel et al. concluded that although biomagnification is probably more limited in occurrence than previously thought, there is good evidence that DDT, DDE, PCBs, toxaphene, and the organic forms of mercury and arsenic do biomagnify in nature. For other contaminants, bioconcentration and bioaccumulation account for their high concentrations in organism tissues. More recently, Gray reached a similar conclusion: these substances remain in the organisms and are not diluted to non-threatening concentrations. The success of top predatory-bird recovery (bald eagles, peregrine falcons) in North America following the ban on DDT use in agriculture is testament to the importance of recognizing and responding to biomagnification.
Substances that biomagnify
Two common groups that are known to biomagnify are chlorinated hydrocarbons, also known as organochlorines, and inorganic compounds like methylmercury or heavy metals. Both are lipophilic and not easily degraded. Novel organic substances like organochlorines are not easily degraded because organisms lack previous exposure and have thus not evolved specific detoxification and excretion mechanisms, as there has been no selection pressure from them. These substances are consequently known as "persistent organic pollutants" or POPs.
Metals are not degradable because they are chemical elements. Organisms, particularly those subject to naturally high levels of exposure to metals, have mechanisms to sequester and excrete metals. Problems arise when organisms are exposed to higher concentrations than usual, which they cannot excrete rapidly enough to prevent damage. Persistent heavy metals, such as lead, cadmium, mercury, and arsenic, can have a wide variety of adverse health effects across species.
Novel organic substances
DDT (dichlorodiphenyltrichloroethane).
Hexachlorobenzene (HCB).
PCBs (polychlorinated biphenyls).
Toxaphene.
Monomethylmercury.
See also
Mercury in fish
Methylmercury
Dichlorodiphenyldichloroethylene
Toxaphene
References
External links
Fisk AT, Hoekstra PF, Borga K,and DCG Muir, 2003. Biomagnification. Mar. Pollut. Bull. 46 (4): 522-524
Ecotoxicology
Food chains
Pollution
Reactivity (chemistry)
In chemistry, reactivity is the tendency of a chemical substance to undergo a chemical reaction, either by itself or with other materials, with an overall release of energy.
Reactivity refers to:
the chemical reactions of a single substance,
the chemical reactions of two or more substances that interact with each other,
the systematic study of sets of reactions of these two kinds,
methodology that applies to the study of reactivity of chemicals of all kinds,
experimental methods that are used to observe these processes, and
theories to predict and to account for these processes.
The chemical reactivity of a single substance (reactant) covers its behavior in which it:
decomposes,
forms new substances by addition of atoms from another reactant or reactants, and
interacts with two or more other reactants to form two or more products.
The chemical reactivity of a substance can refer to the variety of circumstances (conditions that include temperature, pressure, presence of catalysts) in which it reacts, in combination with the:
variety of substances with which it reacts,
equilibrium point of the reaction (i.e., the extent to which all of it reacts), and
rate of the reaction.
The term reactivity is related to the concepts of chemical stability and chemical compatibility.
An alternative point of view
Reactivity is a somewhat vague concept in chemistry. It appears to embody both thermodynamic factors and kinetic factors (i.e., whether or not a substance reacts, and how fast it reacts). Both factors are actually distinct, and both commonly depend on temperature. For example, it is commonly asserted that the reactivity of alkali metals (Na, K, etc.) increases down the group in the periodic table, or that hydrogen's reactivity is evidenced by its reaction with oxygen. In fact, the rate of reaction of alkali metals (as evidenced by their reaction with water for example) is a function not only of position within the group but also of particle size. Hydrogen does not react with oxygen—even though the equilibrium constant is very large—unless a flame initiates the radical reaction, which leads to an explosion.
Restriction of the term to refer to reaction rates leads to a more consistent view. Reactivity then refers to the rate at which a chemical substance tends to undergo a chemical reaction in time. In pure compounds, reactivity is regulated by the physical properties of the sample. For instance, grinding a sample to a higher specific surface area increases its reactivity. In impure compounds, the reactivity is also affected by the inclusion of contaminants. In crystalline compounds, the crystalline form can also affect reactivity. However, in all cases, reactivity is primarily due to the sub-atomic properties of the compound.
Although it is commonplace to make statements that "substance X is reactive," each substance reacts with its own set of reagents. For example, the statement that "sodium metal is reactive" suggests that sodium reacts with many common reagents (including pure oxygen, chlorine, hydrochloric acid, and water), either at room temperature or when using a Bunsen burner.
The concept of stability should not be confused with reactivity. For example, an isolated molecule of an electronically excited state of the oxygen molecule spontaneously emits light after a statistically defined period. The half-life of such a species is another manifestation of its stability, but its reactivity can only be ascertained via its reactions with other species.
Causes of reactivity
The second meaning of reactivity (i.e., whether or not a substance reacts) can be rationalized at the atomic and molecular level using older and simpler valence bond theory and also atomic and molecular orbital theory. Thermodynamically, a chemical reaction occurs because the products (taken as a group) are at a lower free energy than the reactants; the lower energy state is referred to as the "more stable state." Quantum chemistry provides the most in-depth and exact understanding of the reason this occurs. Generally, electrons exist in orbitals that are the result of solving the Schrödinger equation for specific situations.
All things (values of the n and ml quantum numbers) being equal, the order of stability of electron arrangements in a system, from least to most stable, is: unpaired electrons with no other electrons in similar orbitals; unpaired electrons with all degenerate orbitals half-filled; and, most stable of all, a filled set of orbitals. To achieve one of these orders of stability, an atom reacts with another atom to stabilize both. For example, a lone hydrogen atom has a single electron in its 1s orbital. It becomes significantly more stable (by as much as 100 kilocalories per mole, or 420 kilojoules per mole) when reacting to form H2.
It is for this same reason that carbon almost always forms four bonds. Its ground-state valence configuration is 2s2 2p2, half-filled. However, the activation energy to go from the half-filled to the fully-filled p orbitals is negligible, and as such, carbon forms four bonds almost instantaneously. The process releases a significant amount of energy (it is exothermic). This configuration of four equal bonds is described as sp3 hybridization.
The above three paragraphs rationalize, albeit very generally, the reactions of some common species, particularly atoms. One approach to generalizing the above is the activation strain model of chemical reactivity, which provides a causal relationship between the reactants' rigidity and electronic structure on the one hand and the height of the reaction barrier on the other.
The rate of any given reaction:
Reactants -> Products
is governed by the rate law:
rate = k[A]^a[B]^b...
where the rate is the change in molar concentration per second in the rate-determining step of the reaction (the slowest step), [A]^a[B]^b... is the product of the molar concentrations of all the reactants raised to their respective reaction orders, and k is the rate constant, which is constant for a given set of conditions (generally temperature and pressure) and independent of concentration. The reactivity of a compound is directly proportional to both the value of k and the rate. For instance, if
A + B -> C + D,
then
rate = k[A]^x[B]^y
where x is the reaction order of A, y is the reaction order of B, x + y is the reaction order of the full reaction, and k is the rate constant.
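As a numerical illustration, the rate law can be evaluated directly. The following Python sketch uses an assumed rate constant, concentrations, and reaction orders; these are illustrative values, not data for any specific reaction.

# Minimal sketch: evaluating the empirical rate law  rate = k * [A]^x * [B]^y.
# The rate constant, concentrations, and orders below are assumed values.

def reaction_rate(k, concentrations, orders):
    """Return k multiplied by each concentration raised to its reaction order."""
    rate = k
    for c, n in zip(concentrations, orders):
        rate *= c ** n
    return rate

# Hypothetical reaction A + B -> C + D, first order in each reactant.
k = 0.5                       # rate constant, L mol^-1 s^-1 (assumed)
conc_A, conc_B = 0.10, 0.20   # molar concentrations, mol L^-1 (assumed)

rate = reaction_rate(k, [conc_A, conc_B], [1, 1])
print(f"rate = {rate:.4f} mol L^-1 s^-1")  # 0.5 * 0.10 * 0.20 = 0.0100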
See also
Thermodynamic activity
Catalysis
Reactivity series
Michaelis–Menten kinetics
Organic chemistry
Chemical kinetics
Transition state theory
Marcus theory
Klopman–Salem equation
Ontology | Ontology is the philosophical study of being. As one of the most fundamental concepts, being encompasses all of reality and every entity within it. To articulate the basic structure of being, ontology examines what all entities have in common and how they are divided into fundamental classes, known as categories. An influential distinction is between particular and universal entities. Particulars are unique, non-repeatable entities, like the person Socrates. Universals are general, repeatable entities, like the color green. Another contrast is between concrete objects existing in space and time, like a tree, and abstract objects existing outside space and time, like the number 7. Systems of categories aim to provide a comprehensive inventory of reality, employing categories such as substance, property, relation, state of affairs, and event.
Ontologists disagree about which entities exist on the most basic level. Platonic realism asserts that universals have objective existence. Conceptualism says that universals only exist in the mind while nominalism denies their existence. There are similar disputes about mathematical objects, unobservable objects assumed by scientific theories, and moral facts. Materialism says that, fundamentally, there is only matter while dualism asserts that mind and matter are independent principles. According to some ontologists, there are no objective answers to ontological questions but only perspectives shaped by different linguistic practices.
Ontology uses diverse methods of inquiry. They include the analysis of concepts and experience, the use of intuitions and thought experiments, and the integration of findings from natural science. Applied ontology employs ontological theories and principles to study entities belonging to a specific area. It is of particular relevance to information and computer science, which develop conceptual frameworks of limited domains. These frameworks are used to store information in a structured way, such as a college database tracking academic activities. Ontology is closely related to metaphysics and relevant to the fields of logic, theology, and anthropology.
The origins of ontology lie in the ancient period with speculations about the nature of being and the source of the universe, including ancient Indian, Chinese, and Greek philosophy. In the modern period, philosophers conceived ontology as a distinct academic discipline and coined its name.
Definition
Ontology is the study of being. It is the branch of philosophy that investigates the nature of existence, the features all entities have in common, and how they are divided into basic categories of being. It aims to discover the foundational building blocks of the world and characterize reality as a whole in its most general aspects. In this regard, ontology contrasts with individual sciences like biology and astronomy, which restrict themselves to a limited domain of entities, such as living entities and celestial phenomena. In some contexts, the term ontology refers not to the general study of being but to a specific ontological theory within this discipline. It can also mean a conceptual scheme or inventory of a particular domain.
Ontology is closely related to metaphysics but the exact relation of these two disciplines is disputed. According to a traditionally influential characterization, metaphysics is the study of fundamental reality in the widest sense while ontology is the subdiscipline of metaphysics that restricts itself to the most general features of reality. This view sees ontology as general metaphysics, which is to be distinguished from special metaphysics focused on more specific subject matters, like God, mind, and value. A different conception understands ontology as a preliminary discipline that provides a complete inventory of reality while metaphysics examines the features and structure of the entities in this inventory. Another conception says that metaphysics is about real being while ontology examines possible being or the concept of being. It is not universally accepted that there is a clear boundary between metaphysics and ontology. Some philosophers use both terms as synonyms.
The word ontology has its roots in the ancient Greek terms on (meaning "being") and logia (meaning "study of"), so it literally means "the study of being". The ancient Greeks did not use the term ontology, which was coined by philosophers in the 17th century.
Basic concepts
Being
Being, or existence, is the main topic of ontology. It is one of the most general and fundamental concepts, encompassing the whole of reality and every entity within it. In its widest sense, being only contrasts with non-being or nothingness. It is controversial whether a more substantial analysis of the concept or meaning of being is possible. One proposal understands being as a property possessed by every entity. Critics of this view argue that an entity without being cannot have any properties, meaning that being cannot be a property since properties presuppose being. A different suggestion says that all beings share a set of essential features. According to the Eleatic principle, "power is the mark of being", meaning that only entities with a causal influence truly exist. According to a controversial proposal by philosopher George Berkeley, all existence is mental, expressed in his slogan "to be is to be perceived".
Depending on the context, the term being is sometimes used with a more limited meaning to refer only to certain aspects of reality. In one sense, being is unchanging and permanent and is distinguished from becoming, which implies change. Another contrast is between being, as what truly exists, and phenomena, as what merely appears to exist. In some contexts, being expresses the fact that something is while essence expresses its qualities or what it is like.
Ontologists often divide being into fundamental classes or highest kinds, called categories of being. Proposed categories include substance, property, relation, state of affairs, and event. They can be used to provide systems of categories, which offer a comprehensive inventory of reality in which every entity belongs to exactly one category. Some philosophers, like Aristotle, say that entities belonging to different categories exist in distinct ways. Others, like John Duns Scotus, insist that there are no differences in the mode of being, meaning that everything exists in the same way. A related dispute is whether some entities have a higher degree of being than others, an idea already found in Plato's work. The more common view in contemporary philosophy is that a thing either exists or not with no intermediary states or degrees.
The relation between being and non-being is a frequent topic in ontology. Influential issues include the status of nonexistent objects and why there is something rather than nothing.
Particulars and universals
A central distinction in ontology is between particular and universal entities. Particulars, also called individuals, are unique, non-repeatable entities, like Socrates, the Taj Mahal, and Mars. Universals are general, repeatable entities, like the color green, the form circularity, and the virtue courage. Universals express aspects or features shared by particulars. For example, Mount Everest and Mount Fuji are particulars characterized by the universal mountain.
Universals can take the form of properties or relations. Properties express what entities are like. They are features or qualities possessed by an entity. Properties are often divided into essential and accidental properties. A property is essential if an entity must have it; it is accidental if the entity can exist without it. For instance, having three sides is an essential property of a triangle while being red is an accidental property. Relations are ways in which two or more entities stand to one another. Unlike properties, they apply to several entities and characterize them as a group. For example, being a city is a property while being east of is a relation, as in "Kathmandu is a city" and "Kathmandu is east of New Delhi". Relations are often divided into internal and external relations. Internal relations depend only on the properties of the objects they connect, like the relation of resemblance. External relations express characteristics that go beyond what the connected objects are like, such as spatial relations.
Substances play an important role in the history of ontology as the particular entities that underlie and support properties and relations. They are often considered the fundamental building blocks of reality that can exist on their own, while entities like properties and relations cannot exist without substances. Substances persist through changes as they acquire or lose properties. For example, when a tomato ripens, it loses the property green and acquires the property red.
States of affairs are complex particular entities that have several other entities as their components. The state of affairs "Socrates is wise" has two components: the individual Socrates and the property wise. States of affairs that correspond to reality are called facts. Facts are truthmakers of statements, meaning that whether a statement is true or false depends on the underlying facts.
Events are particular entities that occur in time, like the fall of the Berlin Wall and the first moon landing. They usually involve some kind of change, like the lawn becoming dry. In some cases, no change occurs, like the lawn staying wet. Complex events, also called processes, are composed of a sequence of events.
Concrete and abstract objects
Concrete objects are entities that exist in space and time, such as a tree, a car, and a planet. They have causal powers and can affect each other, like when a car hits a tree and both are deformed in the process. Abstract objects, by contrast, are outside space and time, such as the number 7 and the set of integers. They lack causal powers and do not undergo changes. It is controversial whether or in what sense abstract objects exist and how people can know about them.
Concrete objects encountered in everyday life are complex entities composed of various parts. For example, a book is made up of two covers and pages between them. Each of these components is itself constituted of smaller parts, like molecules, atoms, and elementary particles. Mereology studies the relation between parts and wholes. One position in mereology says that every collection of entities forms a whole. According to a different view, this is only the case for collections that fulfill certain requirements, for instance, that the entities in the collection touch one another. The problem of material constitution asks whether or in what sense a whole should be considered a new object in addition to the collection of parts composing it.
Abstract objects are closely related to fictional and intentional objects. Fictional objects are entities invented in works of fiction. They can be things, like the One Ring in J. R. R. Tolkien's book series The Lord of the Rings, and people, like the Monkey King in the novel Journey to the West. Some philosophers say that fictional objects are one type of abstract object, existing outside space and time. Others understand them as artifacts that are created as the works of fiction are written. Intentional objects are entities that exist within mental states, like perceptions, beliefs, and desires. For example, if a person thinks about the Loch Ness Monster then the Loch Ness Monster is the intentional object of this thought. People can think about existing and non-existing objects, making it difficult to assess the ontological status of intentional objects.
Other concepts
Ontological dependence is a relation between entities. An entity depends ontologically on another entity if the first entity cannot exist without the second entity. For instance, the surface of an apple cannot exist without the apple. An entity is ontologically independent if it does not depend on anything else, meaning that it is fundamental and can exist on its own. Ontological dependence plays a central role in ontology and its attempt to describe reality on its most fundamental level. It is closely related to metaphysical grounding, which is the relation between a ground and facts it explains.
An ontological commitment of a person or a theory is an entity that exists according to them. For instance, a person who believes in God has an ontological commitment to God. Ontological commitments can be used to analyze which ontologies people explicitly defend or implicitly assume. They play a central role in contemporary metaphysics when trying to decide between competing theories. For example, the Quine–Putnam indispensability argument defends mathematical Platonism, asserting that numbers exist because the best scientific theories are ontologically committed to numbers.
Possibility and necessity are further topics in ontology. Possibility describes what can be the case, as in "it is possible that extraterrestrial life exists". Necessity describes what must be the case, as in "it is necessary that three plus two equals five". Possibility and necessity contrast with actuality, which describes what is the case, as in "Doha is the capital of Qatar". Ontologists often use the concept of possible worlds to analyze possibility and necessity. A possible world is a complete and consistent way how things could have been. For example, Haruki Murakami was born in 1949 in the actual world but there are possible worlds in which he was born at a different date. Using this idea, possible world semantics says that a sentence is possibly true if it is true in at least one possible world. A sentence is necessarily true if it is true in all possible worlds.
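The possible-worlds analysis can be made concrete with a small formal model. The following Python sketch treats each world as a set of sentences that hold in it; the worlds and sentences are invented for illustration and carry no philosophical commitments.

# Toy model of possible world semantics: each "world" is represented as the
# set of sentences that hold in it. Worlds and sentences are invented examples.

worlds = [
    {"Murakami was born in 1949", "Doha is the capital of Qatar"},  # the actual world
    {"Murakami was born in 1950", "Doha is the capital of Qatar"},  # a possible world
    {"Murakami was born in 1951", "Doha is the capital of Qatar"},  # another possible world
]

def possibly(sentence, worlds):
    """A sentence is possibly true if it holds in at least one possible world."""
    return any(sentence in world for world in worlds)

def necessarily(sentence, worlds):
    """A sentence is necessarily true if it holds in every possible world."""
    return all(sentence in world for world in worlds)

print(possibly("Murakami was born in 1950", worlds))        # True
print(necessarily("Murakami was born in 1949", worlds))      # False: a contingent truth
print(necessarily("Doha is the capital of Qatar", worlds))   # True only within this tiny model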
In ontology, identity means that two things are the same. Philosophers distinguish between qualitative and numerical identity. Two entities are qualitatively identical if they have exactly the same features, such as perfect identical twins. This is also called exact similarity and indiscernibility. Numerical identity, by contrast, means that there is only a single entity. For example, if Fatima is the mother of Leila and Hugo then Leila's mother is numerically identical to Hugo's mother. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time. Diachronic identity relates an entity to itself at different times, as in "the woman who bore Leila three years ago is the same woman who bore Hugo this year".
Branches
There are different and sometimes overlapping ways to divide ontology into branches. Pure ontology focuses on the most abstract topics associated with the concept and nature of being. It is not restricted to a specific domain of entities and studies existence and the structure of reality as a whole. Pure ontology contrasts with applied ontology, also called domain ontology. Applied ontology examines the application of ontological theories and principles to specific disciplines and domains, often in the field of science. It considers ontological problems in regard to specific entities such as matter, mind, numbers, God, and cultural artifacts.
Social ontology, a major subfield of applied ontology, studies social kinds, like money, gender, society, and language. It aims to determine the nature and essential features of these concepts while also examining their mode of existence. According to a common view, social kinds are useful constructions to describe the complexities of social life. This means that they are not pure fictions but, at the same time, lack the objective or mind-independent reality of natural phenomena like elementary particles, lions, and stars. In the fields of computer science, information science, and knowledge representation, applied ontology is interested in the development of formal frameworks to encode and store information about a limited domain of entities in a structured way. A related application in genetics is Gene Ontology, which is a comprehensive framework for the standardized representation of gene-related information across species and databases.
Formal ontology is the study of objects in general while focusing on their abstract structures and features. It divides objects into different categories based on the forms they exemplify. Formal ontologists often rely on the tools of formal logic to express their findings in an abstract and general manner. Formal ontology contrasts with material ontology, which distinguishes between different areas of objects and examines the features characteristic of a specific area. Examples are ideal spatial beings in the area of geometry and living beings in the area of biology.
Descriptive ontology aims to articulate the conceptual scheme underlying how people ordinarily think about the world. Prescriptive ontology departs from common conceptions of the structure of reality and seeks to formulate a new and better conceptualization.
Another contrast is between analytic and speculative ontology. Analytic ontology examines the types and categories of being to determine what kinds of things could exist and what features they would have. Speculative ontology aims to determine which entities actually exist, for example, whether there are numbers or whether time is an illusion.
Metaontology studies the underlying concepts, assumptions, and methods of ontology. Unlike other forms of ontology, it does not ask "what exists" but "what does it mean for something to exist" and "how can people determine what exists". It is closely related to fundamental ontology, an approach developed by philosopher Martin Heidegger that seeks to uncover the meaning of being.
Schools of thought
Realism and anti-realism
The term realism is used for various theories that affirm that some kind of phenomenon is real or has mind-independent existence. Ontological realism is the view that there are objective facts about what exists and what the nature and categories of being are. Ontological realists do not make claims about what those facts are, for example, whether elementary particles exist. They merely state that there are mind-independent facts that determine which ontological theories are true. This idea is denied by ontological anti-realists, also called ontological deflationists, who say that there are no substantive facts one way or the other. According to philosopher Rudolf Carnap, for example, ontological statements are relative to language and depend on the ontological framework of the speaker. This means that there are no framework-independent ontological facts since different frameworks provide different views while there is no objectively right or wrong framework.
In a more narrow sense, realism refers to the existence of certain types of entities. Realists about universals say that universals have mind-independent existence. According to Platonic realists, universals exist not only independent of the mind but also independent of particular objects that exemplify them. This means that the universal red could exist by itself even if there were no red objects in the world. Aristotelian realism, also called moderate realism, rejects this idea and says that universals only exist as long as there are objects that exemplify them. Conceptualism, by contrast, is a form of anti-realism, stating that universals only exist in the mind as concepts that people use to understand and categorize the world. Nominalists defend a strong form of anti-realism by saying that universals have no existence. This means that the world is entirely composed of particular objects.
Mathematical realism, a closely related view in the philosophy of mathematics, says that mathematical facts exist independently of human language, thought, and practices and are discovered rather than invented. According to mathematical Platonism, this is the case because of the existence of mathematical objects, like numbers and sets. Mathematical Platonists say that mathematical objects are as real as physical objects, like atoms and stars, even though they are not accessible to empirical observation. Influential forms of mathematical anti-realism include conventionalism, which says that mathematical theories are trivially true simply by how mathematical terms are defined, and game formalism, which understands mathematics not as a theory of reality but as a game governed by rules of string manipulation.
Modal realism is the theory that in addition to the actual world, there are countless possible worlds as real and concrete as the actual world. The primary difference is that the actual world is inhabited by us while other possible worlds are inhabited by our counterparts. Modal anti-realists reject this view and argue that possible worlds do not have concrete reality but exist in a different sense, for example, as abstract or fictional objects.
Scientific realists say that the scientific description of the world is an accurate representation of reality. It is of particular relevance in regard to things that cannot be directly observed by humans but are assumed to exist by scientific theories, like electrons, forces, and laws of nature. Scientific anti-realism says that scientific theories are not descriptions of reality but instruments to predict observations and the outcomes of experiments.
Moral realists claim that there exist mind-independent moral facts. According to them, there are objective principles that determine which behavior is morally right. Moral anti-realists either claim that moral principles are subjective and differ between persons and cultures, a position known as moral relativism, or outright deny the existence of moral facts, a view referred to as moral nihilism.
By number of categories
Monocategorical theories say that there is only one fundamental category, meaning that every single entity belongs to the same universal class. For example, some forms of nominalism state that only concrete particulars exist while some forms of bundle theory state that only properties exist. Polycategorical theories, by contrast, hold that there is more than one basic category, meaning that entities are divided into two or more fundamental classes. They take the form of systems of categories, which list the highest genera of being to provide a comprehensive inventory of everything.
The closely related discussion between monism and dualism is about the most fundamental types that make up reality. According to monism, there is only one kind of thing or substance on the most basic level. Materialism is an influential monist view; it says that everything is material. This means that mental phenomena, such as beliefs, emotions, and consciousness, either do not exist or exist as aspects of matter, like brain states. Idealists take the converse perspective, arguing that everything is mental. They may understand physical phenomena, like rocks, trees, and planets, as ideas or perceptions of conscious minds. Neutral monism occupies a middle ground by saying that both mind and matter are derivative phenomena. Dualists state that mind and matter exist as independent principles, either as distinct substances or different types of properties. In a slightly different sense, monism contrasts with pluralism as a view not about the number of basic types but the number of entities. In this sense, monism is the controversial position that only a single all-encompassing entity exists in all of reality. Pluralism is more commonly accepted and says that several distinct entities exist.
By fundamental categories
The historically influential substance-attribute ontology is a polycategorical theory. It says that reality is at its most fundamental level made up of unanalyzable substances that are characterized by universals, such as the properties an individual substance has or relations that exist between substances. The closely related substratum theory says that each concrete object is made up of properties and a substratum. The difference is that the substratum is not characterized by properties: it is a featureless or bare particular that merely supports the properties.
Various alternative ontological theories have been proposed that deny the role of substances as the foundational building blocks of reality. Stuff ontologies say that the world is not populated by distinct entities but by continuous stuff that fills space. This stuff may take various forms and is often conceived as infinitely divisible. According to process ontology, processes or events are the fundamental entities. This view usually emphasizes that nothing in reality is static, meaning that being is dynamic and characterized by constant change. Bundle theories state that there are no regular objects but only bundles of co-present properties. For example, a lemon may be understood as a bundle that includes the properties yellow, sour, and round. According to traditional bundle theory, the bundled properties are universals, meaning that the same property may belong to several different bundles. According to trope bundle theory, properties are particular entities that belong to a single bundle.
Some ontologies focus not on distinct objects but on interrelatedness. According to relationalism, all of reality is relational at its most fundamental level. Ontic structural realism agrees with this basic idea and focuses on how these relations form complex structures. Some structural realists state that there is nothing but relations, meaning that individual objects do not exist. Others say that individual objects exist but depend on the structures in which they participate. Fact ontologies present a different approach by focusing on how entities belonging to different categories come together to constitute the world. Facts, also known as states of affairs, are complex entities; for example, the fact that the Earth is a planet consists of the particular object the Earth and the property being a planet. Fact ontologies state that facts are the fundamental constituents of reality, meaning that objects, properties, and relations cannot exist on their own and only form part of reality to the extent that they participate in facts.
In the history of philosophy, various ontological theories based on several fundamental categories have been proposed. One of the first theories of categories was suggested by Aristotle, whose system includes ten categories: substance, quantity, quality, relation, place, date, posture, state, action, and passion. An early influential system of categories in Indian philosophy, first proposed in the Vaisheshika school, distinguishes between six categories: substance, quality, motion, universal, individuator, and inherence. Immanuel Kant's transcendental idealism includes a system of twelve categories, which Kant saw as pure concepts of understanding. They are subdivided into four classes: quantity, quality, relation, and modality. In more recent philosophy, theories of categories were developed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe.
Others
The dispute between constituent and relational ontologies concerns the internal structure of concrete particular objects. Constituent ontologies say that objects have an internal structure with properties as their component parts. Bundle theories are an example of this position: they state that objects are bundles of properties. This view is rejected by relational ontologies, which say that objects have no internal structure, meaning that properties do not inhere in them but are externally related to them. According to one analogy, objects are like pin-cushions and properties are pins that can be stuck to objects and removed again without becoming a real part of objects. Relational ontologies are common in certain forms of nominalism that reject the existence of universal properties.
Hierarchical ontologies state that the world is organized into levels. Entities on all levels are real but low-level entities are more fundamental than high-level entities. This means that they can exist without high-level entities while high-level entities cannot exist without low-level entities. One hierarchical ontology says that elementary particles are more fundamental than the macroscopic objects they compose, like chairs and tables. Other hierarchical theories assert that substances are more fundamental than their properties and that nature is more fundamental than culture. Flat ontologies, by contrast, deny that any entity has a privileged status, meaning that all entities exist on the same level. For them, the main question is only whether something exists rather than identifying the level at which it exists.
The ontological theories of endurantism and perdurantism aim to explain how material objects persist through time. Endurantism is the view that material objects are three-dimensional entities that travel through time while being fully present in each moment. They remain the same even when they gain or lose properties as they change. Perdurantism is the view that material objects are four-dimensional entities that extend not just through space but also through time. This means that they are composed of temporal parts and, at any moment, only one part of them is present but not the others. According to perdurantists, change means that an earlier part exhibits different qualities than a later part. When a tree loses its leaves, for instance, there is an earlier temporal part with leaves and a later temporal part without leaves.
Differential ontology is a poststructuralist approach interested in the relation between the concepts of identity and difference. It says that traditional ontology sees identity as the more basic term by first characterizing things in terms of their essential features and then elaborating differences based on this conception. Differential ontologists, by contrast, privilege difference and say that the identity of a thing is a secondary determination that depends on how this thing differs from other things.
Object-oriented ontology belongs to the school of speculative realism and examines the nature and role of objects. It sees objects as the fundamental building blocks of reality. As a flat ontology, it denies that some entities have a more fundamental form of existence than others. It uses this idea to argue that objects exist independently of human thought and perception.
Methods
Methods of ontology are ways of conducting ontological inquiry and deciding between competing theories. There is no single standard method; the diverse approaches are studied by metaontology.
Conceptual analysis is a method to understand ontological concepts and clarify their meaning. It proceeds by analyzing their component parts and the necessary and sufficient conditions under which a concept applies to an entity. This information can help ontologists decide whether a certain type of entity, such as numbers, exists. Eidetic variation is a related method in phenomenological ontology that aims to identify the essential features of different types of objects. Phenomenologists start by imagining an example of the investigated type. They proceed by varying the imagined features to determine which ones cannot be changed, meaning they are essential. The transcendental method begins with a simple observation that a certain entity exists. In the following step, it studies the ontological repercussions of this observation by examining how it is possible or which conditions are required for this entity to exist.
Another approach is based on intuitions in the form of non-inferential impressions about the correctness of general principles. These principles can be used as the foundation on which an ontological system is built and expanded using deductive reasoning. A further intuition-based method relies on thought experiments to evoke new intuitions. This happens by imagining a situation relevant to an ontological issue and then employing counterfactual thinking to assess the consequences of this situation. For example, some ontologists examine the relation between mind and matter by imagining creatures identical to humans but without consciousness.
Naturalistic methods rely on the insights of the natural sciences to determine what exists. According to an influential approach by Willard Van Orman Quine, ontology can be conducted by analyzing the ontological commitments of scientific theories. This method is based on the idea that scientific theories provide the most reliable description of reality and that their power can be harnessed by investigating the ontological assumptions underlying them.
Principles of theory choice offer guidelines for assessing the advantages and disadvantages of ontological theories rather than guiding their construction. The principle of Ockham's Razor says that simple theories are preferable. A theory can be simple in different respects, for example, by using very few basic types or by describing the world with a small number of fundamental entities. Ontologists are also interested in the explanatory power of theories and give preference to theories that can explain many observations. A further factor is how close a theory is to common sense. Some ontologists use this principle as an argument against theories that are very different from how ordinary people think about the issue.
In applied ontology, ontological engineering is the process of creating and refining conceptual models of specific domains. Developing a new ontology from scratch involves various preparatory steps, such as delineating the scope of the domain one intends to model and specifying the purpose and use cases of the ontology. Once the foundational concepts within the area have been identified, ontology engineers proceed by defining them and characterizing the relations between them. This is usually done in a formal language to ensure precision and, in some cases, automatic computability. In the following review phase, the validity of the ontology is assessed using test data. Various more specific instructions for how to carry out the different steps have been suggested. They include the Cyc method, Grüninger and Fox's methodology, and so-called METHONTOLOGY. In some cases, it is feasible to adapt a pre-existing ontology to fit a specific domain and purpose rather than creating a new one from scratch.
Related fields
Ontology overlaps with many disciplines, including logic, the study of correct reasoning. Ontologists often employ logical systems to express their insights, specifically in the field of formal ontology. Of particular interest to them is the existential quantifier, which is used to express what exists. In first-order logic, for example, the formula ∃x Dog(x) states that dogs exist. Some philosophers study ontology by examining the structure of thought and language, saying that they reflect the structure of being. Doubts about the accuracy of natural language have led some ontologists to seek a new formal language, termed ontologese, for a better representation of the fundamental structure of reality.
Ontologies are often used in information science to provide a conceptual scheme or inventory of a specific domain, making it possible to classify objects and formally represent information about them. This is of specific interest to computer science, which builds databases to store this information and defines computational processes to automatically transform and use it. For instance, to encode and store information about clients and employees in a database, an organization may use an ontology with categories such as person, company, address, and name. In some cases, it is necessary to exchange information belonging to different domains or to integrate databases using distinct ontologies. This can be achieved with the help of upper ontologies, which are not limited to one specific domain. They use general categories that apply to most or all domains, like Suggested Upper Merged Ontology and Basic Formal Ontology.
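As a rough illustration of such a conceptual scheme, the following Python sketch encodes the categories mentioned above (person, company, address, and name) as simple data classes; a real information system would more likely use a dedicated ontology language such as OWL or RDF, so this is only a toy model with invented instances.

# Minimal sketch of a domain ontology for client/employee records.
# The categories and instances are invented for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Address:
    street: str
    city: str

@dataclass
class Company:
    name: str
    address: Address

@dataclass
class Person:
    name: str
    address: Address
    employer: Optional[Company] = None  # a relation linking two categories

acme = Company("Acme Ltd", Address("1 Main St", "Springfield"))
alice = Person("Alice Example", Address("2 Oak Ave", "Springfield"), employer=acme)
print(alice.employer.name)  # prints "Acme Ltd"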
Similar applications of ontology are found in various fields seeking to manage extensive information within a structured framework. Protein Ontology is a formal framework for the standardized representation of protein-related entities and their relationships. Gene Ontology and Sequence Ontology serve a similar purpose in the field of genetics. Environment Ontology is a knowledge representation focused on ecosystems and environmental processes. Friend of a Friend provides a conceptual framework to represent relations between people and their interests and activities.
The topic of ontology has received increased attention in anthropology since the 1990s, sometimes termed the "ontological turn". This type of inquiry is focused on how people from different cultures experience and understand the nature of being. Specific interest has been given to the ontological outlook of Indigenous people and how it differs from a Western perspective. As an example of this contrast, it has been argued that various indigenous communities ascribe intentionality to non-human entities, like plants, forests, or rivers. This outlook is known as animism and is also found in Native American ontologies, which emphasize the interconnectedness of all living entities and the importance of balance and harmony with nature.
Ontology is closely related to theology and its interest in the existence of God as an ultimate entity. The ontological argument, first proposed by Anselm of Canterbury, attempts to prove the existence of the divine. It defines God as the greatest conceivable being. From this definition it concludes that God must exist since God would not be the greatest conceivable being if God lacked existence. Another overlap in the two disciplines is found in ontological theories that use God or an ultimate being as the foundational principle of reality. Heidegger criticized this approach, terming it ontotheology.
History
The roots of ontology in ancient philosophy are speculations about the nature of being and the source of the universe. Discussions of the essence of reality are found in the Upanishads, ancient Indian scriptures dating from as early as 700 BCE. They say that the universe has a divine foundation and discuss in what sense ultimate reality is one or many. Samkhya, the first orthodox school of Indian philosophy, formulated an atheist dualist ontology based on the Upanishads, identifying pure consciousness and matter as its two foundational principles. The later Vaisheshika school proposed a comprehensive system of categories. In ancient China, Laozi's (6th century BCE) Taoism examines the underlying order of the universe, known as Tao, and how this order is shaped by the interaction of two basic forces, yin and yang. The philosophical movement of Xuanxue emerged in the 3rd century CE and explored the relation between being and non-being.
Starting in the 6th century BCE, Presocratic philosophers in ancient Greece aimed to provide rational explanations of the universe. They suggested that a first principle, such as water or fire, is the primal source of all things. Parmenides (c. 515–450 BCE) is sometimes considered the founder of ontology because of his explicit discussion of the concepts of being and non-being. Inspired by Presocratic philosophy, Plato (427–347 BCE) developed his theory of forms. It distinguishes between unchangeable perfect forms and matter, which has a lower degree of existence and imitates the forms. Aristotle (384–322 BCE) suggested an elaborate system of categories that introduced the concept of substance as the primary kind of being. The school of Neoplatonism arose in the 3rd century CE and proposed an ineffable source of everything, called the One, which is more basic than being itself.
The problem of universals was an influential topic in medieval ontology. Boethius (477–524 CE) suggested that universals can exist not only in matter but also in the mind. This view inspired Peter Abelard (1079–1142 CE), who proposed that universals exist only in the mind. Thomas Aquinas (1224–1274 CE) developed and refined fundamental ontological distinctions, such as the contrast between existence and essence, between substance and accidents, and between matter and form. He also discussed the transcendentals, which are the most general properties or modes of being. John Duns Scotus (1266–1308) argued that all entities, including God, exist in the same way and that each entity has a unique essence, called haecceity. William of Ockham (c. 1287–1347 CE) proposed that one can decide between competing ontological theories by assessing which one uses the smallest number of elements, a principle known as Ockham's razor.
In Arabic-Persian philosophy, Avicenna (980–1037 CE) combined ontology with theology. He identified God as a necessary being that is the source of everything else, which only has contingent existence. In 8th-century Indian philosophy, the school of Advaita Vedanta emerged. It says that only a single all-encompassing entity exists, stating that the impression of a plurality of distinct entities is an illusion. Starting in the 13th century CE, the Navya-Nyāya school built on Vaisheshika ontology with a particular focus on the problem of non-existence and negation. 9th-century China saw the emergence of Neo-Confucianism, which developed the idea that a rational principle, known as li, is the ground of being and order of the cosmos.
René Descartes (1596–1650) formulated a dualist ontology at the beginning of the modern period. It distinguishes between mind and matter as distinct substances that causally interact. Rejecting Descartes's dualism, Baruch Spinoza (1632–1677) proposed a monist ontology according to which there is only a single entity that is identical to God and nature. Gottfried Wilhelm Leibniz (1646–1716), by contrast, said that the universe is made up of many simple substances, which are synchronized but do not interact with one another. John Locke (1632–1704) proposed his substratum theory, which says that each object has a featureless substratum that supports the object's properties. Christian Wolff (1679–1754) was influential in establishing ontology as a distinct discipline, delimiting its scope from other forms of metaphysical inquiry. George Berkeley (1685–1753) developed an idealist ontology according to which material objects are ideas perceived by minds.
Immanuel Kant (1724–1804) rejected the idea that humans can have direct knowledge of independently existing things and their nature, limiting knowledge to the field of appearances. For Kant, ontology does not study external things but provides a system of pure concepts of understanding. Influenced by Kant's philosophy, Georg Wilhelm Friedrich Hegel (1770–1831) linked ontology and logic. He said that being and thought are identical and examined their foundational structures. Arthur Schopenhauer (1788–1860) rejected Hegel's philosophy and proposed that the world is an expression of a blind and irrational will. Francis Herbert Bradley (1846–1924) saw absolute spirit as the ultimate and all-encompassing reality while denying that there are any external relations.
At the beginning of the 20th century, Edmund Husserl (1859–1938) developed phenomenology and employed its method, the description of experience, to address ontological problems. This idea inspired his student Martin Heidegger (1889–1976) to clarify the meaning of being by exploring the mode of human existence. Jean-Paul Sartre responded to Heidegger's philosophy by examining the relation between being and nothingness from the perspective of human existence, freedom, and consciousness. Based on the phenomenological method, Nicolai Hartmann (1882–1950) developed a complex hierarchical ontology that divides reality into four levels: inanimate, biological, psychological, and spiritual.
Alexius Meinong (1853–1920) articulated a controversial ontological theory that includes nonexistent objects as part of being. Arguing against this theory, Bertrand Russell (1872–1970) formulated a fact ontology known as logical atomism. This idea was further refined by the early Ludwig Wittgenstein (1889–1951) and inspired D. M. Armstrong's (1926–2014) ontology. Alfred North Whitehead (1861–1947), by contrast, developed a process ontology. Rudolf Carnap (1891–1970) questioned the objectivity of ontological theories by claiming that what exists depends on one's linguistic framework. He had a strong influence on Willard Van Orman Quine (1908–2000), who analyzed the ontological commitments of scientific theories to solve ontological problems. Quine's student David Lewis (1941–2001) formulated the position of modal realism, which says that possible worlds are as real and concrete as the actual world. Since the end of the 20th century, interest in applied ontology has risen in computer and information science with the development of conceptual frameworks for specific domains.
Gravimetric analysis | Gravimetric analysis describes a set of methods used in analytical chemistry for the quantitative determination of an analyte (the ion being analyzed) based on its mass. The principle of this type of analysis is that once an ion's mass has been determined as a unique compound, that known measurement can then be used to determine the same analyte's mass in a mixture, as long as the relative quantities of the other constituents are known.
The four main types of this method of analysis are precipitation, volatilization, electro-analytical, and miscellaneous physical methods. The methods involve changing the phase of the analyte to separate it in its pure form from the original mixture and are quantitative measurements.
Precipitation method
The precipitation method is the one used, for example, for the determination of the amount of calcium in water. An excess of the precipitating reagent, typically oxalic acid (H2C2O4) or its salt ammonium oxalate, is added to a measured, known volume of water, and the calcium precipitates as calcium oxalate. A properly chosen reagent, when added to the aqueous solution, produces a highly insoluble precipitate from positive and negative ions that would otherwise remain soluble with their counterparts (see the reaction below).
The reaction is:
Formation of calcium oxalate:
Ca2+(aq) + C2O42-(aq) → CaC2O4(s)
The precipitate is collected, dried and ignited to high (red) heat which converts it entirely to calcium oxide.
The reaction, forming pure calcium oxide, is:
CaC2O4(s) → CaO(s) + CO(g) + CO2(g)
The ignited precipitate is cooled and then weighed in its crucible; the difference between the mass of the crucible with the residue and the mass of the empty crucible gives the mass of calcium oxide formed. That number can then be used to calculate the amount, or the percent concentration, of calcium in the original sample.
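The following Python sketch shows this back-calculation; the precipitate mass and sample volume are assumed illustrative values, not results from a real analysis.

# Back-calculation for the calcium determination: the measured mass of CaO is
# converted to the calcium content of the original water sample.
M_Ca, M_O = 40.078, 15.999   # molar masses, g/mol
M_CaO = M_Ca + M_O           # 56.077 g/mol

mass_CaO = 0.0560            # g of ignited precipitate (assumed)
volume_sample = 1.000        # L of water analysed (assumed)

moles_CaO = mass_CaO / M_CaO   # one mole of CaO per mole of Ca2+ originally present
mass_Ca = moles_CaO * M_Ca     # g of calcium in the sample
print(f"Calcium content: {1000 * mass_Ca / volume_sample:.1f} mg/L")  # ≈ 40.0 mg/L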
Volatilization methods
Volatilization methods can be either direct or indirect. Water eliminated in a quantitative manner from many inorganic substances by ignition is an example of a direct determination. It is collected on a solid desiccant and its mass determined by the gain in mass of the desiccant.
Another direct volatilization method involves carbonates which generally decompose to release carbon dioxide when acids are used. Because carbon dioxide is easily evolved when heat is applied, its mass is directly established by the measured increase in the mass of the absorbent solid used.
Determination of the amount of water by measuring the loss in mass of the sample during heating is an example of an indirect method. It is well known that changes in mass occur due to decomposition of many substances when heat is applied, regardless of the presence or absence of water. Because one must make the assumption that water was the only component lost, this method is less satisfactory than direct methods.
This assumption is frequently wrong: many components other than water can be lost on heating, and other factors can also contribute to the change in mass. The resulting error can be substantial and should not be disregarded.
Nevertheless, the indirect method, although less reliable than direct, is still widely used in commerce. For example, it's used to measure the moisture content of cereals, where a number of imprecise and inaccurate instruments are available for this purpose.
Types of volatilization methods
In volatilization methods, removal of the analyte involves separation by heating or chemically decomposing a volatile sample at a suitable temperature. In other words, thermal or chemical energy is used to liberate a volatile species. For example, the water content of a compound can be determined by vaporizing the water using thermal energy (heat). If oxygen is present, heat can also be used for combustion to isolate the species of interest and obtain the desired results.
The two most common gravimetric methods using volatilization are those for water and carbon dioxide. An example of this method is the determination of sodium hydrogen carbonate (sodium bicarbonate, the main ingredient in most antacid tablets) in a mixture of carbonate and bicarbonate. The total amount of this analyte, in whatever form, is obtained by addition of an excess of dilute sulfuric acid to the analyte in solution.
In this procedure, nitrogen gas is introduced through a tube into the flask that contains the solution, gently bubbling through it. The gas then exits, first passing a drying agent (here CaSO4, the common desiccant Drierite). It then passes a mixture of the drying agent and sodium hydroxide supported on asbestos or Ascarite II, a non-fibrous silicate containing sodium hydroxide. The mass of the carbon dioxide is obtained by measuring the increase in mass of this absorbent, that is, by weighing the tube containing the Ascarite before and after the procedure.
Carbon dioxide is liberated from the solution as it is heated (reaction 3) and is swept out of the flask by the nitrogen stream. The drying agent absorbs any aerosolized water and the water vapour produced in reaction 3. The mixture of the drying agent and NaOH then absorbs the CO2, together with any water formed when the CO2 reacts with the NaOH (reaction 4).
The reactions are:
Reaction 3 – liberation of carbon dioxide (and water)
NaHCO3(aq) + H2SO4(aq) → CO2(g) + H2O(l) + NaHSO4(aq).
Reaction 4 – absorption of CO2 and residual water
CO2(g) + 2 NaOH(s) → Na2CO3(s) + H2O(l).
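Assuming the increase in mass of the absorption tube equals the mass of CO2 evolved, the analyte content follows from simple stoichiometry. The following Python sketch uses an invented mass gain for illustration.

# Back-calculation for the CO2 method: the mass gained by the absorption tube
# is taken to be the mass of CO2 evolved, and one mole of CO2 corresponds to
# one mole of NaHCO3 in the sample.
M_CO2 = 44.009       # g/mol
M_NaHCO3 = 84.007    # g/mol

tube_gain = 0.221    # g, increase in mass of the NaOH/drying-agent tube (assumed)

moles_CO2 = tube_gain / M_CO2
mass_NaHCO3 = moles_CO2 * M_NaHCO3
print(f"NaHCO3 in sample: {mass_NaHCO3:.3f} g")  # ≈ 0.422 g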
Procedure
The sample is dissolved, if it is not already in solution.
The solution may be treated to adjust the pH (so that the proper precipitate is formed, or to suppress the formation of other precipitates). If it is known that species are present which interfere (by also forming precipitates under the same conditions as the analyte), the sample might require treatment with a different reagent to remove these interferents.
The precipitating reagent is added at a concentration that favors the formation of a "good" precipitate (see below). This may require low concentration, extensive heating (often described as "digestion"), or careful control of the pH. Digestion can help reduce the amount of coprecipitation.
After the precipitate has formed and been allowed to "digest", the solution is carefully filtered. The filter is used to collect the precipitate; smaller particles are more difficult to filter.
Depending on the procedure followed, the filter might be a piece of ashless filter paper in a fluted funnel, or a filter crucible. Filter paper is convenient because it does not typically require cleaning before use; however, filter paper can be chemically attacked by some solutions (such as concentrated acid or base), and may tear during the filtration of large volumes of solution.
The alternative is a crucible whose bottom is made of some porous material, such as sintered glass, porcelain or sometimes metal. These are chemically inert and mechanically stable, even at elevated temperatures. However, they must be carefully cleaned to minimize contamination or carryover (cross-contamination). Crucibles are often used with a mat of glass or asbestos fibers to trap small particles.
After the solution has been filtered, it should be tested to make sure that the analyte has been completely precipitated. This is easily done by adding a few drops of the precipitating reagent to the filtrate; if a precipitate is observed, the precipitation is incomplete.
After filtration, the precipitate – including the filter paper or crucible – is heated, or charred. This accomplishes the following:
The remaining moisture is removed (drying).
The precipitate is converted to a more chemically stable form. For instance, calcium ion might be precipitated using oxalate ion, to produce calcium oxalate (CaC2O4); it might then be heated to convert it into the oxide (CaO). It is vital that the empirical formula of the weighed precipitate be known, and that the precipitate be pure; if two forms are present, the results will be inaccurate.
The precipitate cannot be weighed with the necessary accuracy in place on the filter paper; nor can the precipitate be completely removed from the filter paper to weigh it. The precipitate can be carefully heated in a crucible until the filter paper has burned away; this leaves only the precipitate. (As the name suggests, "ashless" paper is used so that the precipitate is not contaminated with ash.)
After the precipitate is allowed to cool (preferably in a desiccator to keep it from absorbing moisture), it is weighed (in the crucible). To calculate the final mass of the analyte, the starting mass of the empty crucible is subtracted from the final mass of the crucible containing the sample. Since the composition of the precipitate is known, it is simple to calculate the mass of analyte in the original sample.
Example
A chunk of ore is to be analyzed for sulfur content. It is treated with concentrated nitric acid and potassium chlorate to convert all of the sulfur to sulfate (SO42-). The nitrate and chlorate are removed by treating the solution with concentrated HCl. The sulfate is precipitated with barium (Ba2+) and weighed as BaSO4.
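A worked version of this calculation is sketched below in Python; the ore and precipitate masses are assumed for illustration only.

# Sketch: percent sulfur in an ore from the mass of the BaSO4 precipitate.
# One mole of BaSO4 corresponds to one mole of S in the original sample.
M_S = 32.06            # g/mol
M_BaSO4 = 233.39       # g/mol

mass_sample = 0.500    # g of ore taken for analysis (assumed)
mass_BaSO4 = 0.420     # g of dried, weighed precipitate (assumed)

mass_S = mass_BaSO4 / M_BaSO4 * M_S
percent_S = 100 * mass_S / mass_sample
print(f"Sulfur content: {percent_S:.2f} %")   # ≈ 11.54 %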
Advantages
Gravimetric analysis, if methods are followed carefully, provides exceedingly precise results. In fact, gravimetric analysis was used to determine the atomic masses of many elements in the periodic table to six-figure accuracy. Gravimetry leaves very little room for instrumental error and does not require a series of standards for the calculation of an unknown. Methods also often do not require expensive equipment. Because of its high accuracy, gravimetric analysis, when performed correctly, can also be used to calibrate other instruments in lieu of reference standards. It is also widely used in undergraduate chemistry and biochemistry teaching laboratories, where it serves as an effective training exercise in careful quantitative technique.
Disadvantages
Gravimetric analysis usually only provides for the analysis of a single element, or a limited group of elements, at a time. Comparing modern dynamic flash combustion coupled with gas chromatography with traditional combustion analysis shows that the former is faster and allows the simultaneous determination of multiple elements, while the traditional method allowed only the determination of carbon and hydrogen. Procedures are often lengthy, and a slight misstep can ruin the analysis (colloid formation in precipitation gravimetry, for example). By comparison, robust methods such as spectrophotometry are much more efficient.
Steps in a gravimetric analysis
After appropriate dissolution of the sample the following steps should be followed for successful gravimetric procedure:
1. Preparation of the Solution: This may involve several steps, including adjusting the pH of the solution so that the precipitate forms quantitatively and has the desired properties, removing interferences, and adjusting the volume of the sample to suit the amount of precipitating agent to be added.
2. Precipitation: This requires addition of a precipitating agent solution to the sample solution. Upon addition of the first drops of the precipitating agent, supersaturation occurs, and then nucleation starts to occur, where a few molecules of precipitate aggregate together to form a nucleus. At this point, addition of extra precipitating agent will either form new nuclei or build up on existing nuclei to give a precipitate. This can be predicted by the von Weimarn ratio, according to which the particle size is inversely proportional to a quantity called the relative supersaturation, where
Relative supersaturation = (Q – S)/S
Here Q is the concentration of reactants before precipitation and S is the solubility of the precipitate in the medium from which it is being precipitated. Therefore, to get particle growth instead of further nucleation we must make the relative supersaturation ratio as small as possible. The optimum conditions for precipitation, which keep the supersaturation low, are:
a. Precipitation using dilute solutions to decrease Q
b. Slow addition of precipitating agent to keep Q as low as possible
c. Stirring the solution during addition of precipitating agent to avoid concentration sites and keep Q low
d. Increase solubility by precipitation from hot solution
e. Adjust the pH to increase S, but not by too much, as we do not want to lose precipitate by dissolution
f. Usually add a little excess of the precipitating agent for quantitative precipitation and check for completeness of the precipitation
3. Digestion of the precipitate: The precipitate is left hot (below boiling) for 30 minutes to one hour for the particles to be digested. Digestion involves dissolution of small particles and reprecipitation on larger ones, resulting in particle growth and better precipitate characteristics. This process is called Ostwald ripening. An important advantage of digestion is observed for colloidal precipitates, where large amounts of adsorbed ions cover the huge surface area of the precipitate. Digestion forces the small colloidal particles to agglomerate, which decreases their surface area and thus adsorption. Adsorption is a major problem in gravimetry in the case of colloidal precipitates, since a precipitate tends to adsorb its own ions present in excess, forming what is called a primary ion layer, which attracts ions from solution to form a secondary or counter-ion layer. Individual particles repel each other, keeping the colloidal properties of the precipitate. Particle coagulation can be forced by either digestion or addition of a strong electrolyte solution containing a high concentration of diverse ions, which shields the charges on the colloidal particles and forces agglomeration. Usually, coagulated particles return to the colloidal state if washed with water, a process called peptization.
4. Washing and Filtering the Precipitate: It is crucial to wash the precipitate thoroughly to remove all adsorbed species that would add to the weight of the precipitate. One should be careful not to use too much water, since part of the precipitate may be lost. Also, in the case of colloidal precipitates we should not use water as a washing solution, since peptization would occur. In such situations dilute nitric acid, ammonium nitrate, or dilute acetic acid may be used. Usually, it is good practice to check for the presence of precipitating agent in the filtrate of the final washing solution; its presence means that extra washing is required. Filtration should be done with an appropriately sized Gooch crucible or ignition filter paper.
5. Drying and Ignition: The purpose of drying (heating at about 120–150 °C in an oven) or ignition in a muffle furnace at temperatures ranging from 600 to 1200 °C is to get a material with an exactly known chemical composition so that the amount of analyte can be accurately determined.
6. Precipitation from Homogeneous Solution: To make Q minimum we can, in some situations, generate the precipitating agent in the precipitation medium rather than adding it. For example, to precipitate iron as the hydroxide, we dissolve urea in the sample. Heating of the solution generates hydroxide ions from the hydrolysis of urea. Hydroxide ions are generated at all points in solution and thus there are no sites of concentration. We can also adjust the rate of urea hydrolysis and thus control the hydroxide generation rate. This type of procedure can be very advantageous in case of colloidal precipitates.
Solubility in the presence of diverse ions
As expected from previous information, diverse ions have a screening effect on dissociated ions, which leads to extra dissociation. Solubility shows a clear increase in the presence of diverse ions, as the effective (concentration-based) solubility product increases. Look at the following example:
Find the solubility of AgCl (Ksp = 1.0 x 10−10) in 0.1 M NaNO3. The activity coefficients for silver and chloride are 0.75 and 0.76, respectively.
AgCl(s) = Ag+ + Cl−
We can no longer use the thermodynamic equilibrium constant (i.e. in absence of diverse ions) and we have to consider the concentration equilibrium constant or use activities instead of concentration if we use Kth:
Ksp = aAg+ aCl−
Ksp = [Ag+] fAg+ [Cl−] fCl−
1.0 x 10−10 = s x 0.75 x s x 0.76
s = 1.3 x 10−5 M
We have calculated the solubility of AgCl in pure water to be 1.0 x 10−5 M. Comparing this value with that obtained in the presence of diverse ions gives: % increase in solubility = {(1.3 x 10−5 – 1.0 x 10−5) / 1.0 x 10−5} x 100 = 30%
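The same arithmetic can be scripted; a minimal sketch reproducing the numbers above (only the values already given in the example are used):

```python
# Solubility of AgCl in 0.1 M NaNO3 using activities: Ksp = (s * f_Ag) * (s * f_Cl).
from math import sqrt

Ksp = 1.0e-10              # thermodynamic solubility product of AgCl
f_Ag, f_Cl = 0.75, 0.76    # activity coefficients given above

s = sqrt(Ksp / (f_Ag * f_Cl))     # solubility with diverse ions present
s_pure = sqrt(Ksp)                # solubility in pure water (coefficients taken as 1)

print(f"s = {s:.2e} M")                                   # about 1.3e-05 M
print(f"increase = {100 * (s - s_pure) / s_pure:.0f} %")  # ~32 %; rounding s to 1.3e-05 first gives the 30 % quoted above
```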
Therefore, once again we have evidence for an increase in dissociation, or a shift of the equilibrium to the right, in the presence of diverse ions.
References
External links
Gravimetric Quimociac Technique
Analytical chemistry
Scientific techniques
Condensation

Condensation is the change of the state of matter from the gas phase into the liquid phase, and is the reverse of vaporization. The word most often refers to the water cycle. It can also be defined as the change in the state of water vapor to liquid water when in contact with a liquid or solid surface or cloud condensation nuclei within the atmosphere. When the transition happens from the gaseous phase into the solid phase directly, the change is called deposition.
Initiation
Condensation is initiated by the formation of atomic/molecular clusters of that species within its gaseous volume—like raindrop or snowflake formation within clouds—or at the contact between such a gaseous phase and a liquid or solid surface. In clouds, this can be catalyzed by water-nucleating proteins, produced by atmospheric microbes, which are capable of binding gaseous or liquid water molecules.
Reversibility scenarios
A few distinct reversibility scenarios emerge here with respect to the nature of the surface.
absorption into the surface of a liquid (either of the same substance or one of its solvents)—is reversible as evaporation.
adsorption (as dew droplets) onto solid surface at pressures and temperatures higher than the species' triple point—also reversible as evaporation.
adsorption onto solid surface (as supplemental layers of solid) at pressures and temperatures lower than the species' triple point—is reversible as sublimation.
Most common scenarios
Condensation commonly occurs when a vapor is cooled and/or compressed to its saturation limit, the point at which the molecular density in the gas phase reaches its maximal threshold. Vapor cooling and compressing equipment that collects condensed liquids is called a "condenser".
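A small illustration of "cooled to its saturation limit": the dew point of moist air can be estimated with a Magnus-type approximation. The coefficients below (17.62 and 243.12 °C) are one commonly used parameterisation and are an assumption of this sketch, not something stated in the text above.

```python
# Estimate the dew point: the temperature at which air of a given humidity
# reaches saturation and condensation begins. Magnus-type approximation.
from math import log

def dew_point_c(temp_c, rel_humidity_pct):
    a, b = 17.62, 243.12   # assumed Magnus coefficients, valid roughly for 0 to 60 degC
    gamma = log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Indoor air at 22 degC and 70 % relative humidity condenses on any surface
# colder than roughly 16 degC (for example, a single-glazed window in winter):
print(round(dew_point_c(22.0, 70.0), 1))   # about 16.3
```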
Measurement
Psychrometry measures the rates of condensation from, and evaporation into, the moisture of air at various atmospheric pressures and temperatures. Liquid water is the product of the condensation of water vapor—condensation is the process of this phase conversion.
Applications of condensation
Condensation is a crucial component of distillation, an important laboratory and industrial chemistry application.
Because condensation is a naturally occurring phenomenon, it can often be used to generate water in large quantities for human use. Many structures are made solely for the purpose of collecting water from condensation, such as air wells and fog fences. Such systems can often be used to retain soil moisture in areas where active desertification is occurring—so much so that some organizations educate people living in affected areas about water condensers to help them deal effectively with the situation.
It is also a crucial process in forming particle tracks in a cloud chamber. In this case, ions produced by an incident particle act as nucleation centers for the condensation of the vapor producing the visible "cloud" trails.
Commercial applications of condensation, by consumers as well as industry, include power generation, water desalination, thermal management, refrigeration, and air conditioning.
Biological adaptation
Numerous living beings use water made accessible by condensation. A few examples of these are the Australian thorny devil, the darkling beetles of the Namibian coast, and the coast redwoods of the West Coast of the United States.
Condensation in building construction
Condensation in building construction is an unwanted phenomenon as it may cause dampness, mold-related health issues, wood rot, corrosion, weakening of mortar and masonry walls, and energy penalties due to increased heat transfer. To alleviate these issues, the indoor air humidity needs to be lowered, or the air ventilation in the building needs to be improved. This can be done in a number of ways, for example opening windows, turning on extractor fans, using dehumidifiers, drying clothes outside and covering pots and pans whilst cooking. Air conditioning or ventilation systems can be installed that help remove moisture from the air, and move air throughout a building. The amount of water vapor that can be stored in the air can be increased simply by increasing the temperature. However, this can be a double-edged sword, as most condensation in the home occurs when warm, moisture-heavy air comes into contact with a cool surface. As the air is cooled, it can no longer hold as much water vapor. This leads to water being deposited on the cool surface. This is very apparent when central heating is used in combination with single-glazed windows in winter.
Interstructure condensation may be caused by thermal bridges, insufficient or lacking insulation, damp proofing or insulated glazing.
See also
Air well (condenser)
Bose–Einstein condensate
Cloud physics
Condenser (heat transfer)
DNA condensation
Dropwise condensation
Groasis Waterboxx
Kelvin equation
Liquefaction of gases
Phase diagram
Phase transition
Retrograde condensation
Surface condenser
References
Sources
Phase transitions
Biomedical sciences

Biomedical sciences are a set of sciences applying portions of natural science or formal science, or both, to develop knowledge, interventions, or technology that are of use in healthcare or public health. Such disciplines as medical microbiology, clinical virology, clinical epidemiology, genetic epidemiology, and biomedical engineering are medical sciences. In explaining physiological mechanisms operating in pathological processes, however, pathophysiology can be regarded as basic science.
Biomedical Sciences, as defined by the UK Quality Assurance Agency for Higher Education Benchmark Statement in 2015, includes those science disciplines whose primary focus is the biology of human health and disease and ranges from the generic study of biomedical sciences and human biology to more specialised subject areas such as pharmacology, human physiology and human nutrition. It is underpinned by relevant basic sciences including anatomy and physiology, cell biology, biochemistry, microbiology, genetics and molecular biology, pharmacology, immunology, mathematics and statistics, and bioinformatics. As such the biomedical sciences have a much wider range of academic and research activities and economic significance than that defined by hospital laboratory sciences. Biomedical Sciences are the major focus of bioscience research and funding in the 21st century.
Roles within biomedical science
A sub-set of biomedical sciences is the science of clinical laboratory diagnosis. This is commonly referred to in the UK as 'biomedical science' or 'healthcare science'. There are at least 45 different specialisms within healthcare science, which are traditionally grouped into three main divisions:
specialisms involving life sciences
specialisms involving physiological science
specialisms involving medical physics or bioengineering
Life sciences specialties
Molecular toxicology
Molecular pathology
Blood transfusion science
Cervical cytology
Clinical biochemistry
Clinical embryology
Clinical immunology
Clinical pharmacology and therapeutics
Electron microscopy
External quality assurance
Haematology
Haemostasis and thrombosis
Histocompatibility and immunogenetics
Histopathology and cytopathology
Molecular genetics and cytogenetics
Molecular biology and cell biology
Microbiology including mycology
Bacteriology
Tropical diseases
Phlebotomy
Tissue banking/transplant
Virology
Physiological science specialisms
Physics and bioengineering specialisms
Biomedical science in the United Kingdom
The healthcare science workforce is an important part of the UK's National Health Service. While people working in healthcare science are only 5% of the staff of the NHS, 80% of all diagnoses can be attributed to their work.
The volume of specialist healthcare science work is a significant part of the work of the NHS. Every year, NHS healthcare scientists carry out:
nearly 1 billion pathology laboratory tests
more than 12 million physiological tests
support for 1.5 million fractions of radiotherapy
The four governments of the UK have recognised the importance of healthcare science to the NHS, introducing the Modernising Scientific Careers initiative to make certain that the education and training for healthcare scientists ensures there is the flexibility to meet patient needs while keeping up to date with scientific developments.
Graduates of an accredited biomedical science degree programme can also apply for the NHS' Scientist training programme, which gives successful applicants an opportunity to work in a clinical setting whilst also studying towards an MSc or Doctoral qualification.
Biomedical Science in the 20th century
At this point in history, the field of medicine was the most prevalent subfield of biomedical science, as several breakthroughs in how to treat diseases and support the immune system were made, along with the birth of body augmentations.
1910s
In 1912, the Institute of Biomedical Science was founded in the United Kingdom. More than a century later, the institute still exists and regularly publishes work on major breakthroughs in disease treatment and other advances in the field. The IBMS today represents approximately 20,000 members employed mainly in National Health Service and private laboratories.
1920s
In 1928, the British scientist Alexander Fleming discovered the first antibiotic, penicillin. This was a huge breakthrough in biomedical science because it allowed for the treatment of bacterial infections.
In 1926, the first artificial pacemaker was made by Australian physician Dr. Mark C. Lidwell. This portable machine was plugged into a lighting point. One pole was applied to a skin pad soaked with strong salt solution, while the other consisted of a needle insulated except at its point, which was plunged into the appropriate cardiac chamber before the machine was started. A switch was incorporated to change the polarity. The pacemaker rate ranged from about 80 to 120 pulses per minute, and the voltage was also variable, from 1.5 to 120 volts.
1930s
The 1930s were a huge era for biomedical research, as this was when antibiotics became more widespread and vaccines started to be developed. In 1935, the idea of a polio vaccine was introduced by Dr. Maurice Brodie. Brodie prepared a killed poliomyelitis vaccine, which he then tested on chimpanzees, himself, and several children. Brodie's vaccine trials went poorly, since the poliovirus became active in many of the human test subjects. Many subjects suffered severe side effects, including paralysis and death.
1940s
During and after World War II, the field of biomedical science saw a new age of technology and treatment methods. For instance, in 1941 the first hormonal treatment for prostate cancer was implemented by urologist and cancer researcher Charles B. Huggins. Huggins discovered that removing the testicles of a man with prostate cancer deprived the tumour of the hormones it needed to grow, often putting the patient into remission. This advancement led to the development of hormone-blocking drugs, which are less invasive and still used today. At the tail end of this decade, in 1949, the first bone marrow transplant was performed on a mouse. The surgery was conducted by Dr. Leon O. Jacobson, who discovered that he could transplant bone marrow and spleen tissues into a mouse that had no bone marrow and a destroyed spleen. The procedure is still used in modern medicine today and is responsible for saving countless lives.
1950s
In the 1950s, there was innovation in technology across all fields, and most importantly there were many breakthroughs which led to modern medicine. On 6 March 1953, Dr. Jonas Salk announced the completion of the first successful killed-virus polio vaccine. The vaccine was tested on about 1.6 million Canadian, American, and Finnish children in 1954, and was announced as safe on 12 April 1955.
See also
Biomedical research institution Austral University Hospital
References
External links
Extraordinary You: Case studies of Healthcare scientists in the UK's National Health Service
National Institute of Environmental Health Sciences
The US National Library of Medicine
National Health Service
Health sciences
Health care occupations
Science occupations
Physical change

Physical changes are changes affecting the form of a chemical substance, but not its chemical composition. Physical changes are used to separate mixtures into their component compounds, but can not usually be used to separate compounds into chemical elements or simpler compounds.
Physical changes occur when objects or substances undergo a change that does not change their chemical composition. This contrasts with the concept of chemical change in which the composition of a substance changes or one or more substances combine or break up to form new substances. In general a physical change is reversible using physical means. For example, salt dissolved in water can be recovered by allowing the water to evaporate.
A physical change involves a change in physical properties. Examples of physical properties include melting, transition to a gas, change of strength, change of durability, changes to crystal form, textural change, shape, size, color, volume and density.
An example of a physical change is the process of tempering steel to form a knife blade. A steel blank is repeatedly heated and hammered which changes the hardness of the steel, its flexibility and its ability to maintain a sharp edge.
Many physical changes also involve the rearrangement of atoms most noticeably in the formation of crystals. Many chemical changes are irreversible, and many physical changes are reversible, but reversibility is not a certain criterion for classification. Although chemical changes may be recognized by an indication such as odor, color change, or production of a gas, every one of these indicators can result from physical change.
Examples
Heating and cooling
Many elements and some compounds change from solids to liquids and from liquids to gases when heated and the reverse when cooled. Some substances such as iodine and carbon dioxide go directly from solid to gas in a process called sublimation.
Magnetism
Ferro-magnetic materials can become magnetic. The process is reversible and does not affect the chemical composition.
Crystallisation
Many elements and compounds form crystals. Some such as carbon can form several different forms including diamond, graphite, graphene and fullerenes including buckminsterfullerene.
Crystals in metals have a major effect on the physical properties of the metal, including strength and ductility. Crystal type, shape and size can be altered by physical hammering, rolling and by heat.
Mixtures
Mixtures of substances that are not soluble are usually readily separated by physical sieving or settlement. However, mixtures can have different properties from their individual components. One familiar example is the mixture of fine sand with water used to make sandcastles. Neither the sand on its own nor the water on its own will make a sandcastle, but because of the physical property of surface tension, the mixture behaves in a different way.
Solutions
Most solutions of salts and some compounds such as sugars can be separated by evaporation. Others, such as mixtures of volatile liquids like low-molecular-weight alcohols, can be separated by fractional distillation.
Alloys
The mixing of different metal elements is known as alloying. Brass is an alloy of copper and zinc. Separating individual metals from an alloy can be difficult and may require chemical processing – making an alloy is an example of a physical change that cannot readily be undone by physical means.
Alloys where mercury is one of the metals can be separated physically by melting the alloy and boiling the mercury off as a vapour.
See also
Chemical change
Process (science)
Physical property
References
Physical phenomena
Decomposition (computer science)

Decomposition in computer science, also known as factoring, is breaking a complex problem or system into parts that are easier to conceive, understand, program, and maintain.
Overview
Different types of decomposition are defined in computer science:
In structured programming, algorithmic decomposition breaks a process down into well-defined steps.
Structured analysis breaks down a software system from the system context level to system functions and data entities, as described by Tom DeMarco (Structured Analysis and System Specification, Yourdon, 1978).
Object-oriented decomposition breaks a large system down into progressively smaller classes or objects that are responsible for part of the problem domain.
According to Booch, algorithmic decomposition is a necessary part of object-oriented analysis and design, but object-oriented systems start with and emphasize decomposition into objects.
More generally, functional decomposition in computer science is a technique for mastering the complexity of the function of a model. A functional model of a system is thereby replaced by a series of functional models of subsystems.
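As a minimal illustration of algorithmic/functional decomposition (the report-generation task and all function names below are invented for the example, not taken from any source discussed here), a single monolithic procedure is replaced by a series of well-defined steps:

```python
# A monolithic task broken into well-defined steps, each of which can be
# understood, tested and maintained on its own.

def load_records(path):
    """Step 1: read raw data (stubbed here for brevity; the path is ignored)."""
    return [{"name": "a", "value": 3}, {"name": "b", "value": 5}]

def filter_valid(records):
    """Step 2: keep only records that make sense for the report."""
    return [r for r in records if r["value"] >= 0]

def summarise(records):
    """Step 3: reduce the cleaned data to the figures the report needs."""
    return {"count": len(records), "total": sum(r["value"] for r in records)}

def generate_report(path):
    """Top level: the whole process is just the composition of its steps."""
    return summarise(filter_valid(load_records(path)))

print(generate_report("data.csv"))  # {'count': 2, 'total': 8}
```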
Decomposition topics
Decomposition paradigm
A decomposition paradigm in computer programming is a strategy for organizing a program as a number of parts, and usually implies a specific way to organize a program text. Typically the aim of using a decomposition paradigm is to optimize some metric related to program complexity, for example a program's modularity or its maintainability.
Most decomposition paradigms suggest breaking down a program into parts to minimize the static dependencies between those parts, and to maximize each part's cohesiveness. Popular decomposition paradigms include the procedural, modules, abstract data type, and object oriented paradigms.
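A small sketch of object-oriented decomposition in the sense described above (the tiny catalogue domain and class names are invented for illustration): each class owns one cohesive responsibility, and the parts depend on each other only through narrow interfaces.

```python
# Each class owns one part of the problem domain; the parts interact only
# through small, explicit interfaces, which keeps coupling low.

class Book:
    """Knows only about its own bibliographic data."""
    def __init__(self, title, author):
        self.title, self.author = title, author

class Catalogue:
    """Responsible solely for storing and finding books."""
    def __init__(self):
        self._books = []
    def add(self, book):
        self._books.append(book)
    def by_author(self, author):
        return [b for b in self._books if b.author == author]

class ReportPrinter:
    """Responsible solely for presentation; depends only on Catalogue's interface."""
    def print_author(self, catalogue, author):
        for b in catalogue.by_author(author):
            print(f"{b.author}: {b.title}")

c = Catalogue()
c.add(Book("Structured Analysis and System Specification", "Tom DeMarco"))
ReportPrinter().print_author(c, "Tom DeMarco")
```

A procedural decomposition of the same problem would instead organize the text of the program around functions operating on shared data structures; the object-oriented version groups data with the operations responsible for it.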
Though the concept of decomposition paradigm is entirely distinct from that of model of computation, they are often confused. For example, the functional model of computation is often confused with procedural decomposition, and the actor model of computation is often confused with object oriented decomposition.
Decomposition diagram
A decomposition diagram shows a complex, process, organization, data subject area, or other type of object broken down into lower level, more detailed components. For example, decomposition diagrams may represent organizational structure or functional decomposition into processes. Decomposition diagrams provide a logical hierarchical decomposition of a system.
See also
Code refactoring
Component-based software engineering
Dynamization
Duplicate code
Event partitioning
How to Solve It
Integrated Enterprise Modeling
Personal information management
Readability
Subroutine
References
External links
Object Oriented Analysis and Design
On the Criteria To Be Used in Decomposing Systems into Modules
Software design
Decomposition methods
Elementary reaction

An elementary reaction is a chemical reaction in which one or more chemical species react directly to form products in a single reaction step and with a single transition state. In practice, a reaction is assumed to be elementary if no reaction intermediates have been detected or need to be postulated to describe the reaction on a molecular scale. An apparently elementary reaction may in fact be a stepwise reaction, i.e. a complicated sequence of chemical reactions, with reaction intermediates of variable lifetimes.
In a unimolecular elementary reaction, a molecule A dissociates or isomerises to form the product(s):
A → products
At constant temperature, the rate of such a reaction is proportional to the concentration of the species A:
rate = −d[A]/dt = k[A]
In a bimolecular elementary reaction, two atoms, molecules, ions or radicals, A and B, react together to form the product(s):
A + B → products
The rate of such a reaction, at constant temperature, is proportional to the product of the concentrations of the species A and B:
rate = −d[A]/dt = −d[B]/dt = k[A][B]
The rate expression for an elementary bimolecular reaction is sometimes referred to as the law of mass action as it was first proposed by Guldberg and Waage in 1864. An example of this type of reaction is a cycloaddition reaction.
This rate expression can be derived from first principles by using collision theory for ideal gases. For the case of dilute fluids equivalent results have been obtained from simple probabilistic arguments.
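As a numerical sketch of the bimolecular rate law above (the rate constant, starting concentrations and step size are arbitrary illustrative values), the concentrations can be stepped forward in time with a simple Euler integration of d[A]/dt = d[B]/dt = −k[A][B]:

```python
# Euler integration of an elementary bimolecular reaction A + B -> products.
# k, the initial concentrations and the step size are illustrative values only.

k = 0.5          # rate constant, 1/(M s)
A, B = 1.0, 0.6  # initial concentrations, M
dt, t_end = 0.01, 10.0

t = 0.0
while t < t_end:
    rate = k * A * B      # rate law for an elementary bimolecular step
    A -= rate * dt
    B -= rate * dt
    t += dt

print(f"[A] = {A:.3f} M, [B] = {B:.3f} M after {t_end:.0f} s")
```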
According to collision theory the probability of three chemical species reacting simultaneously with each other in a termolecular elementary reaction is negligible. Hence such termolecular reactions are commonly referred to as non-elementary reactions and can be broken down into a more fundamental set of bimolecular reactions, in agreement with the law of mass action. It is not always possible to derive overall reaction schemes, but solutions based on rate equations are often possible in terms of steady-state or Michaelis-Menten approximations.
Notes
Chemical kinetics
Physical chemistry
Systems biology

Systems biology is the computational and mathematical analysis and modeling of complex biological systems. It is a biology-based interdisciplinary field of study that focuses on complex interactions within biological systems, using a holistic approach (holism instead of the more traditional reductionism) to biological research.
Particularly from the year 2000 onwards, the concept has been used widely in biology in a variety of contexts. The Human Genome Project is an example of applied systems thinking in biology which has led to new, collaborative ways of working on problems in the biological field of genetics. One of the aims of systems biology is to model and discover emergent properties, properties of cells, tissues and organisms functioning as a system whose theoretical description is only possible using techniques of systems biology. These typically involve metabolic networks or cell signaling networks.
Overview
Systems biology can be considered from a number of different aspects.
As a field of study, particularly, the study of the interactions between the components of biological systems, and how these interactions give rise to the function and behavior of that system (for example, the enzymes and metabolites in a metabolic pathway or the heart beats).
As a paradigm, systems biology is usually defined in antithesis to the so-called reductionist paradigm (biological organisation), although it is consistent with the scientific method. The distinction between the two paradigms is referred to in these quotations: "the reductionist approach has successfully identified most of the components and many of the interactions but, unfortunately, offers no convincing concepts or methods to understand how system properties emerge ... the pluralism of causes and effects in biological networks is better addressed by observing, through quantitative measures, multiple components simultaneously and by rigorous data integration with mathematical models." (Sauer et al.) "Systems biology ... is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different. ... It means changing our philosophy, in the full sense of the term." (Denis Noble)
As a series of operational protocols used for performing research, namely a cycle composed of theory, analytic or computational modelling to propose specific testable hypotheses about a biological system, experimental validation, and then using the newly acquired quantitative description of cells or cell processes to refine the computational model or theory. Since the objective is a model of the interactions in a system, the experimental techniques that most suit systems biology are those that are system-wide and attempt to be as complete as possible. Therefore, transcriptomics, metabolomics, proteomics and high-throughput techniques are used to collect quantitative data for the construction and validation of models.
As the application of dynamical systems theory to molecular biology. Indeed, the focus on the dynamics of the studied systems is the main conceptual difference between systems biology and bioinformatics.
As a socioscientific phenomenon defined by the strategy of pursuing integration of complex data about the interactions in biological systems from diverse experimental sources using interdisciplinary tools and personnel.
History
Although the concept of a systems view of cellular function has been well understood since at least the 1930s, technological limitations made it difficult to make system-wide measurements. The advent of microarray technology in the 1990s opened up an entirely new vista for studying cells at the systems level. In 2000, the Institute for Systems Biology was established in Seattle in an effort to lure "computational" type people who it was felt were not attracted to the academic settings of the university. The institute did not have a clear definition of what the field actually was: roughly bringing together people from diverse fields to use computers to holistically study biology in new ways. A Department of Systems Biology at Harvard Medical School was launched in 2003. In 2006 it was predicted that the buzz generated by the "very fashionable" new concept would cause all the major universities to need a systems biology department, thus that there would be careers available for graduates with a modicum of ability in computer programming and biology. In 2006 the National Science Foundation put forward a challenge to build a mathematical model of the whole cell. In 2012 the first whole-cell model of Mycoplasma genitalium was achieved by the Covert Laboratory at Stanford University. The whole-cell model is able to predict viability of M. genitalium cells in response to genetic mutations.
An earlier precursor of systems biology, as a distinct discipline, may have been the work of systems theorist Mihajlo Mesarovic, who in 1966 organized an international symposium at the Case Institute of Technology in Cleveland, Ohio, titled Systems Theory and Biology. Mesarovic predicted that perhaps in the future there would be such a thing as "systems biology". Other early precursors that focused on the view that biology should be analyzed as a system, rather than a simple collection of parts, were Metabolic Control Analysis, developed by Henrik Kacser and Jim Burns (and later thoroughly revised) and by Reinhart Heinrich and Tom Rapoport, and Biochemical Systems Theory, developed by Michael Savageau.
According to Robert Rosen in the 1960s, holistic biology had become passé by the early 20th century, as more empirical science dominated by molecular chemistry had become popular. Echoing him forty years later, in 2006 Kling wrote that the success of molecular biology throughout the 20th century had suppressed holistic computational methods. By 2011 the National Institutes of Health had made grant money available to support over ten systems biology centers in the United States, but by 2012 Hunter wrote that systems biology still had some way to go to achieve its full potential. Nonetheless, proponents hoped that it might yet prove more useful in the future.
An important milestone in the development of systems biology was the international Physiome Project.
Associated disciplines
According to the interpretation of systems biology as using large data sets using interdisciplinary tools, a typical application is metabolomics, which is the complete set of all the metabolic products, metabolites, in the system at the organism, cell, or tissue level.
Items that may be held in a computer database include:
phenomics – organismal variation in phenotype as it changes during its life span
genomics – organismal deoxyribonucleic acid (DNA) sequence, including intra-organismal cell-specific variation (e.g., telomere length variation)
epigenomics/epigenetics – organismal and corresponding cell-specific transcriptomic regulating factors not empirically coded in the genomic sequence (e.g., DNA methylation, histone acetylation and deacetylation)
transcriptomics – organismal, tissue or whole-cell gene expression measurements by DNA microarrays or serial analysis of gene expression
interferomics – organismal, tissue, or cell-level transcript correcting factors (e.g., RNA interference)
proteomics – organismal, tissue, or cell-level measurements of proteins and peptides via two-dimensional gel electrophoresis, mass spectrometry or multi-dimensional protein identification techniques (advanced HPLC systems coupled with mass spectrometry); subdisciplines include phosphoproteomics, glycoproteomics and other methods to detect chemically modified proteins
glycomics – organismal, tissue, or cell-level measurements of carbohydrates
lipidomics – organismal, tissue, or cell-level measurements of lipids
The molecular interactions within the cell are also studied; this is called interactomics. A discipline in this field of study is protein–protein interactions, although interactomics includes the interactions of other molecules as well. Related fields include neuroelectrodynamics, in which the computer's or a brain's computing function as a dynamic system is studied along with its (bio)physical mechanisms, and fluxomics, the measurement of the rates of metabolic reactions in a biological system (cell, tissue, or organism).
In approaching a systems biology problem there are two main approaches: top-down and bottom-up. The top-down approach takes as much of the system into account as possible and relies largely on experimental results. The RNA-Seq technique is an example of an experimental top-down approach. Conversely, the bottom-up approach is used to create detailed models while also incorporating experimental data. An example of the bottom-up approach is the use of circuit models to describe a simple gene network.
Various technologies are utilized to capture dynamic changes in mRNA, proteins, and post-translational modifications. Related fields include mechanobiology, the study of forces and physical properties at all scales and their interplay with other regulatory mechanisms; biosemiotics, the analysis of the system of sign relations of an organism or other biosystems; and physiomics, the systematic study of the physiome in biology.
Cancer systems biology is an example of the systems biology approach, which can be distinguished by its specific object of study (tumorigenesis and the treatment of cancer). It works with specific data (patient samples, high-throughput data with particular attention to characterizing the cancer genome in patient tumour samples) and tools (immortalized cancer cell lines, mouse models of tumorigenesis, xenograft models, high-throughput sequencing methods, siRNA-based high-throughput gene-knockdown screenings, computational modeling of the consequences of somatic mutations and genome instability). The long-term objective of the systems biology of cancer is the ability to better diagnose cancer, classify it and better predict the outcome of a suggested treatment, which is a basis for personalized cancer medicine and, in the more distant future, the virtual cancer patient. Significant efforts in computational systems biology of cancer have been made in creating realistic multi-scale in silico models of various tumours.
The systems biology approach often involves the development of mechanistic models, such as the reconstruction of dynamic systems from the quantitative properties of their elementary building blocks. For instance, a cellular network can be modelled mathematically using methods coming from chemical kinetics and control theory. Due to the large number of parameters, variables and constraints in cellular networks, numerical and computational techniques are often used (e.g., flux balance analysis).
Bioinformatics and data analysis
Other aspects of computer science, informatics, and statistics are also used in systems biology. These include new forms of computational models, such as the use of process calculi to model biological processes (notable approaches include stochastic π-calculus, BioAmbients, Beta Binders, BioPEPA, and Brane calculus) and constraint-based modeling; integration of information from the literature, using techniques of information extraction and text mining; development of online databases and repositories for sharing data and models, approaches to database integration and software interoperability via loose coupling of software, websites and databases, or commercial suits; network-based approaches for analyzing high dimensional genomic data sets. For example, weighted correlation network analysis is often used for identifying clusters (referred to as modules), modeling the relationship between clusters, calculating fuzzy measures of cluster (module) membership, identifying intramodular hubs, and for studying cluster preservation in other data sets; pathway-based methods for omics data analysis, e.g. approaches to identify and score pathways with differential activity of their gene, protein, or metabolite members. Much of the analysis of genomic data sets also include identifying correlations. Additionally, as much of the information comes from different fields, the development of syntactically and semantically sound ways of representing biological models is needed.
Creating biological models
Researchers begin by choosing a biological pathway and diagramming all of the protein, gene, and/or metabolic pathways. After determining all of the interactions, mass-action kinetics or enzyme kinetic rate laws are used to describe the speed of the reactions in the system. Using mass conservation, the differential equations for the biological system can be constructed. Experiments or parameter fitting can be done to determine the parameter values to use in the differential equations. These parameter values will be the various kinetic constants required to fully describe the model. This model determines the behavior of species in biological systems and brings new insight into the specific activities of the system. Sometimes it is not possible to gather all the reaction rates of a system. Unknown reaction rates can be determined by simulating the model with the known parameters and the target behavior, which provides possible values for the missing parameters.
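A minimal sketch of this workflow, assuming an invented reversible reaction A + B ⇌ C with made-up rate constants (none of this comes from a specific published model): mass-action kinetics and mass conservation give a small system of differential equations that can be integrated numerically.

```python
# Mass-action model of a reversible reaction A + B <=> C.
# d[A]/dt = d[B]/dt = -k1[A][B] + k2[C];  d[C]/dt = +k1[A][B] - k2[C]
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 1.0, 0.2          # illustrative forward and reverse rate constants

def rhs(t, y):
    A, B, C = y
    v = k1 * A * B - k2 * C      # net forward flux
    return [-v, -v, v]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.8, 0.0], t_eval=np.linspace(0.0, 20.0, 100))
print("final concentrations [A], [B], [C]:", np.round(sol.y[:, -1], 3))
```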
The use of constraint-based reconstruction and analysis (COBRA) methods has become popular among systems biologists to simulate and predict the metabolic phenotypes, using genome-scale models. One of the methods is the flux balance analysis (FBA) approach, by which one can study the biochemical networks and analyze the flow of metabolites through a particular metabolic network, by optimizing the objective function of interest (e.g. maximizing biomass production to predict growth).
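As a toy sketch of the FBA idea (the three-reaction network, its bounds and the objective are invented purely for illustration), the problem reduces to a linear programme: maximise the objective flux subject to the steady-state constraint S·v = 0 and the flux bounds.

```python
# Toy flux balance analysis: nutrient uptake -> A -> B -> biomass.
# Maximise the biomass flux v3 subject to S @ v = 0 and the flux bounds.
import numpy as np
from scipy.optimize import linprog

S = np.array([[1, -1,  0],    # metabolite A: produced by v1, consumed by v2
              [0,  1, -1]])   # metabolite B: produced by v2, consumed by v3
bounds = [(0, 10), (0, None), (0, None)]   # uptake v1 is capped at 10 units
c = [0, 0, -1]                              # linprog minimises, so negate the objective

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes v1..v3:", res.x)      # expected: [10, 10, 10]
```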
See also
Biochemical systems equation
Biological computation
BioSystems (journal)
Computational biology
Exposome
Interactome
List of omics topics in biology
List of systems biology modeling software
Living systems
Metabolic Control Analysis
Metabolic network modelling
Modelling biological systems
Molecular pathological epidemiology
Network biology
Network medicine
Synthetic biology
Systems biomedicine
Systems immunology
Systems medicine
TIARA (database)
References
Further reading
External links
Biological Systems in bio-physics-wiki
Bioinformatics
Computational fields of study
Paradigm shift

A paradigm shift is a fundamental change in the basic concepts and experimental practices of a scientific discipline. It is a concept in the philosophy of science that was introduced and brought into the common lexicon by the American physicist and philosopher Thomas Kuhn. Even though Kuhn restricted the use of the term to the natural sciences, the concept of a paradigm shift has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events.
Kuhn presented his notion of a paradigm shift in his influential book The Structure of Scientific Revolutions (1962).
Kuhn contrasts paradigm shifts, which characterize a Scientific Revolution, to the activity of normal science, which he describes as scientific work done within a prevailing framework or paradigm. Paradigm shifts arise when the dominant paradigm under which normal science operates is rendered incompatible with new phenomena, facilitating the adoption of a new theory or paradigm.
History
The nature of scientific revolutions has been studied by modern philosophy since Immanuel Kant used the phrase in the preface to the second edition of his Critique of Pure Reason (1787). Kant used the phrase "revolution of the way of thinking" to refer to Greek mathematics and Newtonian physics. In the 20th century, new developments in the basic concepts of mathematics, physics, and biology revitalized interest in the question among scholars.
Original usage
In his 1962 book The Structure of Scientific Revolutions, Kuhn explains the development of paradigm shifts in science into four stages:
Normal science – In this stage, which Kuhn sees as most prominent in science, a dominant paradigm is active. This paradigm is characterized by a set of theories and ideas that define what is possible and rational to do, giving scientists a clear set of tools to approach certain problems. Some examples of dominant paradigms that Kuhn gives are: Newtonian physics, caloric theory, and the theory of electromagnetism. Insofar as paradigms are useful, they expand both the scope and the tools with which scientists do research. Kuhn stresses that, rather than being monolithic, the paradigms that define normal science can be particular to different people. A chemist and a physicist might operate with different paradigms of what a helium atom is. Under normal science, scientists encounter anomalies that cannot be explained by the universally accepted paradigm within which scientific progress has thereto been made.
Extraordinary research – When enough significant anomalies have accrued against a current paradigm, the scientific discipline is thrown into a state of crisis. To address the crisis, scientists push the boundaries of normal science in what Kuhn calls “extraordinary research”, which is characterized by its exploratory nature. Without the structures of the dominant paradigm to depend on, scientists engaging in extraordinary research must produce new theories, thought experiments, and experiments to explain the anomalies. Kuhn sees the practice of this stage – “the proliferation of competing articulations, the willingness to try anything, the expression of explicit discontent, the recourse to philosophy and to debate over fundamentals” – as even more important to science than paradigm shifts.
Adoption of a new paradigm – Eventually a new paradigm is formed, which gains its own new followers. For Kuhn, this stage entails both resistance to the new paradigm, and reasons for why individual scientists adopt it. According to Max Planck, "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Because scientists are committed to the dominant paradigm, and paradigm shifts involve gestalt-like changes, Kuhn stresses that paradigms are difficult to change. However, paradigms can gain influence by explaining or predicting phenomena much better than before (i.e., Bohr's model of the atom) or by being more subjectively pleasing. During this phase, proponents for competing paradigms address what Kuhn considers the core of a paradigm debate: whether a given paradigm will be a good guide for problems – things that neither the proposed paradigm nor the dominant paradigm are capable of solving currently.
Aftermath of the scientific revolution – In the long run, the new paradigm becomes institutionalized as the dominant one. Textbooks are written, obscuring the revolutionary process.
Features
Paradigm shifts and progress
A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) are a case for relativism: the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is always better, not just different.
Incommensurability
These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another—that they are incommensurable. This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes—so different that whether or not one was better, they could not be understood by one another. However, the philosopher Donald Davidson published the highly regarded essay "On the Very Idea of a Conceptual Scheme" in 1974 arguing that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are. Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous, with the wide application of multi-paradigmatic approaches in order to understand complex human behaviour.
Gradualism vs. sudden change
Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system.
In The Structure of Scientific Revolutions, Kuhn wrote, "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science" (p. 12). Kuhn's idea was itself revolutionary in its time as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognise such a paradigm shift. In the social sciences, people can still use earlier ideas to discuss the history of science.
Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it.
Examples
Natural sciences
Some of the "classical cases" of Kuhnian paradigm shifts in science are:
1543 – The transition in cosmology from a Ptolemaic cosmology to a Copernican one.
1543 – The acceptance of the work of Andreas Vesalius, whose work De humani corporis fabrica corrected the numerous errors in the previously held system of human anatomy created by Galen.
1687 – The transition in mechanics from Aristotelian mechanics to classical mechanics.
1783 – The acceptance of Lavoisier's theory of chemical reactions and combustion in place of phlogiston theory, known as the chemical revolution.
The transition in optics from geometrical optics to physical optics with Augustin-Jean Fresnel's wave theory.
1826 – The discovery of hyperbolic geometry.
1830 to 1833 – Geologist Charles Lyell published Principles of Geology, which not only put forth the concept of uniformitarianism, in direct contrast to catastrophism, the popular geological theory at the time, but also used geological evidence to determine that the Earth was older than 6,000 years, the age previously held to be true.
1859 – The revolution in evolution from goal-directed change to Charles Darwin's natural selection.
1880 – The germ theory of disease began overtaking Galen's miasma theory.
1905 – The development of quantum mechanics, which replaced classical mechanics at microscopic scales.
1887 to 1905 – The transition from the luminiferous aether present in space to electromagnetic radiation in spacetime.
1919 – The transition between the worldview of Newtonian gravity and general relativity.
1920 – The emergence of the modern view of the Milky Way as just one of countless galaxies within an immeasurably vast universe following the results of the Smithsonian's Great Debate between astronomers Harlow Shapley and Heber Curtis.
1952 – Chemists Stanley Miller and Harold Urey perform an experiment which simulated the conditions on the early Earth that favored chemical reactions that synthesized more complex organic compounds from simpler inorganic precursors, kickstarting decades of research into the chemical origins of life.
1964 – The discovery of cosmic microwave background radiation leads to the big bang theory being accepted over the steady state theory in cosmology.
1965 – The acceptance of plate tectonics as the explanation for large-scale geologic changes.
1969 – Astronomer Victor Safronov, in his book Evolution of the protoplanetary cloud and formation of the Earth and the planets, developed an early version of the currently accepted theory of planetary formation.
1974 – The November Revolution, with the discovery of the J/psi meson, and the acceptance of the existence of quarks and the Standard Model of particle physics.
1960 to 1985 – The acceptance of the ubiquity of nonlinear dynamical systems as promoted by chaos theory, instead of a Laplacian world-view of deterministic predictability.
Social sciences
In Kuhn's view, the existence of a single reigning paradigm is characteristic of the natural sciences, while philosophy and much of social science were characterized by a "tradition of claims, counterclaims, and debates over fundamentals." Others have applied Kuhn's concept of paradigm shift to the social sciences.
The movement known as the cognitive revolution moved away from behaviourist approaches to psychology and the acceptance of cognition as central to studying human behavior.
Anthropologist Franz Boas published The Mind of Primitive Man, which integrated his theories concerning the history and development of cultures and established a program that would dominate American anthropology in the following years. His research, along with that of his colleagues, combatted and debunked claims being made by scholars at a time when scientific racism and eugenics were dominant in many universities and institutions dedicated to studying humans and society. Eventually anthropology would apply a holistic approach, utilizing four subcategories to study humans: archaeology, cultural, evolutionary, and linguistic anthropology.
At the turn of the 20th century, sociologists, along with other social scientists developed and adopted methodological antipositivism, which sought to uphold a subjective perspective when studying human activities pertaining to culture, society, and behavior. This was in stark contrast to positivism, which took its influence from the methodologies utilized within the natural sciences.
First proposed by Ferdinand de Saussure in 1879, the laryngeal theory in Indo-European linguistics postulated the existence of "laryngeal" consonants in the Proto-Indo-European language (PIE), a theory that was confirmed by the discovery of the Hittite language in the early 20th century. The theory has since been accepted by the vast majority of linguists, paving the way for the internal reconstruction of the syntax and grammatical rules of PIE and is considered one of the most significant developments in linguistics since the initial discovery of the Indo-European language family.
The adoption of radiocarbon dating by archaeologists has been proposed as a paradigm shift because of how it greatly increased the time depth the archaeologists could reliably date objects from. Similarly the use of LIDAR for remote geospatial imaging of cultural landscapes, and the shift from processual to post-processual archaeology have both been claimed as paradigm shifts by archaeologists.
The emergence of three-phase traffic theory created by Boris Kerner in vehicular traffic science as an alternative theory to classical (standard) traffic flow theories.
Applied sciences
More recently, paradigm shifts are also recognisable in applied sciences:
In medicine, the transition from "clinical judgment" to evidence-based medicine.
In Artificial Intelligence, the transition from a knowledge-based to a data-driven paradigm has been discussed from 2010.
Other uses
The term "paradigm shift" has found uses in other contexts, representing the notion of a major change in a certain thought pattern—a radical change in personal beliefs, complex systems or organizations, replacing the former way of thinking or organizing with a radically different way of thinking or organizing:
M. L. Handa, a professor of sociology in education at O.I.S.E. University of Toronto, Canada, developed the concept of a paradigm within the context of social sciences. He defines what he means by "paradigm" and introduces the idea of a "social paradigm". In addition, he identifies the basic component of any social paradigm. Like Kuhn, he addresses the issue of changing paradigms, the process popularly known as "paradigm shift". In this respect, he focuses on the social circumstances that precipitate such a shift. Relatedly, he addresses how that shift affects social institutions, including the institution of education.
The concept has been developed for technology and economics in the identification of new techno-economic paradigms as changes in technological systems that have a major influence on the behaviour of the entire economy (Carlota Perez; earlier work only on technological paradigms by Giovanni Dosi). This concept is linked to Joseph Schumpeter's idea of creative destruction. Examples include the move to mass production and the introduction of microelectronics.
Two photographs of the Earth from space, "Earthrise" (1968) and "The Blue Marble" (1972), are thought to have helped to usher in the environmentalist movement, which gained great prominence in the years immediately following distribution of those images.
Hans Küng applies Thomas Kuhn's theory of paradigm change to the entire history of Christian thought and theology. He identifies six historical "macromodels": 1) the apocalyptic paradigm of primitive Christianity, 2) the Hellenistic paradigm of the patristic period, 3) the medieval Roman Catholic paradigm, 4) the Protestant (Reformation) paradigm, 5) the modern Enlightenment paradigm, and 6) the emerging ecumenical paradigm. He also discusses five analogies between natural science and theology in relation to paradigm shifts. Küng addresses paradigm change in his books, Paradigm Change in Theology and Theology for the Third Millennium: An Ecumenical View.
In the later part of the 1990s, 'paradigm shift' emerged as a buzzword, popularized as marketing speak and appearing more frequently in print and publication. In his book Mind The Gaffe, author Larry Trask advises readers to refrain from using it, and to use caution when reading anything that contains the phrase. It is referred to in several articles and books as abused and overused to the point of becoming meaningless.
The concept of technological paradigms has been advanced, particularly by Giovanni Dosi.
Criticism
In a 2015 retrospective on Kuhn, the philosopher Martin Cohen describes the notion of the paradigm shift as a kind of intellectual virus – spreading from hard science to social science and on to the arts and even everyday political rhetoric today. Cohen claims that Kuhn had only a very hazy idea of what it might mean and, in line with the Austrian philosopher of science Paul Feyerabend, accuses Kuhn of retreating from the more radical implications of his theory, which are that scientific facts are never really more than opinions whose popularity is transitory and far from conclusive. Cohen says scientific knowledge is less certain than it is usually portrayed, and that science and knowledge generally is not the 'very sensible and reassuringly solid sort of affair' that Kuhn describes, in which progress involves periodic paradigm shifts in which much of the old certainties are abandoned in order to open up new approaches to understanding that scientists would never have considered valid before. He argues that information cascades can distort rational, scientific debate. He has focused on health issues, including the example of highly mediatised 'pandemic' alarms, and why they have turned out eventually to be little more than scares.
See also
(author of Paradigm Shift)
References
Citations
Sources
External links
MIT 6.933J – The Structure of Engineering Revolutions. From MIT OpenCourseWare, course materials (graduate level) for a course on the history of technology through a Kuhnian lens.
Change
Cognition
Concepts in epistemology
Concepts in the philosophy of science
Consensus reality
Critical thinking
Epistemology of science
Historiography of science
Innovation
Philosophical theories
Reasoning
Scientific Revolution
Thomas Kuhn
Chemical equilibrium
In a chemical reaction, chemical equilibrium is the state in which both the reactants and products are present in concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system. This state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but they are equal. Thus, there are no net changes in the concentrations of the reactants and products. Such a state is known as dynamic equilibrium.
Historical introduction
The concept of chemical equilibrium was developed in 1803, after Berthollet found that some chemical reactions are reversible. For any reaction mixture to exist at equilibrium, the rates of the forward and backward (reverse) reactions must be equal. In the following chemical equation, arrows point both ways to indicate equilibrium. A and B are reactant chemical species, S and T are product species, and α, β, σ, and τ are the stoichiometric coefficients of the respective reactants and products:
α A + β B ⇌ σ S + τ T
The equilibrium concentration position of a reaction is said to lie "far to the right" if, at equilibrium, nearly all the reactants are consumed. Conversely the equilibrium position is said to be "far to the left" if hardly any product is formed from the reactants.
Guldberg and Waage (1865), building on Berthollet's ideas, proposed the law of mass action:
where A, B, S and T are active masses and k+ and k− are rate constants. Since at equilibrium forward and backward rates are equal:
and the ratio of the rate constants is also a constant, now known as an equilibrium constant.
By convention, the products form the numerator.
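As a reminder of the standard textbook form of these expressions (written here with {A} denoting the active mass, or activity, of A; the exponents follow the stoichiometric coefficients of the reaction above), the rates and the resulting equilibrium constant may be sketched as:
\text{forward rate} = k_{+} \{\mathrm{A}\}^{\alpha} \{\mathrm{B}\}^{\beta}, \qquad \text{backward rate} = k_{-} \{\mathrm{S}\}^{\sigma} \{\mathrm{T}\}^{\tau}
K = \frac{k_{+}}{k_{-}} = \frac{\{\mathrm{S}\}^{\sigma} \{\mathrm{T}\}^{\tau}}{\{\mathrm{A}\}^{\alpha} \{\mathrm{B}\}^{\beta}}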
However, the law of mass action is valid only for concerted one-step reactions that proceed through a single transition state and is not valid in general because rate equations do not, in general, follow the stoichiometry of the reaction as Guldberg and Waage had proposed (see, for example, nucleophilic aliphatic substitution by SN1 or reaction of hydrogen and bromine to form hydrogen bromide). Equality of forward and backward reaction rates, however, is a necessary condition for chemical equilibrium, though it is not sufficient to explain why equilibrium occurs.
Despite the limitations of this derivation, the equilibrium constant for a reaction is indeed a constant, independent of the activities of the various species involved, though it does depend on temperature as observed by the van 't Hoff equation. Adding a catalyst will affect both the forward reaction and the reverse reaction in the same way and will not have an effect on the equilibrium constant. The catalyst will speed up both reactions thereby increasing the speed at which equilibrium is reached.
Although the macroscopic equilibrium concentrations are constant in time, reactions do occur at the molecular level. For example, in the case of acetic acid dissolved in water and forming acetate and hydronium ions,
CH3CO2H + H2O ⇌ CH3CO2− + H3O+
a proton may hop from one molecule of acetic acid onto a water molecule and then onto an acetate anion to form another molecule of acetic acid and leaving the number of acetic acid molecules unchanged. This is an example of dynamic equilibrium. Equilibria, like the rest of thermodynamics, are statistical phenomena, averages of microscopic behavior.
Le Châtelier's principle (1884) predicts the behavior of an equilibrium system when changes to its reaction conditions occur. If a dynamic equilibrium is disturbed by changing the conditions, the position of equilibrium moves to partially reverse the change. For example, adding more S (to the chemical reaction above) from the outside will cause an excess of products, and the system will try to counteract this by increasing the reverse reaction and pushing the equilibrium point backward (though the equilibrium constant will stay the same).
If mineral acid is added to the acetic acid mixture, increasing the concentration of hydronium ion, the amount of dissociation must decrease as the reaction is driven to the left in accordance with this principle. This can also be deduced from the equilibrium constant expression for the reaction:
K = \frac{\{\mathrm{CH_3CO_2^-}\}\{\mathrm{H_3O^+}\}}{\{\mathrm{CH_3CO_2H}\}}
If {H3O+} increases, {CH3CO2H} must increase and {CH3CO2−} must decrease. The H2O is left out, as it is the solvent and its concentration remains high and nearly constant.
A quantitative version is given by the reaction quotient.
J. W. Gibbs suggested in 1873 that equilibrium is attained when the Gibbs free energy of the chemical potential of the system is at its minimum value (assuming the reaction is carried out at a constant temperature and pressure). What this means is that the derivative of the Gibbs energy with respect to reaction coordinate (a measure of the extent of reaction that has occurred, ranging from zero for all reactants to a maximum for all products) vanishes (because dG = 0), signaling a stationary point. This derivative is called the reaction Gibbs energy (or energy change) and corresponds to the difference between the chemical potentials of reactants and products at the composition of the reaction mixture. This criterion is both necessary and sufficient. If a mixture is not at equilibrium, the liberation of the excess Gibbs energy (or Helmholtz energy at constant volume reactions) is the "driving force" for the composition of the mixture to change until equilibrium is reached. The equilibrium constant can be related to the standard Gibbs free energy change for the reaction by the equation
\Delta_\mathrm{r} G^\ominus = -RT \ln K_\mathrm{eq}
where R is the universal gas constant and T the temperature.
When the reactants are dissolved in a medium of high ionic strength the quotient of activity coefficients may be taken to be constant. In that case the concentration quotient, Kc,
K_\mathrm{c} = \frac{[\mathrm{S}]^\sigma [\mathrm{T}]^\tau}{[\mathrm{A}]^\alpha [\mathrm{B}]^\beta}
where [A] is the concentration of A, etc., is independent of the analytical concentration of the reactants. For this reason, equilibrium constants for solutions are usually determined in media of high ionic strength. Kc varies with ionic strength, temperature and pressure (or volume). Likewise Kp for gases depends on partial pressure. These constants are easier to measure and are the ones encountered in high-school chemistry courses.
Thermodynamics
At constant temperature and pressure, one must consider the Gibbs free energy, G, while at constant temperature and volume, one must consider the Helmholtz free energy, A, for the reaction; and at constant internal energy and volume, one must consider the entropy, S, for the reaction.
The constant volume case is important in geochemistry and atmospheric chemistry where pressure variations are significant. Note that, if reactants and products were in standard state (completely pure), then there would be no reversibility and no equilibrium. Indeed, they would necessarily occupy disjoint volumes of space. The mixing of the products and reactants contributes a large entropy increase (known as entropy of mixing) to states containing equal mixture of products and reactants and gives rise to a distinctive minimum in the Gibbs energy as a function of the extent of reaction. The standard Gibbs energy change, together with the Gibbs energy of mixing, determine the equilibrium state.
In this article only the constant pressure case is considered. The relation between the Gibbs free energy and the equilibrium constant can be found by considering chemical potentials.
At constant temperature and pressure in the absence of an applied voltage, the Gibbs free energy, G, for the reaction depends only on the extent of reaction: ξ (Greek letter xi), and can only decrease according to the second law of thermodynamics. It means that the derivative of G with respect to ξ must be negative if the reaction happens; at the equilibrium this derivative is equal to zero.
\left(\frac{\partial G}{\partial \xi}\right)_{T,p} = 0 \quad : \text{equilibrium}
In order to meet the thermodynamic condition for equilibrium, the Gibbs energy must be stationary, meaning that the derivative of G with respect to the extent of reaction, ξ, must be zero. It can be shown that in this case, the sum of chemical potentials times the stoichiometric coefficients of the products is equal to the sum of those corresponding to the reactants. Therefore, the sum of the Gibbs energies of the reactants must be equal to the sum of the Gibbs energies of the products.
where μ is in this case a partial molar Gibbs energy, a chemical potential. The chemical potential of a reagent A is a function of the activity, {A} of that reagent.
\mu_\mathrm{A} = \mu_\mathrm{A}^\ominus + RT \ln \{\mathrm{A}\}
(where \mu_\mathrm{A}^\ominus is the standard chemical potential).
The definition of the Gibbs energy equation interacts with the fundamental thermodynamic relation to produce
dG = V\,dp - S\,dT + \sum_{i=1}^{k} \mu_i\, dN_i .
Inserting dNi = νi dξ into the above equation gives a stoichiometric coefficient and a differential that denotes the reaction occurring to an infinitesimal extent (dξ). At constant pressure and temperature the above equations can be written as
\left(\frac{\partial G}{\partial \xi}\right)_{T,p} = \sum_{i=1}^{k} \mu_i \nu_i
which is the Gibbs free energy change for the reaction. This results in:
\Delta_\mathrm{r} G_{T,p} = \sigma \mu_\mathrm{S} + \tau \mu_\mathrm{T} - \alpha \mu_\mathrm{A} - \beta \mu_\mathrm{B} .
By substituting the chemical potentials:
\mu_i = \mu_i^\ominus + RT \ln \{i\} ,
the relationship becomes:
\Delta_\mathrm{r} G_{T,p} = \left( \sigma \mu_\mathrm{S}^\ominus + \tau \mu_\mathrm{T}^\ominus - \alpha \mu_\mathrm{A}^\ominus - \beta \mu_\mathrm{B}^\ominus \right) + RT \ln \frac{\{\mathrm{S}\}^\sigma \{\mathrm{T}\}^\tau}{\{\mathrm{A}\}^\alpha \{\mathrm{B}\}^\beta}
where the sum in parentheses is the standard Gibbs energy change for the reaction, \Delta_\mathrm{r} G^\ominus, that can be calculated using thermodynamical tables.
The reaction quotient is defined as:
Q_\mathrm{r} = \frac{\{\mathrm{S}\}^\sigma \{\mathrm{T}\}^\tau}{\{\mathrm{A}\}^\alpha \{\mathrm{B}\}^\beta}
Therefore,
\left(\frac{\partial G}{\partial \xi}\right)_{T,p} = \Delta_\mathrm{r} G^\ominus + RT \ln Q_\mathrm{r} .
At equilibrium:
\left(\frac{\partial G}{\partial \xi}\right)_{T,p} = 0 ,
leading to:
0 = \Delta_\mathrm{r} G^\ominus + RT \ln K_\mathrm{eq}
and
\Delta_\mathrm{r} G^\ominus = -RT \ln K_\mathrm{eq} .
Obtaining the value of the standard Gibbs energy change allows the calculation of the equilibrium constant.
Addition of reactants or products
For a reactional system at equilibrium: Qr = Keq; ξ = ξeq.
If the activities of constituents are modified, the value of the reaction quotient changes and becomes different from the equilibrium constant: Qr ≠ Keq, and then
\left(\frac{\partial G}{\partial \xi}\right)_{T,p} = \Delta_\mathrm{r} G^\ominus + RT \ln Q_\mathrm{r} \neq 0 .
If the activity of a reagent i increases, the reaction quotient decreases. Then Q_\mathrm{r} < K_\mathrm{eq} and \left(\frac{\partial G}{\partial \xi}\right)_{T,p} < 0; the reaction will shift to the right (i.e. in the forward direction, and thus more products will form).
If the activity of a product j increases, then Q_\mathrm{r} > K_\mathrm{eq} and \left(\frac{\partial G}{\partial \xi}\right)_{T,p} > 0; the reaction will shift to the left (i.e. in the reverse direction, and thus less products will form).
Note that activities and equilibrium constants are dimensionless numbers.
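A minimal sketch of this bookkeeping in code; the activities, stoichiometric coefficients and Keq below are hypothetical values chosen only to illustrate the comparison of Qr with Keq:

def reaction_quotient(activities, coefficients):
    # Q_r = product of a_i ** nu_i, with nu_i negative for reactants and
    # positive for products, so that Q_r comes out as "products over reactants".
    q = 1.0
    for a, nu in zip(activities, coefficients):
        q *= a ** nu
    return q

def shift_direction(q_r, k_eq):
    # Compare the reaction quotient with the equilibrium constant.
    if q_r < k_eq:
        return "shifts forward (more products will form)"
    if q_r > k_eq:
        return "shifts in reverse (less products will form)"
    return "at equilibrium (no net change)"

# Hypothetical activities for A, B and S in A + B <=> S, with K_eq = 10.
q_r = reaction_quotient([0.10, 0.20, 0.05], [-1, -1, +1])
print(q_r, shift_direction(q_r, k_eq=10.0))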
Treatment of activity
The expression for the equilibrium constant can be rewritten as the product of a concentration quotient, Kc, and an activity coefficient quotient, Γ:
K = \frac{[\mathrm{S}]^\sigma [\mathrm{T}]^\tau}{[\mathrm{A}]^\alpha [\mathrm{B}]^\beta} \times \frac{\gamma_\mathrm{S}^\sigma \gamma_\mathrm{T}^\tau}{\gamma_\mathrm{A}^\alpha \gamma_\mathrm{B}^\beta} = K_\mathrm{c}\, \Gamma
[A] is the concentration of reagent A, etc. It is possible in principle to obtain values of the activity coefficients, γ. For solutions, equations such as the Debye–Hückel equation, or extensions such as the Davies equation, the specific ion interaction theory or the Pitzer equations, may be used (see software below). However this is not always possible. It is common practice to assume that Γ is a constant, and to use the concentration quotient in place of the thermodynamic equilibrium constant. It is also general practice to use the term equilibrium constant instead of the more accurate concentration quotient. This practice will be followed here.
For reactions in the gas phase partial pressure is used in place of concentration and fugacity coefficient in place of activity coefficient. In the real world, for example, when making ammonia in industry, fugacity coefficients must be taken into account. Fugacity, f, is the product of partial pressure and fugacity coefficient. The chemical potential of a species in the real gas phase is given by
\mu = \mu^\ominus + RT \ln \frac{f}{p^\ominus} = \mu^\ominus + RT \ln \frac{p}{p^\ominus} + RT \ln \gamma
so the general expression defining an equilibrium constant is valid for both solution and gas phases.
Concentration quotients
In aqueous solution, equilibrium constants are usually determined in the presence of an "inert" electrolyte such as sodium nitrate, NaNO3, or potassium perchlorate, KClO4. The ionic strength of a solution is given by
I = \frac{1}{2} \sum_{i=1}^{N} c_i z_i^2
where ci and zi stand for the concentration and ionic charge of ion type i, and the sum is taken over all the N types of charged species in solution. When the concentration of dissolved salt is much higher than the analytical concentrations of the reagents, the ions originating from the dissolved salt determine the ionic strength, and the ionic strength is effectively constant. Since activity coefficients depend on ionic strength, the activity coefficients of the species are effectively independent of concentration. Thus, the assumption that Γ is constant is justified. The concentration quotient is a simple multiple of the equilibrium constant.
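A small sketch of that calculation; the species and concentrations below are illustrative, not taken from any particular experiment:

def ionic_strength(concentrations, charges):
    # I = 1/2 * sum over all charged species of c_i * z_i**2
    return 0.5 * sum(c * z ** 2 for c, z in zip(concentrations, charges))

# 0.10 M NaNO3 background electrolyte plus 0.001 M of an M2+ salt, MCl2.
species = {"Na+": (0.100, +1), "NO3-": (0.100, -1),
           "M2+": (0.001, +2), "Cl-": (0.002, -1)}
I = ionic_strength([c for c, _ in species.values()],
                   [z for _, z in species.values()])
print(f"ionic strength = {I:.3f} mol dm-3")   # dominated by the background salt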
However, Kc will vary with ionic strength. If it is measured at a series of different ionic strengths, the value can be extrapolated to zero ionic strength. The concentration quotient obtained in this manner is known, paradoxically, as a thermodynamic equilibrium constant.
Before using a published value of an equilibrium constant in conditions of ionic strength different from the conditions used in its determination, the value should be adjusted (see software below).
Metastable mixtures
A mixture may appear to have no tendency to change, though it is not at equilibrium. For example, a mixture of SO2 and O2 is metastable as there is a kinetic barrier to formation of the product, SO3.
2 SO2 + O2 ⇌ 2 SO3
The barrier can be overcome when a catalyst is also present in the mixture as in the contact process, but the catalyst does not affect the equilibrium concentrations.
Likewise, the formation of bicarbonate from carbon dioxide and water is very slow under normal conditions
but almost instantaneous in the presence of the catalytic enzyme carbonic anhydrase.
Pure substances
When pure substances (liquids or solids) are involved in equilibria their activities do not appear in the equilibrium constant because their numerical values are considered one.
Applying the general formula for an equilibrium constant to the specific case of a dilute solution of acetic acid in water one obtains
CH3CO2H + H2O ⇌ CH3CO2− + H3O+
For all but very concentrated solutions, the water can be considered a "pure" liquid, and therefore it has an activity of one. The equilibrium constant expression is therefore usually written as
K = \frac{\{\mathrm{CH_3CO_2^-}\}\{\mathrm{H_3O^+}\}}{\{\mathrm{CH_3CO_2H}\}} .
A particular case is the self-ionization of water
2 H2O ⇌ H3O+ + OH−
Because water is the solvent, and has an activity of one, the self-ionization constant of water is defined as
K_\mathrm{w} = [\mathrm{H^+}][\mathrm{OH^-}]
It is perfectly legitimate to write [H+] for the hydronium ion concentration, since the state of solvation of the proton is constant (in dilute solutions) and so does not affect the equilibrium concentrations. Kw varies with variation in ionic strength and/or temperature.
The concentrations of H+ and OH− are not independent quantities. Most commonly [OH−] is replaced by Kw[H+]−1 in equilibrium constant expressions which would otherwise include hydroxide ion.
Solids also do not appear in the equilibrium constant expression, if they are considered to be pure and thus their activities taken to be one. An example is the Boudouard reaction:
2 CO ⇌ CO2 + C
for which the equation (without solid carbon) is written as:
K_\mathrm{c} = \frac{[\mathrm{CO_2}]}{[\mathrm{CO}]^2}
Multiple equilibria
Consider the case of a dibasic acid H2A. When dissolved in water, the mixture will contain H2A, HA− and A2−. This equilibrium can be split into two steps in each of which one proton is liberated:
H2A ⇌ HA− + H+ : K_1 = \frac{[\mathrm{HA^-}][\mathrm{H^+}]}{[\mathrm{H_2A}]}
HA− ⇌ A2− + H+ : K_2 = \frac{[\mathrm{A^{2-}}][\mathrm{H^+}]}{[\mathrm{HA^-}]}
K1 and K2 are examples of stepwise equilibrium constants. The overall equilibrium constant, βD, is the product of the stepwise constants:
H2A ⇌ A2− + 2 H+ : \beta_\mathrm{D} = \frac{[\mathrm{A^{2-}}][\mathrm{H^+}]^2}{[\mathrm{H_2A}]} = K_1 K_2
Note that these constants are dissociation constants because the products on the right hand side of the equilibrium expression are dissociation products. In many systems, it is preferable to use association constants:
A2− + H+ ⇌ HA− : \beta_1 = \frac{[\mathrm{HA^-}]}{[\mathrm{A^{2-}}][\mathrm{H^+}]}
A2− + 2 H+ ⇌ H2A : \beta_2 = \frac{[\mathrm{H_2A}]}{[\mathrm{A^{2-}}][\mathrm{H^+}]^2}
β1 and β2 are examples of association constants. Clearly β1 = 1/K2 and β2 = 1/βD; log β1 = pK2 and log β2 = pK1 + pK2.
For multiple equilibrium systems, also see: theory of Response reactions.
Effect of temperature
The effect of changing temperature on an equilibrium constant is given by the van 't Hoff equation
\frac{d \ln K}{dT} = \frac{\Delta H^\ominus}{RT^2}
Thus, for exothermic reactions (ΔH is negative), K decreases with an increase in temperature, but, for endothermic reactions (ΔH is positive), K increases with an increase in temperature. An alternative formulation is
\frac{d \ln K}{d(1/T)} = -\frac{\Delta H^\ominus}{R}
At first sight this appears to offer a means of obtaining the standard molar enthalpy of the reaction by studying the variation of K with temperature. In practice, however, the method is unreliable because error propagation almost always gives very large errors on the values calculated in this way.
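A short sketch of how the integrated van 't Hoff equation is typically used, assuming a temperature-independent standard enthalpy (the very assumption that limits the reliability discussed above); all numbers are hypothetical:

import math

R = 8.314  # gas constant, J mol^-1 K^-1

def k_at_new_temperature(k1, t1, t2, delta_h_standard):
    # Integrated van 't Hoff equation: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1),
    # valid only if the standard enthalpy change is temperature independent.
    return k1 * math.exp(-delta_h_standard / R * (1.0 / t2 - 1.0 / t1))

# Hypothetical exothermic reaction: K = 1.0e3 at 298 K, dH = -50 kJ/mol.
k_323 = k_at_new_temperature(1.0e3, 298.15, 323.15, -50.0e3)
print(f"K(323 K) = {k_323:.3g}")   # smaller than K(298 K), as expected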
Effect of electric and magnetic fields
The effect of electric field on equilibrium has been studied by Manfred Eigen among others.
Types of equilibrium
Equilibrium can be broadly classified as heterogeneous and homogeneous equilibrium. Homogeneous equilibrium consists of reactants and products belonging in the same phase whereas heterogeneous equilibrium comes into play for reactants and products in different phases.
In the gas phase: rocket engines
Industrial synthesis, such as that of ammonia in the Haber–Bosch process (depicted right), takes place through a succession of equilibrium steps, including adsorption processes
Atmospheric chemistry
Seawater and other natural waters: chemical oceanography
Distribution between two phases
log D distribution coefficient: important for pharmaceuticals where lipophilicity is a significant property of a drug
Liquid–liquid extraction, Ion exchange, Chromatography
Solubility product
Uptake and release of oxygen by hemoglobin in blood
Acid–base equilibria: acid dissociation constant, hydrolysis, buffer solutions, indicators, acid–base homeostasis
Metal–ligand complexation: sequestering agents, chelation therapy, MRI contrast reagents, Schlenk equilibrium
Adduct formation: host–guest chemistry, supramolecular chemistry, molecular recognition, dinitrogen tetroxide
In certain oscillating reactions, the approach to equilibrium is not asymptotic but takes the form of a damped oscillation.
The related Nernst equation in electrochemistry gives the difference in electrode potential as a function of redox concentrations.
When molecules on each side of the equilibrium are able to further react irreversibly in secondary reactions, the final product ratio is determined according to the Curtin–Hammett principle.
In these applications, terms such as stability constant, formation constant, binding constant, affinity constant, association constant and dissociation constant are used. In biochemistry, it is common to give units for binding constants, which serve to define the concentration units used when the constant's value was determined.
Composition of a mixture
When the only equilibrium is the formation of a 1:1 adduct, there are many ways in which the composition of the mixture can be calculated. For example, see ICE table for a traditional method of calculating the pH of a solution of a weak acid.
There are three approaches to the general calculation of the composition of a mixture at equilibrium.
The most basic approach is to manipulate the various equilibrium constants until the desired concentrations are expressed in terms of measured equilibrium constants (equivalent to measuring chemical potentials) and initial conditions.
Minimize the Gibbs energy of the system.
Satisfy the equation of mass balance. The equations of mass balance are simply statements that demonstrate that the total concentration of each reactant must be constant by the law of conservation of mass.
Mass-balance equations
In general, the calculations are rather complicated or complex. For instance, in the case of a dibasic acid, H2A dissolved in water the two reactants can be specified as the conjugate base, A2−, and the proton, H+. The following equations of mass-balance could apply equally well to a base such as 1,2-diaminoethane, in which case the base itself is designated as the reactant A:
T_\mathrm{A} = [\mathrm{A}] + [\mathrm{HA}] + [\mathrm{H_2A}]
T_\mathrm{H} = [\mathrm{H}] + [\mathrm{HA}] + 2[\mathrm{H_2A}] - [\mathrm{OH}]
with TA the total concentration of species A. Note that it is customary to omit the ionic charges when writing and using these equations.
When the equilibrium constants are known and the total concentrations are specified there are two equations in two unknown "free concentrations" [A] and [H]. This follows from the fact that [HA] = β1[A][H], [H2A] = β2[A][H]² and [OH] = Kw[H]⁻¹
so the concentrations of the "complexes" are calculated from the free concentrations and the equilibrium constants.
General expressions applicable to all systems with two reagents, A and B, would be
T_\mathrm{A} = [\mathrm{A}] + \sum_i p_i \beta_i [\mathrm{A}]^{p_i} [\mathrm{B}]^{q_i}
T_\mathrm{B} = [\mathrm{B}] + \sum_i q_i \beta_i [\mathrm{A}]^{p_i} [\mathrm{B}]^{q_i}
It is easy to see how this can be extended to three or more reagents.
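As a sketch of how such mass-balance equations can be solved in practice, the code below treats the dibasic acid case: the mass balance in A is substituted into the mass balance in H, leaving a single equation in the free [H] that is bracketed and solved numerically. The constants β1, β2 and the total concentrations are illustrative values, not data for any particular acid.

import math
from scipy.optimize import brentq

# Illustrative cumulative association constants for a dibasic acid H2A
# (beta1: A + H = HA, beta2: A + 2H = H2A) and the ionic product of water.
beta1, beta2, kw = 1.0e9, 1.0e13, 1.0e-14
T_A, T_H = 1.0e-3, 2.0e-3   # total (analytical) concentrations, mol dm^-3

def free_A(h):
    # Mass balance in A solved for the free concentration [A] at a given [H].
    return T_A / (1.0 + beta1 * h + beta2 * h ** 2)

def h_residual(h):
    # Mass balance in H; its root is the equilibrium free [H].
    a = free_A(h)
    return h + beta1 * a * h + 2.0 * beta2 * a * h ** 2 - kw / h - T_H

h = brentq(h_residual, 1.0e-13, 1.0e-1)   # bracket p[H] between about 1 and 13
print(f"p[H] = {-math.log10(h):.2f}, free [A] = {free_A(h):.2e} mol dm^-3")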
Polybasic acids
The composition of solutions containing reactants A and H is easy to calculate as a function of p[H]. When [H] is known, the free concentration [A] is calculated from the mass-balance equation in A.
The diagram alongside, which shows an example of the hydrolysis of the aluminium Lewis acid Al3+(aq), plots the species concentrations for a 5 × 10−6 M solution of an aluminium salt as a function of pH. Each concentration is shown as a percentage of the total aluminium.
Solution and precipitation
The diagram above illustrates the point that a precipitate that is not one of the main species in the solution equilibrium may be formed. At pH just below 5.5 the main species present in a 5 μM solution of Al3+ are soluble aluminium hydroxide complexes such as Al(OH)2+, but on raising the pH Al(OH)3 precipitates from the solution. This occurs because Al(OH)3 has a very large lattice energy. As the pH rises more and more Al(OH)3 comes out of solution. This is an example of Le Châtelier's principle in action: increasing the concentration of the hydroxide ion causes more aluminium hydroxide to precipitate, which removes hydroxide from the solution. When the hydroxide concentration becomes sufficiently high the soluble aluminate, Al(OH)4−, is formed.
Another common instance where precipitation occurs is when a metal cation interacts with an anionic ligand to form an electrically neutral complex. If the complex is hydrophobic, it will precipitate out of water. This occurs with the nickel ion Ni2+ and dimethylglyoxime, (dmgH2): in this case the lattice energy of the solid is not particularly large, but it greatly exceeds the energy of solvation of the molecule Ni(dmgH)2.
Minimization of Gibbs energy
At equilibrium, at a specified temperature and pressure, and with no external forces, the Gibbs free energy G is at a minimum:
G = \sum_{j} \mu_j N_j
where μj is the chemical potential of molecular species j, and Nj is the amount of molecular species j. It may be expressed in terms of thermodynamic activity as:
\mu_j = \mu_j^\ominus + RT \ln A_j
where \mu_j^\ominus is the chemical potential in the standard state, R is the gas constant, T is the absolute temperature, and Aj is the activity.
For a closed system, no particles may enter or leave, although they may combine in various ways. The total number of atoms of each element will remain constant. This means that the minimization above must be subjected to the constraints:
\sum_{j} a_{ij} N_j = b_i^0
where aij is the number of atoms of element i in molecule j and bi0 is the total number of atoms of element i, which is a constant, since the system is closed. If there are a total of k types of atoms in the system, then there will be k such equations. If ions are involved, an additional row is added to the aij matrix specifying the respective charge on each molecule which will sum to zero.
This is a standard problem in optimisation, known as constrained minimisation. The most common method of solving it is using the method of Lagrange multipliers (although other methods may be used).
Define:
\mathcal{G} = G + \sum_{i=1}^{k} \lambda_i \left( \sum_{j} a_{ij} N_j - b_i^0 \right)
where the λi are the Lagrange multipliers, one for each element. This allows each of the Nj and λi to be treated independently, and it can be shown using the tools of multivariate calculus that the equilibrium condition is given by
\frac{\partial \mathcal{G}}{\partial N_j} = \mu_j + \sum_{i=1}^{k} \lambda_i a_{ij} = 0 \qquad \text{and} \qquad \frac{\partial \mathcal{G}}{\partial \lambda_i} = \sum_{j} a_{ij} N_j - b_i^0 = 0
(For proof see Lagrange multipliers.) This is a set of (m + k) equations in (m + k) unknowns (the Nj and the λi) and may, therefore, be solved for the equilibrium concentrations Nj as long as the chemical activities are known as functions of the concentrations at the given temperature and pressure. (In the ideal case, activities are proportional to concentrations.) (See Thermodynamic databases for pure substances.) Note that the second equation is just the initial constraints for minimization.
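A minimal numerical sketch of the constrained minimisation for an ideal-gas mixture of CO, H2O, CO2 and H2 at standard pressure; a general-purpose constrained optimiser stands in for the Lagrange-multiplier algebra, and the standard chemical potentials used are placeholder numbers, not tabulated data:

import numpy as np
from scipy.optimize import minimize

# Species: CO, H2O, CO2, H2.  The "standard chemical potentials" mu0/RT below
# are placeholder numbers for illustration, not tabulated thermodynamic data.
mu0_over_RT = np.array([-10.0, -20.0, -25.0, -5.0])

# a_ij: number of atoms of element i (rows: C, O, H) in molecule j (columns).
A = np.array([[1, 0, 1, 0],    # carbon
              [1, 1, 2, 0],    # oxygen
              [0, 2, 0, 2]])   # hydrogen
b = A @ np.array([1.0, 1.0, 0.0, 0.0])   # elements from 1 mol CO + 1 mol H2O

def gibbs_over_RT(N):
    # G/RT for an ideal-gas mixture at standard pressure: sum N_j (mu0_j/RT + ln x_j).
    N = np.clip(N, 1.0e-12, None)          # keep the logarithms finite
    return float(N @ (mu0_over_RT + np.log(N / N.sum())))

result = minimize(gibbs_over_RT,
                  x0=np.full(4, 0.5),                      # feasible starting guess
                  bounds=[(1.0e-12, None)] * 4,
                  constraints={"type": "eq", "fun": lambda N: A @ N - b},
                  method="SLSQP")
print(dict(zip(["CO", "H2O", "CO2", "H2"], result.x.round(4))))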
This method of calculating equilibrium chemical concentrations is useful for systems with a large number of different molecules. The use of k atomic element conservation equations for the mass constraint is straightforward, and replaces the use of the stoichiometric coefficient equations. The results are consistent with those specified by chemical equations. For example, if equilibrium is specified by a single chemical equation
\sum_{j} \nu_j R_j = 0 ,
where νj is the stoichiometric coefficient for the j th molecule (negative for reactants, positive for products) and Rj is the symbol for the j th molecule, a properly balanced equation will obey:
\sum_{j} a_{ij} \nu_j = 0
Multiplying the first equilibrium condition by νj and using the above equation yields:
0 = \sum_{j} \nu_j \mu_j = \sum_{j} \nu_j \left( \mu_j^\ominus + RT \ln A_j \right)
As above, defining ΔG and ΔG⊖ by
\Delta G = \sum_{j} \nu_j \mu_j \qquad \text{and} \qquad \Delta G^\ominus = \sum_{j} \nu_j \mu_j^\ominus ,
this becomes
0 = \Delta G^\ominus + RT \ln \prod_{j} A_j^{\nu_j} = \Delta G^\ominus + RT \ln K_\mathrm{c} ,
where Kc is the equilibrium constant, and ΔG will be zero at equilibrium.
Analogous procedures exist for the minimization of other thermodynamic potentials.
See also
Acidosis
Alkalosis
Arterial blood gas
Benesi–Hildebrand method
Determination of equilibrium constants
Equilibrium constant
Henderson–Hasselbalch equation
Mass-action ratio
Michaelis–Menten kinetics
pCO2
pH
pKa
Redox equilibria
Steady state (chemistry)
Thermodynamic databases for pure substances
Non-random two-liquid model (NRTL model) – Phase equilibrium calculations
UNIQUAC model – Phase equilibrium calculations
References
Further reading
External links
Analytical chemistry
Physical chemistry
Abiotic component
In biology and ecology, abiotic components or abiotic factors are non-living chemical and physical parts of the environment that affect living organisms and the functioning of ecosystems. Abiotic factors and the phenomena associated with them underpin biology as a whole. They affect a plethora of species, in all forms of environmental conditions, such as marine or terrestrial animals. Humans can make or change abiotic factors in a species' environment. For instance, fertilizers can affect a snail's habitat, or the greenhouse gases which humans utilize can change marine pH levels.
Abiotic components include physical conditions and non-living resources that affect living organisms in terms of growth, maintenance, and reproduction. Resources are distinguished as substances or objects in the environment required by one organism and consumed or otherwise made unavailable for use by other organisms. Component degradation of a substance occurs by chemical or physical processes, e.g. hydrolysis. All non-living components of an ecosystem, such as atmospheric conditions and water resources, are called abiotic components.
Factors
In biology, abiotic factors can include water, light, radiation, temperature, humidity, atmosphere, acidity, salinity, precipitation, altitude, minerals, tides, rain, dissolved oxygen, nutrients, and soil. The macroscopic climate often influences each of the above. Pressure and sound waves may also be considered in the context of marine or sub-terrestrial environments. Abiotic factors in ocean environments also include aerial exposure, substrate, water clarity, solar energy and tides.
Consider the differences in the mechanics of C3, C4, and CAM plants in regulating the influx of carbon dioxide to the Calvin-Benson Cycle in relation to their abiotic stressors. C3 plants have no mechanisms to manage photorespiration, whereas C4 and CAM plants utilize a separate PEP carboxylase enzyme to prevent photorespiration, thus increasing the yield of photosynthesis processes in certain high energy environments.
Examples
Many Archaea require very high temperatures, pressures or unusual concentrations of chemical substances such as sulfur; this is due to their specialization into extreme conditions. In addition, fungi have also evolved to survive the temperature, humidity, and stability of their environment.
For example, there is a significant difference in access to both water and humidity between temperate rain forests and deserts. This difference in water availability causes a diversity in the organisms that survive in these areas. These differences in abiotic components alter the species present both by creating boundaries of what species can survive within the environment, and by influencing competition between two species. Abiotic factors such as salinity can give one species a competitive advantage over another, creating pressures that lead to speciation and alteration of a species to and from generalist and specialist competitors.
See also
Biotic component, a living part of an ecosystem that affects and shapes it.
Abiogenesis, the gradual process of increasing complexity of non-living into living matter.
Nitrogen cycle
Phosphorus cycle
References
Environmental science
Protein primary structure
Protein primary structure is the linear sequence of amino acids in a peptide or protein. By convention, the primary structure of a protein is reported starting from the amino-terminal (N) end to the carboxyl-terminal (C) end. Protein biosynthesis is most commonly performed by ribosomes in cells. Peptides can also be synthesized in the laboratory. Protein primary structures can be directly sequenced, or inferred from DNA sequences.
Formation
Biological
Amino acids are polymerised via peptide bonds to form a long backbone, with the different amino acid side chains protruding along it. In biological systems, proteins are produced during translation by a cell's ribosomes. Some organisms can also make short peptides by non-ribosomal peptide synthesis, which often use amino acids other than the standard 20, and may be cyclised, modified and cross-linked.
Chemical
Peptides can be synthesised chemically via a range of laboratory methods. Chemical methods typically synthesise peptides in the opposite order (starting at the C-terminus) to biological protein synthesis (starting at the N-terminus).
Notation
Protein sequence is typically notated as a string of letters, listing the amino acids starting at the amino-terminal end through to the carboxyl-terminal end. Either a three letter code or single letter code can be used to represent the 20 naturally occurring amino acids, as well as mixtures or ambiguous amino acids (similar to nucleic acid notation).
Peptides can be directly sequenced, or inferred from DNA sequences. Large sequence databases now exist that collate known protein sequences.
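As a small illustration of the two notations, a sketch that converts a sequence written in the three-letter code into the single-letter code (only the 20 standard amino acids are handled; mixtures and ambiguous codes are ignored):

THREE_TO_ONE = {
    "Ala": "A", "Arg": "R", "Asn": "N", "Asp": "D", "Cys": "C",
    "Gln": "Q", "Glu": "E", "Gly": "G", "His": "H", "Ile": "I",
    "Leu": "L", "Lys": "K", "Met": "M", "Phe": "F", "Pro": "P",
    "Ser": "S", "Thr": "T", "Trp": "W", "Tyr": "Y", "Val": "V",
}

def to_one_letter(sequence, separator="-"):
    # Convert e.g. "Met-Gly-Ser-Lys" (N- to C-terminus) into "MGSK".
    return "".join(THREE_TO_ONE[res.capitalize()]
                   for res in sequence.split(separator))

print(to_one_letter("Met-Gly-Ser-Lys"))   # -> MGSK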
Modification
In general, polypeptides are unbranched polymers, so their primary structure can often be specified by the sequence of amino acids along their backbone. However, proteins can become cross-linked, most commonly by disulfide bonds, and the primary structure also requires specifying the cross-linking atoms, e.g., specifying the cysteines involved in the protein's disulfide bonds. Other crosslinks include desmosine.
Isomerisation
The chiral centers of a polypeptide chain can undergo racemization. Although it does not change the sequence, it does affect the chemical properties of the sequence. In particular, the L-amino acids normally found in proteins can spontaneously isomerize at the Cα atom to form D-amino acids, which cannot be cleaved by most proteases. Additionally, proline can form stable cis- and trans-isomers at the peptide bond.
Post-translational modification
Additionally, the protein can undergo a variety of post-translational modifications, which are briefly summarized here.
The N-terminal amino group of a polypeptide can be modified covalently, e.g.,
acetylation
The positive charge on the N-terminal amino group may be eliminated by changing it to an acetyl group (N-terminal blocking).
formylation
The N-terminal methionine usually found after translation has an N-terminus blocked with a formyl group. This formyl group (and sometimes the methionine residue itself, if followed by Gly or Ser) is removed by the enzyme deformylase.
pyroglutamate
An N-terminal glutamine can attack itself, forming a cyclic pyroglutamate group.
myristoylation
Similar to acetylation. Instead of a simple methyl group, the myristoyl group has a tail of 14 hydrophobic carbons, which make it ideal for anchoring proteins to cellular membranes.
The C-terminal carboxylate group of a polypeptide can also be modified, e.g.,
amination (see Figure)
The C-terminus can also be blocked (thus, neutralizing its negative charge) by amination.
glycosyl phosphatidylinositol (GPI) attachment
Glycosyl phosphatidylinositol(GPI) is a large, hydrophobic phospholipid prosthetic group that anchors proteins to cellular membranes. It is attached to the polypeptide C-terminus through an amide linkage that then connects to ethanolamine, thence to sundry sugars and finally to the phosphatidylinositol lipid moiety.
Finally, the peptide side chains can also be modified covalently, e.g.,
phosphorylation
Aside from cleavage, phosphorylation is perhaps the most important chemical modification of proteins. A phosphate group can be attached to the sidechain hydroxyl group of serine, threonine and tyrosine residues, adding a negative charge at that site and producing an unnatural amino acid. Such reactions are catalyzed by kinases and the reverse reaction is catalyzed by phosphatases. The phosphorylated tyrosines are often used as "handles" by which proteins can bind to one another, whereas phosphorylation of Ser/Thr often induces conformational changes, presumably because of the introduced negative charge. The effects of phosphorylating Ser/Thr can sometimes be simulated by mutating the Ser/Thr residue to glutamate.
glycosylation
A catch-all name for a set of very common and very heterogeneous chemical modifications. Sugar moieties can be attached to the sidechain hydroxyl groups of Ser/Thr or to the sidechain amide groups of Asn. Such attachments can serve many functions, ranging from increasing solubility to complex recognition. All glycosylation can be blocked with certain inhibitors, such as tunicamycin.
deamidation (succinimide formation)
In this modification, an asparagine or aspartate side chain attacks the following peptide bond, forming a symmetrical succinimide intermediate. Hydrolysis of the intermediate produces either aspartate or the β-amino acid, iso(Asp). For asparagine, either product results in the loss of the amide group, hence "deamidation".
hydroxylation
Proline residues may be hydroxylated at either of two atoms, as can lysine (at one atom). Hydroxyproline is a critical component of collagen, which becomes unstable upon its loss. The hydroxylation reaction is catalyzed by an enzyme that requires ascorbic acid (vitamin C), deficiencies in which lead to many connective-tissue diseases such as scurvy.
methylation
Several protein residues can be methylated, most notably the positive groups of lysine and arginine. Arginine residues interact with the nucleic acid phosphate backbone and commonly form hydrogen bonds with the base residues, particularly guanine, in protein–DNA complexes. Lysine residues can be singly, doubly and even triply methylated. Methylation does not alter the positive charge on the side chain, however.
acetylation
Acetylation of the lysine amino groups is chemically analogous to the acetylation of the N-terminus. Functionally, however, the acetylation of lysine residues is used to regulate the binding of proteins to nucleic acids. The cancellation of the positive charge on the lysine weakens the electrostatic attraction for the (negatively charged) nucleic acids.
sulfation
Tyrosines may become sulfated on their hydroxyl oxygen atom. Somewhat unusually, this modification occurs in the Golgi apparatus, not in the endoplasmic reticulum. Similar to phosphorylated tyrosines, sulfated tyrosines are used for specific recognition, e.g., in chemokine receptors on the cell surface. As with phosphorylation, sulfation adds a negative charge to a previously neutral site.
prenylation and palmitoylation
The hydrophobic isoprene (e.g., farnesyl, geranyl, and geranylgeranyl groups) and palmitoyl groups may be added to the sulfur atom of cysteine residues to anchor proteins to cellular membranes. Unlike the GPI and myristoyl anchors, these groups are not necessarily added at the termini.
carboxylation
A relatively rare modification that adds an extra carboxylate group (and, hence, a double negative charge) to a glutamate side chain, producing a Gla residue. This is used to strengthen the binding to "hard" metal ions such as calcium.
ADP-ribosylation
The large ADP-ribosyl group can be transferred to several types of side chains within proteins, with heterogeneous effects. This modification is a target for the powerful toxins of disparate bacteria, e.g., Vibrio cholerae, Corynebacterium diphtheriae and Bordetella pertussis.
ubiquitination and SUMOylation
Various full-length, folded proteins can be attached at their C-termini to the sidechain ammonium groups of lysines of other proteins. Ubiquitin is the most common of these, and usually signals that the ubiquitin-tagged protein should be degraded.
Most of the polypeptide modifications listed above occur post-translationally, i.e., after the protein has been synthesized on the ribosome, typically occurring in the endoplasmic reticulum, a subcellular organelle of the eukaryotic cell.
Many other chemical reactions (e.g., cyanylation) have been applied to proteins by chemists, although they are not found in biological systems.
Cleavage and ligation
In addition to those listed above, the most important modification of primary structure is peptide cleavage (by chemical hydrolysis or by proteases). Proteins are often synthesized in an inactive precursor form; typically, an N-terminal or C-terminal segment blocks the active site of the protein, inhibiting its function. The protein is activated by cleaving off the inhibitory peptide.
Some proteins even have the power to cleave themselves. Typically, the hydroxyl group of a serine (rarely, threonine) or the thiol group of a cysteine residue will attack the carbonyl carbon of the preceding peptide bond, forming a tetrahedrally bonded intermediate [classified as a hydroxyoxazolidine (Ser/Thr) or hydroxythiazolidine (Cys) intermediate]. This intermediate tends to revert to the amide form, expelling the attacking group, since the amide form is usually favored by free energy, (presumably due to the strong resonance stabilization of the peptide group). However, additional molecular interactions may render the amide form less stable; the amino group is expelled instead, resulting in an ester (Ser/Thr) or thioester (Cys) bond in place of the peptide bond. This chemical reaction is called an N-O acyl shift.
The ester/thioester bond can be resolved in several ways:
Simple hydrolysis will split the polypeptide chain, where the displaced amino group becomes the new N-terminus. This is seen in the maturation of glycosylasparaginase.
A β-elimination reaction also splits the chain, but results in a pyruvoyl group at the new N-terminus. This pyruvoyl group may be used as a covalently attached catalytic cofactor in some enzymes, especially decarboxylases such as S-adenosylmethionine decarboxylase (SAMDC) that exploit the electron-withdrawing power of the pyruvoyl group.
Intramolecular transesterification, resulting in a branched polypeptide. In inteins, the new ester bond is broken by an intramolecular attack by the soon-to-be C-terminal asparagine.
Intermolecular transesterification can transfer a whole segment from one polypeptide to another, as is seen in the Hedgehog protein autoprocessing.
Sequence compression
The compression of amino acid sequences is a comparatively challenging task. The compression ratios achieved by existing specialized amino acid sequence compressors are low compared with those of DNA sequence compressors, mainly because of the characteristics of the data. For example, modeling inversions is harder because of the reverse information loss (from amino acids to DNA sequence). The lossless data compressor that currently provides the highest compression is AC2. AC2 mixes various context models using neural networks and encodes the data using arithmetic encoding.
History
The proposal that proteins were linear chains of α-amino acids was made nearly simultaneously by two scientists at the same conference in 1902, the 74th meeting of the Society of German Scientists and Physicians, held in Karlsbad. Franz Hofmeister made the proposal in the morning, based on his observations of the biuret reaction in proteins. Hofmeister was followed a few hours later by Emil Fischer, who had amassed a wealth of chemical details supporting the peptide-bond model. For completeness, the proposal that proteins contained amide linkages was made as early as 1882 by the French chemist E. Grimaux.
Despite these data and later evidence that proteolytically digested proteins yielded only oligopeptides, the idea that proteins were linear, unbranched polymers of amino acids was not accepted immediately. Some well-respected scientists such as William Astbury doubted that covalent bonds were strong enough to hold such long molecules together; they feared that thermal agitations would shake such long molecules asunder. Hermann Staudinger faced similar prejudices in the 1920s when he argued that rubber was composed of macromolecules.
Thus, several alternative hypotheses arose. The colloidal protein hypothesis stated that proteins were colloidal assemblies of smaller molecules. This hypothesis was disproved in the 1920s by ultracentrifugation measurements by Theodor Svedberg that showed that proteins had a well-defined, reproducible molecular weight and by electrophoretic measurements by Arne Tiselius that indicated that proteins were single molecules. A second hypothesis, the cyclol hypothesis advanced by Dorothy Wrinch, proposed that the linear polypeptide underwent a chemical cyclol rearrangement C=O + HN C(OH)-N that crosslinked its backbone amide groups, forming a two-dimensional fabric. Other primary structures of proteins were proposed by various researchers, such as the diketopiperazine model of Emil Abderhalden and the pyrrol/piperidine model of Troensegaard in 1942. Although never given much credence, these alternative models were finally disproved when Frederick Sanger successfully sequenced insulin and by the crystallographic determination of myoglobin and hemoglobin by Max Perutz and John Kendrew.
Primary structure in other molecules
Any linear-chain heteropolymer can be said to have a "primary structure" by analogy to the usage of the term for proteins, but this usage is rare compared to the extremely common usage in reference to proteins. In RNA, which also has extensive secondary structure, the linear chain of bases is generally just referred to as the "sequence" as it is in DNA (which usually forms a linear double helix with little secondary structure). Other biological polymers such as polysaccharides can also be considered to have a primary structure, although the usage is not standard.
Relation to secondary and tertiary structure
The primary structure of a biological polymer to a large extent determines the three-dimensional shape (tertiary structure). Protein sequence can be used to predict local features, such as segments of secondary structure, or trans-membrane regions. However, the complexity of protein folding currently prohibits predicting the tertiary structure of a protein from its sequence alone. Knowing the structure of a similar homologous sequence (for example a member of the same protein family) allows highly accurate prediction of the tertiary structure by homology modeling. If the full-length protein sequence is available, it is possible to estimate its general biophysical properties, such as its isoelectric point.
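A rough sketch of one such estimate, the isoelectric point, obtained by bisecting the net charge of the sequence as a function of pH. The side-chain and terminal pKa values are one commonly quoted set and differ between sources, so the result is only approximate:

# Approximate pKa values for ionisable groups (a commonly quoted set;
# published values differ, so the computed pI is only an estimate).
PKA_POSITIVE = {"K": 10.5, "R": 12.5, "H": 6.0, "N_term": 9.0}
PKA_NEGATIVE = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1, "C_term": 2.0}

def net_charge(sequence, ph):
    # Henderson-Hasselbalch fractional charge for every ionisable group.
    positive = [PKA_POSITIVE["N_term"]] + [PKA_POSITIVE[aa] for aa in sequence if aa in "KRH"]
    negative = [PKA_NEGATIVE["C_term"]] + [PKA_NEGATIVE[aa] for aa in sequence if aa in "DECY"]
    charge = sum(1.0 / (1.0 + 10.0 ** (ph - pka)) for pka in positive)
    charge -= sum(1.0 / (1.0 + 10.0 ** (pka - ph)) for pka in negative)
    return charge

def isoelectric_point(sequence, low=0.0, high=14.0, tolerance=0.01):
    # Net charge decreases monotonically with pH, so bisection finds the zero.
    while high - low > tolerance:
        mid = 0.5 * (low + high)
        if net_charge(sequence, mid) > 0.0:
            low = mid
        else:
            high = mid
    return 0.5 * (low + high)

print(round(isoelectric_point("MKWVTFISLLFLFSSAYS"), 2))   # single-letter sequence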
Sequence families are often determined by sequence clustering, and structural genomics projects aim to produce a set of representative structures to cover the sequence space of possible non-redundant sequences.
See also
Protein sequencing
Nucleic acid primary structure
Translation
Pseudo amino acid composition
Notes and references
Protein structure 1
Stereochemistry
Computational chemistry
Computational chemistry is a branch of chemistry that uses computer simulations to assist in solving chemical problems. It uses methods of theoretical chemistry incorporated into computer programs to calculate the structures and properties of molecules, groups of molecules, and solids. The importance of this subject stems from the fact that, with the exception of some relatively recent findings related to the hydrogen molecular ion (dihydrogen cation), achieving an accurate quantum mechanical depiction of chemical systems analytically, or in a closed form, is not feasible. The complexity inherent in the many-body problem exacerbates the challenge of providing detailed descriptions of quantum mechanical systems. While computational results normally complement information obtained by chemical experiments, they can occasionally predict unobserved chemical phenomena.
Overview
Computational chemistry differs from theoretical chemistry, which involves a mathematical description of chemistry. However, computational chemistry involves the usage of computer programs and additional mathematical skills in order to accurately model various chemical problems. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions.
Historically, computational chemistry has had two different aspects:
Computational studies, used to find a starting point for a laboratory synthesis or to assist in understanding experimental data, such as the position and source of spectroscopic peaks.
Computational studies, used to predict the possibility of so far entirely unknown molecules or to explore reaction mechanisms not readily studied via experiments.
These aspects, along with computational chemistry's purpose, have resulted in a whole host of algorithms.
History
Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927, using valence bond theory. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow.
With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One significant advancement was marked by Clemens C. J. Roothaan's 1951 paper in the Reviews of Modern Physics. This paper focused largely on the "LCAO MO" approach (Linear Combination of Atomic Orbitals Molecular Orbitals). For many years, it was the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. The first ab initio Hartree–Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960. The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s. The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers. By 1971, when a bibliography of ab initio calculations was published, the largest molecules included were naphthalene and azulene. Abstracts of many earlier developments in ab initio theory have been published by Schaefer.
In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules, ranging in complexity from butadiene and benzene to ovalene, were generated on computers at Berkeley and Oxford. These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO.
In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL, and POLYAYTOM, began to be used to speed ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as MM2 force field, were developed, primarily by Norman Allinger.
One of the first mentions of the term computational chemistry can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality." During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry. The Journal of Computational Chemistry was first published in 1980.
Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, "for his development of the density-functional theory", and John Pople, "for his development of computational methods in quantum chemistry", received the 1998 Nobel Prize in Chemistry. Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems".
Applications
There are several fields within computational chemistry.
The prediction of the molecular structure of molecules by the use of the simulation of forces, or more accurate quantum chemical methods, to find stationary points on the energy surface as the position of the nuclei is varied.
Storing and searching for data on chemical entities (see chemical databases).
Identifying correlations between chemical structures and properties (see quantitative structure–property relationship (QSPR) and quantitative structure–activity relationship (QSAR)).
Computational approaches to help in the efficient synthesis of compounds.
Computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design and catalysis).
These fields can give rise to several applications as shown below.
Catalysis
Computational chemistry is a tool for analyzing catalytic systems without doing experiments. Modern electronic structure theory and density functional theory has allowed researchers to discover and understand catalysts. Computational studies apply theoretical chemistry to catalysis research. Density functional theory methods calculate the energies and orbitals of molecules to give models of those structures. Using these methods, researchers can predict values like activation energy, site reactivity and other thermodynamic properties.
Data that is difficult to obtain experimentally can be found using computational methods to model the mechanisms of catalytic cycles. Skilled computational chemists provide predictions that are close to experimental data with proper considerations of methods and basis sets. With good computational data, researchers can predict how catalysts can be improved to lower the cost and increase the efficiency of these reactions.
Drug development
Computational chemistry is used in drug development to model potentially useful drug molecules and help companies save time and cost in drug development. The drug discovery process involves analyzing data, finding ways to improve current molecules, finding synthetic routes, and testing those molecules. Computational chemistry helps with this process by giving predictions of which experiments would be best to do without conducting other experiments. Computational methods can also find values that are difficult to find experimentally like pKa's of compounds. Methods like density functional theory can be used to model drug molecules and find their properties, like their HOMO and LUMO energies and molecular orbitals. Computational chemists also help companies with developing informatics, infrastructure and designs of drugs.
Aside from drug synthesis, drug carriers are also researched by computational chemists for nanomaterials. Computational modelling allows researchers to simulate environments in which to test the effectiveness and stability of drug carriers. Understanding how water interacts with these nanomaterials ensures stability of the material in human bodies. These computational simulations help researchers optimize the material and find the best way to structure these nanomaterials before making them.
Computational chemistry databases
Databases are useful for both computational and non-computational chemists in research and in verifying the validity of computational methods. Empirical data is used to analyze the error of computational methods against experimental data. Empirical data helps researchers with their choices of methods and basis sets so that they can have greater confidence in their results. Computational chemistry databases are also used in testing software or hardware for computational chemistry.
Databases can also use purely calculated data, in which calculated values are used in place of experimental values. Purely calculated data avoids having to adjust for differing experimental conditions, such as zero-point energy corrections. Such calculations can also avoid experimental errors for molecules that are difficult to test. Though purely calculated data is often not perfect, identifying issues is often easier for calculated data than for experimental data.
Databases also give public access to information for researchers to use. They contain data that other researchers have found and uploaded to these databases so that anyone can search for them. Researchers use these databases to find information on molecules of interest and learn what can be done with those molecules. Some publicly available chemistry databases include the following.
BindingDB: Contains experimental information about protein-small molecule interactions.
RCSB: Stores publicly available 3D models of macromolecules (proteins, nucleic acids) and small molecules (drugs, inhibitors)
ChEMBL: Contains data from research on drug development such as assay results.
DrugBank: Data about mechanisms of drugs can be found here.
Methods
Ab initio method
The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theory, with no inclusion of experimental data – are called ab initio methods. A theoretical approximation is rigorously defined on first principles and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made).
Ab initio methods need to define a level of theory (the method) and a basis set. A basis set consists of functions centered on the molecule's atoms. These sets are then used to describe molecular orbitals via the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz.
A common type of ab initio electronic structure calculation is the Hartree–Fock method (HF), an extension of molecular orbital theory, where electron-electron repulsions in the molecule are not specifically taken into account; only the electrons' average effect is included in the calculation. As the basis set size increases, the energy and wave function tend towards a limit called the Hartree–Fock limit.
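As a concrete illustration, a single-point Hartree–Fock calculation can be set up in a few lines of Python. The sketch below assumes the open-source PySCF package; the water geometry and the minimal STO-3G basis are arbitrary illustrative choices, not recommendations.

# Minimal restricted Hartree-Fock sketch (assumes PySCF is installed: pip install pyscf)
from pyscf import gto, scf

# Build a water molecule with a small basis set; the MOs are expanded as LCAOs of these functions
mol = gto.M(
    atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",  # coordinates in angstrom
    basis="sto-3g",
)

mf = scf.RHF(mol)      # restricted Hartree-Fock: electron repulsion enters only as a mean field
energy = mf.kernel()   # iterate the self-consistent field equations to convergence
print("RHF total energy (hartree):", energy)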
Many types of calculations begin with a Hartree–Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. These types of calculations are termed post-Hartree–Fock methods. By continually improving these methods, scientists can get increasingly closer to perfectly predicting the behavior of atomic and molecular systems under the framework of quantum mechanics, as defined by the Schrödinger equation. To obtain exact agreement with the experiment, it is necessary to include specific terms, some of which are far more important for heavy atoms than lighter ones.
In most cases, the Hartree–Fock wave function occupies a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is inadequate, and several configurations must be used.
The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface. Such a surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without full knowledge of the complete surface.
Computational thermochemistry
A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way, it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods.
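For orientation, the chemical-accuracy threshold can be expressed in the various energy units used by electronic-structure programs. The short conversion below uses standard factors and is purely illustrative.

# Chemical accuracy (~1 kcal/mol) expressed in other common energy units
HARTREE_TO_KCAL_PER_MOL = 627.509   # 1 hartree in kcal/mol
KCAL_TO_KJ = 4.184                  # 1 kcal in kJ

chemical_accuracy_kcal = 1.0
print("in kJ/mol:  ", chemical_accuracy_kcal * KCAL_TO_KJ)               # about 4 kJ/mol
print("in hartree: ", chemical_accuracy_kcal / HARTREE_TO_KCAL_PER_MOL)  # about 1.6e-3 hartree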
Chemical dynamics
After the electronic and nuclear variables are separated within the Born–Oppenheimer representation, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time-evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms.
The most popular methods for propagating the wave packet associated to the molecular geometry are:
the Chebyshev (real) polynomial,
the multi-configuration time-dependent Hartree method (MCTDH),
the semiclassical method
and the split operator technique explained below.
Split operator technique
How a computational method solves quantum equations affects both the accuracy and the efficiency of the method. The split operator technique is one such method for solving differential equations. In computational chemistry, it reduces the computational cost of simulating chemical systems; such costs are measured by how long computers take to calculate these systems, which can be days for more complex systems. The method works by splitting the differential equation into two (or more, when there are more than two operators) simpler sub-problems, solving each separately, and then recombining the partial solutions into one easily calculable solution.
This method is used in many fields that require solving differential equations, such as biology. However, the technique comes with a splitting error. For example, consider an equation whose exact solution over a time step h is generated by the exponential exp(h(A + B)) of two operators A and B.
The exponential can be split into a product of the two simpler exponentials, exp(h(A + B)) ≈ exp(hA)·exp(hB), but the result is only approximate when A and B do not commute. This is an example of first-order splitting.
There are ways to reduce this error, which include taking an average of two split equations.
Another way to increase accuracy is to use higher-order splitting; the symmetric form exp(hA/2)·exp(hB)·exp(hA/2) is second-order accurate. Usually, second-order splitting is the most that is done, because higher-order schemes require much more computation, become difficult to implement, and despite their higher formal accuracy are rarely worth the cost for solving differential equations.
Computational chemists spend much time making systems calculated with the split operator technique more accurate while minimizing the computational cost. Balancing accuracy against cost in this way is a major challenge for many chemists trying to simulate molecules or chemical environments.
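To make the splitting concrete, the sketch below propagates a one-dimensional Gaussian wave packet in a harmonic potential using second-order (Strang) splitting, switching between the position and momentum representations with fast Fourier transforms. The grid, time step and potential are arbitrary illustrative choices, with hbar = m = 1.

# Split-operator (split-step Fourier) propagation of a 1D wave packet; illustrative only
import numpy as np

n = 512
x = np.linspace(-10.0, 10.0, n, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)      # momentum-space grid

V = 0.5 * x**2                                  # harmonic potential (operator B)
dt = 0.01                                       # time step

psi = np.exp(-(x - 2.0) ** 2).astype(complex)   # displaced Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize

half_potential = np.exp(-0.5j * V * dt)         # exp(-i V dt/2), applied in position space
kinetic = np.exp(-0.5j * k**2 * dt)             # exp(-i k^2 dt/2), applied in momentum space

for _ in range(1000):
    psi = half_potential * psi
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi = half_potential * psi

print("norm after propagation:", np.sum(np.abs(psi) ** 2) * dx)  # stays close to 1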
Density functional methods
Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are termed hybrid functional methods.
Semi-empirical methods
Semi-empirical quantum chemistry methods are based on the Hartree–Fock formalism, but make many approximations and obtain some parameters from empirical data. They were very important in computational chemistry from the 1960s to the 1990s, especially for treating large molecules where the full Hartree–Fock method without approximations was too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods.
Primitive semi-empirical methods were designed even earlier, in which the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all-valence-electron systems, the extended Hückel method proposed by Roald Hoffmann. Sometimes, Hückel methods are referred to as "completely empirical" because they do not derive from a Hamiltonian. Yet, the term "empirical methods", or "empirical force fields", is usually used to describe molecular mechanics.
Molecular mechanics
In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use one classical expression for the energy of a compound, for instance, the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations.
The database of compounds used for parameterization (the resulting set of parameters and functions is called the force field) is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, would be expected to be relevant only when describing other molecules of the same class. These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules.
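As an illustration of the kind of classical energy expression involved, the sketch below evaluates a harmonic bond-stretch term of the form E = 0.5*k*(r − r0)^2. The force constant and equilibrium length are placeholder values of the sort a force field would supply from experiment or ab initio calculations, not parameters of any particular force field.

# Harmonic bond-stretch term of a molecular mechanics energy expression (illustrative)
def harmonic_bond_energy(r, k_bond, r0):
    # E = 0.5 * k * (r - r0)^2, with k and r0 taken from the force field
    return 0.5 * k_bond * (r - r0) ** 2

k_bond = 450.0    # placeholder force constant, kcal/(mol*angstrom^2)
r0 = 0.96         # placeholder equilibrium bond length, angstrom

for r in (0.90, 0.96, 1.00, 1.10):
    print(r, harmonic_bond_energy(r, k_bond, r0))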
Molecular dynamics
Molecular dynamics (MD) uses either quantum mechanics, molecular mechanics or a mixture of both to calculate forces, which are then used to solve Newton's laws of motion and examine the time-dependent behavior of systems. The result of a molecular dynamics simulation is a trajectory that describes how the positions and velocities of the particles vary with time. The phase point of a system, described by the positions and momenta of all its particles at a given time point, determines the next phase point in time through integration of Newton's equations of motion.
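A minimal sketch of one such integration step is given below for a single particle in a harmonic well, using the velocity Verlet scheme mentioned later in this article. The mass, force constant and time step are arbitrary illustrative values; a real MD code would evaluate forces from a force field or quantum calculation instead.

# Velocity Verlet integration of Newton's equations for one particle (illustrative)
def force(x, k_spring=1.0):
    return -k_spring * x          # harmonic restoring force as a stand-in for real forces

m, dt = 1.0, 0.01
x, v = 1.0, 0.0                   # initial position and velocity
f = force(x)

for step in range(1000):
    x = x + v * dt + 0.5 * (f / m) * dt**2   # update position
    f_new = force(x)                          # force at the new position
    v = v + 0.5 * (f + f_new) / m * dt        # update velocity with the averaged force
    f = f_new

print("final phase point (x, v):", x, v)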
Monte Carlo
Monte Carlo (MC) generates configurations of a system by making random changes to the positions of its particles, together with their orientations and conformations where appropriate. It is a random sampling method that makes use of so-called importance sampling. Importance sampling methods preferentially generate the low-energy states that dominate the ensemble, which enables properties to be calculated accurately. The potential energy of each configuration of the system can be calculated, together with the values of other properties, from the positions of the atoms.
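The sketch below shows the Metropolis importance-sampling step for a toy one-dimensional potential; the energy function, temperature and trial-move size are arbitrary choices for illustration and stand in for the full configurational energy of a real system.

# Metropolis Monte Carlo sampling of a toy 1D potential (illustrative)
import math
import random

def energy(x):
    return x**4 - 2.0 * x**2      # double-well potential as a stand-in for a real energy surface

beta = 2.0                        # 1/(kB*T) in reduced units
x = 0.0
samples = []

for _ in range(10000):
    x_trial = x + random.uniform(-0.5, 0.5)          # random trial displacement
    dE = energy(x_trial) - energy(x)
    # Metropolis criterion: accept downhill moves; accept uphill moves with Boltzmann probability
    if dE <= 0.0 or random.random() < math.exp(-beta * dE):
        x = x_trial
    samples.append(x)

print("number of samples:", len(samples), "mean coordinate:", sum(samples) / len(samples))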
Quantum mechanics/molecular mechanics (QM/MM)
QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes.
Quantum Computational Chemistry
Quantum computational chemistry aims to exploit quantum computing to simulate chemical systems, distinguishing itself from the QM/MM (Quantum Mechanics/Molecular Mechanics) approach. While QM/MM uses a hybrid approach, combining quantum mechanics for a portion of the system with classical mechanics for the remainder, quantum computational chemistry exclusively uses quantum computing methods to represent and process information, such as Hamiltonian operators.
Conventional computational chemistry methods often struggle with the complex quantum mechanical equations, particularly due to the exponential growth of a quantum system's wave function. Quantum computational chemistry addresses these challenges using quantum computing methods, such as qubitization and quantum phase estimation, which are believed to offer scalable solutions.
Qubitization involves adapting the Hamiltonian operator for more efficient processing on quantum computers, enhancing the simulation's efficiency. Quantum phase estimation, on the other hand, assists in accurately determining energy eigenstates, which are critical for understanding the quantum system's behavior.
While these techniques have advanced the field of computational chemistry, especially in the simulation of chemical systems, their practical application is currently limited mainly to smaller systems due to technological constraints. Nevertheless, these developments may lead to significant progress towards achieving more precise and resource-efficient quantum chemistry simulations.
Computational costs in chemistry algorithms
The computational cost and algorithmic complexity in chemistry are used to help understand and predict chemical phenomena. They help determine which algorithms and computational methods to use when solving chemical problems. This section focuses on the scaling of computational complexity with molecule size and details the algorithms commonly used in both domains.
In quantum chemistry, particularly, the complexity can grow exponentially with the number of electrons involved in the system. This exponential growth is a significant barrier to simulating large or complex systems accurately.
Advanced algorithms in both fields strive to balance accuracy with computational efficiency. For instance, in MD, methods like Verlet integration or Beeman's algorithm are employed for their computational efficiency. In quantum chemistry, hybrid methods combining different computational approaches (like QM/MM) are increasingly used to tackle large biomolecular systems.
Algorithmic complexity examples
The following list illustrates the impact of computational complexity on algorithms used in chemical computations. It is important to note that while this list provides key examples, it is not comprehensive and serves as a guide to understanding how computational demands influence the selection of specific computational methods in chemistry.
Molecular dynamics
Algorithm
Solves Newton's equations of motion for atoms and molecules.
Complexity
The standard pairwise interaction calculation in MD leads to an O(N^2) complexity for N particles. This is because each particle interacts with every other particle, resulting in N(N − 1)/2 interactions. Advanced algorithms, such as the Ewald summation or Fast Multipole Method, reduce this to O(N log N) or even O(N) by grouping distant particles and treating them as a single entity or using clever mathematical approximations.
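The quadratic growth of the naive pairwise evaluation can be seen directly by counting unique particle pairs; the sketch below is purely illustrative.

# Counting pairwise interactions in a naive O(N^2) loop
def count_pair_interactions(n_particles):
    count = 0
    for i in range(n_particles):
        for j in range(i + 1, n_particles):   # each unique pair counted once
            count += 1
    return count                              # equals n*(n-1)/2

for n in (10, 100, 1000):
    print(n, count_pair_interactions(n))      # 45, 4950, 499500: roughly quadratic growth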
Quantum mechanics/molecular mechanics (QM/MM)
Algorithm
Combines quantum mechanical calculations for a small region with molecular mechanics for the larger environment.
Complexity
The complexity of QM/MM methods depends on both the size of the quantum region and the method used for quantum calculations. For example, if a Hartree-Fock method is used for the quantum part, the complexity can be approximated as O(M^4), where M is the number of basis functions in the quantum region. This complexity arises from the need to solve a set of coupled equations iteratively until self-consistency is achieved.
Hartree-Fock method
Algorithm
Finds a single Fock state that minimizes the energy.
Complexity
NP-hard or NP-complete, as demonstrated by embedding instances of the Ising model into the Hartree-Fock formalism. In practice, the Hartree-Fock method involves solving the Roothaan-Hall equations, which scales as roughly O(N^3) to O(N^4) depending on the implementation, with N being the number of basis functions. The computational cost mainly comes from evaluating and transforming the two-electron integrals.
Density functional theory
Algorithm
Investigates the electronic structure or nuclear structure of many-body systems such as atoms, molecules, and the condensed phases.
Complexity
Traditional implementations of DFT typically scale as O(N^3), mainly due to the need to diagonalize the Kohn-Sham matrix. The diagonalization step, which finds the eigenvalues and eigenvectors of the matrix, contributes most to this scaling. Recent advances in DFT aim to reduce this complexity through various approximations and algorithmic improvements.
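The roughly cubic cost of the diagonalization step can also be observed empirically, in the spirit of the complexity discussion later in this section. The sketch below times dense symmetric diagonalization for two matrix sizes; the sizes are arbitrary and the random matrix is only a stand-in for a Kohn-Sham matrix.

# Empirical check of the roughly cubic cost of dense symmetric diagonalization (illustrative)
import time
import numpy as np

def time_diagonalization(n, rng):
    a = rng.standard_normal((n, n))
    h = (a + a.T) / 2.0                    # symmetric stand-in for a Kohn-Sham matrix
    t0 = time.perf_counter()
    np.linalg.eigh(h)                      # eigenvalues and eigenvectors
    return time.perf_counter() - t0

rng = np.random.default_rng(0)
t_small = time_diagonalization(500, rng)
t_large = time_diagonalization(1000, rng)
print("time ratio for doubled size:", t_large / t_small)   # expected to approach roughly 8 (= 2**3)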
Standard CCSD and CCSD(T) method
Algorithm
CCSD and CCSD(T) methods are advanced electronic structure techniques involving single, double, and in the case of CCSD(T), perturbative triple excitations for calculating electronic correlation effects.
Complexity
CCSD
Scales as O(N^6), where N is the number of basis functions. This intense computational demand arises from the inclusion of single and double excitations in the electron correlation calculation.
CCSD(T)
With the addition of perturbative triples, the complexity increases to O(N^7). This elevated complexity restricts practical usage to smaller systems, typically up to 20-25 atoms in conventional implementations.
Linear-scaling CCSD(T) method
Algorithm
An adaptation of the standard CCSD(T) method using local natural orbitals (NOs) to significantly reduce the computational burden and enable application to larger systems.
Complexity
Achieves linear scaling with the system size, a major improvement over the seventh-power scaling of conventional CCSD(T). This advancement allows for practical applications to molecules of up to 100 atoms with reasonable basis sets, marking a significant step forward in computational chemistry's capability to handle larger systems with high accuracy.
Proving the complexity classes for algorithms involves a combination of mathematical proof and computational experiments. For example, in the case of the Hartree-Fock method, the proof of NP-hardness is a theoretical result derived from complexity theory, specifically through reductions from known NP-hard problems.
For other methods like MD or DFT, the computational complexity is often empirically observed and supported by algorithm analysis. In these cases, the proof of correctness is less about formal mathematical proofs and more about consistently observing the computational behaviour across various systems and implementations.
Accuracy
Computational chemistry is not an exact description of real-life chemistry, as the mathematical and physical models of nature can only provide an approximation. However, the majority of chemical phenomena can be described to a certain degree in a qualitative or approximate quantitative computational scheme.
Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost.
Accuracy can always be improved with greater computational cost. Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of fully relativistic methods. This complicates the study of molecules interacting with heavy atoms, such as transition metals, and of their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules that contain up to about 40 electrons with errors for energies of less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometers and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen atoms is computationally tractable by more approximate methods such as density functional theory (DFT).
There is some dispute within the field whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that use what is called molecular mechanics (MM). In QM/MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM).
Software packages
Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on one method. Details of most of them can be found in:
Biomolecular modelling programs: proteins, nucleic acids.
Molecular mechanics programs.
Quantum chemistry and solid state-physics software supporting several methods.
Molecular design software.
Semi-empirical programs.
Valence bond programs.
Specialized journals on computational chemistry
Annual Reports in Computational Chemistry
Computational and Theoretical Chemistry
Computational and Theoretical Polymer Science
Computers & Chemical Engineering
Journal of Chemical Information and Modeling
Journal of Chemical Software
Journal of Chemical Theory and Computation
Journal of Cheminformatics
Journal of Computational Chemistry
Journal of Computer Aided Chemistry
Journal of Computer Chemistry Japan
Journal of Computer-aided Molecular Design
Journal of Theoretical and Computational Chemistry
Molecular Informatics
Theoretical Chemistry Accounts
External links
NIST Computational Chemistry Comparison and Benchmark DataBase – Contains a database of thousands of computational and experimental results for hundreds of systems
American Chemical Society Division of Computers in Chemistry – American Chemical Society Computers in Chemistry Division, resources for grants, awards, contacts and meetings.
CSTB report Mathematical Research in Materials Science: Opportunities and Perspectives – CSTB Report
3.320 Atomistic Computer Modeling of Materials (SMA 5107) Free MIT Course
Chem 4021/8021 Computational Chemistry Free University of Minnesota Course
Technology Roadmap for Computational Chemistry
Applications of molecular and materials modelling.
Impact of Advances in Computing and Communications Technologies on Chemical Science and Technology CSTB Report
MD and Computational Chemistry applications on GPUs
Susi Lehtola, Antti J. Karttunen:"Free and open source software for computational chemistry education", First published: 23 March 2022, https://doi.org/10.1002/wcms.1610 (Open Access)
CCL.NET: Computational Chemistry List, Ltd.
See also
References
Computational fields of study
Theoretical chemistry
Physical chemistry
Chemical physics
Computational physics
Crystallization
Crystallization is the process by which solids form, where the atoms or molecules are highly organized into a structure known as a crystal. Some ways by which crystals form are precipitating from a solution, freezing, or more rarely deposition directly from a gas. Attributes of the resulting crystal depend largely on factors such as temperature, air pressure, cooling rate, and in the case of liquid crystals, time of fluid evaporation.
Crystallization occurs in two major steps. The first is nucleation, the appearance of a crystalline phase from either a supercooled liquid or a supersaturated solvent. The second step is known as crystal growth, which is the increase in the size of particles and leads to a crystal state. An important feature of this step is that loose particles form layers at the crystal's surface and lodge themselves into open inconsistencies such as pores, cracks, etc.
The majority of minerals and organic molecules crystallize easily, and the resulting crystals are generally of good quality, i.e. without visible defects. However, larger biochemical particles, like proteins, are often difficult to crystallize. The ease with which molecules will crystallize strongly depends on the intensity of either atomic forces (in the case of mineral substances), intermolecular forces (organic and biochemical substances) or intramolecular forces (biochemical substances).
Crystallization is also a chemical solid–liquid separation technique, in which mass transfer of a solute from the liquid solution to a pure solid crystalline phase occurs. In chemical engineering, crystallization occurs in a crystallizer. Crystallization is therefore related to precipitation, although the result is not amorphous or disordered, but a crystal.
Process
The crystallization process consists of two major events, nucleation and crystal growth which are driven by thermodynamic properties as well as chemical properties.
Nucleation is the step where the solute molecules or atoms dispersed in the solvent start to gather into clusters, on the microscopic scale (elevating solute concentration in a small region), that become stable under the current operating conditions. These stable clusters constitute the nuclei. Therefore, the clusters need to reach a critical size in order to become stable nuclei. Such critical size is dictated by many different factors (temperature, supersaturation, etc.). It is at the stage of nucleation that the atoms or molecules arrange in a defined and periodic manner that defines the crystal structure – note that "crystal structure" is a special term that refers to the relative arrangement of the atoms or molecules, not the macroscopic properties of the crystal (size and shape), although those are a result of the internal crystal structure.
The crystal growth is the subsequent size increase of the nuclei that succeed in achieving the critical cluster size. Crystal growth is a dynamic process occurring in equilibrium where solute molecules or atoms precipitate out of solution, and dissolve back into solution. Supersaturation is one of the driving forces of crystallization, as the solubility of a species is an equilibrium process quantified by Ksp. Depending upon the conditions, either nucleation or growth may be predominant over the other, dictating crystal size.
Many compounds have the ability to crystallize with some having different crystal structures, a phenomenon called polymorphism. Certain polymorphs may be metastable, meaning that although it is not in thermodynamic equilibrium, it is kinetically stable and requires some input of energy to initiate a transformation to the equilibrium phase. Each polymorph is in fact a different thermodynamic solid state and crystal polymorphs of the same compound exhibit different physical properties, such as dissolution rate, shape (angles between facets and facet growth rates), melting point, etc. For this reason, polymorphism is of major importance in industrial manufacture of crystalline products. Additionally, crystal phases can sometimes be interconverted by varying factors such as temperature, such as in the transformation of anatase to rutile phases of titanium dioxide.
In nature
There are many examples of natural process that involve crystallization.
Geological time scale process examples include:
Natural (mineral) crystal formation (see also gemstone);
Stalactite/stalagmite and ring formation;
Human time scale process examples include:
Snowflake formation;
Honey crystallization (nearly all types of honey crystallize).
Methods
Crystal formation can be divided into two types, where the first type of crystals are composed of a cation and anion, also known as a salt, such as sodium acetate. The second type of crystals are composed of uncharged species, for example menthol.
Crystals can be formed by various methods, such as: cooling, evaporation, addition of a second solvent to reduce the solubility of the solute (technique known as antisolvent or drown-out), solvent layering, sublimation, changing the cation or anion, as well as other methods.
The formation of a supersaturated solution does not guarantee crystal formation, and often a seed crystal or scratching the glass is required to form nucleation sites.
A typical laboratory technique for crystal formation is to dissolve the solid in a solution in which it is partially soluble, usually at high temperatures to obtain supersaturation. The hot mixture is then filtered to remove any insoluble impurities. The filtrate is allowed to slowly cool. Crystals that form are then filtered and washed with a solvent in which they are not soluble, but is miscible with the mother liquor. The process is then repeated to increase the purity in a technique known as recrystallization.
For biological molecules in which the solvent channels continue to be present to retain the three dimensional structure intact, microbatch crystallization under oil and vapor diffusion have been the common methods.
Typical equipment
Equipment for the main industrial processes for crystallization.
Tank crystallizers. Tank crystallization is an old method still used in some specialized cases. Saturated solutions, in tank crystallization, are allowed to cool in open tanks. After a period of time the mother liquor is drained and the crystals removed. Nucleation and size of crystals are difficult to control. Typically, labor costs are very high.
Mixed-Suspension, Mixed-Product-Removal (MSMPR): MSMPR is used for much larger scale inorganic crystallization. MSMPR can crystallize solutions in a continuous manner.
Thermodynamic view
The crystallization process appears to violate the second principle of thermodynamics. Whereas most processes that yield more orderly results are achieved by applying heat, crystals usually form at lower temperatures, especially by supercooling. However, the release of the heat of fusion during crystallization causes the entropy of the universe to increase, thus this principle remains unaltered.
The molecules within a pure, perfect crystal, when heated by an external source, will become liquid. This occurs at a sharply defined temperature (different for each type of crystal). As it liquifies, the complicated architecture of the crystal collapses. Melting occurs because the entropy (S) gain in the system by spatial randomization of the molecules has overcome the enthalpy (H) loss due to breaking the crystal packing forces: melting takes place when TΔS exceeds ΔH, that is, when the free energy change ΔG = ΔH − TΔS becomes negative.
Regarding crystals, there are no exceptions to this rule. Similarly, when the molten crystal is cooled, the molecules will return to their crystalline form once the temperature falls beyond the turning point. This is because the thermal randomization of the surroundings compensates for the loss of entropy that results from the reordering of molecules within the system. Such liquids that crystallize on cooling are the exception rather than the rule.
The nature of the crystallization process is governed by both thermodynamic and kinetic factors, which can make it highly variable and difficult to control. Factors such as impurity level, mixing regime, vessel design, and cooling profile can have a major impact on the size, number, and shape of crystals produced.
Dynamics
As mentioned above, a crystal is formed following a well-defined pattern, or structure, dictated by forces acting at the molecular level. As a consequence, during its formation process the crystal is in an environment where the solute concentration reaches a certain critical value, before changing status. Solid formation, impossible below the solubility threshold at the given temperature and pressure conditions, may then take place at a concentration higher than the theoretical solubility level. The difference between the actual value of the solute concentration at the crystallization limit and the theoretical (static) solubility threshold is called supersaturation and is a fundamental factor in crystallization.
Nucleation
Nucleation is the initiation of a phase change in a small region, such as the formation of a solid crystal from a liquid solution. It is a consequence of rapid local fluctuations on a molecular scale in a homogeneous phase that is in a state of metastable equilibrium. Total nucleation is the sum effect of two categories of nucleation – primary and secondary.
Primary nucleation
Primary nucleation is the initial formation of a crystal where there are no other crystals present or where, if there are crystals present in the system, they do not have any influence on the process. This can occur in two conditions. The first is homogeneous nucleation, which is nucleation that is not influenced in any way by solids. These solids include the walls of the crystallizer vessel and particles of any foreign substance. The second category, then, is heterogeneous nucleation. This occurs when solid particles of foreign substances cause an increase in the rate of nucleation that would otherwise not be seen without the existence of these foreign particles. Homogeneous nucleation rarely occurs in practice due to the high energy necessary to begin nucleation without a solid surface to catalyze the nucleation.
Primary nucleation (both homogeneous and heterogeneous) has been modeled as follows: B = dN/dt = kn (c − c*)^n,
where
B is the number of nuclei formed per unit volume per unit time,
N is the number of nuclei per unit volume,
kn is a rate constant,
c is the instantaneous solute concentration,
c* is the solute concentration at saturation,
(c − c*) is also known as supersaturation,
n is an empirical exponent that can be as large as 10, but generally ranges between 3 and 4.
Secondary nucleation
Secondary nucleation is the formation of nuclei attributable to the influence of the existing microscopic crystals in the magma. More simply put, secondary nucleation is when crystal growth is initiated with contact of other existing crystals or "seeds". The first type of known secondary crystallization is attributable to fluid shear, the other due to collisions between already existing crystals with either a solid surface of the crystallizer or with other crystals themselves. Fluid-shear nucleation occurs when liquid travels across a crystal at a high speed, sweeping away nuclei that would otherwise be incorporated into a crystal, causing the swept-away nuclei to become new crystals. Contact nucleation has been found to be the most effective and common method for nucleation. The benefits include the following:
Low kinetic order and rate-proportional to supersaturation, allowing easy control without unstable operation.
Occurs at low supersaturation, where growth rate is optimal for good quality.
Low necessary energy at which crystals strike avoids the breaking of existing crystals into new crystals.
The quantitative fundamentals have already been isolated and are being incorporated into practice.
The following model, although somewhat simplified, is often used to model secondary nucleation (a brief numerical sketch of both nucleation rate expressions follows the symbol definitions below): B = k1 MT^j (c − c*)^b,
where
k1 is a rate constant,
MT is the suspension density,
j is an empirical exponent that can range up to 1.5, but is generally 1,
b is an empirical exponent that can range up to 5, but is generally 2.
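A minimal numerical sketch of both rate expressions is given below. The constants, exponents and concentrations are placeholder values chosen within the ranges quoted above, not fitted data for any real system.

# Primary and secondary nucleation rate expressions (placeholder parameters, illustrative only)
def primary_nucleation_rate(c, c_star, k_n, n):
    return k_n * (c - c_star) ** n                 # B = kn (c - c*)^n

def secondary_nucleation_rate(c, c_star, k1, M_T, j, b):
    return k1 * (M_T ** j) * ((c - c_star) ** b)   # B = k1 MT^j (c - c*)^b

c, c_star = 0.35, 0.30                             # instantaneous and saturation concentrations
print("primary:  ", primary_nucleation_rate(c, c_star, k_n=1.0e6, n=3.5))
print("secondary:", secondary_nucleation_rate(c, c_star, k1=1.0e4, M_T=0.2, j=1.0, b=2.0))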
Growth
Once the first small crystal, the nucleus, forms it acts as a convergence point (if unstable due to supersaturation) for molecules of solute touching – or adjacent to – the crystal so that it increases its own dimension in successive layers. The pattern of growth resembles the rings of an onion, as shown in the picture, where each colour indicates the same mass of solute; this mass creates increasingly thin layers due to the increasing surface area of the growing crystal. The supersaturated solute mass the original nucleus may capture in a time unit is called the growth rate expressed in kg/(m2*h), and is a constant specific to the process. Growth rate is influenced by several physical factors, such as surface tension of solution, pressure, temperature, relative crystal velocity in the solution, Reynolds number, and so forth.
The main values to control are therefore:
Supersaturation value, as an index of the quantity of solute available for the growth of the crystal;
Total crystal surface in unit fluid mass, as an index of the capability of the solute to fix onto the crystal;
Retention time, as an index of the probability of a molecule of solute to come into contact with an existing crystal;
Flow pattern, again as an index of the probability of a molecule of solute to come into contact with an existing crystal (higher in laminar flow, lower in turbulent flow, but the reverse applies to the probability of contact).
The first value is a consequence of the physical characteristics of the solution, while the others define a difference between a well- and poorly designed crystallizer.
Size distribution
The appearance and size range of a crystalline product is extremely important in crystallization. If further processing of the crystals is desired, large crystals with uniform size are important for washing, filtering, transportation, and storage, because large crystals are easier to filter out of a solution than small crystals. Also, larger crystals have a smaller surface area to volume ratio, leading to a higher purity. This higher purity is due to less retention of mother liquor which contains impurities, and a smaller loss of yield when the crystals are washed to remove the mother liquor. In special cases, for example during drug manufacturing in the pharmaceutical industry, small crystal sizes are often desired to improve drug dissolution rate and bio-availability. The theoretical crystal size distribution can be estimated as a function of operating conditions with a fairly complicated mathematical process called population balance theory (using population balance equations).
Main crystallization processes
Some of the important factors influencing solubility are:
Concentration
Temperature
Solvent mixture composition
Polarity
Ionic strength
So one may identify two main families of crystallization processes:
Cooling crystallization
Evaporative crystallization
This division is not really clear-cut, since hybrid systems exist, where cooling is performed through evaporation, thus obtaining at the same time a concentration of the solution.
A crystallization process often referred to in chemical engineering is the fractional crystallization. This is not a different process, rather a special application of one (or both) of the above.
Cooling crystallization
Application
Most chemical compounds, dissolved in most solvents, show the so-called direct solubility that is, the solubility threshold increases with temperature.
So, whenever the conditions are favorable, crystal formation results from simply cooling the solution. Here cooling is a relative term: austenite crystals in a steel form well above 1000 °C. An example of this crystallization process is the production of Glauber's salt, a crystalline form of sodium sulfate. In a solubility diagram, where equilibrium temperature is on the x-axis and equilibrium concentration (as mass percent of solute in saturated solution) is on the y-axis, it is clear that sulfate solubility quickly decreases below 32.5 °C. Assuming a saturated solution at 30 °C, by cooling it to 0 °C (note that this is possible thanks to the freezing-point depression), the precipitation of a mass of sulfate occurs corresponding to the change in solubility from 29% (equilibrium value at 30 °C) to approximately 4.5% (at 0 °C) – actually a larger crystal mass is precipitated, since sulfate entrains hydration water, and this has the side effect of increasing the final concentration.
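The sulfate example can be turned into a simple solute mass balance. The sketch below neglects the water of hydration mentioned above, so it gives the anhydrous solute mass and therefore underestimates the actual crystal mass.

# Anhydrous crystal yield from a cooling-crystallization mass balance (illustrative)
def anhydrous_crystal_yield(feed_mass, w_initial, w_final):
    # Solute balance: w_initial*feed = w_final*(feed - x) + x, solved for the precipitated mass x
    return feed_mass * (w_initial - w_final) / (1.0 - w_final)

# Saturated sodium sulfate solution cooled from 30 degC (29 % solute) to 0 degC (about 4.5 %)
print(anhydrous_crystal_yield(1.0, 0.29, 0.045))   # about 0.26 kg of solute per kg of feed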
There are limitations in the use of cooling crystallization:
Many solutes precipitate in hydrate form at low temperatures: in the previous example this is acceptable, and even useful, but it may be detrimental when, for example, the mass of water of hydration to reach a stable hydrate crystallization form is more than the available water: a single block of hydrate solute will be formed – this occurs in the case of calcium chloride;
Maximum supersaturation will take place in the coldest points. These may be the heat exchanger tubes which are sensitive to scaling, and heat exchange may be greatly reduced or discontinued;
A decrease in temperature usually implies an increase of the viscosity of a solution. Too high a viscosity may give hydraulic problems, and the laminar flow thus created may affect the crystallization dynamics.
It is not applicable to compounds having reverse solubility, a term to indicate that solubility increases with temperature decrease (an example occurs with sodium sulfate where solubility is reversed above 32.5 °C).
Cooling crystallizers
The simplest cooling crystallizers are tanks provided with a mixer for internal circulation, where temperature decrease is obtained by heat exchange with an intermediate fluid circulating in a jacket. These simple machines are used in batch processes, as in processing of pharmaceuticals and are prone to scaling. Batch processes normally provide a relatively variable quality of the product along with the batch.
The Swenson-Walker crystallizer is a model, specifically conceived by Swenson Co. around 1920, having a semicylindric horizontal hollow trough in which a hollow screw conveyor or some hollow discs, in which a refrigerating fluid is circulated, plunge during rotation on a longitudinal axis. The refrigerating fluid is sometimes also circulated in a jacket around the trough. Crystals precipitate on the cold surfaces of the screw/discs, from which they are removed by scrapers and settle on the bottom of the trough. The screw, if provided, pushes the slurry towards a discharge port.
A common practice is to cool the solutions by flash evaporation: when a liquid at a given T0 temperature is transferred in a chamber at a pressure P1 such that the liquid saturation temperature T1 at P1 is lower than T0, the liquid will release heat according to the temperature difference and a quantity of solvent, whose total latent heat of vaporization equals the difference in enthalpy. In simple words, the liquid is cooled by evaporating a part of it.
In the sugar industry, vertical cooling crystallizers are used to exhaust the molasses in the last crystallization stage downstream of vacuum pans, prior to centrifugation. The massecuite enters the crystallizers at the top, and cooling water is pumped through pipes in counterflow.
Evaporative crystallization
Another option is to obtain, at an approximately constant temperature, the precipitation of the crystals by increasing the solute concentration above the solubility threshold. To obtain this, the solute/solvent mass ratio is increased using the technique of evaporation. This process is insensitive to change in temperature (as long as hydration state remains unchanged).
All considerations on control of crystallization parameters are the same as for the cooling models.
Evaporative crystallizers
Most industrial crystallizers are of the evaporative type, such as the very large sodium chloride and sucrose units, whose production accounts for more than 50% of the total world production of crystals. The most common type is the forced circulation (FC) model (see evaporator). A pumping device (a pump or an axial flow mixer) keeps the crystal slurry in homogeneous suspension throughout the tank, including the exchange surfaces; by controlling pump flow, control of the contact time of the crystal mass with the supersaturated solution is achieved, together with reasonable velocities at the exchange surfaces. The Oslo crystallizer is a refining of the evaporative forced circulation crystallizer, now equipped with a large crystals settling zone to increase the retention time (usually low in the FC) and to roughly separate heavy slurry zones from clear liquid. Evaporative crystallizers tend to yield larger average crystal size and a narrower crystal size distribution curve.
DTB crystallizer
Whichever the form of the crystallizer, to achieve an effective process control it is important to control the retention time and the crystal mass, to obtain the optimum conditions in terms of crystal specific surface and the fastest possible growth. This is achieved by a separation – to put it simply – of the crystals from the liquid mass, in order to manage the two flows in a different way. The practical way is to perform a gravity settling to be able to extract (and possibly recycle separately) the (almost) clear liquid, while managing the mass flow around the crystallizer to obtain a precise slurry density elsewhere. A typical example is the DTB (Draft Tube and Baffle) crystallizer, an idea of Richard Chisum Bennett (a Swenson engineer and later President of Swenson) at the end of the 1950s. The DTB crystallizer has an internal circulator, typically an axial flow mixer, pushing upwards in a draft tube while outside the crystallizer there is a settling area in an annulus; in it the exhaust solution moves upwards at a very low velocity, so that large crystals settle – and return to the main circulation – while only the fines, below a given grain size, are extracted and eventually destroyed by increasing or decreasing temperature, thus creating additional supersaturation. A quasi-perfect control of all parameters is achieved, as DTB crystallizers offer superior control over crystal size and characteristics. This crystallizer, and the derivative models (Krystal, CSC, etc.) could be the ultimate solution if not for a major limitation in the evaporative capacity, due to the limited diameter of the vapor head and the relatively low external circulation not allowing large amounts of energy to be supplied to the system.
See also
Abnormal grain growth
Chiral resolution by crystallization
Crystal habit
Crystal structure
Crystallite
Fractional crystallization (chemistry)
Igneous differentiation
Laser heated pedestal growth
Micro-pulling-down
Protein crystallization
Pumpable ice technology
Quasicrystal
Recrystallization (chemistry)
Recrystallization (metallurgy)
Seed crystal
Single crystal
Symplectite
Vitrification
X-ray crystallography
References
Further reading
"Small Molecule Crystallization" (PDF) at Illinois Institute of Technology website
Arkenbout-de Vroome, Tine (1995). Melt Crystallization Technology CRC
Geankoplis, C.J. (2003) "Transport Processes and Separation Process Principles". 4th Ed. Prentice-Hall Inc.
Glynn P.D. and Reardon E.J. (1990) "Solid-solution aqueous-solution equilibria: thermodynamic theory and representation". Amer. J. Sci. 290, 164–201.
Jancic, S. J.; Grootscholten, P.A.M.: “Industrial Crystallization”, Textbook, Delft University Press and Reidel Publishing Company, Delft, The Netherlands, 1984.
Mersmann, A. (2001) Crystallization Technology Handbook CRC; 2nd ed.
External links
Batch Crystallization
Industrial Crystallization
Liquid-solid separation
Crystallography
Laboratory techniques
Phase transitions
Articles containing video clips
Carbonic acid
Carbonic acid is a chemical compound with the chemical formula H2CO3. The molecule rapidly converts to water and carbon dioxide in the presence of water. However, in the absence of water, it is quite stable at room temperature. The interconversion of carbon dioxide and carbonic acid is related to the breathing cycle of animals and the acidification of natural waters.
In biochemistry and physiology, the name "carbonic acid" is sometimes applied to aqueous solutions of carbon dioxide. These chemical species play an important role in the bicarbonate buffer system, used to maintain acid–base homeostasis.
Terminology in biochemical literature
In chemistry, the term "carbonic acid" strictly refers to the chemical compound with the formula . Some biochemistry literature effaces the distinction between carbonic acid and carbon dioxide dissolved in extracellular fluid.
In physiology, carbon dioxide excreted by the lungs may be called volatile acid or respiratory acid.
Anhydrous carbonic acid
At ambient temperatures, pure carbonic acid is a stable gas. There are two main methods to produce anhydrous carbonic acid: reaction of hydrogen chloride and potassium bicarbonate at 100 K in methanol and proton irradiation of pure solid carbon dioxide. Chemically, it behaves as a diprotic Brønsted acid.
Carbonic acid monomers exhibit three conformational isomers: cis–cis, cis–trans, and trans–trans.
At low temperatures and atmospheric pressure, solid carbonic acid is amorphous and lacks Bragg peaks in X-ray diffraction. But at high pressure, carbonic acid crystallizes, and modern analytical spectroscopy can measure its geometry.
According to neutron diffraction of dideuterated carbonic acid in a hybrid clamped cell (Russian alloy/copper-beryllium) at 1.85 GPa, the molecules are planar and form dimers joined by pairs of hydrogen bonds. All three C-O bonds are nearly equidistant at 1.34 Å, intermediate between typical C-O and C=O distances (respectively 1.43 and 1.23 Å). The unusual C-O bond lengths are attributed to delocalized π bonding in the molecule's center and extraordinarily strong hydrogen bonds. The same effects also induce a very short O—O separation (2.13 Å), through the 136° O-H-O angle imposed by the doubly hydrogen-bonded 8-membered rings. Longer O—O distances are observed in strong intramolecular hydrogen bonds, e.g. in oxalic acid, where the distances exceed 2.4 Å.
In aqueous solution
In even a slight presence of water, carbonic acid dehydrates to carbon dioxide and water, which then catalyzes further decomposition. For this reason, carbon dioxide can be considered the carbonic acid anhydride.
The hydration equilibrium constant at 25 °C is about 1.7×10−3 in pure water and ≈ 1.2×10−3 in seawater. Hence the majority of carbon dioxide at geophysical or biological air-water interfaces does not convert to carbonic acid, remaining dissolved gas. However, the uncatalyzed equilibrium is reached quite slowly: the rate constants are 0.039 s−1 for hydration and 23 s−1 for dehydration.
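Consistent with the rate constants quoted above, the hydration equilibrium constant can be recovered as their quotient; the short check below is illustrative only.

# Hydration equilibrium constant from the forward and reverse rate constants
k_hydration = 0.039      # s^-1, CO2 + H2O -> H2CO3
k_dehydration = 23.0     # s^-1, H2CO3 -> CO2 + H2O

K_h = k_hydration / k_dehydration
print(K_h)               # about 1.7e-3: most dissolved carbon dioxide remains as CO2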
In biological solutions
In the presence of the enzyme carbonic anhydrase, equilibrium is instead reached rapidly, and the following reaction takes precedence: HCO3− + H+ ⇌ CO2 + H2O.
When the created carbon dioxide exceeds its solubility, gas evolves and a third equilibrium, CO2(soln) ⇌ CO2(g), must also be taken into consideration. The equilibrium constant for this reaction is defined by Henry's law.
The two reactions can be combined for the equilibrium in solution: CO2(g) + H2O ⇌ HCO3− + H+. When Henry's law is used to calculate the dissolved carbon dioxide concentration in the denominator of the corresponding equilibrium constant, care is needed with regard to units, since the Henry's law constant can be commonly expressed with 8 different dimensionalities.
Under high CO2 partial pressure
In the beverage industry, sparkling or "fizzy water" is usually referred to as carbonated water. It is made by dissolving carbon dioxide under a small positive pressure in water. Many soft drinks treated the same way effervesce.
Significant amounts of molecular H2CO3 exist in aqueous solutions subjected to pressures of multiple gigapascals (tens of thousands of atmospheres) in planetary interiors. Pressures of 0.6–1.6 GPa at 100 K, and 0.75–1.75 GPa at 300 K are attained in the cores of large icy satellites such as Ganymede, Callisto, and Titan, where water and carbon dioxide are present. Pure carbonic acid, being denser, is expected to have sunk under the ice layers and separate them from the rocky cores of these moons.
Relationship to bicarbonate and carbonate
Carbonic acid is the formal Brønsted–Lowry conjugate acid of the bicarbonate anion, which is stable in alkaline solution. The protonation constants have been measured to great precision, but they depend on the overall ionic strength I. The two equilibria most easily measured are the protonation of carbonate, CO3^2− + H+ ⇌ HCO3−, and the protonation of bicarbonate, HCO3− + H+ ⇌ H2CO3, with equilibrium constants written in terms of the concentrations of the species (indicated by square brackets). At 25 °C both constants decrease with increasing ionic strength. In a solution free of other ions, these measurements imply the stepwise dissociation constants of the carbonic acid system; the commonly quoted apparent values at 25 °C are pKa1 ≈ 6.35 (treating dissolved CO2 and H2CO3 as a single pool) and pKa2 ≈ 10.33.
To interpret these numbers, note that two chemical species in an acid equilibrium are equiconcentrated when the pH equals the pKa of that equilibrium. In particular, the extracellular fluid in biological systems is buffered near pH 7.4, roughly one unit above the first apparent pKa, so that at equilibrium most of the dissolved carbon dioxide/carbonic acid pool is present as bicarbonate.
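A brief speciation sketch based on the Henderson–Hasselbalch relation is given below. The apparent pKa values are the standard figures quoted above for 25 °C and negligible ionic strength; the calculation is illustrative and ignores activity corrections.

# Fraction of each acid-base pair in the deprotonated form as a function of pH (illustrative)
def fraction_deprotonated(pH, pKa):
    ratio = 10.0 ** (pH - pKa)        # Henderson-Hasselbalch: [base]/[acid] = 10^(pH - pKa)
    return ratio / (1.0 + ratio)

PKA1_APPARENT = 6.35                  # CO2(aq)/H2CO3 pool <-> bicarbonate
PKA2 = 10.33                          # bicarbonate <-> carbonate

for pH in (6.35, 7.4, 10.33):
    print(pH,
          round(fraction_deprotonated(pH, PKA1_APPARENT), 3),
          round(fraction_deprotonated(pH, PKA2), 3))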
Ocean acidification
The Bjerrum plot shows typical equilibrium concentrations, in solution, in seawater, of carbon dioxide and the various species derived from it, as a function of pH. As human industrialization has increased the proportion of carbon dioxide in Earth's atmosphere, the proportion of carbon dioxide dissolved in sea- and freshwater as carbonic acid is also expected to increase. This rise in dissolved acid is also expected to acidify those waters, generating a decrease in pH. It has been estimated that the increase in dissolved carbon dioxide has already caused the ocean's average surface pH to decrease by about 0.1 from pre-industrial levels.
Further reading
References
External links
Carbonic acid/bicarbonate/carbonate equilibrium in water: pH of solutions, buffer capacity, titration, and species distribution vs. pH, computed with a free spreadsheet
How to calculate concentration of carbonic acid in water
Carbonates
Carboxylic acids
Inorganic carbon compounds
Mineral acids
Protein
Proteins are large biomolecules and macromolecules that comprise one or more long chains of amino acid residues. Proteins perform a vast array of functions within organisms, including catalysing metabolic reactions, DNA replication, responding to stimuli, providing structure to cells and organisms, and transporting molecules from one location to another. Proteins differ from one another primarily in their sequence of amino acids, which is dictated by the nucleotide sequence of their genes, and which usually results in protein folding into a specific 3D structure that determines its activity.
A linear chain of amino acid residues is called a polypeptide. A protein contains at least one long polypeptide. Short polypeptides, containing fewer than 20–30 residues, are rarely considered to be proteins and are commonly called peptides. The individual amino acid residues are bonded together by peptide bonds between adjacent amino acid residues. The sequence of amino acid residues in a protein is defined by the sequence of a gene, which is encoded in the genetic code. In general, the genetic code specifies 20 standard amino acids; but in certain organisms the genetic code can include selenocysteine and—in certain archaea—pyrrolysine. Shortly after or even during synthesis, the residues in a protein are often chemically modified by post-translational modification, which alters the physical and chemical properties, folding, stability, activity, and ultimately, the function of the proteins. Some proteins have non-peptide groups attached, which can be called prosthetic groups or cofactors. Proteins can also work together to achieve a particular function, and they often associate to form stable protein complexes.
Once formed, proteins only exist for a certain period and are then degraded and recycled by the cell's machinery through the process of protein turnover. A protein's lifespan is measured in terms of its half-life and covers a wide range. They can exist for minutes or years with an average lifespan of 1–2 days in mammalian cells. Abnormal or misfolded proteins are degraded more rapidly either due to being targeted for destruction or due to being unstable.
Like other biological macromolecules such as polysaccharides and nucleic acids, proteins are essential parts of organisms and participate in virtually every process within cells. Many proteins are enzymes that catalyse biochemical reactions and are vital to metabolism. Proteins also have structural or mechanical functions, such as actin and myosin in muscle and the proteins in the cytoskeleton, which form a system of scaffolding that maintains cell shape. Other proteins are important in cell signaling, immune responses, cell adhesion, and the cell cycle. In animals, proteins are needed in the diet to provide the essential amino acids that cannot be synthesized. Digestion breaks the proteins down for metabolic use.
History and etymology
Discovery and early studies
Proteins have been studied and recognized since the 1700s by Antoine Fourcroy and others, who often collectively called them "albumins", or "albuminous materials" (Eiweisskörper, in German). Gluten, for example, was first separated from wheat in published research around 1747, and later determined to exist in many plants. In 1789, Antoine Fourcroy recognized three distinct varieties of animal proteins: albumin, fibrin, and gelatin. Vegetable (plant) proteins studied in the late 1700s and early 1800s included gluten, plant albumin, gliadin, and legumin.
Proteins were first described by the Dutch chemist Gerardus Johannes Mulder and named by the Swedish chemist Jöns Jacob Berzelius in 1838. Mulder carried out elemental analysis of common proteins and found that nearly all proteins had the same empirical formula, C400H620N100O120P1S1. He came to the erroneous conclusion that they might be composed of a single type of (very large) molecule. The term "protein" to describe these molecules was proposed by Mulder's associate Berzelius; protein is derived from the Greek word πρώτειος (proteios), meaning "primary", "in the lead", or "standing in front", + -in. Mulder went on to identify the products of protein degradation such as the amino acid leucine for which he found a (nearly correct) molecular weight of 131 Da.
Early nutritional scientists such as the German Carl von Voit believed that protein was the most important nutrient for maintaining the structure of the body, because it was generally believed that "flesh makes flesh." Around 1862, Karl Heinrich Ritthausen isolated the amino acid glutamic acid. Thomas Burr Osborne compiled a detailed review of the vegetable proteins at the Connecticut Agricultural Experiment Station. Then, working with Lafayette Mendel and applying Liebig's law of the minimum, which states that growth is limited by the scarcest resource, to the feeding of laboratory rats, the nutritionally essential amino acids were established. The work was continued and communicated by William Cumming Rose.
The difficulty in purifying proteins in large quantities made them very difficult for early protein biochemists to study. Hence, early studies focused on proteins that could be purified in large quantities, including those of blood, egg whites, and various toxins, as well as digestive and metabolic enzymes obtained from slaughterhouses. In the 1950s, the Armour Hot Dog Company purified 1 kg of pure bovine pancreatic ribonuclease A and made it freely available to scientists; this gesture helped ribonuclease A become a major target for biochemical study for the following decades.
Polypeptides
The understanding of proteins as polypeptides, or chains of amino acids, came through the work of Franz Hofmeister and Hermann Emil Fischer in 1902. The central role of proteins as enzymes that catalyse reactions in living organisms was not fully appreciated until 1926, when James B. Sumner showed that the enzyme urease was in fact a protein.
Linus Pauling is credited with the successful prediction of regular protein secondary structures based on hydrogen bonding, an idea first put forth by William Astbury in 1933. Later work by Walter Kauzmann on denaturation, based partly on previous studies by Kaj Linderstrøm-Lang, contributed an understanding of protein folding and structure mediated by hydrophobic interactions.
The first protein to have its amino acid chain sequenced was insulin, by Frederick Sanger, in 1949. Sanger correctly determined the amino acid sequence of insulin, thus conclusively demonstrating that proteins consisted of linear polymers of amino acids rather than branched chains, colloids, or cyclols. He won the Nobel Prize for this achievement in 1958. Christian Anfinsen's studies of the oxidative folding process of ribonuclease A, for which he won the Nobel Prize in 1972, solidified the thermodynamic hypothesis of protein folding, according to which the folded form of a protein represents its free energy minimum.
Structure
With the development of X-ray crystallography, it became possible to determine protein structures as well as their sequences. The first protein structures to be solved were hemoglobin by Max Perutz and myoglobin by John Kendrew, in 1958. The use of computers and increasing computing power also supported the determination of the structures of complex proteins. In 1999, Roger Kornberg succeeded in determining the highly complex structure of RNA polymerase using high-intensity X-rays from synchrotrons.
Since then, cryo-electron microscopy (cryo-EM) of large macromolecular assemblies has been developed. Cryo-EM uses protein samples that are frozen rather than crystals, and beams of electrons rather than X-rays. It causes less damage to the sample, allowing scientists to obtain more information and analyze larger structures. Computational protein structure prediction of small protein structural domains has also helped researchers to approach atomic-level resolution of protein structures.
The Protein Data Bank contains 181,018 X-ray, 19,809 EM, and 12,697 NMR protein structures.
Classification
Proteins are primarily classified by sequence and structure, although other classifications are commonly used. Especially for enzymes, the EC number system provides a functional classification scheme. Similarly, the Gene Ontology classifies both genes and proteins by their biological and biochemical function, but also by their intracellular location.
Sequence similarity is used to classify proteins both in terms of evolutionary and functional similarity. This may use either whole proteins or protein domains, especially in multi-domain proteins. Protein domains allow protein classification by a combination of sequence, structure and function, and they can be combined in many different ways. In an early study of 170,000 proteins, about two-thirds were assigned at least one domain, with larger proteins containing more domains (e.g. proteins larger than 600 amino acids having an average of more than 5 domains).
Biochemistry
Most proteins consist of linear polymers built from series of up to 20 different L-α-amino acids. All proteinogenic amino acids possess common structural features, including an α-carbon to which an amino group, a carboxyl group, and a variable side chain are bonded. Only proline differs from this basic structure, as it contains an unusual ring formed with the N-end amine group, which forces the CO–NH amide moiety into a fixed conformation. The side chains of the standard amino acids, detailed in the list of standard amino acids, have a great variety of chemical structures and properties; it is the combined effect of all of the amino acid side chains in a protein that ultimately determines its three-dimensional structure and its chemical reactivity.
The amino acids in a polypeptide chain are linked by peptide bonds. Once linked in the protein chain, an individual amino acid is called a residue, and the linked series of carbon, nitrogen, and oxygen atoms are known as the main chain or protein backbone.
The peptide bond has two resonance forms that contribute some double-bond character and inhibit rotation around its axis, so that the alpha carbons are roughly coplanar. The other two dihedral angles in the peptide bond determine the local shape assumed by the protein backbone. The end with a free amino group is known as the N-terminus or amino terminus, whereas the end of the protein with a free carboxyl group is known as the C-terminus or carboxy terminus (the sequence of the protein is written from N-terminus to C-terminus, from left to right).
The words protein, polypeptide, and peptide are a little ambiguous and can overlap in meaning. Protein is generally used to refer to the complete biological molecule in a stable conformation, whereas peptide is generally reserved for short amino acid oligomers often lacking a stable 3D structure. But the boundary between the two is not well defined and usually lies near 20–30 residues. Polypeptide can refer to any single linear chain of amino acids, usually regardless of length, but often implies an absence of a defined conformation.
Interactions
Proteins can interact with many types of molecules, including with other proteins, with lipids, with carbohydrates, and with DNA.
Abundance in cells
It has been estimated that average-sized bacteria contain about 2 million proteins per cell (e.g. E. coli and Staphylococcus aureus). Smaller bacteria, such as Mycoplasma or spirochetes, contain fewer molecules, on the order of 50,000 to 1 million. By contrast, eukaryotic cells are larger and thus contain much more protein. For instance, yeast cells have been estimated to contain about 50 million proteins and human cells on the order of 1 to 3 billion. The concentration of individual protein copies ranges from a few molecules per cell up to 20 million. Not all genes coding for proteins are expressed in most cells, and the number expressed depends on, for example, cell type and external stimuli. For instance, of the 20,000 or so proteins encoded by the human genome, only 6,000 are detected in lymphoblastoid cells.
Synthesis
Biosynthesis
Proteins are assembled from amino acids using information encoded in genes. Each protein has its own unique amino acid sequence that is specified by the nucleotide sequence of the gene encoding this protein. The genetic code is a set of three-nucleotide units called codons, and each three-nucleotide combination designates an amino acid; for example, AUG (adenine–uracil–guanine) is the code for methionine. Because DNA contains four nucleotides, the total number of possible codons is 64; hence, there is some redundancy in the genetic code, with some amino acids specified by more than one codon. Genes encoded in DNA are first transcribed into pre-messenger RNA (pre-mRNA) by proteins such as RNA polymerase. Most organisms then process the pre-mRNA (also known as a primary transcript) using various forms of post-transcriptional modification to form the mature mRNA, which is then used as a template for protein synthesis by the ribosome. In prokaryotes the mRNA may either be used as soon as it is produced, or be bound by a ribosome after having moved away from the nucleoid. In contrast, eukaryotes make mRNA in the cell nucleus and then translocate it across the nuclear membrane into the cytoplasm, where protein synthesis then takes place. The rate of protein synthesis is higher in prokaryotes than eukaryotes and can reach up to 20 amino acids per second.
The process of synthesizing a protein from an mRNA template is known as translation. The mRNA is loaded onto the ribosome and is read three nucleotides at a time by matching each codon to its base pairing anticodon located on a transfer RNA molecule, which carries the amino acid corresponding to the codon it recognizes. The enzyme aminoacyl tRNA synthetase "charges" the tRNA molecules with the correct amino acids. The growing polypeptide is often termed the nascent chain. Proteins are always biosynthesized from N-terminus to C-terminus.
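The codon-to-amino-acid mapping described above can be illustrated with a short sketch. The codon table below is only a small, hand-picked subset of the full 64-codon genetic code, and the mRNA fragment is invented for the example.

```python
# Minimal sketch: translating an mRNA fragment codon by codon. Only a
# handful of codons are included for illustration; the real genetic code
# has 64 codons, including three stop signals.
CODON_TABLE = {
    "AUG": "Met",  # also the usual start codon
    "UUU": "Phe", "UUC": "Phe",
    "GGC": "Gly", "GCU": "Ala",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list:
    """Read the mRNA three nucleotides at a time from the first AUG."""
    start = mrna.find("AUG")
    if start == -1:
        return []
    peptide = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "Xaa")  # Xaa = codon not in this toy table
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("GGAUGUUUGGCGCUUAA"))  # ['Met', 'Phe', 'Gly', 'Ala']
```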
The size of a synthesized protein can be measured by the number of amino acids it contains and by its total molecular mass, which is normally reported in units of daltons (synonymous with atomic mass units), or the derivative unit kilodalton (kDa). The average size of a protein increases from Archaea to Bacteria to Eukaryote (283, 311, 438 residues and 31, 34, 49 kDa respectively) due to a bigger number of protein domains constituting proteins in higher organisms. For instance, yeast proteins are on average 466 amino acids long and 53 kDa in mass. The largest known proteins are the titins, a component of the muscle sarcomere, with a molecular mass of almost 3,000 kDa and a total length of almost 27,000 amino acids.
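The figures quoted above can be roughly checked by assuming an average residue mass of about 110 Da, a common rule-of-thumb approximation rather than a value taken from this article.

```python
# Rough check of the numbers quoted above: protein mass scales with residue
# count. An average residue mass of ~110 Da is a standard approximation.
AVERAGE_RESIDUE_MASS_DA = 110.0

def approx_mass_kda(n_residues: int) -> float:
    return n_residues * AVERAGE_RESIDUE_MASS_DA / 1000.0

for organism, length in [("archaea", 283), ("bacteria", 311),
                         ("eukaryotes", 438), ("yeast", 466)]:
    print(f"{organism}: ~{approx_mass_kda(length):.0f} kDa")
# yeast: ~51 kDa, close to the ~53 kDa average quoted in the text
```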
Chemical synthesis
Short proteins can also be synthesized chemically by a family of methods known as peptide synthesis, which rely on organic synthesis techniques such as chemical ligation to produce peptides in high yield. Chemical synthesis allows for the introduction of non-natural amino acids into polypeptide chains, such as attachment of fluorescent probes to amino acid side chains. These methods are useful in laboratory biochemistry and cell biology, though generally not for commercial applications. Chemical synthesis is inefficient for polypeptides longer than about 300 amino acids, and the synthesized proteins may not readily assume their native tertiary structure. Most chemical synthesis methods proceed from C-terminus to N-terminus, opposite the biological reaction.
Structure
Most proteins fold into unique 3D structures. The shape into which a protein naturally folds is known as its native conformation. Although many proteins can fold unassisted, simply through the chemical properties of their amino acids, others require the aid of molecular chaperones to fold into their native states. Biochemists often refer to four distinct aspects of a protein's structure:
Primary structure: the amino acid sequence. A protein is a polyamide.
Secondary structure: regularly repeating local structures stabilized by hydrogen bonds. The most common examples are the α-helix, β-sheet and turns. Because secondary structures are local, many regions of different secondary structure can be present in the same protein molecule.
Tertiary structure: the overall shape of a single protein molecule; the spatial relationship of the secondary structures to one another. Tertiary structure is generally stabilized by nonlocal interactions, most commonly the formation of a hydrophobic core, but also through salt bridges, hydrogen bonds, disulfide bonds, and even post-translational modifications. The term "tertiary structure" is often used as synonymous with the term fold. The tertiary structure is what controls the basic function of the protein.
Quaternary structure: the structure formed by several protein molecules (polypeptide chains), usually called protein subunits in this context, which function as a single protein complex.
Quinary structure: the signatures of protein surface that organize the crowded cellular interior. Quinary structure is dependent on transient, yet essential, macromolecular interactions that occur inside living cells.
Proteins are not entirely rigid molecules. In addition to these levels of structure, proteins may shift between several related structures while they perform their functions. In the context of these functional rearrangements, these tertiary or quaternary structures are usually referred to as "conformations", and transitions between them are called conformational changes. Such changes are often induced by the binding of a substrate molecule to an enzyme's active site, or the physical region of the protein that participates in chemical catalysis. In solution, proteins also undergo variation in structure through thermal vibration and the collision with other molecules.
Proteins can be informally divided into three main classes, which correlate with typical tertiary structures: globular proteins, fibrous proteins, and membrane proteins. Almost all globular proteins are soluble and many are enzymes. Fibrous proteins are often structural, such as collagen, the major component of connective tissue, or keratin, the protein component of hair and nails. Membrane proteins often serve as receptors or provide channels for polar or charged molecules to pass through the cell membrane.
Intramolecular hydrogen bonds within proteins that are poorly shielded from water attack, and hence promote their own dehydration, are a special case called dehydrons.
Protein domains
Many proteins are composed of several protein domains, i.e. segments of a protein that fold into distinct structural units. Domains usually also have specific functions, such as enzymatic activities (e.g. kinase) or they serve as binding modules (e.g. the SH3 domain binds to proline-rich sequences in other proteins).
Sequence motif
Short amino acid sequences within proteins often act as recognition sites for other proteins. For instance, SH3 domains typically bind to short PxxP motifs (i.e. two prolines [P] separated by two unspecified amino acids [x], although the surrounding amino acids may determine the exact binding specificity). Many such motifs have been collected in the Eukaryotic Linear Motif (ELM) database.
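As an illustration of such motif scanning, the short sketch below locates PxxP windows in a made-up one-letter-code sequence; real motif resources such as ELM use richer, curated patterns.

```python
import re

# Minimal sketch: locating PxxP motifs (two prolines separated by any two
# residues) in a one-letter-code protein sequence. The sequence below is
# made up for illustration.
sequence = "MKTAYPALPPRPLPVAQSGK"

# A lookahead captures overlapping windows, so adjacent prolines still
# yield every PxxP match they participate in.
for match in re.finditer(r"(?=(P..P))", sequence):
    print(match.start(1), match.group(1))   # prints (5, 'PALP') and (8, 'PPRP')
```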
Protein topology
The topology of a protein describes the entanglement of the backbone and the arrangement of contacts within the folded chain. Two theoretical frameworks, knot theory and circuit topology, have been applied to characterise protein topology. Being able to describe protein topology opens up new pathways for protein engineering and pharmaceutical development, and adds to our understanding of protein misfolding diseases such as neuromuscular disorders and cancer.
Cellular functions
Proteins are the chief actors within the cell, said to be carrying out the duties specified by the information encoded in genes. With the exception of certain types of RNA, most other biological molecules are relatively inert elements upon which proteins act. Proteins make up half the dry weight of an Escherichia coli cell, whereas other macromolecules such as DNA and RNA make up only 3% and 20%, respectively. The set of proteins expressed in a particular cell or cell type is known as its proteome.
The chief characteristic of proteins that also allows their diverse set of functions is their ability to bind other molecules specifically and tightly. The region of the protein responsible for binding another molecule is known as the binding site and is often a depression or "pocket" on the molecular surface. This binding ability is mediated by the tertiary structure of the protein, which defines the binding site pocket, and by the chemical properties of the surrounding amino acids' side chains. Protein binding can be extraordinarily tight and specific; for example, the ribonuclease inhibitor protein binds to human angiogenin with a sub-femtomolar dissociation constant (<10⁻¹⁵ M) but does not bind at all to its amphibian homolog onconase (>1 M). Extremely minor chemical changes such as the addition of a single methyl group to a binding partner can sometimes suffice to nearly eliminate binding; for example, the aminoacyl tRNA synthetase specific to the amino acid valine discriminates against the very similar side chain of the amino acid isoleucine.
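Binding strength of this kind is usually summarized by a dissociation constant Kd. The sketch below shows, for a simple 1:1 equilibrium, how differently a femtomolar and a micromolar binder behave at the same ligand concentration; the numbers are illustrative, not measurements from the examples above.

```python
# Minimal sketch: for a simple 1:1 binding equilibrium P + L <=> PL,
# the fraction of protein with ligand bound is [L] / (Kd + [L]).
# The Kd values and ligand concentration below are illustrative only.
def fraction_bound(ligand_conc_m: float, kd_m: float) -> float:
    return ligand_conc_m / (kd_m + ligand_conc_m)

ligand = 1e-9  # 1 nM free ligand
for label, kd in [("femtomolar binder", 1e-15),
                  ("typical micromolar binder", 1e-6)]:
    print(f"{label}: {fraction_bound(ligand, kd):.6f} bound at 1 nM ligand")
# The femtomolar interaction is essentially saturated, while the
# micromolar one is barely occupied at the same concentration.
```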
Proteins can bind to other proteins as well as to small-molecule substrates. When proteins bind specifically to other copies of the same molecule, they can oligomerize to form fibrils; this process occurs often in structural proteins that consist of globular monomers that self-associate to form rigid fibers. Protein–protein interactions also regulate enzymatic activity, control progression through the cell cycle, and allow the assembly of large protein complexes that carry out many closely related reactions with a common biological function. Proteins can also bind to, or even be integrated into, cell membranes. The ability of binding partners to induce conformational changes in proteins allows the construction of enormously complex signaling networks.
As interactions between proteins are reversible, and depend heavily on the availability of different groups of partner proteins to form aggregates that are capable of carrying out discrete sets of functions, the study of the interactions between specific proteins is key to understanding important aspects of cellular function, and ultimately the properties that distinguish particular cell types.
Enzymes
The best-known role of proteins in the cell is as enzymes, which catalyse chemical reactions. Enzymes are usually highly specific and accelerate only one or a few chemical reactions. Enzymes carry out most of the reactions involved in metabolism, as well as manipulating DNA in processes such as DNA replication, DNA repair, and transcription. Some enzymes act on other proteins to add or remove chemical groups in a process known as posttranslational modification. About 4,000 reactions are known to be catalysed by enzymes. The rate acceleration conferred by enzymatic catalysis is often enormous: as much as a 10¹⁷-fold increase in rate over the uncatalysed reaction in the case of orotate decarboxylase (78 million years without the enzyme, 18 milliseconds with the enzyme).
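The quoted rate enhancement can be reproduced from the two half-lives given above, treating both the catalysed and uncatalysed reactions as first-order processes.

```python
import math

# Reproducing the rate enhancement quoted above from the two half-lives,
# treating both reactions as first-order (k = ln 2 / t_half).
SECONDS_PER_YEAR = 365.25 * 24 * 3600

t_half_uncatalysed = 78e6 * SECONDS_PER_YEAR   # 78 million years, in seconds
t_half_catalysed = 18e-3                       # 18 milliseconds

k_uncat = math.log(2) / t_half_uncatalysed
k_cat = math.log(2) / t_half_catalysed

print(f"rate enhancement = {k_cat / k_uncat:.1e}")  # about 1.4e+17
```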
The molecules bound and acted upon by enzymes are called substrates. Although enzymes can consist of hundreds of amino acids, it is usually only a small fraction of the residues that come in contact with the substrate, and an even smaller fraction—three to four residues on average—that are directly involved in catalysis. The region of the enzyme that binds the substrate and contains the catalytic residues is known as the active site.
Dirigent proteins are members of a class of proteins that dictate the stereochemistry of a compound synthesized by other enzymes.
Cell signaling and ligand binding
Many proteins are involved in the process of cell signaling and signal transduction. Some proteins, such as insulin, are extracellular proteins that transmit a signal from the cell in which they were synthesized to other cells in distant tissues. Others are membrane proteins that act as receptors whose main function is to bind a signaling molecule and induce a biochemical response in the cell. Many receptors have a binding site exposed on the cell surface and an effector domain within the cell, which may have enzymatic activity or may undergo a conformational change detected by other proteins within the cell.
Antibodies are protein components of an adaptive immune system whose main function is to bind antigens, or foreign substances in the body, and target them for destruction. Antibodies can be secreted into the extracellular environment or anchored in the membranes of specialized B cells known as plasma cells. Whereas enzymes are limited in their binding affinity for their substrates by the necessity of conducting their reaction, antibodies have no such constraints. An antibody's binding affinity to its target is extraordinarily high.
Many ligand transport proteins bind particular small biomolecules and transport them to other locations in the body of a multicellular organism. These proteins must have a high binding affinity when their ligand is present in high concentrations, but must also release the ligand when it is present at low concentrations in the target tissues. The canonical example of a ligand-binding protein is haemoglobin, which transports oxygen from the lungs to other organs and tissues in all vertebrates and has close homologs in every biological kingdom. Lectins are sugar-binding proteins which are highly specific for their sugar moieties. Lectins typically play a role in biological recognition phenomena involving cells and proteins. Receptors and hormones are highly specific binding proteins.
Transmembrane proteins can also serve as ligand transport proteins that alter the permeability of the cell membrane to small molecules and ions. The membrane alone has a hydrophobic core through which polar or charged molecules cannot diffuse. Membrane proteins contain internal channels that allow such molecules to enter and exit the cell. Many ion channel proteins are specialized to select for only a particular ion; for example, potassium and sodium channels often discriminate for only one of the two ions.
Structural proteins
Structural proteins confer stiffness and rigidity to otherwise-fluid biological components. Most structural proteins are fibrous proteins; for example, collagen and elastin are critical components of connective tissue such as cartilage, and keratin is found in hard or filamentous structures such as hair, nails, feathers, hooves, and some animal shells. Some globular proteins can also play structural roles; for example, actin and tubulin are globular and soluble as monomers, but polymerize to form long, stiff fibers that make up the cytoskeleton, which allows the cell to maintain its shape and size.
Other proteins that serve structural functions are motor proteins such as myosin, kinesin, and dynein, which are capable of generating mechanical forces. These proteins are crucial for cellular motility of single celled organisms and the sperm of many multicellular organisms which reproduce sexually. They also generate the forces exerted by contracting muscles and play essential roles in intracellular transport.
Protein evolution
A key question in molecular biology is how proteins evolve, i.e. how can mutations (or rather changes in amino acid sequence) lead to new structures and functions? Most amino acids in a protein can be changed without disrupting activity or function, as can be seen from numerous homologous proteins across species (as collected in specialized databases for protein families, e.g. PFAM). Gene duplication can buffer the consequences of mutations: once a gene is duplicated, one copy can mutate freely while the other retains the original function. However, this can also lead to complete loss of gene function and thus pseudogenes. More commonly, single amino acid changes have limited consequences, although some can change protein function substantially, especially in enzymes. For instance, many enzymes can change their substrate specificity by one or a few mutations. Changes in substrate specificity are facilitated by substrate promiscuity, i.e. the ability of many enzymes to bind and process multiple substrates. When mutations occur, the specificity of an enzyme can increase (or decrease), and with it its enzymatic activity. Thus, bacteria (or other organisms) can adapt to different food sources, including unnatural substrates such as plastic.
Methods of study
Methods commonly used to study protein structure and function include immunohistochemistry, site-directed mutagenesis, X-ray crystallography, nuclear magnetic resonance and mass spectrometry.
The activities and structures of proteins may be examined in vitro, in vivo, and in silico. In vitro studies of purified proteins in controlled environments are useful for learning how a protein carries out its function: for example, enzyme kinetics studies explore the chemical mechanism of an enzyme's catalytic activity and its relative affinity for various possible substrate molecules. By contrast, in vivo experiments can provide information about the physiological role of a protein in the context of a cell or even a whole organism. In silico studies use computational methods to study proteins.
Protein purification
Proteins may be purified from other cellular components using a variety of techniques such as ultracentrifugation, precipitation, electrophoresis, and chromatography; the advent of genetic engineering has made possible a number of methods to facilitate purification.
To perform in vitro analysis, a protein must be purified away from other cellular components. This process usually begins with cell lysis, in which a cell's membrane is disrupted and its internal contents released into a solution known as a crude lysate. The resulting mixture can be purified using ultracentrifugation, which separates the cellular contents into fractions containing soluble proteins; membrane lipids and proteins; cellular organelles; and nucleic acids. Precipitation by a method known as salting out can concentrate the proteins from this lysate. Various types of chromatography are then used to isolate the protein or proteins of interest based on properties such as molecular weight, net charge and binding affinity. The level of purification can be monitored using various types of gel electrophoresis if the desired protein's molecular weight and isoelectric point are known, by spectroscopy if the protein has distinguishable spectroscopic features, or by enzyme assays if the protein has enzymatic activity. Additionally, proteins can be isolated according to their charge using electrofocusing.
For natural proteins, a series of purification steps may be necessary to obtain protein sufficiently pure for laboratory applications. To simplify this process, genetic engineering is often used to add chemical features to proteins that make them easier to purify without affecting their structure or activity. Here, a "tag" consisting of a specific amino acid sequence, often a series of histidine residues (a "His-tag"), is attached to one terminus of the protein. As a result, when the lysate is passed over a chromatography column containing nickel, the histidine residues ligate the nickel and attach to the column while the untagged components of the lysate pass unimpeded. A number of different tags have been developed to help researchers purify specific proteins from complex mixtures.
Cellular localization
The study of proteins in vivo is often concerned with the synthesis and localization of the protein within the cell. Although many intracellular proteins are synthesized in the cytoplasm and membrane-bound or secreted proteins in the endoplasmic reticulum, the specifics of how proteins are targeted to specific organelles or cellular structures are often unclear. A useful technique for assessing cellular localization uses genetic engineering to express in a cell a fusion protein or chimera consisting of the natural protein of interest linked to a "reporter" such as green fluorescent protein (GFP). The fused protein's position within the cell can then be cleanly and efficiently visualized using microscopy.
Other methods for elucidating the cellular location of proteins require the use of known compartmental markers for regions such as the ER, the Golgi, lysosomes or vacuoles, mitochondria, chloroplasts, plasma membrane, etc. With the use of fluorescently tagged versions of these markers or of antibodies to known markers, it becomes much simpler to identify the localization of a protein of interest. For example, indirect immunofluorescence will allow for fluorescence colocalization and demonstration of location. Fluorescent dyes are used to label cellular compartments for a similar purpose.
Other possibilities exist, as well. For example, immunohistochemistry usually uses an antibody to one or more proteins of interest that are conjugated to enzymes yielding either luminescent or chromogenic signals that can be compared between samples, allowing for localization information. Another applicable technique is cofractionation in sucrose (or other material) gradients using isopycnic centrifugation. While this technique does not prove colocalization of a compartment of known density and the protein of interest, it does increase the likelihood, and is more amenable to large-scale studies.
Finally, the gold-standard method of cellular localization is immunoelectron microscopy. This technique also uses an antibody to the protein of interest, along with classical electron microscopy techniques. The sample is prepared for normal electron microscopic examination, and then treated with an antibody to the protein of interest that is conjugated to an extremely electron-dense material, usually gold. This allows for the localization of both ultrastructural details as well as the protein of interest.
Through another genetic engineering application known as site-directed mutagenesis, researchers can alter the protein sequence and hence its structure, cellular localization, and susceptibility to regulation. This technique even allows the incorporation of unnatural amino acids into proteins, using modified tRNAs, and may allow the rational design of new proteins with novel properties.
Proteomics
The total complement of proteins present at a time in a cell or cell type is known as its proteome, and the study of such large-scale data sets defines the field of proteomics, named by analogy to the related field of genomics. Key experimental techniques in proteomics include 2D electrophoresis, which allows the separation of many proteins, mass spectrometry, which allows rapid high-throughput identification of proteins and sequencing of peptides (most often after in-gel digestion), protein microarrays, which allow the detection of the relative levels of the various proteins present in a cell, and two-hybrid screening, which allows the systematic exploration of protein–protein interactions. The total complement of biologically possible such interactions is known as the interactome. A systematic attempt to determine the structures of proteins representing every possible fold is known as structural genomics.
Structure determination
Discovering the tertiary structure of a protein, or the quaternary structure of its complexes, can provide important clues about how the protein performs its function and how it can be affected, i.e. in drug design. As proteins are too small to be seen under a light microscope, other methods have to be employed to determine their structure. Common experimental methods include X-ray crystallography and NMR spectroscopy, both of which can produce structural information at atomic resolution. However, NMR experiments are able to provide information from which a subset of distances between pairs of atoms can be estimated, and the final possible conformations for a protein are determined by solving a distance geometry problem. Dual polarisation interferometry is a quantitative analytical method for measuring the overall protein conformation and conformational changes due to interactions or other stimulus. Circular dichroism is another laboratory technique for determining internal β-sheet / α-helical composition of proteins. Cryoelectron microscopy is used to produce lower-resolution structural information about very large protein complexes, including assembled viruses; a variant known as electron crystallography can also produce high-resolution information in some cases, especially for two-dimensional crystals of membrane proteins. Solved structures are usually deposited in the Protein Data Bank (PDB), a freely available resource from which structural data about thousands of proteins can be obtained in the form of Cartesian coordinates for each atom in the protein.
Many more gene sequences are known than protein structures. Further, the set of solved structures is biased toward proteins that can be easily subjected to the conditions required in X-ray crystallography, one of the major structure determination methods. In particular, globular proteins are comparatively easy to crystallize in preparation for X-ray crystallography. Membrane proteins and large protein complexes, by contrast, are difficult to crystallize and are underrepresented in the PDB. Structural genomics initiatives have attempted to remedy these deficiencies by systematically solving representative structures of major fold classes. Protein structure prediction methods attempt to provide a means of generating a plausible structure for proteins whose structures have not been experimentally determined.
Structure prediction
Complementary to the field of structural genomics, protein structure prediction develops efficient mathematical models of proteins to computationally predict the molecular formations in theory, instead of detecting structures with laboratory observation. The most successful type of structure prediction, known as homology modeling, relies on the existence of a "template" structure with sequence similarity to the protein being modeled; structural genomics' goal is to provide sufficient representation in solved structures to model most of those that remain. Although producing accurate models remains a challenge when only distantly related template structures are available, it has been suggested that sequence alignment is the bottleneck in this process, as quite accurate models can be produced if a "perfect" sequence alignment is known. Many structure prediction methods have served to inform the emerging field of protein engineering, in which novel protein folds have already been designed. Many proteins (about 33% in eukaryotes) also contain large unstructured but biologically functional segments and can be classified as intrinsically disordered proteins. Predicting and analysing protein disorder is, therefore, an important part of protein structure characterisation.
Bioinformatics
A vast array of computational methods have been developed to analyze the structure, function and evolution of proteins. The development of such tools has been driven by the large and fast-growing amount of genomic and proteomic data available for a variety of organisms, including the human genome. The resources do not exist to study all proteins experimentally, so only a few are subjected to laboratory experiments while computational tools are used to extrapolate to similar proteins. Such homologous proteins can be efficiently identified in distantly related organisms by sequence alignment. Genome and gene sequences can be searched by a variety of tools for certain properties. Sequence profiling tools can find restriction enzyme sites and open reading frames in nucleotide sequences, and can predict secondary structures. Phylogenetic trees can be constructed, and evolutionary hypotheses about the ancestry of modern organisms and the genes they express can be developed, using special software such as ClustalW. The field of bioinformatics is now indispensable for the analysis of genes and proteins.
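As a small illustration of such sequence analysis, the sketch below finds open reading frames on the forward strand of a DNA string; real gene-finding tools also scan the reverse complement and apply length and context filters, and the sequence here is made up.

```python
import re

# Minimal sketch: finding open reading frames (ORFs) on the forward strand.
# An ORF here is an ATG followed by in-frame codons up to the first stop
# codon (TAA, TAG or TGA). The lookahead allows overlapping ATG starts.
def forward_orfs(dna: str):
    """Yield (start, orf) for every ATG...stop stretch on the forward strand."""
    pattern = re.compile(r"(?=(ATG(?:[ACGT]{3})*?(?:TAA|TAG|TGA)))")
    for m in pattern.finditer(dna):
        yield m.start(1), m.group(1)

dna = "CCATGGCTAAATAGGATGTTTTGACC"   # made-up sequence
for start, orf in forward_orfs(dna):
    print(start, orf)   # prints (2, ATGGCTAAATAG) and (15, ATGTTTTGA)
```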
In silico simulation of dynamical processes
A more complex computational problem is the prediction of intermolecular interactions, such as in molecular docking, protein folding, protein–protein interaction and chemical reactivity. Mathematical models to simulate these dynamical processes involve molecular mechanics, in particular, molecular dynamics. In this regard, in silico simulations discovered the folding of small α-helical protein domains such as the villin headpiece and the HIV accessory protein, and hybrid methods combining standard molecular dynamics with quantum mechanical calculations have explored the electronic states of rhodopsins.
Beyond classical molecular dynamics, quantum dynamics methods allow the simulation of proteins in atomistic detail with an accurate description of quantum mechanical effects. Examples include the multi-layer multi-configuration time-dependent Hartree (MCTDH) method and the hierarchical equations of motion (HEOM) approach, which have been applied to plant cryptochromes and bacteria light-harvesting complexes, respectively. Both quantum and classical mechanical simulations of biological-scale systems are extremely computationally demanding, so distributed computing initiatives (for example, the Folding@home project) facilitate the molecular modeling by exploiting advances in GPU parallel processing and Monte Carlo techniques.
Chemical analysis
The total nitrogen content of organic matter is mainly formed by the amino groups in proteins. The Total Kjeldahl Nitrogen (TKN) is a measure of nitrogen widely used in the analysis of (waste) water, soil, food, feed and organic matter in general. As the name suggests, the Kjeldahl method is applied. More sensitive methods are available.
Nutrition
Most microorganisms and plants can biosynthesize all 20 standard amino acids, while animals (including humans) must obtain some of the amino acids from the diet. The amino acids that an organism cannot synthesize on its own are referred to as essential amino acids. Key enzymes that synthesize certain amino acids are not present in animals—such as aspartokinase, which catalyses the first step in the synthesis of lysine, methionine, and threonine from aspartate. If amino acids are present in the environment, microorganisms can conserve energy by taking up the amino acids from their surroundings and downregulating their biosynthetic pathways.
In animals, amino acids are obtained through the consumption of foods containing protein. Ingested proteins are then broken down into amino acids through digestion, which typically involves denaturation of the protein through exposure to acid and hydrolysis by enzymes called proteases. Some ingested amino acids are used for protein biosynthesis, while others are converted to glucose through gluconeogenesis, or fed into the citric acid cycle. This use of protein as a fuel is particularly important under starvation conditions as it allows the body's own proteins to be used to support life, particularly those found in muscle.
In animals such as dogs and cats, protein maintains the health and quality of the skin by promoting hair follicle growth and keratinization, and thus reducing the likelihood of skin problems producing malodours. Poor-quality proteins also have a role in gastrointestinal health, increasing the potential for flatulence and odorous compounds in dogs: when proteins reach the colon in an undigested state, they are fermented, producing hydrogen sulfide gas, indole, and skatole. Dogs and cats digest animal proteins better than those from plants, but products of low-quality animal origin are poorly digested, including skin, feathers, and connective tissue.
Mechanical properties
The mechanical properties of proteins are highly diverse and are often central to their biological function, as in the case of proteins like keratin and collagen. For instance, the ability of muscle tissue to continually expand and contract is directly tied to the elastic properties of their underlying protein makeup. Beyond fibrous proteins, the conformational dynamics of enzymes and the structure of biological membranes, among other biological functions, are governed by the mechanical properties of the proteins. Outside of their biological context, the unique mechanical properties of many proteins, along with their relative sustainability when compared to synthetic polymers, have made them desirable targets for next-generation materials design.
Young's modulus
Young's modulus, E, is calculated as the axial stress σ over the resulting strain ε. It is a measure of the relative stiffness of a material. In the context of proteins, this stiffness often directly correlates to biological function. For example, collagen, found in connective tissue, bones, and cartilage, and keratin, found in nails, claws, and hair, have observed stiffnesses that are several orders of magnitude higher than that of elastin, which is thought to give elasticity to structures such as blood vessels, pulmonary tissue, and bladder tissue, among others. In comparison, globular proteins, such as bovine serum albumin, which float relatively freely in the cytosol and often function as enzymes (and thus undergo frequent conformational changes), have comparably much lower Young's moduli.
The Young's modulus of a single protein can be found through molecular dynamics simulation. Using either atomistic force-fields, such as CHARMM or GROMOS, or coarse-grained forcefields like Martini, a single protein molecule can be stretched by a uniaxial force while the resulting extension is recorded in order to calculate the strain. Experimentally, methods such as atomic force microscopy can be used to obtain similar data.
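A minimal sketch of how a Young's modulus might be extracted from such force–extension data is shown below; the molecule length, cross-sectional area, and data points are all invented for illustration.

```python
# Minimal sketch: estimating a Young's modulus E = stress / strain from
# force-extension data of the kind an MD pulling simulation or AFM
# experiment produces. All numbers below are invented for illustration.
import numpy as np

L0 = 30e-9            # unstretched length of the molecule (m), assumed
area = 4e-18          # effective cross-sectional area (m^2), assumed

extension = np.array([0.0, 0.3, 0.6, 0.9, 1.2]) * 1e-9    # m
force = np.array([0.0, 40.0, 82.0, 118.0, 161.0]) * 1e-12  # N

strain = extension / L0
stress = force / area                      # Pa

# Fit stress = E * strain through the origin (small-strain, linear regime).
E = np.sum(stress * strain) / np.sum(strain * strain)
print(f"estimated Young's modulus: {E / 1e9:.1f} GPa")
```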
At the macroscopic level, the Young's modulus of cross-linked protein networks can be obtained through more traditional mechanical testing, and experimentally observed values have been reported for a number of proteins.
Viscosity
In addition to serving as enzymes within the cell, globular proteins often act as key transport molecules. For instance, serum albumins, a key component of blood, are necessary for the transport of a multitude of small molecules throughout the body. Because of this, the concentration-dependent behavior of these proteins in solution is directly tied to the function of the circulatory system. One way of quantifying this behavior is through the viscosity of the solution.
Viscosity, η, is a measure of a fluid's resistance to deformation. It can be calculated as the ratio between the applied stress and the rate of change of the resulting shear strain, that is, the rate of deformation. The viscosity of complex liquid mixtures, such as blood, often depends strongly on temperature and solute concentration. For serum albumin, specifically bovine serum albumin, an empirical relation between viscosity, temperature, and concentration has been proposed.
In this relation, c is the concentration, T is the temperature, R is the gas constant, and α, β, B, D, and ΔE are all material-based property constants. The equation has the form of an Arrhenius equation, assigning viscosity an exponential dependence on temperature and concentration.
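Because the specific published equation is not reproduced above, the sketch below uses a generic Arrhenius-type form, η(c, T) = A·exp(B·c + ΔE/(RT)), purely to illustrate the exponential dependence on concentration and temperature; the functional form and every parameter value are assumptions, not the cited model for bovine serum albumin.

```python
# Illustrative sketch only: the specific published viscosity equation for
# bovine serum albumin is not reproduced above, so this uses a generic
# Arrhenius-type form, eta(c, T) = A * exp(B*c + dE/(R*T)), purely to show
# an exponential dependence on concentration and temperature. The form and
# every parameter value here are assumptions.
import math

R = 8.314          # gas constant, J mol^-1 K^-1
A = 2.0e-6         # pre-exponential factor, Pa*s (assumed)
B = 8.0e-3         # concentration coefficient, L/g (assumed)
dE = 18e3          # activation energy, J/mol (assumed)

def viscosity(c_g_per_l: float, temp_k: float) -> float:
    return A * math.exp(B * c_g_per_l + dE / (R * temp_k))

for temp in (293.15, 310.15):          # 20 C and body temperature
    print(f"T = {temp:.2f} K: eta(40 g/L) = {viscosity(40.0, temp) * 1e3:.2f} mPa*s")
```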
See also
References
Further reading
Textbooks
History
External links
Databases and projects
NCBI Entrez Protein database
NCBI Protein Structure database
Human Protein Reference Database
Human Proteinpedia
Folding@Home (Stanford University)
Protein Databank in Europe (see also PDBeQuips, short articles and tutorials on interesting PDB structures)
Research Collaboratory for Structural Bioinformatics (see also Molecule of the Month , presenting short accounts on selected proteins from the PDB)
Proteopedia – Life in 3D: rotatable, zoomable 3D model with wiki annotations for every known protein molecular structure.
UniProt the Universal Protein Resource
Tutorials and educational websites
"An Introduction to Proteins" from HOPES (Huntington's Disease Outreach Project for Education at Stanford)
Proteins: Biogenesis to Degradation – The Virtual Library of Biochemistry and Cell Biology
Molecular biology
Proteomics
Protonation
In chemistry, protonation (or hydronation) is the addition of a proton (or hydron, or hydrogen cation), usually denoted by H+, to an atom, molecule, or ion, forming a conjugate acid. (The complementary process, when a proton is removed from a Brønsted–Lowry acid, is deprotonation.) Some examples include:
The protonation of water by sulfuric acid:
H2SO4 + H2O ⇄ H3O+ + HSO4−
The protonation of isobutene in the formation of a carbocation:
(CH3)2C=CH2 + HBF4 → (CH3)3C+ + BF4−
The protonation of ammonia in the formation of ammonium chloride from ammonia and hydrogen chloride:
NH3(g) + HCl(g) → NH4Cl(s)
Protonation is a fundamental chemical reaction and is a step in many stoichiometric and catalytic processes. Some ions and molecules can undergo more than one protonation and are labeled polybasic, which is true of many biological macromolecules. Protonation and deprotonation (removal of a proton) occur in most acid–base reactions; they are the core of most acid–base reaction theories. A Brønsted–Lowry acid is defined as a chemical substance that protonates another substance. Upon protonating a substrate, the mass and the charge of the species each increase by one unit, making it an essential step in certain analytical procedures such as electrospray mass spectrometry. Protonating or deprotonating a molecule or ion can change many other chemical properties beyond the charge and mass: for example, solubility, hydrophilicity, reduction and oxidation potential, and optical properties can all change.
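The one-unit increase in mass and charge per added proton is what underlies the m/z values observed for multiply protonated species in electrospray mass spectrometry. A brief sketch, using an arbitrary example mass:

```python
# Each protonation adds one proton's mass and one positive charge, so an
# [M + nH]^n+ ion observed in electrospray MS appears at
# m/z = (M + n * m_H) / n. The neutral mass below is just an example.
PROTON_MASS = 1.00728  # Da

def mz(neutral_mass_da: float, n_protons: int) -> float:
    return (neutral_mass_da + n_protons * PROTON_MASS) / n_protons

M = 16950.0  # example neutral protein mass in Da
for n in (10, 15, 20):
    print(f"[M+{n}H]{n}+ : m/z = {mz(M, n):.2f}")
```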
Rates
Protonations are often rapid, partly because of the high mobility of protons in many solvents. The rate of protonation is related to the acidity of the protonating species: protonation by weak acids is slower than protonation of the same base by strong acids. The rates of protonation and deprotonation can be especially slow when protonation induces significant structural changes.
Reversibility and catalysis
Protonation is usually reversible, and the structure and bonding of the conjugate base are normally unchanged on protonation. In some cases, however, protonation induces isomerization, for example cis-alkenes can be converted to trans-alkenes using a catalytic amount of protonating agent. Many enzymes, such as the serine hydrolases, operate by mechanisms that involve reversible protonation of substrates.
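Because protonation is normally a reversible equilibrium, the protonated fraction of a group at a given pH follows from its pKa through the Henderson–Hasselbalch relationship. The sketch below uses approximate textbook pKa values for illustration:

```python
# Because protonation is a reversible equilibrium, the fraction of a group
# that is protonated at a given pH follows from its pKa:
#   fraction protonated = 1 / (1 + 10**(pH - pKa))
# The pKa values below are approximate textbook figures.
def fraction_protonated(ph: float, pka: float) -> float:
    return 1.0 / (1.0 + 10 ** (ph - pka))

for name, pka in [("acetic acid (pKa ~4.8)", 4.8), ("ammonium (pKa ~9.2)", 9.2)]:
    print(f"{name} at pH 7: {fraction_protonated(7.0, pka):.3f} protonated")
# Acetic acid is almost fully deprotonated at pH 7, while ammonium is
# almost fully protonated.
```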
See also
Acid dissociation constant
Deprotonation (or dehydronation)
Molecular autoionization
References
Chemical reactions
Reaction mechanisms
Post-translational modification
In molecular biology, post-translational modification (PTM) is the covalent modification of proteins following protein biosynthesis. PTMs may involve enzymes or occur spontaneously. Proteins are created by ribosomes, which translate mRNA into polypeptide chains, which may then undergo PTM to form the mature protein product. PTMs are important components in cell signalling, as for example when prohormones are converted to hormones.
Post-translational modifications can occur on the amino acid side chains or at the protein's C- or N-termini. They can expand the chemical repertoire of the 22 amino acids by changing an existing functional group or adding a new one such as phosphate. Phosphorylation is highly effective for controlling enzyme activity and is the most common post-translational change. Many eukaryotic and prokaryotic proteins also have carbohydrate molecules attached to them in a process called glycosylation, which can promote protein folding and improve stability as well as serving regulatory functions. Attachment of lipid molecules, known as lipidation, often targets a protein or part of a protein to the cell membrane.
Other forms of post-translational modification consist of cleaving peptide bonds, as in processing a propeptide to a mature form or removing the initiator methionine residue. The formation of disulfide bonds from cysteine residues may also be referred to as a post-translational modification. For instance, the peptide hormone insulin is cut twice after disulfide bonds are formed, and a propeptide is removed from the middle of the chain; the resulting protein consists of two polypeptide chains connected by disulfide bonds.
Some types of post-translational modification are consequences of oxidative stress. Carbonylation is one example that targets the modified protein for degradation and can result in the formation of protein aggregates. Specific amino acid modifications can be used as biomarkers indicating oxidative damage.
Sites that often undergo post-translational modification are those that have a functional group that can serve as a nucleophile in the reaction: the hydroxyl groups of serine, threonine, and tyrosine; the amine groups of lysine, arginine, and histidine; the thiolate anion of cysteine; the carboxylates of aspartate and glutamate; and the N- and C-termini. In addition, although the amide of asparagine is a weak nucleophile, it can serve as an attachment point for glycans. Rarer modifications can occur at oxidized methionines and at some methylene groups in side chains.
Post-translational modification of proteins can be experimentally detected by a variety of techniques, including mass spectrometry, Eastern blotting, and Western blotting. Additional methods are listed in the External links section below.
PTMs involving addition of functional groups
Addition by an enzyme in vivo
Hydrophobic groups for membrane localization
myristoylation (a type of acylation), attachment of myristate, a C14 saturated acid
palmitoylation (a type of acylation), attachment of palmitate, a C16 saturated acid
isoprenylation or prenylation, the addition of an isoprenoid group (e.g. farnesol and geranylgeraniol)
farnesylation
geranylgeranylation
glypiation, glycosylphosphatidylinositol (GPI) anchor formation via an amide bond to C-terminal tail
Cofactors for enhanced enzymatic activity
lipoylation (a type of acylation), attachment of a lipoate (C8) functional group
flavin moiety (FMN or FAD) may be covalently attached
heme C attachment via thioether bonds with cysteines
phosphopantetheinylation, the addition of a 4'-phosphopantetheinyl moiety from coenzyme A, as in fatty acid, polyketide, non-ribosomal peptide and leucine biosynthesis
retinylidene Schiff base formation
Modifications of translation factors
diphthamide formation (on a histidine found in eEF2)
ethanolamine phosphoglycerol attachment (on glutamate found in eEF1α)
hypusine formation (on conserved lysine of eIF5A (eukaryotic) and aIF5A (archaeal))
beta-Lysine addition on a conserved lysine of the elongation factor P (EFP) in most bacteria. EFP is a homolog to eIF5A (eukaryotic) and aIF5A (archaeal) (see above).
Smaller chemical groups
acylation, e.g. O-acylation (esters), N-acylation (amides), S-acylation (thioesters)
acetylation, the addition of an acetyl group, either at the N-terminus of the protein or at lysine residues. The reverse is called deacetylation.
formylation
alkylation, the addition of an alkyl group, e.g. methyl, ethyl
methylation the addition of a methyl group, usually at lysine or arginine residues. The reverse is called demethylation.
amidation at C-terminus. Formed by oxidative dissociation of a C-terminal Gly residue.
amide bond formation
amino acid addition
arginylation, a tRNA-mediated addition
polyglutamylation, covalent linkage of glutamic acid residues to the N-terminus of tubulin and some other proteins. (See tubulin polyglutamylase)
polyglycylation, covalent linkage of one to more than 40 glycine residues to the tubulin C-terminal tail
butyrylation
gamma-carboxylation dependent on Vitamin K
glycosylation, the addition of a glycosyl group to either arginine, asparagine, cysteine, hydroxylysine, serine, threonine, tyrosine, or tryptophan resulting in a glycoprotein. Distinct from glycation, which is regarded as a nonenzymatic attachment of sugars.
O-GlcNAc, addition of N-acetylglucosamine to serine or threonine residues in a β-glycosidic linkage
polysialylation, addition of polysialic acid, PSA, to NCAM
malonylation
hydroxylation: addition of an oxygen atom to the side-chain of a Pro or Lys residue
iodination: addition of an iodine atom to the aromatic ring of a tyrosine residue (e.g. in thyroglobulin)
nucleotide addition such as ADP-ribosylation
phosphate ester (O-linked) or phosphoramidate (N-linked) formation
phosphorylation, the addition of a phosphate group, usually to serine, threonine, and tyrosine (O-linked), or histidine (N-linked)
adenylylation, the addition of an adenylyl moiety, usually to tyrosine (O-linked), or histidine and lysine (N-linked)
uridylylation, the addition of an uridylyl-group (i.e. uridine monophosphate, UMP), usually to tyrosine
propionylation
pyroglutamate formation
S-glutathionylation
S-nitrosylation
S-sulfenylation (aka S-sulphenylation), reversible covalent addition of one oxygen atom to the thiol group of a cysteine residue
S-sulfinylation, normally irreversible covalent addition of two oxygen atoms to the thiol group of a cysteine residue
S-sulfonylation, normally irreversible covalent addition of three oxygen atoms to the thiol group of a cysteine residue, resulting in the formation of a cysteic acid residue
succinylation, the addition of a succinyl group to lysine
sulfation, the addition of a sulfate group to a tyrosine.
Non-enzymatic modifications in vivo
Examples of non-enzymatic PTMs are glycation, glycoxidation, nitrosylation, oxidation, succination, and lipoxidation.
glycation, the addition of a sugar molecule to a protein without the controlling action of an enzyme.
carbamylation, the addition of isocyanic acid to a protein's N-terminus or the side-chain of Lys.
carbonylation, the formation of reactive carbonyl groups (such as aldehydes and ketones) on proteins, typically as a result of oxidative damage.
spontaneous isopeptide bond formation, as found in many surface proteins of Gram-positive bacteria.
Non-enzymatic additions in vitro
biotinylation: covalent attachment of a biotin moiety using a biotinylation reagent, typically for the purpose of labeling a protein.
carbamylation: the addition of isocyanic acid to a protein's N-terminus or the side-chains of Lys or Cys residues, typically resulting from exposure to urea solutions.
oxidation: addition of one or more oxygen atoms to a susceptible side-chain, principally of Met, Trp, His or Cys residues; also the formation of disulfide bonds between Cys residues.
pegylation: covalent attachment of polyethylene glycol (PEG) using a pegylation reagent, typically to the N-terminus or the side-chains of Lys residues. Pegylation is used to improve the efficacy of protein pharmaceuticals.
Conjugation with other proteins or peptides
ubiquitination, the covalent linkage to the protein ubiquitin.
SUMOylation, the covalent linkage to the SUMO protein (Small Ubiquitin-related MOdifier)
neddylation, the covalent linkage to the Nedd protein
ISGylation, the covalent linkage to the ISG15 protein (Interferon-Stimulated Gene 15)
pupylation, the covalent linkage to the prokaryotic ubiquitin-like protein
Chemical modification of amino acids
citrullination, or deimination, the conversion of arginine to citrulline
deamidation, the conversion of glutamine to glutamic acid or asparagine to aspartic acid
eliminylation, the conversion to an alkene by beta-elimination of phosphothreonine and phosphoserine, or dehydration of threonine and serine
Structural changes
disulfide bridges, the covalent linkage of two cysteine amino acids
lysine-cysteine bridges, the covalent linkage of one lysine and one or two cysteine residues via an oxygen atom (NOS and SONOS bridges)
proteolytic cleavage, cleavage of a protein at a peptide bond
isoaspartate formation, via the cyclisation of asparagine or aspartic acid amino-acid residues
racemization
of serine by protein-serine epimerase
of alanine in dermorphin, a frog opioid peptide
of methionine in deltorphin, also a frog opioid peptide
protein splicing, self-catalytic removal of inteins analogous to mRNA processing
Statistics
Common PTMs by frequency
In 2011, statistics for each post-translational modification experimentally and putatively detected were compiled using proteome-wide information from the Swiss-Prot database, identifying the 10 most common experimentally observed modifications.
Common PTMs by residue
Many common post-translational modifications occur at specific amino-acid residues, usually on the side-chain unless indicated otherwise.
Databases and tools
Protein sequences contain sequence motifs that are recognized by modifying enzymes, and these motifs can be documented or predicted in PTM databases. With the large number of different modifications being discovered, there is a need to document this sort of information in databases. PTM information can be collected through experimental means or predicted from high-quality, manually curated data. Numerous databases have been created, often with a focus on certain taxonomic groups (e.g. human proteins) or other features.
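As a minimal example of the kind of motif-based prediction such databases build on, the sketch below scans a made-up sequence for the classic N-glycosylation sequon Asn-X-Ser/Thr (where X is any residue except proline); real predictors use far richer sequence and structural context.

```python
import re

# Minimal sketch of motif-based PTM site prediction: scanning a sequence for
# the classic N-glycosylation sequon Asn-X-Ser/Thr, where X is any residue
# except proline. Real predictors use much richer sequence and structural
# context; the sequence below is made up.
sequence = "MALNKSAGLLNPTQWRNITV"

for m in re.finditer(r"(?=(N[^P][ST]))", sequence):
    print(f"potential N-glycosylation site at position {m.start(1) + 1}: {m.group(1)}")
# Note that the Asn-Pro-Thr stretch in the middle is correctly skipped.
```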
List of resources
PhosphoSitePlus – A database of comprehensive information and tools for the study of mammalian protein post-translational modification
ProteomeScout – A database of proteins and experimentally observed post-translational modifications
Human Protein Reference Database – A database of different modifications that relates proteins to their class, function, and the processes associated with disease-causing proteins
PROSITE – A database of consensus patterns, including sites, for many types of PTMs
RESID – A database consisting of a collection of annotations and structures for PTMs.
iPTMnet – A database that integrates PTM information from several knowledgebases and from text-mining results
dbPTM – A database that describes different PTMs, with information on their chemical components/structures and the frequency of each modified amino-acid site
UniProt has PTM information, although it may be less comprehensive than that in more specialized databases.
The O-GlcNAc Database - A curated database for protein O-GlcNAcylation and referencing more than 14 000 protein entries and 10 000 O-GlcNAc sites.
Tools
List of software for visualization of proteins and their PTMs
PyMOL – introduces a set of common PTMs into protein models
AWESOME – Interactive tool to see the role of single-nucleotide polymorphisms in PTMs
Chimera – Interactive tool to visualize molecules
Case examples
Cleavage and formation of disulfide bridges during the production of insulin
PTM of histones as regulation of transcription: RNA polymerase control by chromatin structure
PTM of RNA polymerase II as regulation of transcription
Cleavage of polypeptide chains as crucial for lectin specificity
See also
Protein targeting
Post-translational regulation
References
External links
dbPTM - database of protein post-translational modifications
(Wayback Machine copy)
List of posttranslational modifications in ExPASy
Browse SCOP domains by PTM — from the dcGO database
Statistics of each post-translational modification from the Swiss-Prot database
(Wayback Machine copy)
AutoMotif Server - A Computational Protocol for Identification of Post-Translational Modifications in Protein Sequences
Functional analyses for site-specific phosphorylation of a target protein in cells
Detection of Post-Translational Modifications after high-accuracy MSMS
Overview and description of commonly used post-translational modification detection techniques
Gene expression
Protein structure
Protein biosynthesis
Cell biology
Scientific modelling
Scientific modelling is an activity that produces models representing empirical objects, phenomena, and physical processes, to make a particular part or feature of the world easier to understand, define, quantify, visualize, or simulate. It requires selecting and identifying relevant aspects of a situation in the real world and then developing a model to replicate a system with those features. Different types of models may be used for different purposes, such as conceptual models to better understand, operational models to operationalize, mathematical models to quantify, computational models to simulate, and graphical models to visualize the subject.
Modelling is an essential and inseparable part of many scientific disciplines, each of which has its own ideas about specific types of modelling. The following was said by John von Neumann: "... the sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work — that is, correctly to describe phenomena from a reasonably wide area."
There is also an increasing attention to scientific modelling in fields such as science education, philosophy of science, systems theory, and knowledge visualization. There is a growing collection of methods, techniques and meta-theory about all kinds of specialized scientific modelling.
Overview
A scientific model seeks to represent empirical objects, phenomena, and physical processes in a logical and objective way. All models are in simulacra, that is, simplified reflections of reality that, despite being approximations, can be extremely useful. Building and disputing models is fundamental to the scientific enterprise. Complete and true representation may be impossible, but scientific debate often concerns which is the better model for a given task, e.g., which is the more accurate climate model for seasonal forecasting.
Attempts to formalize the principles of the empirical sciences use an interpretation to model reality, in the same way logicians axiomatize the principles of logic. The aim of these attempts is to construct a formal system that will not produce theoretical consequences that are contrary to what is found in reality. Predictions or other statements drawn from such a formal system mirror or map the real world only insofar as these scientific models are true (Leo Apostel (1961), "Formal study of models", in: Hans Freudenthal (ed.), The Concept and the Role of the Model in Mathematics and Natural and Social Sciences, Springer, pp. 8–9).
For the scientist, a model is also a way in which the human thought processes can be amplified. For instance, models that are rendered in software allow scientists to leverage computational power to simulate, visualize, manipulate and gain intuition about the entity, phenomenon, or process being represented. Such computer models are in silico. Other types of scientific models are in vivo (living models, such as laboratory rats) and in vitro (in glassware, such as tissue culture).
Basics
Modelling as a substitute for direct measurement and experimentation
Models are typically used when it is either impossible or impractical to create experimental conditions in which scientists can directly measure outcomes. Direct measurement of outcomes under controlled conditions (see Scientific method) will always be more reliable than modeled estimates of outcomes.
Within modeling and simulation, a model is a task-driven, purposeful simplification and abstraction of a perception of reality, shaped by physical, legal, and cognitive constraints. It is task-driven because a model is captured with a certain question or task in mind. Simplifications leave out all the known and observed entities and their relations that are not important for the task. Abstraction aggregates information that is important but not needed in the same detail as the object of interest. Both activities, simplification and abstraction, are done purposefully. However, they are done based on a perception of reality. This perception is already a model in itself, as it comes with a physical constraint. There are also constraints on what we are able to legally observe with our current tools and methods, and cognitive constraints that limit what we are able to explain with our current theories. This model comprises the concepts, their behavior, and their relations in formal form and is often referred to as a conceptual model. In order to execute the model, it needs to be implemented as a computer simulation. This requires more choices, such as numerical approximations or the use of heuristics. Despite all these epistemological and computational constraints, simulation has been recognized as the third pillar of scientific methods: theory building, simulation, and experimentation.
Simulation
A simulation is a way to implement the model, often employed when the model is too complex for an analytical solution. A steady-state simulation provides information about the system at a specific instant in time (usually at equilibrium, if such a state exists). A dynamic simulation provides information over time. A simulation shows how a particular object or phenomenon will behave. Such a simulation can be useful for testing, analysis, or training in those cases where real-world systems or concepts can be represented by models.
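As a minimal sketch of the distinction between dynamic and steady-state behaviour (the decay model, rate constant, and step size are arbitrary illustrative choices), a dynamic simulation by explicit Euler time-stepping might look like this in Python:

```python
def simulate_decay(y0: float, k: float, dt: float, t_end: float):
    """Dynamic simulation of dy/dt = -k*y by explicit Euler time-stepping."""
    t, y = 0.0, y0
    trajectory = [(t, y)]
    while t < t_end:
        y += dt * (-k * y)      # Euler update
        t += dt
        trajectory.append((t, y))
    return trajectory

# Example: start at y0 = 1.0 with an assumed rate constant k = 0.5 per second
for t, y in simulate_decay(y0=1.0, k=0.5, dt=0.1, t_end=2.0)[::5]:
    print(f"t = {t:4.1f} s   y = {y:.4f}")
# The dynamic simulation tracks y(t) over time; the steady state of this model is y -> 0.
```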
Structure
Structure is a fundamental and sometimes intangible notion covering the recognition, observation, nature, and stability of patterns and relationships of entities. From a child's verbal description of a snowflake, to the detailed scientific analysis of the properties of magnetic fields, the concept of structure is an essential foundation of nearly every mode of inquiry and discovery in science, philosophy, and art.
Systems
A system is a set of interacting or interdependent entities, real or abstract, forming an integrated whole. In general, a system is a construct or collection of different elements that together can produce results not obtainable by the elements alone. The concept of an 'integrated whole' can also be stated in terms of a system embodying a set of relationships which are differentiated from relationships of the set to other elements, and form relationships between an element of the set and elements not a part of the relational regime. There are two types of system models: 1) discrete in which the variables change instantaneously at separate points in time and, 2) continuous where the state variables change continuously with respect to time.
Generating a model
Modelling is the process of generating a model as a conceptual representation of some phenomenon. Typically a model will deal with only some aspects of the phenomenon in question, and two models of the same phenomenon may be essentially different—that is to say, that the differences between them comprise more than just a simple renaming of components.
Such differences may be due to differing requirements of the model's end users, or to conceptual or aesthetic differences among the modelers and to contingent decisions made during the modelling process. Considerations that may influence the structure of a model might be the modeler's preference for a reduced ontology, preferences regarding statistical models versus deterministic models, discrete versus continuous time, etc. In any case, users of a model need to understand the assumptions made that are pertinent to its validity for a given use.
Building a model requires abstraction. Assumptions are used in modelling in order to specify the domain of application of the model. For example, the special theory of relativity assumes an inertial frame of reference. This assumption was contextualized and further explained by the general theory of relativity. A model makes accurate predictions when its assumptions are valid, and might well not make accurate predictions when its assumptions do not hold. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well).
Evaluating a model
A model is evaluated first and foremost by its consistency with empirical data; any model inconsistent with reproducible observations must be modified or rejected. One way to modify the model is by restricting the domain over which it is credited with having high validity. A case in point is Newtonian physics, which is highly useful except for the very small, the very fast, and the very massive phenomena of the universe. However, a fit to empirical data alone is not sufficient for a model to be accepted as valid. Factors important in evaluating a model include:
Ability to explain past observations
Ability to predict future observations
Cost of use, especially in combination with other models
Refutability, enabling estimation of the degree of confidence in the model
Simplicity, or even aesthetic appeal
People may attempt to quantify the evaluation of a model using a utility function.
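To make the first two factors concrete, the short sketch below scores two invented candidate models against the same synthetic observations using root-mean-square error as a simple stand-in for consistency with empirical data; a real evaluation would also weigh cost, refutability, and simplicity.

```python
import math

observations = [(0.0, 1.02), (1.0, 2.95), (2.0, 9.10), (3.0, 26.8)]  # made-up (x, y) pairs

def model_linear(x):       # candidate 1: y = 8x - 3
    return 8.0 * x - 3.0

def model_exponential(x):  # candidate 2: y = exp(1.1 x)
    return math.exp(1.1 * x)

def rmse(model, data):
    """Root-mean-square error of a model against observed (x, y) pairs."""
    return math.sqrt(sum((model(x) - y) ** 2 for x, y in data) / len(data))

for name, model in [("linear", model_linear), ("exponential", model_exponential)]:
    print(f"{name:12s} RMSE = {rmse(model, observations):.3f}")
```

Here the exponential candidate explains the past observations far better; whether it should also be preferred for prediction depends on the other criteria listed above.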
Visualization
Visualization is any technique for creating images, diagrams, or animations to communicate a message. Visualization through visual imagery has been an effective way to communicate both abstract and concrete ideas since the dawn of man. Examples from history include cave paintings, Egyptian hieroglyphs, Greek geometry, and Leonardo da Vinci's revolutionary methods of technical drawing for engineering and scientific purposes.
Space mapping
Space mapping refers to a methodology that employs a "quasi-global" modelling formulation to link companion "coarse" (ideal or low-fidelity) with "fine" (practical or high-fidelity) models of different complexities. In engineering optimization, space mapping aligns (maps) a very fast coarse model with its related expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment process iteratively refines a "mapped" coarse model (surrogate model).
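The sketch below shows the idea in its simplest, non-iterative form: a linear input mapping is fitted so that a cheap coarse model reproduces a handful of expensive fine-model evaluations, and the mapped coarse surrogate is then optimized in place of the fine model. The two model functions, the design points, and the linear form of the mapping are all illustrative assumptions, not part of any particular space-mapping algorithm.

```python
import numpy as np
from scipy.optimize import least_squares, minimize_scalar

def fine_model(x):    # pretend this is expensive (high-fidelity)
    return (x - 1.3) ** 2 + 0.4

def coarse_model(x):  # cheap, low-fidelity approximation
    return (x - 1.0) ** 2

# A handful of expensive fine-model evaluations
x_data = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
f_data = fine_model(x_data)

# Fit a linear input mapping p(x) = a*x + b so that coarse(p(x)) ~ fine(x)
def residuals(params):
    a, b = params
    return coarse_model(a * x_data + b) - f_data

a, b = least_squares(residuals, x0=[1.0, 0.0]).x
print(f"fitted mapping: x_coarse = {a:.3f} * x + {b:.3f}")

# Optimize the mapped coarse surrogate instead of the expensive fine model
res = minimize_scalar(lambda x: coarse_model(a * x + b), bounds=(0.0, 2.0), method="bounded")
print(f"surrogate minimiser x* = {res.x:.3f}; fine model there = {fine_model(res.x):.3f}")
```

In practice the mapping is refined iteratively, with each new fine-model evaluation used to realign the surrogate.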
Types
Analogical modelling
Assembly modelling
Catastrophe modelling
Choice modelling
Climate model
Computational model
Continuous modelling
Data modelling
Discrete modelling
Document modelling
Econometric model
Economic model
Ecosystem model
Empirical modelling
Enterprise modelling
Futures studies
Geologic modelling
Goal modeling
Homology modelling
Hydrogeology
Hydrography
Hydrologic modelling
Informative modelling
Macroscale modelling
Mathematical modelling
Metabolic network modelling
Microscale modelling
Modelling biological systems
Modelling in epidemiology
Molecular modelling
Multicomputational model
Multiscale modelling
NLP modelling
Phenomenological modelling
Predictive intake modelling
Predictive modelling
Scale modelling
Simulation
Software modelling
Solid modelling
Space mapping
Statistical model
Stochastic modelling (insurance)
Surrogate model
System architecture
System dynamics
Systems modelling
System-level modelling and simulation
Water quality modelling
Applications
Modelling and simulation
One application of scientific modelling is the field of modelling and simulation, generally referred to as "M&S". M&S has a spectrum of applications which range from concept development and analysis, through experimentation, measurement, and verification, to disposal analysis. Projects and programs may use hundreds of different simulations, simulators and model analysis tools.
Modelling and simulation can be used, for example, as a central part of an integrated program in a defence capability development process.
See also
References
Further reading
Nowadays there are some 40 journals about scientific modelling which offer all kinds of international forums. Since the 1960s there has been a strongly growing number of books and journals about specific forms of scientific modelling. There is also a lot of discussion about scientific modelling in the philosophy-of-science literature. A selection:
Rainer Hegselmann, Ulrich Müller and Klaus Troitzsch (eds.) (1996). Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View. Theory and Decision Library. Dordrecht: Kluwer.
Paul Humphreys (2004). Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford: Oxford University Press.
Johannes Lenhard, Günter Küppers and Terry Shinn (Eds.) (2006) "Simulation: Pragmatic Constructions of Reality", Springer Berlin.
Tom Ritchey (2012). "Outline for a Morphology of Modelling Methods: Contribution to a General Theory of Modelling". In: Acta Morphologica Generalis, Vol 1. No 1. pp. 1–20.
William Silvert (2001). "Modelling as a Discipline". In: Int. J. General Systems. Vol. 30(3), pp. 261.
Sergio Sismondo and Snait Gissis (eds.) (1999). Modeling and Simulation. Special Issue of Science in Context 12.
Eric Winsberg (2018) "Philosophy and Climate Science" Cambridge: Cambridge University Press
Eric Winsberg (2010) "Science in the Age of Computer Simulation" Chicago: University of Chicago Press
Eric Winsberg (2003). "Simulated Experiments: Methodology for a Virtual World". In: Philosophy of Science 70: 105–125.
Tomáš Helikar, Jim A Rogers (2009). "ChemChains: a platform for simulation and analysis of biochemical networks aimed to laboratory scientists". BioMed Central.
External links
Models. Entry in the Internet Encyclopedia of Philosophy
Models in Science. Entry in the Stanford Encyclopedia of Philosophy
The World as a Process: Simulations in the Natural and Social Sciences, in: R. Hegselmann et al. (eds.), Modelling and Simulation in the Social Sciences from the Philosophy of Science Point of View, Theory and Decision Library. Dordrecht: Kluwer 1996, 77-100.
Research in simulation and modelling of various physical systems
Modelling Water Quality Information Center, U.S. Department of Agriculture
Ecotoxicology & Models
A Morphology of Modelling Methods. Acta Morphologica Generalis, Vol 1. No 1. pp. 1–20.
Conceptual modelling
Epistemology of science
Interpretation (philosophy)
Electrochemical gradient
An electrochemical gradient is a gradient of electrochemical potential, usually for an ion that can move across a membrane. The gradient consists of two parts:
The chemical gradient, or difference in solute concentration across a membrane.
The electrical gradient, or difference in charge across a membrane.
When there are unequal concentrations of an ion across a permeable membrane, the ion will move across the membrane from the area of higher concentration to the area of lower concentration through simple diffusion. Ions also carry an electric charge that forms an electric potential across a membrane. If there is an unequal distribution of charges across the membrane, then the difference in electric potential generates a force that drives ion diffusion until the charges are balanced on both sides of the membrane.
Electrochemical gradients are essential to the operation of batteries and other electrochemical cells, photosynthesis and cellular respiration, and certain other biological processes.
Overview
Electrochemical energy is one of the many interchangeable forms of potential energy through which energy may be conserved. It appears in electroanalytical chemistry and has industrial applications such as batteries and fuel cells. In biology, electrochemical gradients allow cells to control the direction ions move across membranes. In mitochondria and chloroplasts, proton gradients generate a chemiosmotic potential used to synthesize ATP, and the sodium-potassium gradient helps neural synapses quickly transmit information.
An electrochemical gradient has two components: a differential concentration of electric charge across a membrane and a differential concentration of chemical species across that same membrane. In the former effect, the concentrated charge attracts charges of the opposite sign; in the latter, the concentrated species tends to diffuse across the membrane to equalize concentrations. The combination of these two phenomena determines the thermodynamically-preferred direction for an ion's movement across the membrane.
The combined effect can be quantified as a gradient in the thermodynamic electrochemical potential
$\bar{\mu} = \mu + zF\phi,$
with
$\mu$, the chemical potential of the ion species
$z$, the charge per ion of the species
$F$, the Faraday constant (the electrochemical potential is implicitly measured on a per-mole basis)
$\phi$, the local electric potential.
Sometimes, the term "electrochemical potential" is abused to describe the electric potential generated by an ionic concentration gradient; that is, $\phi$.
An electrochemical gradient is analogous to the water pressure across a hydroelectric dam. Routes unblocked by the membrane (e.g. membrane transport protein or electrodes) correspond to turbines that convert the water's potential energy to other forms of physical or chemical energy, and the ions that pass through the membrane correspond to water traveling into the lower river. Conversely, energy can be used to pump water up into the lake above the dam, and chemical energy can be used to create electrochemical gradients.
Chemistry
The term typically applies in electrochemistry, when electrical energy in the form of an applied voltage is used to modulate the thermodynamic favorability of a chemical reaction. In a battery, an electrochemical potential arising from the movement of ions balances the reaction energy of the electrodes. The maximum voltage that a battery reaction can produce is sometimes called the standard electrochemical potential of that reaction.
Biological context
The generation of a transmembrane electrical potential through ion movement across a cell membrane drives biological processes like nerve conduction, muscle contraction, hormone secretion, and sensation. By convention, physiological voltages are measured relative to the extracellular region; a typical animal cell has an internal electrical potential of (−70)–(−50) mV.
An electrochemical gradient is essential to mitochondrial oxidative phosphorylation. The final step of cellular respiration is the electron transport chain, composed of four complexes embedded in the inner mitochondrial membrane. Complexes I, III, and IV pump protons from the matrix to the intermembrane space (IMS); for every electron pair entering the chain, ten protons translocate into the IMS. The result is a substantial electric potential across the inner membrane. The energy resulting from the flux of protons back into the matrix is used by ATP synthase to combine inorganic phosphate and ADP.
Similar to the electron transport chain, the light-dependent reactions of photosynthesis pump protons into the thylakoid lumen of chloroplasts to drive the synthesis of ATP. The proton gradient can be generated through either noncyclic or cyclic photophosphorylation. Of the proteins that participate in noncyclic photophosphorylation, photosystem II (PSII), plastoquinone, and cytochrome b6f complex directly contribute to generating the proton gradient. For each four photons absorbed by PSII, eight protons are pumped into the lumen.
Several other transporters and ion channels play a role in generating a proton electrochemical gradient. One is TPK3, a potassium channel that is activated by Ca2+ and conducts K+ from the thylakoid lumen to the stroma, which helps establish the electric field. On the other hand, the electro-neutral K+ efflux antiporter (KEA3) transports K+ into the thylakoid lumen and H+ into the stroma, which helps establish the pH gradient.
Ion gradients
Since the ions are charged, they cannot pass through cellular membranes via simple diffusion. Two different mechanisms can transport the ions across the membrane: active or passive transport.
An example of active transport of ions is the Na+-K+-ATPase (NKA). NKA is powered by the hydrolysis of ATP into ADP and an inorganic phosphate; for every molecule of ATP hydrolyzed, three Na+ are transported outside and two K+ are transported inside the cell. This makes the inside of the cell more negative than the outside and more specifically generates a negative membrane potential Vmembrane.
An example of passive transport is ion fluxes through Na+, K+, Ca2+, and Cl− channels. Unlike active transport, passive transport is powered by the arithmetic sum of osmosis (a concentration gradient) and an electric field (the transmembrane potential). Formally, the molar Gibbs free energy change associated with successful transport is
$\Delta G = RT \ln\frac{[X]_{\mathrm{in}}}{[X]_{\mathrm{out}}} + zFV_{\mathrm{membrane}},$
where $R$ represents the gas constant, $T$ represents absolute temperature, $z$ is the charge per ion, and $F$ represents the Faraday constant.
In the example of Na+, both terms tend to support transport: the negative electric potential inside the cell attracts the positive ion and since Na+ is concentrated outside the cell, osmosis supports diffusion through the Na+ channel into the cell. In the case of K+, the effect of osmosis is reversed: although external ions are attracted by the negative intracellular potential, entropy seeks to diffuse the ions already concentrated inside the cell. The converse phenomenon (osmosis supports transport, electric potential opposes it) can be achieved for Na+ in cells with abnormal transmembrane potentials: at the Na+ equilibrium (Nernst) potential, the Na+ influx halts; at higher potentials, it becomes an efflux.
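The free-energy expression above can be evaluated directly. The sketch below plugs in textbook-style concentrations and an assumed −70 mV membrane potential (illustrative numbers, not measurements) to show that Na+ influx is strongly favoured while the two terms for K+ oppose each other and largely cancel.

```python
import math

R = 8.314      # J/(mol*K), gas constant
T = 310.0      # K, roughly body temperature
F = 96485.0    # C/mol, Faraday constant
V_m = -0.070   # V, assumed membrane potential (inside relative to outside)

def delta_g_in(c_in_mM, c_out_mM, z):
    """Molar Gibbs free energy change (J/mol) for moving one mole of ion into the cell."""
    osmotic = R * T * math.log(c_in_mM / c_out_mM)   # concentration (entropic) term
    electric = z * F * V_m                           # electrical term
    return osmotic + electric

# Illustrative intracellular/extracellular concentrations (mM)
for ion, c_in, c_out, z in [("Na+", 12.0, 145.0, +1), ("K+", 140.0, 4.0, +1)]:
    dg = delta_g_in(c_in, c_out, z) / 1000.0  # convert to kJ/mol
    print(f"{ion}: deltaG(in) = {dg:+.1f} kJ/mol")
```

With these assumed numbers, Na+ entry is roughly −13 kJ/mol (both terms favourable), while K+ entry comes out slightly positive, consistent with K+ sitting close to its equilibrium potential.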
Proton gradients
Proton gradients in particular are important in many types of cells as a form of energy storage. The gradient is usually used to drive ATP synthase, flagellar rotation, or metabolite transport. This section will focus on three processes that help establish proton gradients in their respective cells: bacteriorhodopsin, noncyclic photophosphorylation, and oxidative phosphorylation.
Bacteriorhodopsin
The way bacteriorhodopsin generates a proton gradient in Archaea is through a proton pump. The proton pump relies on proton carriers to drive protons from the side of the membrane with a low H+ concentration to the side of the membrane with a high H+ concentration. In bacteriorhodopsin, the proton pump is activated by absorption of photons of 568nm wavelength, which leads to isomerization of the Schiff base (SB) in retinal forming the K state. This moves SB away from Asp85 and Asp212, causing H+ transfer from the SB to Asp85 forming the M1 state. The protein then shifts to the M2 state by separating Glu204 from Glu194 which releases a proton from Glu204 into the external medium. The SB is reprotonated by Asp96 which forms the N state. It is important that the second proton comes from Asp96 since its deprotonated state is unstable and rapidly reprotonated with a proton from the cytosol. The protonation of Asp85 and Asp96 causes re-isomerization of the SB, forming the O state. Finally, bacteriorhodopsin returns to its resting state when Asp85 releases its proton to Glu204.
Photophosphorylation
PSII also relies on light to drive the formation of proton gradients in chloroplasts; however, PSII utilizes vectorial redox chemistry to achieve this goal. Rather than physically transporting protons through the protein, reactions requiring the binding of protons will occur on the extracellular side while reactions requiring the release of protons will occur on the intracellular side. Absorption of photons of 680 nm wavelength is used to excite two electrons in P680 to a higher energy level. These higher energy electrons are transferred to protein-bound plastoquinone (PQA) and then to unbound plastoquinone (PQB). This reduces plastoquinone (PQ) to plastoquinol (PQH2) which is released from PSII after gaining two protons from the stroma. The electrons in P680 are replenished by oxidizing water through the oxygen-evolving complex (OEC). This results in release of O2 and H+ into the lumen, for a total reaction of
2 H2O + 2 PQ + 4 H+ (stroma) → O2 + 2 PQH2 + 4 H+ (lumen)
After being released from PSII, PQH2 travels to the cytochrome b6f complex, which then transfers two electrons from PQH2 to plastocyanin in two separate reactions. The process that occurs is similar to the Q-cycle in Complex III of the electron transport chain. In the first reaction, PQH2 binds to the complex on the lumen side and one electron is transferred to the iron-sulfur center which then transfers it to cytochrome f which then transfers it to plastocyanin. The second electron is transferred to heme bL which then transfers it to heme bH which then transfers it to PQ. In the second reaction, a second PQH2 gets oxidized, adding an electron to another plastocyanin and PQ. Both reactions together transfer four protons into the lumen.
Oxidative phosphorylation
In the electron transport chain, complex I (CI) catalyzes the reduction of ubiquinone (UQ) to ubiquinol (UQH2) by the transfer of two electrons from reduced nicotinamide adenine dinucleotide (NADH) which translocates four protons from the mitochondrial matrix to the IMS:
NADH + UQ + 5 H+ (matrix) → NAD+ + UQH2 + 4 H+ (IMS)
Complex III (CIII) catalyzes the Q-cycle. The first step involves the transfer of two electrons from the UQH2 reduced by CI to two molecules of oxidized cytochrome c at the Qo site. In the second step, two more electrons reduce UQ to UQH2 at the Qi site. The total reaction is:
UQH2 + 2 cytochrome c (oxidized) + 2 H+ (matrix) → UQ + 2 cytochrome c (reduced) + 4 H+ (IMS)
Complex IV (CIV) catalyzes the transfer of two electrons from the cytochrome c reduced by CIII to one half of a full oxygen. Utilizing one full oxygen in oxidative phosphorylation requires the transfer of four electrons. The oxygen will then consume four protons from the matrix to form water while another four protons are pumped into the IMS, to give a total reaction of
4 cytochrome c (reduced) + O2 + 8 H+ (matrix) → 4 cytochrome c (oxidized) + 2 H2O + 4 H+ (IMS)
See also
Concentration cell
Transmembrane potential difference
Action potential
Cell potential
Electrodiffusion
Galvanic cell
Electrochemical cell
Proton exchange membrane
Reversal potential
References
Stephen T. Abedon, "Important words and concepts from Chapter 8, Campbell & Reece, 2002 (1/14/2005)", for Biology 113 at the Ohio State University
Cellular respiration
Electrochemical concepts
Electrophysiology
Membrane biology
Physical quantities
Thermodynamics
Uncertainty quantification
Uncertainty quantification (UQ) is the science of quantitative characterization and estimation of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed was exactly known, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense.
Many problems in the natural sciences and engineering are also rife with sources of uncertainty. Computer experiments on computer simulations are the most common approach to study problems in uncertainty quantification.
Sources
Uncertainty can enter mathematical models and experimental measurements in various contexts. One way to categorize the sources of uncertainty is to consider:
Parameter: This comes from the model parameters that are inputs to the computer model (mathematical model) but whose exact values are unknown to experimentalists and cannot be controlled in physical experiments, or whose values cannot be exactly inferred by statistical methods. Some examples of this are the local free-fall acceleration in a falling object experiment, various material properties in a finite element analysis for engineering, and multiplier uncertainty in the context of macroeconomic policy optimization.
Parametric: This comes from the variability of input variables of the model. For example, the dimensions of a work piece in a process of manufacture may not be exactly as designed and instructed, which would cause variability in its performance.
Structural uncertainty: Also known as model inadequacy, model bias, or model discrepancy, this comes from the lack of knowledge of the underlying physics in the problem. It depends on how accurately a mathematical model describes the true system for a real-life situation, considering the fact that models are almost always only approximations to reality. One example is when modeling the process of a falling object using the free-fall model; the model itself is inaccurate since there always exists air friction. In this case, even if there is no unknown parameter in the model, a discrepancy is still expected between the model and true physics.
Algorithmic: Also known as numerical uncertainty, or discrete uncertainty. This type comes from numerical errors and numerical approximations per implementation of the computer model. Most models are too complicated to solve exactly. For example, the finite element method or finite difference method may be used to approximate the solution of a partial differential equation (which introduces numerical errors). Other examples are numerical integration and infinite sum truncation that are necessary approximations in numerical implementation.
Experimental: Also known as observation error, this comes from the variability of experimental measurements. The experimental uncertainty is inevitable and can be noticed by repeating a measurement many times using exactly the same settings for all inputs/variables.
Interpolation: This comes from a lack of available data collected from computer model simulations and/or experimental measurements. For other input settings that don't have simulation data or experimental measurements, one must interpolate or extrapolate in order to predict the corresponding responses.
Aleatoric and epistemic
Uncertainty is sometimes classified into two categories, prominently seen in medical applications.
Aleatoric: Aleatoric uncertainty is also known as stochastic uncertainty, and is representative of unknowns that differ each time we run the same experiment. For example, a single arrow shot with a mechanical bow that exactly duplicates each launch (the same acceleration, altitude, direction and final velocity) will not all impact the same point on the target due to random and complicated vibrations of the arrow shaft, the knowledge of which cannot be determined sufficiently to eliminate the resulting scatter of impact points. The argument here is obviously in the definition of "cannot". Just because we cannot measure sufficiently with our currently available measurement devices does not preclude necessarily the existence of such information, which would move this uncertainty into the below category. Aleatoric is derived from the Latin alea or dice, referring to a game of chance.
Epistemic uncertainty: Epistemic uncertainty is also known as systematic uncertainty, and is due to things one could in principle know but does not in practice. This may be because a measurement is not accurate, because the model neglects certain effects, or because particular data have been deliberately hidden. An example of a source of this uncertainty would be the drag in an experiment designed to measure the acceleration of gravity near the earth's surface. The commonly used gravitational acceleration of 9.8 m/s² ignores the effects of air resistance, but the air resistance for the object could be measured and incorporated into the experiment to reduce the resulting uncertainty in the calculation of the gravitational acceleration.
Combined occurrence and interaction of aleatoric and epistemic uncertainty: Aleatoric and epistemic uncertainty can also occur simultaneously in a single term, e.g., when experimental parameters show aleatoric uncertainty and those experimental parameters are input to a computer simulation. If a surrogate model, e.g. a Gaussian process or a polynomial chaos expansion, is then learnt from computer experiments for the uncertainty quantification, this surrogate exhibits epistemic uncertainty that depends on, or interacts with, the aleatoric uncertainty of the experimental parameters. Such an uncertainty cannot solely be classified as aleatoric or epistemic any more, but is a more general inferential uncertainty.
In real life applications, both kinds of uncertainties are present. Uncertainty quantification intends to explicitly express both types of uncertainty separately. The quantification for the aleatoric uncertainties can be relatively straightforward, where traditional (frequentist) probability is the most basic form. Techniques such as the Monte Carlo method are frequently used. A probability distribution can be represented by its moments (in the Gaussian case, the mean and covariance suffice, although, in general, even knowledge of all moments to arbitrarily high order still does not specify the distribution function uniquely), or more recently, by techniques such as Karhunen–Loève and polynomial chaos expansions. To evaluate epistemic uncertainties, the efforts are made to understand the (lack of) knowledge of the system, process or mechanism. Epistemic uncertainty is generally understood through the lens of Bayesian probability, where probabilities are interpreted as indicating how certain a rational person could be regarding a specific claim.
Mathematical perspective
In mathematics, uncertainty is often characterized in terms of a probability distribution. From that perspective, epistemic uncertainty means not being certain what the relevant probability distribution is, and aleatoric uncertainty means not being certain what a random sample drawn from a probability distribution will be.
Types of problems
There are two major types of problems in uncertainty quantification: one is the forward propagation of uncertainty (where the various sources of uncertainty are propagated through the model to predict the overall uncertainty in the system response) and the other is the inverse assessment of model uncertainty and parameter uncertainty (where the model parameters are calibrated simultaneously using test data). There has been a proliferation of research on the former problem and a majority of uncertainty analysis techniques were developed for it. On the other hand, the latter problem is drawing increasing attention in the engineering design community, since uncertainty quantification of a model and the subsequent predictions of the true system response(s) are of great interest in designing robust systems.
Forward
Uncertainty propagation is the quantification of uncertainties in system output(s) propagated from uncertain inputs. It focuses on the influence on the outputs from the parametric variability listed in the sources of uncertainty. The targets of uncertainty propagation analysis can be:
To evaluate low-order moments of the outputs, i.e. mean and variance.
To evaluate the reliability of the outputs. This is especially useful in reliability engineering where outputs of a system are usually closely related to the performance of the system.
To assess the complete probability distribution of the outputs. This is useful in the scenario of utility optimization where the complete distribution is used to calculate the utility.
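As a minimal illustration of the first two targets, a plain Monte Carlo propagation in Python (the toy model, the assumed input distributions, and the failure threshold are all invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    """Toy system response; stands in for an expensive computer model."""
    return x1 ** 2 + 3.0 * x2

# Uncertain inputs: x1 ~ Normal(1, 0.1), x2 ~ Uniform(0, 0.5)  (assumed distributions)
n = 100_000
x1 = rng.normal(1.0, 0.1, n)
x2 = rng.uniform(0.0, 0.5, n)
y = model(x1, x2)

print(f"mean of output     : {y.mean():.4f}")
print(f"variance of output : {y.var():.4f}")
print(f"P(y > 2.4), a reliability-style target: {(y > 2.4).mean():.4f}")
```

The same samples can also be used to estimate the complete output distribution (e.g. as a histogram or empirical CDF), which is the third target listed above.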
Inverse
Given some experimental measurements of a system and some computer simulation results from its mathematical model, inverse uncertainty quantification estimates the discrepancy between the experiment and the mathematical model (which is called bias correction), and estimates the values of unknown parameters in the model if there are any (which is called parameter calibration or simply calibration). Generally this is a much more difficult problem than forward uncertainty propagation; however it is of great importance since it is typically implemented in a model updating process. There are several scenarios in inverse uncertainty quantification:
Bias correction only
Bias correction quantifies the model inadequacy, i.e. the discrepancy between the experiment and the mathematical model. The general model updating formula for bias correction is:
$y^e(\mathbf{x}) = y^m(\mathbf{x}) + \delta(\mathbf{x}) + \varepsilon,$
where $y^e(\mathbf{x})$ denotes the experimental measurements as a function of several input variables $\mathbf{x}$, $y^m(\mathbf{x})$ denotes the computer model (mathematical model) response, $\delta(\mathbf{x})$ denotes the additive discrepancy function (aka bias function), and $\varepsilon$ denotes the experimental uncertainty. The objective is to estimate the discrepancy function $\delta(\mathbf{x})$, and as a by-product, the resulting updated model is $y^m(\mathbf{x}) + \delta(\mathbf{x})$. A prediction confidence interval is provided with the updated model as the quantification of the uncertainty.
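A minimal sketch of bias correction under the formulation above: the residuals between synthetic "experimental" data and a deliberately incomplete computer model are fitted with a low-order polynomial standing in for the discrepancy function δ(x); in practice a Gaussian-process prior on δ is more common. All data, models, and noise levels here are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def computer_model(x):            # y_m(x): deliberately misses a quadratic trend
    return 2.0 * x

x_obs = np.linspace(0.0, 5.0, 20)
y_exp = 2.0 * x_obs + 0.3 * x_obs ** 2 + rng.normal(0.0, 0.2, x_obs.size)  # y_e(x)

# Fit the discrepancy delta(x) = y_e(x) - y_m(x) with a quadratic polynomial
residuals = y_exp - computer_model(x_obs)
delta = np.poly1d(np.polyfit(x_obs, residuals, deg=2))

def updated_model(x):             # y_m(x) + delta(x)
    return computer_model(x) + delta(x)

x_new = 4.2
print(f"computer model alone : {computer_model(x_new):.2f}")
print(f"bias-corrected model : {updated_model(x_new):.2f}  (truth ~ {2*x_new + 0.3*x_new**2:.2f})")
```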
Parameter calibration only
Parameter calibration estimates the values of one or more unknown parameters in a mathematical model. The general model updating formulation for calibration is:
$y^e(\mathbf{x}) = y^m(\mathbf{x}, \boldsymbol{\theta}^*) + \varepsilon,$
where $y^m(\mathbf{x}, \boldsymbol{\theta})$ denotes the computer model response that depends on several unknown model parameters $\boldsymbol{\theta}$, and $\boldsymbol{\theta}^*$ denotes the true values of the unknown parameters in the course of experiments. The objective is to either estimate $\boldsymbol{\theta}^*$, or to come up with a probability distribution of $\boldsymbol{\theta}^*$ that encompasses the best knowledge of the true parameter values.
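A minimal frequentist-style sketch of the calibration problem above: a single unknown parameter θ of a toy computer model is estimated from synthetic noisy measurements with nonlinear least squares, and the covariance returned by the fit gives an approximate confidence interval (anticipating the frequentist approach discussed later in this article). The model, data, and noise level are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def computer_model(x, theta):
    """Toy model response y_m(x, theta) with one unknown parameter."""
    return 1.0 - np.exp(-theta * x)

# Synthetic experiment: true theta* = 0.8 plus measurement noise epsilon
x_obs = np.linspace(0.1, 5.0, 25)
y_obs = computer_model(x_obs, 0.8) + rng.normal(0.0, 0.02, x_obs.size)

theta_hat, cov = curve_fit(computer_model, x_obs, y_obs, p0=[0.5])
std_err = np.sqrt(np.diag(cov))[0]
print(f"calibrated theta = {theta_hat[0]:.3f} +/- {1.96 * std_err:.3f} (approx. 95% CI)")
```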
Bias correction and parameter calibration
It considers an inaccurate model with one or more unknown parameters, and its model updating formulation combines the two together:
$y^e(\mathbf{x}) = y^m(\mathbf{x}, \boldsymbol{\theta}^*) + \delta(\mathbf{x}) + \varepsilon.$
It is the most comprehensive model updating formulation that includes all possible sources of uncertainty, and it requires the most effort to solve.
Selective methodologies
Much research has been done to solve uncertainty quantification problems, though a majority of them deal with uncertainty propagation. During the past one to two decades, a number of approaches for inverse uncertainty quantification problems have also been developed and have proved to be useful for most small- to medium-scale problems.
Forward propagation
Existing uncertainty propagation approaches include probabilistic approaches and non-probabilistic approaches. There are basically six categories of probabilistic approaches for uncertainty propagation:
Simulation-based methods: Monte Carlo simulations, importance sampling, adaptive sampling, etc.
General surrogate-based methods: In a non-intrusive approach, a surrogate model is learnt in order to replace the experiment or the simulation with a cheap and fast approximation. Surrogate-based methods can also be employed in a fully Bayesian fashion. This approach has proven particularly powerful when the cost of sampling, e.g. computationally expensive simulations, is prohibitively high.
Local expansion-based methods: Taylor series, perturbation method, etc. These methods have advantages when dealing with relatively small input variability and outputs that don't express high nonlinearity. These linear or linearized methods are detailed in the article Uncertainty propagation.
Functional expansion-based methods: Neumann expansion, orthogonal or Karhunen–Loeve expansions (KLE), with polynomial chaos expansion (PCE) and wavelet expansions as special cases.
Most probable point (MPP)-based methods: first-order reliability method (FORM) and second-order reliability method (SORM).
Numerical integration-based methods: Full factorial numerical integration (FFNI) and dimension reduction (DR).
For non-probabilistic approaches, interval analysis, fuzzy theory, possibility theory and evidence theory are among the most widely used.
The probabilistic approach is considered as the most rigorous approach to uncertainty analysis in engineering design due to its consistency with the theory of decision analysis. Its cornerstone is the calculation of probability density functions for sampling statistics. This can be performed rigorously for random variables that are obtainable as transformations of Gaussian variables, leading to exact confidence intervals.
Inverse uncertainty
Frequentist
In regression analysis and least squares problems, the standard error of parameter estimates is readily available, which can be expanded into a confidence interval.
Bayesian
Several methodologies for inverse uncertainty quantification exist under the Bayesian framework. The most complicated direction is to aim at solving problems with both bias correction and parameter calibration. The challenges of such problems include not only the influences from model inadequacy and parameter uncertainty, but also the lack of data from both computer simulations and experiments. A common situation is that the input settings are not the same over experiments and simulations. Another common situation is that parameters derived from experiments are input to simulations. For computationally expensive simulations, a surrogate model, e.g. a Gaussian process or a polynomial chaos expansion, is then often necessary, defining an inverse problem for finding the surrogate model that best approximates the simulations.
Modular approach
An approach to inverse uncertainty quantification is the modular Bayesian approach. The modular Bayesian approach derives its name from its four-module procedure. Apart from the current available data, a prior distribution of unknown parameters should be assigned.
Module 1: Gaussian process modeling for the computer model
To address the issue from lack of simulation results, the computer model is replaced with a Gaussian process (GP) model
$y^m(\mathbf{x}, \boldsymbol{\theta}) \sim \mathcal{GP}\big(\mathbf{h}^m(\mathbf{x}, \boldsymbol{\theta})^{\mathsf{T}} \boldsymbol{\beta}^m,\ \sigma_m^2 R^m\big((\mathbf{x}, \boldsymbol{\theta}), (\mathbf{x}', \boldsymbol{\theta}')\big)\big),$
where
$R^m\big((\mathbf{x}, \boldsymbol{\theta}), (\mathbf{x}', \boldsymbol{\theta}')\big) = \exp\Big\{-\sum_{k=1}^{d} \omega_k^m (x_k - x_k')^2\Big\} \exp\Big\{-\sum_{k=1}^{r} \omega_{d+k}^m (\theta_k - \theta_k')^2\Big\},$
$d$ is the dimension of input variables, and $r$ is the dimension of unknown parameters. While the regression basis $\mathbf{h}^m(\cdot)$ is pre-defined, $\{\boldsymbol{\beta}^m, \sigma_m^2, \boldsymbol{\omega}^m\}$, known as hyperparameters of the GP model, need to be estimated via maximum likelihood estimation (MLE). This module can be considered as a generalized kriging method.
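A hedged sketch of the spirit of Module 1, using scikit-learn's GaussianProcessRegressor rather than the exact formulation above: the kernel choice, the toy simulator, and the design points are illustrative assumptions, and the hyperparameters are fitted by maximising the marginal likelihood, which plays the role of the MLE step.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def computer_model(x, theta):
    """Toy simulator y_m(x, theta); pretend each call is expensive."""
    return np.sin(x) + theta * x

# Small design over (x, theta): the GP is trained on the joint inputs, as in Module 1
rng = np.random.default_rng(3)
X_design = rng.uniform([0.0, 0.0], [5.0, 1.0], size=(30, 2))   # columns: x, theta
y_design = computer_model(X_design[:, 0], X_design[:, 1])

kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0])     # anisotropic RBF kernel
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_design, y_design)

# The surrogate now replaces the simulator at unseen (x, theta) points
X_test = np.array([[2.5, 0.3]])
mean, std = gp.predict(X_test, return_std=True)
print(f"surrogate prediction {mean[0]:.3f} +/- {std[0]:.3f}; "
      f"simulator gives {computer_model(2.5, 0.3):.3f}")
```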
Module 2: Gaussian process modeling for the discrepancy function
Similarly with the first module, the discrepancy function is replaced with a GP model
$\delta(\mathbf{x}) \sim \mathcal{GP}\big(\mathbf{h}^{\delta}(\mathbf{x})^{\mathsf{T}} \boldsymbol{\beta}^{\delta},\ \sigma_{\delta}^2 R^{\delta}(\mathbf{x}, \mathbf{x}')\big),$
where
$R^{\delta}(\mathbf{x}, \mathbf{x}') = \exp\Big\{-\sum_{k=1}^{d} \omega_k^{\delta} (x_k - x_k')^2\Big\}.$
Together with the prior distribution of unknown parameters, and data from both computer models and experiments, one can derive the maximum likelihood estimates for $\{\boldsymbol{\beta}^{\delta}, \sigma_{\delta}^2, \boldsymbol{\omega}^{\delta}\}$. At the same time, the estimate of $\{\boldsymbol{\beta}^m, \sigma_m^2, \boldsymbol{\omega}^m\}$ from Module 1 gets updated as well.
Module 3: Posterior distribution of unknown parameters
Bayes' theorem is applied to calculate the posterior distribution of the unknown parameters:
$p(\boldsymbol{\theta} \mid \text{data}, \boldsymbol{\varphi}) \propto p(\text{data} \mid \boldsymbol{\theta}, \boldsymbol{\varphi})\, p(\boldsymbol{\theta}),$
where $\boldsymbol{\varphi}$ includes all the fixed hyperparameters in previous modules.
Module 4: Prediction of the experimental response and discrepancy function
Full approach
Fully Bayesian approach requires that not only the priors for unknown parameters but also the priors for the other hyperparameters should be assigned. It follows the following steps:
Derive the posterior distribution $p(\boldsymbol{\theta}, \boldsymbol{\varphi} \mid \text{data})$;
Integrate $\boldsymbol{\varphi}$ out and obtain $p(\boldsymbol{\theta} \mid \text{data})$. This single step accomplishes the calibration;
Prediction of the experimental response and discrepancy function.
However, the approach has significant drawbacks:
For most cases, $p(\boldsymbol{\theta}, \boldsymbol{\varphi} \mid \text{data})$ is a highly intractable function of $\boldsymbol{\varphi}$. Hence the integration becomes very troublesome. Moreover, if priors for the other hyperparameters are not carefully chosen, the complexity in numerical integration increases even more.
In the prediction stage, the prediction (which should at least include the expected value of system responses) also requires numerical integration. Markov chain Monte Carlo (MCMC) is often used for integration; however it is computationally expensive.
The fully Bayesian approach requires a huge amount of calculations and may not yet be practical for dealing with the most complicated modelling situations.
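For orientation, the fragment below sketches the kind of MCMC step referred to above: a random-walk Metropolis sampler drawing from a one-dimensional density known only up to a normalising constant. The target density is an arbitrary toy choice, not an actual calibration posterior.

```python
import math
import random

random.seed(0)

def log_unnormalised_posterior(theta):
    """Toy target: a Gaussian bump centred at 2 with unit variance (up to a constant)."""
    return -0.5 * (theta - 2.0) ** 2

def metropolis(n_samples, step=0.8, theta0=0.0):
    samples, theta = [], theta0
    log_p = log_unnormalised_posterior(theta)
    for _ in range(n_samples):
        proposal = theta + random.gauss(0.0, step)                      # random-walk proposal
        log_p_prop = log_unnormalised_posterior(proposal)
        if random.random() < math.exp(min(0.0, log_p_prop - log_p)):    # accept/reject step
            theta, log_p = proposal, log_p_prop
        samples.append(theta)
    return samples

draws = metropolis(20_000)[5_000:]                                      # discard burn-in
print(f"posterior mean ~ {sum(draws) / len(draws):.3f} (target 2.0)")
```

In a real fully Bayesian calibration, each evaluation of the target density involves the GP likelihood over all hyperparameters, which is what makes the computation expensive.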
Known issues
The theories and methodologies for uncertainty propagation are much better established, compared with inverse uncertainty quantification. For the latter, several difficulties remain unsolved:
Dimensionality issue: The computational cost increases dramatically with the dimensionality of the problem, i.e. the number of input variables and/or the number of unknown parameters.
Identifiability issue: Multiple combinations of unknown parameters and discrepancy function can yield the same experimental prediction. Hence different values of parameters cannot be distinguished/identified. This issue is circumvented in a Bayesian approach, where such combinations are averaged over.
Incomplete model response: Refers to a model not having a solution for some combinations of the input variables.
Quantifying uncertainty in the input quantities: Crucial events missing in the available data or critical quantities unidentified to analysts due to, e.g., limitations in existing models.
Little consideration of the impact of choices made by analysts.
See also
Computer experiment
Further research is needed
Quantification of margins and uncertainties
Probabilistic numerics
Bayesian regression
Bayesian probability
References
Applied mathematics
Mathematical modeling
Operations research
Statistical theory
General chemistry
General chemistry (sometimes referred to as "gen chem") is offered by colleges and universities as an introductory level chemistry course usually taken by students during their first year. The course is usually run with a concurrent lab section that gives students an opportunity to experience a laboratory environment and carry out experiments with the material learned in the course. These labs can consist of acid-base titrations, kinetics, equilibrium reactions, and electrochemical reactions. Chemistry majors as well as students across STEM majors such as biology, biochemistry, biomedicine, physics, and engineering are usually required to complete one year of general chemistry as well.
Concepts taught
The concepts taught in a typical general chemistry course are as follows:
Stoichiometry
Conservation of energy
Conservation of mass
Elementary atomic theory
Periodic table and periodicity
Law of constant composition
Gas laws
Nuclear chemistry
Solubility
Acid-base chemistry
Chemical bonding
Chemical kinetics
Thermodynamics
Electrochemistry
Chemical equilibria
Pre-medical track
Students in colleges and universities looking to follow the "pre-medical" track are required to pass general chemistry, as the Association of American Medical Colleges requires at least one full year of chemistry. In order to apply to medical school, students must take the Medical College Admission Test, or MCAT, which includes a section covering the foundations of general chemistry. General chemistry covers many of the principal foundations that apply to medicine and the human body and that are essential to the current understanding and practice of medicine.
Topics of general chemistry covered by the AAMC Medical College Admissions Test
Acids and bases
Atomic structure
Bonding and chemical interactions
Chemical kinetics
Electrochemistry
Equilibrium
Solutions
Stoichiometry
The gas phase
Thermochemistry
Redox reactions
"Weed out course"
Students who are enrolled in general chemistry often desire to become doctors, researchers, and educators. Because of the demands of these fields, professors believe that the level of rigor associated with general chemistry should be elevated above that of a typical introductory course. This has led the course to gain the title of a "weed out course", where students drop out of their respective majors due to the level of difficulty. Students can have different perceptions of the course based on their experiences, or lack thereof, in high school chemistry courses. Students who enroll in AP chemistry in high school, a course that mirrors what is covered in college, could be perceived as having an advantage over students who do not come to college with a strong chemistry background. Students who wish to be competitive in applying to medical schools try to achieve success in general chemistry, as the average GPA for medical school matriculants was 3.71 in 2017. This makes a merely passing grade unacceptable for students with medical school aspirations. General chemistry professors have been known to make tests worth a large portion of the course grade and to make them more challenging than the material itself suggests. Grade deflation, purposely adjusting the grades of a course to be lower, is also an issue in general chemistry courses at the undergraduate level.
References
External links
Chemistry education
Catalysis
Catalysis is the increase in rate of a chemical reaction due to an added substance known as a catalyst. Catalysts are not consumed by the reaction and remain unchanged after it. If the reaction is rapid and the catalyst recycles quickly, very small amounts of catalyst often suffice; mixing, surface area, and temperature are important factors in reaction rate. Catalysts generally react with one or more reactants to form intermediates that subsequently give the final reaction product, in the process of regenerating the catalyst.
The rate increase occurs because the catalyst allows the reaction to occur by an alternative mechanism which may be much faster than the non-catalyzed mechanism. However the non-catalyzed mechanism does remain possible, so that the total rate (catalyzed plus non-catalyzed) can only increase in the presence of the catalyst and never decrease.
Catalysis may be classified as either homogeneous, whose components are dispersed in the same phase (usually gaseous or liquid) as the reactant, or heterogeneous, whose components are not in the same phase. Enzymes and other biocatalysts are often considered as a third category.
Catalysis is ubiquitous in chemical industries of all kinds. Estimates are that 90% of all commercially produced chemical products involve catalysts at some stage in the process of their manufacture.
The term "catalyst" is derived from Greek , kataluein, meaning "loosen" or "untie". The concept of catalysis was invented by chemist Elizabeth Fulhame, based on her novel work in oxidation-reduction experiments.
General principles
Example
An illustrative example is the effect of catalysts to speed the decomposition of hydrogen peroxide into water and oxygen:
2 H2O2 → 2 H2O + O2
This reaction proceeds because the reaction products are more stable than the starting compound, but this decomposition is so slow that hydrogen peroxide solutions are commercially available. In the presence of a catalyst such as manganese dioxide this reaction proceeds much more rapidly. This effect is readily seen by the effervescence of oxygen. The catalyst is not consumed in the reaction, and may be recovered unchanged and re-used indefinitely. Accordingly, manganese dioxide is said to catalyze this reaction. In living organisms, this reaction is catalyzed by enzymes (proteins that serve as catalysts) such as catalase.
Another example is the effect of catalysts on air pollution and reducing the amount of carbon monoxide. Development of active and selective catalysts for the conversion of carbon monoxide into desirable products is one of the most important roles of catalysts. Using catalysts for hydrogenation of carbon monoxide helps to remove this toxic gas and also attain useful materials.
Units
The SI derived unit for measuring the catalytic activity of a catalyst is the katal, which is quantified in moles per second. The productivity of a catalyst can be described by the turnover number (or TON) and the catalytic activity by the turnover frequency (TOF), which is the TON per time unit. The biochemical equivalent is the enzyme unit. For more information on the efficiency of enzymatic catalysis, see the article on enzymes.
Catalytic reaction mechanisms
In general, chemical reactions occur faster in the presence of a catalyst because the catalyst provides an alternative reaction mechanism (reaction pathway) having a lower activation energy than the non-catalyzed mechanism. In catalyzed mechanisms, the catalyst is regenerated.
As a simple example occurring in the gas phase, the reaction 2 SO2 + O2 → 2 SO3 can be catalyzed by adding nitric oxide. The reaction occurs in two steps:
2NO + O2 → 2NO2 (rate-determining)
NO2 + SO2 → NO + SO3 (fast)
The NO catalyst is regenerated. The overall rate is the rate of the slow step
v = 2k1[NO]²[O2].
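The regeneration of the NO catalyst in this two-step mechanism can be checked numerically by integrating the corresponding rate equations; in the sketch below the rate constants and initial concentrations are invented, chosen only so that the second step is much faster than the first.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 0.5, 50.0   # invented rate constants; step 2 is much faster than step 1

def rates(t, c):
    no, o2, no2, so2, so3 = c
    r1 = k1 * no**2 * o2          # 2 NO + O2 -> 2 NO2   (rate-determining)
    r2 = k2 * no2 * so2           # NO2 + SO2 -> NO + SO3 (fast)
    return [-2*r1 + r2,           # d[NO]/dt
            -r1,                  # d[O2]/dt
            2*r1 - r2,            # d[NO2]/dt
            -r2,                  # d[SO2]/dt
            r2]                   # d[SO3]/dt

c0 = [0.1, 1.0, 0.0, 2.0, 0.0]    # initial [NO], [O2], [NO2], [SO2], [SO3]
sol = solve_ivp(rates, (0.0, 50.0), c0, method="LSODA")

no_end, _, _, _, so3_end = sol.y[:, -1]
print(f"[NO] start 0.100 -> end {no_end:.3f}  (catalyst regenerated)")
print(f"[SO3] formed: {so3_end:.3f}")
```

The integration shows [NO] returning essentially to its starting value while SO3 accumulates, which is the numerical signature of a regenerated catalyst.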
An example of heterogeneous catalysis is the reaction of oxygen and hydrogen on the surface of titanium dioxide (TiO2, or titania) to produce water. Scanning tunneling microscopy showed that the molecules undergo adsorption and dissociation. The dissociated, surface-bound O and H atoms diffuse together. The intermediate reaction states are: HO2, H2O2, then H3O2 and the reaction product (water molecule dimers), after which the water molecule desorbs from the catalyst surface.
Reaction energetics
Catalysts enable pathways that differ from the uncatalyzed reactions. These pathways have lower activation energy. Consequently, more molecular collisions have the energy needed to reach the transition state. Hence, catalysts can enable reactions that would otherwise be blocked or slowed by a kinetic barrier. The catalyst may increase the reaction rate or selectivity, or enable the reaction at lower temperatures. This effect can be illustrated with an energy profile diagram.
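The effect of a lowered activation energy can be put into numbers with the Arrhenius equation k = A·exp(−Ea/RT); the sketch below compares an assumed uncatalyzed barrier of 100 kJ/mol with an assumed catalyzed barrier of 60 kJ/mol at 300 K, sharing the same (also assumed) pre-exponential factor.

```python
import math

R = 8.314          # J/(mol*K), gas constant
T = 300.0          # K
A = 1.0e13         # 1/s, assumed identical pre-exponential factor for both pathways

def arrhenius(ea_kj_per_mol):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-ea_kj_per_mol * 1000.0 / (R * T))

k_uncat = arrhenius(100.0)   # uncatalyzed barrier
k_cat = arrhenius(60.0)      # catalyzed barrier (lower activation energy)

print(f"k(uncatalyzed) = {k_uncat:.3e} 1/s")
print(f"k(catalyzed)   = {k_cat:.3e} 1/s")
print(f"rate enhancement ~ {k_cat / k_uncat:.1e}x")
```

With these assumed numbers, a 40 kJ/mol reduction in the barrier speeds the reaction by roughly seven orders of magnitude, without changing the overall thermodynamics.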
In the catalyzed elementary reaction, catalysts do not change the extent of a reaction: they have no effect on the chemical equilibrium of a reaction. The ratio of the forward and the reverse reaction rates is unaffected (see also thermodynamics). The second law of thermodynamics describes why a catalyst does not change the chemical equilibrium of a reaction. Suppose there was such a catalyst that shifted an equilibrium. Introducing the catalyst to the system would result in a reaction to move to the new equilibrium, producing energy. Production of energy is a necessary result since reactions are spontaneous only if Gibbs free energy is produced, and if there is no energy barrier, there is no need for a catalyst. Then, removing the catalyst would also result in a reaction, producing energy; i.e. the addition and its reverse process, removal, would both produce energy. Thus, a catalyst that could change the equilibrium would be a perpetual motion machine, a contradiction to the laws of thermodynamics. Thus, catalysts do not alter the equilibrium constant. (A catalyst can however change the equilibrium concentrations by reacting in a subsequent step. It is then consumed as the reaction proceeds, and thus it is also a reactant. Illustrative is the base-catalyzed hydrolysis of esters, where the produced carboxylic acid immediately reacts with the base catalyst and thus the reaction equilibrium is shifted towards hydrolysis.)
The catalyst stabilizes the transition state more than it stabilizes the starting material. It decreases the kinetic barrier by decreasing the difference in energy between starting material and the transition state. It does not change the energy difference between starting materials and products (thermodynamic barrier), or the available energy (this is provided by the environment as heat or light).
Related concepts
Some so-called catalysts are really precatalysts. Precatalysts convert to catalysts in the reaction. For example, Wilkinson's catalyst RhCl(PPh3)3 loses one triphenylphosphine ligand before entering the true catalytic cycle. Precatalysts are easier to store but are easily activated in situ. Because of this preactivation step, many catalytic reactions involve an induction period.
In cooperative catalysis, chemical species that improve catalytic activity are called cocatalysts or promoters.
In tandem catalysis two or more different catalysts are coupled in a one-pot reaction.
In autocatalysis, the catalyst is a product of the overall reaction, in contrast to all other types of catalysis considered in this article. The simplest example of autocatalysis is a reaction of type A + B → 2 B, in one or in several steps. The overall reaction is just A → B, so that B is a product. But since B is also a reactant, it may be present in the rate equation and affect the reaction rate. As the reaction proceeds, the concentration of B increases and can accelerate the reaction as a catalyst. In effect, the reaction accelerates itself or is autocatalyzed. An example is the hydrolysis of an ester such as aspirin to a carboxylic acid and an alcohol. In the absence of added acid catalysts, the carboxylic acid product catalyzes the hydrolysis.
Switchable catalysis refers to a type of catalysis where the catalyst can be toggled between different ground states possessing distinct reactivity, typically by applying an external stimulus. This ability to reversibly switch the catalyst allows for spatiotemporal control over catalytic activity and selectivity. The external stimuli used to switch the catalyst can include changes in temperature, pH, light, electric fields, or the addition of chemical agents.
A true catalyst can work in tandem with a sacrificial catalyst. The true catalyst is consumed in the elementary reaction and turned into a deactivated form.
The sacrificial catalyst regenerates the true catalyst for another cycle. The sacrificial catalyst is consumed in the reaction, and as such, it is not really a catalyst, but a reagent. For example, osmium tetroxide (OsO4) is a good reagent for dihydroxylation, but it is highly toxic and expensive. In Upjohn dihydroxylation, the sacrificial catalyst N-methylmorpholine N-oxide (NMMO) regenerates OsO4, and only catalytic quantities of OsO4 are needed.
Classification
Catalysis may be classified as either homogeneous or heterogeneous. A homogeneous catalysis is one whose components are dispersed in the same phase (usually gaseous or liquid) as the reactants' molecules. A heterogeneous catalysis is one where the reaction components are not in the same phase. Enzymes and other biocatalysts are often considered a third category. Similar mechanistic principles apply to heterogeneous, homogeneous, and biocatalysis.
Heterogeneous catalysis
Heterogeneous catalysts act in a different phase than the reactants. Most heterogeneous catalysts are solids that act on substrates in a liquid or gaseous reaction mixture. Important heterogeneous catalysts include zeolites, alumina, higher-order oxides, graphitic carbon, transition metal oxides, metals such as Raney nickel for hydrogenation, and vanadium(V) oxide for oxidation of sulfur dioxide into sulfur trioxide by the contact process.
Diverse mechanisms for reactions on surfaces are known, depending on how the adsorption takes place (Langmuir-Hinshelwood, Eley-Rideal, and Mars-van Krevelen). The total surface area of a solid has an important effect on the reaction rate. The smaller the catalyst particle size, the larger the surface area for a given mass of particles.
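The size–area relationship can be made concrete: for uniform spherical particles of density ρ and radius r, the surface area per unit mass is 3/(ρr). The sketch below uses the density of platinum purely as an illustrative choice of material:

```python
# Specific surface area of monodisperse spheres: SSA = 3 / (rho * r)
rho_pt = 21_450.0  # density of platinum, kg/m^3 (illustrative choice of material)

for r_nm in (100.0, 10.0, 1.0):          # particle radii in nanometres
    r = r_nm * 1e-9                       # convert to metres
    ssa = 3.0 / (rho_pt * r)              # m^2 per kg
    print(f"r = {r_nm:5.1f} nm  ->  {ssa / 1000:8.1f} m^2 per gram")
```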
A heterogeneous catalyst has active sites, which are the atoms or crystal faces where the substrate actually binds. Active sites are atoms but are often described as a facet (edge, surface, step, etc.) of a solid. Most of the volume but also most of the surface of a heterogeneous catalyst may be catalytically inactive. Finding out the nature of the active site is technically challenging.
For example, the catalyst for the Haber process for the synthesis of ammonia from nitrogen and hydrogen is often described as iron. But detailed studies and many optimizations have led to catalysts that are mixtures of iron-potassium-calcium-aluminum-oxide. The reacting gases adsorb onto active sites on the iron particles. Once physically adsorbed, the reagents partially or wholly dissociate and form new bonds. In this way the particularly strong triple bond in nitrogen is broken, which would be extremely uncommon in the gas phase due to its high activation energy. Thus, the activation energy of the overall reaction is lowered, and the rate of reaction increases. Another place where a heterogeneous catalyst is applied is in the oxidation of sulfur dioxide on vanadium(V) oxide for the production of sulfuric acid. Many heterogeneous catalysts are in fact nanomaterials.
Heterogeneous catalysts are typically "supported," which means that the catalyst is dispersed on a second material that enhances the effectiveness or minimizes its cost. Supports prevent or minimize agglomeration and sintering of small catalyst particles, exposing more surface area, thus catalysts have a higher specific activity (per gram) on support. Sometimes the support is merely a surface on which the catalyst is spread to increase the surface area. More often, the support and the catalyst interact, affecting the catalytic reaction. Supports can also be used in nanoparticle synthesis by providing sites for individual molecules of catalyst to chemically bind. Supports are porous materials with a high surface area, most commonly alumina, zeolites or various kinds of activated carbon. Specialized supports include silicon dioxide, titanium dioxide, calcium carbonate, and barium sulfate.
Electrocatalysts
In the context of electrochemistry, specifically in fuel cell engineering, various metal-containing catalysts are used to enhance the rates of the half reactions that comprise the fuel cell. One common type of fuel cell electrocatalyst is based upon nanoparticles of platinum that are supported on slightly larger carbon particles. When in contact with one of the electrodes in a fuel cell, this platinum increases the rate of oxygen reduction either to water or to hydroxide or hydrogen peroxide.
Homogeneous catalysis
Homogeneous catalysts function in the same phase as the reactants. Typically homogeneous catalysts are dissolved in a solvent with the substrates. One example of homogeneous catalysis involves the influence of H+ on the esterification of carboxylic acids, such as the formation of methyl acetate from acetic acid and methanol. High-volume processes requiring a homogeneous catalyst include hydroformylation, hydrosilylation, and hydrocyanation. For inorganic chemists, homogeneous catalysis is often synonymous with organometallic catalysts. Many homogeneous catalysts are however not organometallic, illustrated by the use of cobalt salts that catalyze the oxidation of p-xylene to terephthalic acid.
Organocatalysis
Whereas transition metals sometimes attract most of the attention in the study of catalysis, small organic molecules without metals can also exhibit catalytic properties, as is apparent from the fact that many enzymes lack transition metals. Typically, organic catalysts require a higher loading (amount of catalyst per unit amount of reactant, expressed in mol% amount of substance) than transition metal(-ion)-based catalysts, but these catalysts are usually commercially available in bulk, helping to lower costs. In the early 2000s, these organocatalysts were considered "new generation" and are competitive to traditional metal(-ion)-containing catalysts. Organocatalysts are supposed to operate akin to metal-free enzymes utilizing, e.g., non-covalent interactions such as hydrogen bonding. The discipline organocatalysis is divided into the application of covalent (e.g., proline, DMAP) and non-covalent (e.g., thiourea organocatalysis) organocatalysts referring to the preferred catalyst-substrate binding and interaction, respectively. The Nobel Prize in Chemistry 2021 was awarded jointly to Benjamin List and David W.C. MacMillan "for the development of asymmetric organocatalysis."
Photocatalysts
Photocatalysis is the phenomenon in which the catalyst absorbs light to generate an excited state that can effect redox reactions. Singlet oxygen is usually produced by photocatalysis. Photocatalysts are components of dye-sensitized solar cells.
Enzymes and biocatalysts
In biology, enzymes are protein-based catalysts in metabolism and catabolism. Most biocatalysts are enzymes, but other non-protein-based classes of biomolecules also exhibit catalytic properties including ribozymes, and synthetic deoxyribozymes.
Biocatalysts can be thought of as an intermediate between homogeneous and heterogeneous catalysts, although strictly speaking soluble enzymes are homogeneous catalysts and membrane-bound enzymes are heterogeneous. Several factors affect the activity of enzymes (and other catalysts) including temperature, pH, the concentration of enzymes, substrate, and products. A particularly important reagent in enzymatic reactions is water, which is the product of many bond-forming reactions and a reactant in many bond-breaking processes.
In biocatalysis, enzymes are employed to prepare many commodity chemicals including high-fructose corn syrup and acrylamide.
Some monoclonal antibodies whose binding target is a stable molecule that resembles the transition state of a chemical reaction can function as weak catalysts for that chemical reaction by lowering its activation energy. Such catalytic antibodies are sometimes called "abzymes".
Significance
Estimates are that 90% of all commercially produced chemical products involve catalysts at some stage in the process of their manufacture. In 2005, catalytic processes generated about $900 billion in products worldwide. Catalysis is so pervasive that subareas are not readily classified. Some areas of particular concentration are surveyed below.
Energy processing
Petroleum refining makes intensive use of catalysis for alkylation, catalytic cracking (breaking long-chain hydrocarbons into smaller pieces), naphtha reforming and steam reforming (conversion of hydrocarbons into synthesis gas). Even the exhaust from the burning of fossil fuels is treated via catalysis: Catalytic converters, typically composed of platinum and rhodium, break down some of the more harmful byproducts of automobile exhaust.
2 CO + 2 NO → 2 CO2 + N2
With regard to synthetic fuels, an old but still important process is the Fischer-Tropsch synthesis of hydrocarbons from synthesis gas, which itself is processed via water-gas shift reactions, catalyzed by iron. The Sabatier reaction produces methane from carbon dioxide and hydrogen. Biodiesel and related biofuels require processing via both inorganic and biocatalysts.
Fuel cells rely on catalysts for both the anodic and cathodic reactions.
Catalytic heaters generate flameless heat from a supply of combustible fuel.
Bulk chemicals
Some of the largest-scale chemicals are produced via catalytic oxidation, often using oxygen. Examples include nitric acid (from ammonia), sulfuric acid (from sulfur dioxide to sulfur trioxide by the contact process), terephthalic acid from p-xylene, acrylic acid from propylene or propane and acrylonitrile from propane and ammonia.
The production of ammonia is one of the largest-scale and most energy-intensive processes. In the Haber process nitrogen is combined with hydrogen over an iron oxide catalyst. Methanol is prepared from carbon monoxide or carbon dioxide using copper-zinc catalysts.
Bulk polymers derived from ethylene and propylene are often prepared via Ziegler-Natta catalysis. Polyesters, polyamides, and isocyanates are derived via acid-base catalysis.
Most carbonylation processes require metal catalysts, examples include the Monsanto acetic acid process and hydroformylation.
Fine chemicals
Many fine chemicals are prepared via catalysis; methods include those of heavy industry as well as more specialized processes that would be prohibitively expensive on a large scale. Examples include the Heck reaction and Friedel–Crafts reactions. Because most bioactive compounds are chiral, many pharmaceuticals are produced by enantioselective catalysis (catalytic asymmetric synthesis). (R)-1,2-Propanediol, the precursor to the antibacterial levofloxacin, can be synthesized efficiently from hydroxyacetone using catalysts based on BINAP–ruthenium complexes in Noyori asymmetric hydrogenation.
Food processing
One of the most obvious applications of catalysis is the hydrogenation (reaction with hydrogen gas) of fats using nickel catalyst to produce margarine. Many other foodstuffs are prepared via biocatalysis (see below).
Environment
Catalysis affects the environment by increasing the efficiency of industrial processes, but catalysis also plays a direct role in the environment. A notable example is the catalytic role of chlorine free radicals in the breakdown of ozone. These radicals are formed by the action of ultraviolet radiation on chlorofluorocarbons (CFCs).
Cl + O3 → ClO + O2
ClO + O → Cl + O2
History
The term "catalyst", broadly defined as anything that increases the rate of a process, is derived from Greek καταλύειν, meaning "to annul," or "to untie," or "to pick up". The concept of catalysis was invented by chemist Elizabeth Fulhame and described in a 1794 book, based on her novel work in oxidation–reduction reactions. The first chemical reaction in organic chemistry that knowingly used a catalyst was studied in 1811 by Gottlieb Kirchhoff, who discovered the acid-catalyzed conversion of starch to glucose. The term catalysis was later used by Jöns Jakob Berzelius in 1835 to describe reactions that are accelerated by substances that remain unchanged after the reaction. Fulhame, who predated Berzelius, did work with water as opposed to metals in her reduction experiments. Other 18th century chemists who worked in catalysis were Eilhard Mitscherlich who referred to it as contact processes, and Johann Wolfgang Döbereiner who spoke of contact action. He developed Döbereiner's lamp, a lighter based on hydrogen and a platinum sponge, which became a commercial success in the 1820s that lives on today. Humphry Davy discovered the use of platinum in catalysis. In the 1880s, Wilhelm Ostwald at Leipzig University started a systematic investigation into reactions that were catalyzed by the presence of acids and bases, and found that chemical reactions occur at finite rates and that these rates can be used to determine the strengths of acids and bases. For this work, Ostwald was awarded the 1909 Nobel Prize in Chemistry. Vladimir Ipatieff performed some of the earliest industrial scale reactions, including the discovery and commercialization of oligomerization and the development of catalysts for hydrogenation.
Inhibitors, poisons, and promoters
An added substance that lowers the rate is called a reaction inhibitor if its effect is reversible and a catalyst poison if it is irreversible. Promoters are substances that increase the catalytic activity, even though they are not catalysts by themselves.
Inhibitors are sometimes referred to as "negative catalysts" since they decrease the reaction rate. However the term inhibitor is preferred since they do not work by introducing a reaction path with higher activation energy; this would not lower the rate since the reaction would continue to occur by the non-catalyzed path. Instead, they act either by deactivating catalysts or by removing reaction intermediates such as free radicals. In heterogeneous catalysis, coking inhibits the catalyst, which becomes covered by polymeric side products.
The inhibitor may modify selectivity in addition to rate. For instance, in the hydrogenation of alkynes to alkenes, a palladium (Pd) catalyst partly "poisoned" with lead(II) acetate (Pb(CH3COO)2) can be used (Lindlar catalyst). Without the deactivation of the catalyst, the alkene produced would be further hydrogenated to alkane.
The inhibitor can produce this effect by, e.g., selectively poisoning only certain types of active sites. Another mechanism is the modification of surface geometry. For instance, in hydrogenation operations, large planes of metal surface function as sites of hydrogenolysis catalysis while sites catalyzing hydrogenation of unsaturates are smaller. Thus, a poison that covers the surface randomly will tend to lower the number of uncontaminated large planes but leave proportionally smaller sites free, thus changing the hydrogenation vs. hydrogenolysis selectivity. Many other mechanisms are also possible.
Promoters can cover up the surface to prevent the production of a mat of coke, or even actively remove such material (e.g., rhenium on platinum in platforming). They can aid the dispersion of the catalytic material or bind to reagents.
See also
References
External links
Science Aid: Catalysts Page for high school level science
W.A. Herrmann Technische Universität presentation
Alumite Catalyst, Kameyama-Sakurai Laboratory, Japan
Inorganic Chemistry and Catalysis Group, Utrecht University, The Netherlands
Centre for Surface Chemistry and Catalysis
Carbons & Catalysts Group, University of Concepcion, Chile
Center for Enabling New Technologies Through Catalysis, An NSF Center for Chemical Innovation, USA
"Bubbles turn on chemical catalysts" , Science News magazine online, April 6, 2009.
Chemical kinetics
Catabolism | Catabolism is the set of metabolic pathways that breaks down molecules into smaller units that are either oxidized to release energy or used in other anabolic reactions. Catabolism breaks down large molecules (such as polysaccharides, lipids, nucleic acids, and proteins) into smaller units (such as monosaccharides, fatty acids, nucleotides, and amino acids, respectively). Catabolism is the breaking-down aspect of metabolism, whereas anabolism is the building-up aspect.
Cells use the monomers released from breaking down polymers to either construct new polymer molecules or degrade the monomers further to simple waste products, releasing energy. Cellular wastes include lactic acid, acetic acid, carbon dioxide, ammonia, and urea. The formation of these wastes is usually an oxidation process involving a release of chemical free energy, some of which is lost as heat, but the rest of which is used to drive the synthesis of adenosine triphosphate (ATP). This molecule acts as a way for the cell to transfer the energy released by catabolism to the energy-requiring reactions that make up anabolism.
Catabolism is a destructive metabolism and anabolism is a constructive metabolism. Catabolism, therefore, provides the chemical energy necessary for the maintenance and growth of cells. Examples of catabolic processes include glycolysis, the citric acid cycle, the breakdown of muscle protein in order to use amino acids as substrates for gluconeogenesis, the breakdown of fat in adipose tissue to fatty acids, and oxidative deamination of neurotransmitters by monoamine oxidase.
Catabolic hormones
There are many signals that control catabolism. Most of the known signals are hormones and the molecules involved in metabolism itself. Endocrinologists have traditionally classified many of the hormones as anabolic or catabolic, depending on which part of metabolism they stimulate. The so-called classic catabolic hormones known since the early 20th century are cortisol, glucagon, and adrenaline (and other catecholamines). In recent decades, many more hormones with at least some catabolic effects have been discovered, including cytokines, orexin (known as hypocretin), and melatonin.
Etymology
The word catabolism is from Neo-Latin, with roots from the Greek κάτω kato, "downward", and βάλλειν ballein, "to throw".
See also
Autophagy
Dehydration synthesis
Hydrolysis
Nocturnal post absorptive catabolism
Psilacetin § Pharmacology
Sarcopenia
References
External links
Metabolism | 0.783336 | 0.995921 | 0.78014 |
Computational science | Computational science, also known as scientific computing, technical computing or scientific computation (SC), is a division of science, and more specifically the Computer Sciences, which uses advanced computing capabilities to understand and solve complex physical problems. While this discussion typically extenuates into Visual Computation, this research field of study will typically include the following research categorizations.
Algorithms (numerical and non-numerical): mathematical models, computational models, and computer simulations developed to solve sciences (physical, biological, social), engineering, and humanities problems
Computer hardware that develops and optimizes the advanced system hardware, firmware, networking, and data management components needed to solve computationally demanding problems
The computing infrastructure that supports both the science and engineering problem solving and the developmental computer and information science
In practical use, it is typically the application of computer simulation and other forms of computation from numerical analysis and theoretical computer science to solve problems in various scientific disciplines. The field is different from theory and laboratory experiments, which are the traditional forms of science and engineering. The scientific computing approach is to gain understanding through the analysis of mathematical models implemented on computers. Scientists and engineers develop computer programs and application software that model systems being studied and run these programs with various sets of input parameters. The essence of computational science is the application of numerical algorithms and computational mathematics. In some cases, these models require massive amounts of calculations (usually floating-point) and are often executed on supercomputers or distributed computing platforms.
The computational scientist
The term computational scientist is used to describe someone skilled in scientific computing. Such a person is usually a scientist, an engineer, or an applied mathematician who applies high-performance computing in different ways to advance the state-of-the-art in their respective applied disciplines in physics, chemistry, or engineering.
Computational science is now commonly considered a third mode of science, complementing and adding to experimentation/observation and theory. Here, one defines a system as a potential source of data, an experiment as a process of extracting data from a system by exercising it through its inputs, and a model (M) for a system (S) and an experiment (E) as anything to which E can be applied in order to answer questions about S. A computational scientist should be capable of:
recognizing complex problems
adequately conceptualizing the system containing these problems
designing a framework of algorithms suitable for studying this system: the simulation
choosing a suitable computing infrastructure (parallel computing/grid computing/supercomputers)
hereby, maximizing the computational power of the simulation
assessing to what level the output of the simulation resembles the systems: the model is validated
adjusting the conceptualization of the system accordingly
repeat the cycle until a suitable level of validation is obtained: the computational scientist trusts that the simulation generates adequately realistic results for the system under the studied conditions
Substantial effort in computational sciences has been devoted to developing algorithms, efficient implementation in programming languages, and validating computational results. A collection of problems and solutions in computational science can be found in Steeb, Hardy, Hardy, and Stoop (2004).
Philosophers of science addressed the question to what degree computational science qualifies as science, among them Humphreys and Gelfert. They address the general question of epistemology: how does one gain insight from such computational science approaches? Tolk uses these insights to show the epistemological constraints of computer-based simulation research. As computational science uses mathematical models representing the underlying theory in executable form, in essence, it applies modeling (theory building) and simulation (implementation and execution). While simulation and computational science are our most sophisticated way to express our knowledge and understanding, they also come with all constraints and limits already known for computational solutions.
Applications of computational science
Problem domains for computational science/scientific computing include:
Predictive computational science
Predictive computational science is a scientific discipline concerned with the formulation, calibration, numerical solution, and validation of mathematical models designed to predict specific aspects of physical events, given initial and boundary conditions, and a set of characterizing parameters and associated uncertainties. In typical cases, the predictive statement is formulated in terms of probabilities. For example, given a mechanical component and a periodic loading condition, "the probability is (say) 90% that the number of cycles at failure (Nf) will be in the interval N1<Nf<N2".
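Such a probabilistic prediction is often obtained by propagating parameter uncertainty through the model with Monte Carlo sampling. The sketch below assumes, purely for illustration, that the cycles-to-failure Nf follow a lognormal distribution with made-up parameters; the quantity reported is the estimated probability that N1 < Nf < N2:

```python
import math
import random

random.seed(0)

# Assumed (illustrative) lognormal model for cycles to failure
mu, sigma = math.log(1e5), 0.4      # hypothetical calibrated parameters
N1, N2 = 5e4, 2e5                   # hypothetical interval of interest

samples = 100_000
hits = sum(1 for _ in range(samples)
           if N1 < random.lognormvariate(mu, sigma) < N2)
print(f"P({N1:.0e} < Nf < {N2:.0e}) is approximately {hits / samples:.2f}")
```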
Urban complex systems
Cities are massively complex systems created by humans, made up of humans, and governed by humans. Trying to predict, understand and somehow shape the development of cities in the future requires complex thinking and computational models and simulations to help mitigate challenges and possible disasters. The focus of research in urban complex systems is, through modeling and simulation, to build a greater understanding of city dynamics and help prepare for the coming urbanization.
Computational finance
In financial markets, huge volumes of interdependent assets are traded by a large number of interacting market participants in different locations and time zones. Their behavior is of unprecedented complexity and the characterization and measurement of the risk inherent to this highly diverse set of instruments is typically based on complicated mathematical and computational models. Solving these models exactly in closed form, even at a single instrument level, is typically not possible, and therefore we have to look for efficient numerical algorithms. This has become even more urgent and complex recently, as the credit crisis has clearly demonstrated the role of cascading effects going from single instruments through portfolios of single institutions to even the interconnected trading network. Understanding this requires a multi-scale and holistic approach where interdependent risk factors such as market, credit, and liquidity risk are modeled simultaneously and at different interconnected scales.
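As a toy example of such a numerical approach, the price of a single European call option under standard Black–Scholes assumptions can be estimated by Monte Carlo simulation of the terminal asset price; all parameter values below are illustrative, not drawn from any real market data:

```python
import math
import random

random.seed(1)

S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.2, 1.0   # illustrative parameters
n = 200_000

payoffs = []
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    # Terminal price under geometric Brownian motion
    ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    payoffs.append(max(ST - K, 0.0))

price = math.exp(-r * T) * sum(payoffs) / n          # discounted expected payoff
print(f"Monte Carlo call price is approximately {price:.2f}")
```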
Computational biology
Exciting new developments in biotechnology are now revolutionizing biology and biomedical research. Examples of these techniques are high-throughput sequencing, high-throughput quantitative PCR, intra-cellular imaging, in-situ hybridization of gene expression, three-dimensional imaging techniques like Light Sheet Fluorescence Microscopy, and Optical Projection (micro)-Computer Tomography. Given the massive amounts of complicated data that is generated by these techniques, their meaningful interpretation, and even their storage, form major challenges calling for new approaches. Going beyond current bioinformatics approaches, computational biology needs to develop new methods to discover meaningful patterns in these large data sets. Model-based reconstruction of gene networks can be used to organize the gene expression data in a systematic way and to guide future data collection. A major challenge here is to understand how gene regulation is controlling fundamental biological processes like biomineralization and embryogenesis. The sub-processes like gene regulation, organic molecules interacting with the mineral deposition process, cellular processes, physiology, and other processes at the tissue and environmental levels are linked. Rather than being directed by a central control mechanism, biomineralization and embryogenesis can be viewed as an emergent behavior resulting from a complex system in which several sub-processes on very different temporal and spatial scales (ranging from nanometer and nanoseconds to meters and years) are connected into a multi-scale system. One of the few available options to understand such systems is by developing a multi-scale model of the system.
Complex systems theory
Using information theory, non-equilibrium dynamics, and explicit simulations, computational systems theory tries to uncover the true nature of complex adaptive systems.
Computational science and engineering
Computational science and engineering (CSE) is a relatively new discipline that deals with the development and application of computational models and simulations, often coupled with high-performance computing, to solve complex physical problems arising in engineering analysis and design (computational engineering) as well as natural phenomena (computational science). CSE has become accepted amongst scientists, engineers and academics as the "third mode of discovery" (next to theory and experimentation). In many fields, computer simulation is integral and therefore essential to business and research. Computer simulation provides the capability to enter fields that are either inaccessible to traditional experimentation or where carrying out traditional empirical inquiries is prohibitively expensive. CSE should neither be confused with pure computer science, nor with computer engineering, although a wide domain in the former is used in CSE (e.g., certain algorithms, data structures, parallel programming, high-performance computing), and some problems in the latter can be modeled and solved with CSE methods (as an application area).
Methods and algorithms
Algorithms and mathematical methods used in computational science are varied. Commonly applied methods include the following (a short sketch of two of them, finite differences and Newton's method, follows the list):
Computer algebra, including symbolic computation in fields such as statistics, equation solving, algebra, calculus, geometry, linear algebra, tensor analysis (multilinear algebra), optimization
Numerical analysis, including Computing derivatives by finite differences
Application of Taylor series as convergent and asymptotic series
Computing derivatives by Automatic differentiation (AD)
Finite element method for solving PDEs
High order difference approximations via Taylor series and Richardson extrapolation
Methods of integration on a uniform mesh: rectangle rule (also called midpoint rule), trapezoid rule, Simpson's rule
Runge–Kutta methods for solving ordinary differential equations
Newton's method
Discrete Fourier transform
Monte Carlo methods
Numerical linear algebra, including decompositions and eigenvalue algorithms
Linear programming
Branch and cut
Branch and bound
Molecular dynamics, Car–Parrinello molecular dynamics
Space mapping
Time stepping methods for dynamical systems
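Two of the methods listed above—central finite differences for derivatives and Newton's method for root finding—can be sketched in a few lines of Python; the test function is arbitrary and chosen only for illustration:

```python
import math

def derivative(f, x, h=1e-6):
    """Central finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def newton(f, x0, tol=1e-12, max_iter=50):
    """Newton's method using the finite-difference derivative above."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / derivative(f, x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: math.cos(x) - x        # arbitrary test function
root = newton(f, x0=1.0)
print(root, f(root))                 # root of cos(x) = x, approx 0.739085
```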
Historically and today, Fortran remains popular for most applications of scientific computing. Other programming languages and computer algebra systems commonly used for the more mathematical aspects of scientific computing applications include GNU Octave, Haskell, Julia, Maple, Mathematica, MATLAB, Python (with third-party SciPy library), Perl (with third-party PDL library), R, Scilab, and TK Solver. The more computationally intensive aspects of scientific computing will often use some variation of C or Fortran and optimized algebra libraries such as BLAS or LAPACK. In addition, parallel computing is heavily used in scientific computing to find solutions of large problems in a reasonable amount of time. In this framework, the problem is either divided over many cores on a single CPU node (such as with OpenMP), divided over many CPU nodes networked together (such as with MPI), or is run on one or more GPUs (typically using either CUDA or OpenCL).
Computational science application programs often model real-world changing conditions, such as weather, airflow around a plane, automobile body distortions in a crash, the motion of stars in a galaxy, an explosive device, etc. Such programs might create a 'logical mesh' in computer memory where each item corresponds to an area in space and contains information about that space relevant to the model. For example, in weather models, each item might be a square kilometer; with land elevation, current wind direction, humidity, temperature, pressure, etc. The program would calculate the likely next state based on the current state, in simulated time steps, solving differential equations that describe how the system operates, and then repeat the process to calculate the next state.
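The update-each-cell-then-step pattern described above can be sketched with a one-dimensional heat-flow model on a small logical mesh; the grid size, diffusivity, and time step below are arbitrary illustrative values, not taken from any real weather or engineering code:

```python
# Explicit time stepping of 1-D diffusion on a "logical mesh" of cells
nx, alpha, dx, dt = 50, 1.0, 1.0, 0.2          # dt chosen so alpha*dt/dx^2 <= 0.5
u = [0.0] * nx
u[nx // 2] = 100.0                              # initial hot spot in the middle

for _ in range(500):
    new_u = u[:]
    for i in range(1, nx - 1):                  # update interior cells from their neighbours
        new_u[i] = u[i] + alpha * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u = new_u

print(max(u), sum(u))   # the peak has spread out; sum(u) shows how much heat remains on the grid
```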
Conferences and journals
In 2001, the International Conference on Computational Science (ICCS) was first organized. Since then, it has been organized yearly. ICCS is an A-rank conference in the CORE ranking.
The Journal of Computational Science published its first issue in May 2010. The Journal of Open Research Software was launched in 2012.
The ReScience C initiative, which is dedicated to replicating computational results, was started on GitHub in 2015.
Education
At some institutions, a specialization in scientific computation can be earned as a "minor" within another program (which may be at varying levels). However, there are increasingly many bachelor's, master's, and doctoral programs in computational science. The joint master's degree program in computational science offered by the University of Amsterdam and the Vrije Universiteit Amsterdam was first offered in 2004. In this program, students:
learn to build computational models from real-life observations;
develop skills in turning these models into computational structures and in performing large-scale simulations;
learn theories that will give a firm basis for the analysis of complex systems;
learn to analyze the results of simulations in a virtual laboratory using advanced numerical algorithms.
ETH Zurich offers a bachelor's and master's degree in Computational Science and Engineering. The degree equips students with the ability to understand scientific problems and apply numerical methods to solve them. The directions of specialization include physics, chemistry, biology, and other scientific and engineering disciplines.
George Mason University has offered a multidisciplinary doctorate Ph.D. program in Computational Sciences and Informatics starting from 1992.
The School of Computational and Integrative Sciences, Jawaharlal Nehru University (erstwhile School of Information Technology) also offers a vibrant master's science program for computational science with two specialties: Computational Biology and Complex Systems.
Subfields
Bioinformatics
Car–Parrinello molecular dynamics
Cheminformatics
Chemometrics
Computational archaeology
Computational astrophysics
Computational biology
Computational chemistry
Computational materials science
Computational economics
Computational electromagnetics
Computational engineering
Computational finance
Computational fluid dynamics
Computational forensics
Computational geophysics
Computational history
Computational informatics
Computational intelligence
Computational law
Computational linguistics
Computational mathematics
Computational mechanics
Computational neuroscience
Computational particle physics
Computational physics
Computational sociology
Computational statistics
Computational sustainability
Computer algebra
Computer simulation
Financial modeling
Geographic information science
Geographic information system (GIS)
High-performance computing
Machine learning
Network analysis
Neuroinformatics
Numerical linear algebra
Numerical weather prediction
Pattern recognition
Scientific visualization
Simulation
See also
Computational science and engineering
Modeling and simulation
Comparison of computer algebra systems
Differentiable programming
List of molecular modeling software
List of numerical analysis software
List of statistical packages
Timeline of scientific computing
Simulated reality
Extensions for Scientific Computation (XSC)
References
Additional sources
E. Gallopoulos and A. Sameh, "CSE: Content and Product". IEEE Computational Science and Engineering Magazine, 4(2):39–43 (1997)
G. Hager and G. Wellein, Introduction to High Performance Computing for Scientists and Engineers, Chapman and Hall (2010)
A.K. Hartmann, Practical Guide to Computer Simulations, World Scientific (2009)
Journal Computational Methods in Science and Technology (open access), Polish Academy of Sciences
Journal Computational Science and Discovery, Institute of Physics
R.H. Landau, C.C. Bordeianu, and M. Jose Paez, A Survey of Computational Physics: Introductory Computational Science, Princeton University Press (2008)
External links
Journal of Computational Science
The Journal of Open Research Software
The National Center for Computational Science at Oak Ridge National Laboratory
Applied mathematics
Computational fields of study | 0.785292 | 0.993318 | 0.780045 |
Differential calculus | In mathematics, differential calculus is a subfield of calculus that studies the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus—the study of the area beneath a curve.
The primary objects of study in differential calculus are the derivative of a function, related notions such as the differential, and their applications. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined at that point. For a real-valued function of a single real variable, the derivative of a function at a point generally determines the best linear approximation to the function at that point.
Differential calculus and integral calculus are connected by the fundamental theorem of calculus. This states that differentiation is the reverse process to integration.
Differentiation has applications in nearly all quantitative disciplines. In physics, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of the velocity with respect to time is acceleration. The derivative of the momentum of a body with respect to time equals the force applied to the body; rearranging this derivative statement leads to the famous equation associated with Newton's second law of motion. The reaction rate of a chemical reaction is a derivative. In operations research, derivatives determine the most efficient ways to transport materials and design factories.
Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in describing natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory, and abstract algebra.
Derivative
The derivative of f(x) at the point x = a is the slope of the tangent to the graph of f at (a, f(a)). In order to gain an intuition for this, one must first be familiar with finding the slope of a linear equation, written in the form y = mx + b. The slope of an equation is its steepness. It can be found by picking any two points and dividing the change in y by the change in x, meaning that slope = (change in y)/(change in x). For example, the graph of y = −2x + 13 has a slope of −2.
For brevity, "change in y divided by change in x" is often written as Δy/Δx, with Δ being the Greek letter delta, meaning 'change in'. The slope of a linear equation is constant, meaning that the steepness is the same everywhere. However, many graphs, such as y = x², vary in their steepness. This means that you can no longer pick any two arbitrary points and compute the slope. Instead, the slope of the graph can be computed by considering the tangent line—a line that 'just touches' a particular point. The slope of a curve at a particular point is equal to the slope of the tangent to that point. For example, y = x² has a slope of 4 at x = 2 because the slope of the tangent line to that point is equal to 4.
The derivative of a function is then simply the slope of this tangent line. Even though the tangent line only touches a single point at the point of tangency, it can be approximated by a line that goes through two points. This is known as a secant line. If the two points that the secant line goes through are close together, then the secant line closely resembles the tangent line, and, as a result, its slope is also very similar:
The advantage of using a secant line is that its slope can be calculated directly. Consider the two points on the graph (x, f(x)) and (x + h, f(x + h)), where h is a small number. As before, the slope of the line passing through these two points can be calculated with the formula slope = Δy/Δx. This gives

slope = [f(x + h) − f(x)] / [(x + h) − x] = [f(x + h) − f(x)] / h.
As h gets closer and closer to 0, the slope of the secant line gets closer and closer to the slope of the tangent line. This is formally written as

lim_{h → 0} [f(x + h) − f(x)] / h.
The above expression means 'as h gets closer and closer to 0, the slope of the secant line gets closer and closer to a certain value'. The value that is being approached is the derivative of f(x); this can be written as f′(x). If y = f(x), the derivative can also be written as dy/dx, with d representing an infinitesimal change. For example, dx represents an infinitesimal change in x. In summary, if y = f(x), then the derivative of f(x) is

dy/dx = f′(x) = lim_{h → 0} [f(x + h) − f(x)] / h,
provided such a limit exists. We have thus succeeded in properly defining the derivative of a function, meaning that the 'slope of the tangent line' now has a precise mathematical meaning. Differentiating a function using the above definition is known as differentiation from first principles. Here is a proof, using differentiation from first principles, that the derivative of y = x² is 2x:
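The displayed computation below carries out this first-principles differentiation explicitly for f(x) = x²:

\[
\frac{dy}{dx}
\;=\; \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h}
\;=\; \lim_{h \to 0} \frac{x^2 + 2xh + h^2 - x^2}{h}
\;=\; \lim_{h \to 0} \frac{2xh + h^2}{h}
\;=\; \lim_{h \to 0} \,(2x + h).
\]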
As h approaches 0, 2x + h approaches 2x. Therefore, dy/dx = 2x. This proof can be generalised to show that the derivative of axⁿ is anxⁿ⁻¹ if a and n are constants. This is known as the power rule. For example, the derivative of x⁵ is 5x⁴. However, many other functions cannot be differentiated as easily as polynomial functions, meaning that sometimes further techniques are needed to find the derivative of a function. These techniques include the chain rule, product rule, and quotient rule. Other functions cannot be differentiated at all, giving rise to the concept of differentiability.
A closely related concept to the derivative of a function is its differential. When x and y are real variables, the derivative of f at x is the slope of the tangent line to the graph of f at x. Because the source and target of f are one-dimensional, the derivative of f is a real number. If x and y are vectors, then the best linear approximation to the graph of f depends on how f changes in several directions at once. Taking the best linear approximation in a single direction determines a partial derivative, which is usually denoted ∂y/∂x. The linearization of f in all directions at once is called the total derivative.
History of differentiation
The concept of a derivative in the sense of a tangent line is a very old one, familiar to ancient Greek mathematicians such as Euclid (c. 300 BC), Archimedes (c. 287–212 BC), and Apollonius of Perga (c. 262–190 BC). Archimedes also made use of indivisibles, although these were primarily used to study areas and volumes rather than derivatives and tangents (see The Method of Mechanical Theorems).
The use of infinitesimals to compute rates of change was developed significantly by Bhāskara II (1114–1185); indeed, it has been argued that many of the key notions of differential calculus can be found in his work, such as "Rolle's theorem".
The mathematician Sharaf al-Dīn al-Tūsī (1135–1213), in his Treatise on Equations, established conditions for some cubic equations to have solutions, by finding the maxima of appropriate cubic polynomials. He obtained, for example, that the maximum (for positive x) of the cubic ax² − x³ occurs when x = 2a/3, and concluded therefrom that the equation ax² − x³ = c has exactly one positive solution when c = 4a³/27, and two positive solutions whenever 0 < c < 4a³/27. The historian of science, Roshdi Rashed, has argued that al-Tūsī must have used the derivative of the cubic to obtain this result. Rashed's conclusion has been contested by other scholars, however, who argue that he could have obtained the result by other methods which do not require the derivative of the function to be known.
The modern development of calculus is usually credited to Isaac Newton (1643–1727) and Gottfried Wilhelm Leibniz (1646–1716), who provided independent and unified approaches to differentiation and derivatives. The key insight, however, that earned them this credit, was the fundamental theorem of calculus relating differentiation and integration: this rendered obsolete most previous methods for computing areas and volumes. For their ideas on derivatives, both Newton and Leibniz built on significant earlier work by mathematicians such as Pierre de Fermat (1607-1665), Isaac Barrow (1630–1677), René Descartes (1596–1650), Christiaan Huygens (1629–1695), Blaise Pascal (1623–1662) and John Wallis (1616–1703). Regarding Fermat's influence, Newton once wrote in a letter that "I had the hint of this method [of fluxions] from Fermat's way of drawing tangents, and by applying it to abstract equations, directly and invertedly, I made it general." Isaac Barrow is generally given credit for the early development of the derivative. Nevertheless, Newton and Leibniz remain key figures in the history of differentiation, not least because Newton was the first to apply differentiation to theoretical physics, while Leibniz systematically developed much of the notation still used today.
Since the 17th century many mathematicians have contributed to the theory of differentiation. In the 19th century, calculus was put on a much more rigorous footing by mathematicians such as Augustin Louis Cauchy (1789–1857), Bernhard Riemann (1826–1866), and Karl Weierstrass (1815–1897). It was also during this period that the differentiation was generalized to Euclidean space and the complex plane.
The 20th century brought two major steps towards our present understanding and practice of differentiation: Lebesgue integration, besides extending integral calculus to many more functions, clarified the relation between differentiation and integration with the notion of absolute continuity. Later the theory of distributions (after Laurent Schwartz) extended differentiation to generalized functions (e.g., the Dirac delta function previously introduced in quantum mechanics) and became fundamental to modern applied analysis, especially through the use of weak solutions to partial differential equations.
Applications of derivatives
Optimization
If f is a differentiable function on ℝ (or an open interval) and x is a local maximum or a local minimum of f, then the derivative of f at x is zero. Points where f′(x) = 0 are called critical points or stationary points (and the value of f at x is called a critical value). If f is not assumed to be everywhere differentiable, then points at which it fails to be differentiable are also designated critical points.
If f is twice differentiable, then conversely, a critical point x of f can be analysed by considering the second derivative of f at x:
if it is positive, x is a local minimum;
if it is negative, x is a local maximum;
if it is zero, then x could be a local minimum, a local maximum, or neither. (For example, f(x) = x³ has a critical point at x = 0, but it has neither a maximum nor a minimum there, whereas f(x) = ±x⁴ has a critical point at x = 0 and a minimum and a maximum, respectively, there.)
This is called the second derivative test. An alternative approach, called the first derivative test, involves considering the sign of f′ on each side of the critical point.
Taking derivatives and solving for critical points is therefore often a simple way to find local minima or maxima, which can be useful in optimization. By the extreme value theorem, a continuous function on a closed interval must attain its minimum and maximum values at least once. If the function is differentiable, the minima and maxima can only occur at critical points or endpoints.
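As a small illustration (using the symbolic-mathematics library SymPy and an arbitrary test function, not an example from the text), critical points and the second derivative test can be automated:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x                      # arbitrary test function

f1 = sp.diff(f, x)                  # first derivative: 3*x**2 - 3
f2 = sp.diff(f, x, 2)               # second derivative: 6*x

for c in sp.solve(sp.Eq(f1, 0), x):  # critical points: x = -1 and x = 1
    curvature = f2.subs(x, c)
    kind = "minimum" if curvature > 0 else "maximum" if curvature < 0 else "inconclusive"
    print(f"x = {c}: local {kind}, f(x) = {f.subs(x, c)}")
```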
This also has applications in graph sketching: once the local minima and maxima of a differentiable function have been found, a rough plot of the graph can be obtained from the observation that it will be either increasing or decreasing between critical points.
In higher dimensions, a critical point of a scalar valued function is a point at which the gradient is zero. The second derivative test can still be used to analyse critical points by considering the eigenvalues of the Hessian matrix of second partial derivatives of the function at the critical point. If all of the eigenvalues are positive, then the point is a local minimum; if all are negative, it is a local maximum. If there are some positive and some negative eigenvalues, then the critical point is called a "saddle point", and if none of these cases hold (i.e., some of the eigenvalues are zero) then the test is considered to be inconclusive.
Calculus of variations
One example of an optimization problem is: Find the shortest curve between two points on a surface, assuming that the curve must also lie on the surface. If the surface is a plane, then the shortest curve is a line. But if the surface is, for example, egg-shaped, then the shortest path is not immediately clear. These paths are called geodesics, and one of the most fundamental problems in the calculus of variations is finding geodesics. Another example is: Find the smallest area surface filling in a closed curve in space. This surface is called a minimal surface and it, too, can be found using the calculus of variations.
Physics
Calculus is of vital importance in physics: many physical processes are described by equations involving derivatives, called differential equations. Physics is particularly concerned with the way quantities change and develop over time, and the concept of the "time derivative" — the rate of change over time — is essential for the precise definition of several important concepts. In particular, the time derivatives of an object's position are significant in Newtonian physics:
velocity is the derivative (with respect to time) of an object's displacement (distance from the original position)
acceleration is the derivative (with respect to time) of an object's velocity, that is, the second derivative (with respect to time) of an object's position.
For example, if an object's position on a line is given by

x(t) = −16t² + 16t + 32,

then the object's velocity is

dx/dt = −32t + 16,

and the object's acceleration is

d²x/dt² = −32,

which is constant.
Differential equations
A differential equation is a relation between a collection of functions and their derivatives. An ordinary differential equation is a differential equation that relates functions of one variable to their derivatives with respect to that variable. A partial differential equation is a differential equation that relates functions of more than one variable to their partial derivatives. Differential equations arise naturally in the physical sciences, in mathematical modelling, and within mathematics itself. For example, Newton's second law, which describes the relationship between acceleration and force, can be stated as the ordinary differential equation

F(t) = m (d²x/dt²).
The heat equation in one space variable, which describes how heat diffuses through a straight rod, is the partial differential equation

∂u/∂t = α (∂²u/∂x²).
Here u(x, t) is the temperature of the rod at position x and time t, and α is a constant that depends on how fast heat diffuses through the rod.
Mean value theorem
The mean value theorem gives a relationship between values of the derivative and values of the original function. If f is a real-valued function and a and b are numbers with a < b, then the mean value theorem says that under mild hypotheses, the slope between the two points (a, f(a)) and (b, f(b)) is equal to the slope of the tangent line to f at some point c between a and b. In other words,

f′(c) = [f(b) − f(a)] / (b − a).
In practice, what the mean value theorem does is control a function in terms of its derivative. For instance, suppose that f has derivative equal to zero at each point. This means that its tangent line is horizontal at every point, so the function should also be horizontal. The mean value theorem proves that this must be true: The slope between any two points on the graph of f must equal the slope of one of the tangent lines of f. All of those slopes are zero, so any line from one point on the graph to another point will also have slope zero. But that says that the function does not move up or down, so it must be a horizontal line. More complicated conditions on the derivative lead to less precise but still highly useful information about the original function.
Taylor polynomials and Taylor series
The derivative gives the best possible linear approximation of a function at a given point, but this can be very different from the original function. One way of improving the approximation is to take a quadratic approximation. That is to say, the linearization of a real-valued function f(x) at the point x₀ is a linear polynomial a + b(x − x₀), and it may be possible to get a better approximation by considering a quadratic polynomial a + b(x − x₀) + c(x − x₀)². Still better might be a cubic polynomial a + b(x − x₀) + c(x − x₀)² + d(x − x₀)³, and this idea can be extended to arbitrarily high degree polynomials. For each one of these polynomials, there should be a best possible choice of coefficients a, b, c, and d that makes the approximation as good as possible.
In the neighbourhood of x₀, the best possible choice of a is always f(x₀), and the best possible choice of b is always f′(x₀). For c, d, and higher-degree coefficients, these coefficients are determined by higher derivatives of f: c should always be f″(x₀)/2, and d should always be f‴(x₀)/3!. Using these coefficients gives the Taylor polynomial of f. The Taylor polynomial of degree n is the polynomial of degree n which best approximates f, and its coefficients can be found by a generalization of the above formulas. Taylor's theorem gives a precise bound on how good the approximation is. If f is a polynomial of degree less than or equal to n, then the Taylor polynomial of degree n equals f.
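In symbols, the degree-n Taylor polynomial of f about the point x₀ collects these coefficients as

\[
P_n(x) \;=\; f(x_0) + f'(x_0)\,(x - x_0) + \frac{f''(x_0)}{2!}\,(x - x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}\,(x - x_0)^n .
\]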
The limit of the Taylor polynomials is an infinite series called the Taylor series. The Taylor series is frequently a very good approximation to the original function. Functions which are equal to their Taylor series are called analytic functions. It is impossible for functions with discontinuities or sharp corners to be analytic; moreover, there exist smooth functions which are also not analytic.
Implicit function theorem
Some natural geometric shapes, such as circles, cannot be drawn as the graph of a function. For instance, if f(x, y) = x² + y² − 1, then the circle is the set of all pairs (x, y) such that f(x, y) = 0. This set is called the zero set of f, and is not the same as the graph of f, which is a paraboloid. The implicit function theorem converts relations such as f(x, y) = 0 into functions. It states that if f is continuously differentiable, then around most points, the zero set of f looks like graphs of functions pasted together. The points where this is not true are determined by a condition on the derivative of f. The circle, for instance, can be pasted together from the graphs of the two functions y = ±√(1 − x²). In a neighborhood of every point on the circle except (−1, 0) and (1, 0), one of these two functions has a graph that looks like the circle. (These two functions also happen to meet (−1, 0) and (1, 0), but this is not guaranteed by the implicit function theorem.)
The implicit function theorem is closely related to the inverse function theorem, which states when a function looks like graphs of invertible functions pasted together.
See also
Differential (calculus)
Numerical differentiation
Techniques for differentiation
List of calculus topics
Notation for differentiation
Notes
References
Citations
Works cited
Other sources
Boman, Eugene, and Robert Rogers. Differential Calculus: From Practice to Theory. 2022, personal.psu.edu/ecb5/DiffCalc.pdf .
Calculus | 0.782727 | 0.996503 | 0.77999 |
Scholarly method | The scholarly method or scholarship is the body of principles and practices used by scholars and academics to make their claims about their subjects of expertise as valid and trustworthy as possible, and to make them known to the scholarly public. It comprises the methods that systemically advance the teaching, research, and practice of a scholarly or academic field of study through rigorous inquiry. Scholarship is creative, can be documented, can be replicated or elaborated, and can be and is peer reviewed through various methods. The scholarly method includes the subcategories of the scientific method, with which scientists bolster their claims, and the historical method, with which historians verify their claims.
Methods
The historical method comprises the techniques and guidelines by which historians research primary sources and other evidence, and then write history. The question of the nature, and indeed the possibility, of sound historical method is raised in the philosophy of history, as a question of epistemology. History guidelines commonly used by historians in their work require external criticism, internal criticism, and synthesis.
The empirical method is generally taken to mean the collection of data on which to base a hypothesis or derive a conclusion in science. It is part of the scientific method, but is often mistakenly assumed to be synonymous with other methods. The empirical method is not sharply defined and is often contrasted with the precision of experiments, where data emerges from the systematic manipulation of variables. The experimental method investigates causal relationships among variables. An experiment is a cornerstone of the empirical approach to acquiring data about the world and is used in both natural sciences and social sciences. An experiment can be used to help solve practical problems and to support or negate theoretical assumptions.
The scientific method refers to a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on gathering observable, empirical and measurable evidence subject to specific principles of reasoning. A scientific method consists of the collection of data through observation and experimentation, and the formulation and testing of hypotheses.
See also
Academia
Academic authorship
Academic publishing
Discipline (academia)
Doctor (title)
Ethics
Historical revisionism
History of scholarship
Manual of style
Professor
Source criticism
Urtext edition
Wissenschaft
References
Academia
Methodology
Equilibrium chemistry
Equilibrium chemistry is concerned with systems in chemical equilibrium. The unifying principle is that the free energy of a system at equilibrium is the minimum possible, so that the slope of the free energy with respect to the reaction coordinate is zero. This principle, applied to mixtures at equilibrium provides a definition of an equilibrium constant. Applications include acid–base, host–guest, metal–complex, solubility, partition, chromatography and redox equilibria.
Thermodynamic equilibrium
A chemical system is said to be in equilibrium when the quantities of the chemical entities involved do not and cannot change in time without the application of an external influence. In this sense a system in chemical equilibrium is in a stable state. The system at chemical equilibrium will be at a constant temperature, pressure (or volume) and composition. It will be closed to the exchange of matter with the surroundings, that is, it is a closed system. A change of temperature, pressure (or volume) constitutes an external influence and the equilibrium quantities will change as a result of such a change. If there is a possibility that the composition might change, but the rate of change is negligibly slow, the system is said to be in a metastable state. The equation of chemical equilibrium can be expressed symbolically as
reactant(s) ⇌ product(s)
The sign ⇌ means "are in equilibrium with". This definition refers to macroscopic properties. Changes do occur at the microscopic level of atoms and molecules, but to such a minute extent that they are not measurable, and in a balanced way, so that the macroscopic quantities do not change. Chemical equilibrium is a dynamic state in which forward and backward reactions proceed at such rates that the macroscopic composition of the mixture is constant. Thus, the equilibrium sign symbolizes the fact that reactions occur in both forward and backward directions.
A steady state, on the other hand, is not necessarily an equilibrium state in the chemical sense. For example, in a radioactive decay chain the concentrations of intermediate isotopes are constant because the rate of production is equal to the rate of decay. It is not a chemical equilibrium because the decay process occurs in one direction only.
Thermodynamic equilibrium is characterized by the free energy for the whole (closed) system being a minimum. For systems at constant volume the Helmholtz free energy is minimum and for systems at constant pressure the Gibbs free energy is minimum. Thus a metastable state is one for which the free energy change between reactants and products is not minimal even though the composition does not change in time.
The existence of this minimum is due to the free energy of mixing of reactants and products being always negative. For ideal solutions the enthalpy of mixing is zero, so the minimum exists because the entropy of mixing is always positive. The slope of the reaction free energy, δGr with respect to the reaction coordinate, ξ, is zero when the free energy is at its minimum value.
Equilibrium constant
Chemical potential is the partial molar free energy. The potential, μi, of the ith species in a chemical reaction is the partial derivative of the free energy with respect to the number of moles of that species, Ni:
A general chemical equilibrium can be written as
nj are the stoichiometric coefficients of the reactants in the equilibrium equation, and mj are the coefficients of the products. The value of δGr for these reactions is a function of the chemical potentials of all the species.
The chemical potential, μi, of the ith species can be calculated in terms of its activity, ai.
The activity enters through μi = μio + RT ln ai, where μio is the standard chemical potential of the species, R is the gas constant and T is the temperature. Setting the sum for the reactants j to be equal to the sum for the products, k, so that δGr(Eq) = 0:
Rearranging the terms,
This relates the standard Gibbs free energy change, ΔGo, to an equilibrium constant, K, the reaction quotient of activity values at equilibrium: ΔGo = −RT ln K.
It follows that any equilibrium of this kind can be characterized either by the standard free energy change or by the equilibrium constant. In practice concentrations are more useful than activities. Activities can be calculated from concentrations if the activity coefficients are known, but this is rarely the case. Sometimes activity coefficients can be calculated using, for example, Pitzer equations or specific ion interaction theory. Otherwise conditions must be adjusted so that activity coefficients do not vary much. For ionic solutions this is achieved by using a background ionic medium at a high concentration relative to the concentrations of the species in equilibrium.
If activity coefficients are unknown they may be subsumed into the equilibrium constant, which becomes a concentration quotient. Each activity ai is assumed to be the product of a concentration, [Ai], and an activity coefficient, γi:
This expression for activity is placed in the expression defining the equilibrium constant.
By setting the quotient of activity coefficients, Γ, equal to one, the equilibrium constant is defined as a quotient of concentrations.
In more familiar notation, for a general equilibrium
α A + β B ... ⇌ σ S + τ T ...
This definition is much more practical, but an equilibrium constant defined in terms of concentrations is dependent on conditions. In particular, equilibrium constants for species in aqueous solution are dependent on ionic strength, as the quotient of activity coefficients varies with the ionic strength of the solution.
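As a concrete illustration of this concentration-quotient definition, the following minimal Python sketch evaluates K for a hypothetical equilibrium from assumed equilibrium concentrations; the species, coefficients and numerical values are invented for the example.
# Concentration quotient for the general equilibrium alpha A + beta B <=> sigma S + tau T:
# K = [S]^sigma [T]^tau / ([A]^alpha [B]^beta).  Products get positive exponents,
# reactants negative ones.
def concentration_quotient(concentrations, stoichiometry):
    q = 1.0
    for species, coefficient in stoichiometry.items():
        q *= concentrations[species] ** coefficient
    return q

# Hypothetical equilibrium A + 2 B <=> S with assumed equilibrium concentrations (mol/L)
stoichiometry = {"A": -1, "B": -2, "S": 1}
concentrations = {"A": 0.10, "B": 0.20, "S": 0.05}
print(concentration_quotient(concentrations, stoichiometry))   # 0.05/(0.10*0.20**2) = 12.5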
The values of the standard free energy change and of the equilibrium constant are temperature dependent. To a first approximation, the van 't Hoff equation may be used.
This shows that when the reaction is exothermic (ΔHo, the standard enthalpy change, is negative), then K decreases with increasing temperature, in accordance with Le Châtelier's principle. The approximation involved is that the standard enthalpy change, ΔHo, is independent of temperature, which is a good approximation only over a small temperature range. Thermodynamic arguments can be used to show that
d(ΔHo)/dT = ΔCp, where ΔCp is the change in heat capacity, at constant pressure, accompanying the reaction (Kirchhoff's law).
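The temperature dependence can be sketched numerically with the integrated van 't Hoff equation, ln(K2/K1) = −(ΔHo/R)(1/T2 − 1/T1), under the stated approximation that ΔHo is constant; the equilibrium constant and enthalpy used below are illustrative only.
import math

R = 8.314  # gas constant, J/(mol K)

def k_at_temperature(k1, t1, t2, delta_h):
    # Integrated van 't Hoff equation, with the standard enthalpy change
    # assumed independent of temperature over the interval.
    return k1 * math.exp(-(delta_h / R) * (1.0 / t2 - 1.0 / t1))

k_298 = 6.0e5            # assumed equilibrium constant at 298 K
delta_h = -92_000.0      # J/mol, exothermic (roughly ammonia synthesis)
print(k_at_temperature(k_298, 298.0, 400.0, delta_h))
# K drops by several orders of magnitude on heating, as expected for an exothermic reaction.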
Equilibria involving gases
When dealing with gases, fugacity, f, is used rather than activity. However, whereas activity is dimensionless, fugacity has the dimension of pressure. A consequence is that chemical potential has to be defined in terms of a standard pressure, po:
By convention po is usually taken to be 1 bar.
Fugacity can be expressed as the product of partial pressure, p, and a fugacity coefficient, Φ:
Fugacity coefficients are dimensionless and can be obtained experimentally at specific temperature and pressure, from measurements of deviations from ideal gas behaviour. Equilibrium constants are defined in terms of fugacity. If the gases are at sufficiently low pressure that they behave as ideal gases, the equilibrium constant can be defined as a quotient of partial pressures.
An example of gas-phase equilibrium is provided by the Haber–Bosch process of ammonia synthesis.
N2 + 3 H2 ⇌ 2 NH3;
This reaction is strongly exothermic, so the equilibrium constant decreases with temperature. However, a temperature of around 400 °C is required in order to achieve a reasonable rate of reaction with currently available catalysts. Formation of ammonia is also favoured by high pressure, as the volume decreases when the reaction takes place. The same reaction, nitrogen fixation, occurs at ambient temperatures in nature, when the catalyst is an enzyme such as nitrogenase. Much energy is needed initially to break the nitrogen–nitrogen triple bond even though the overall reaction is exothermic.
Gas-phase equilibria occur during combustion and were studied as early as 1943 in connection with the development of the V2 rocket engine.
The calculation of composition for a gaseous equilibrium at constant pressure is often carried out using ΔG values, rather than equilibrium constants.
Multiple equilibria
Two or more equilibria can exist at the same time. When this is so, equilibrium constants can be ascribed to individual equilibria, but they are not always unique. For example, three equilibrium constants can be defined for a dibasic acid, H2A.
A2− + H+ ⇌ HA−;
HA− + H+ ⇌ H2A;
A2− + 2 H+ ⇌ H2A;
The three constants are not independent of each other and it is easy to see that β = K1K2. The constants K1 and K2 are stepwise constants and β is an example of an overall constant.
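A short Python sketch of these relations, using hypothetical pK values, checks that β = K1K2 and shows how the three species of a dibasic acid are distributed as the pH changes.
# Stepwise (K1, K2) and overall (beta) protonation constants of a dibasic acid H2A,
# with hypothetical pK values, and the resulting speciation as a function of pH.
log_K1, log_K2 = 10.3, 6.3
K1, K2 = 10.0**log_K1, 10.0**log_K2
beta = K1 * K2                       # overall constant: beta = K1 * K2

def fractions(pH):
    h = 10.0**(-pH)
    denominator = 1.0 + K1 * h + beta * h**2
    return (1.0 / denominator,            # fraction present as A2-
            K1 * h / denominator,         # fraction present as HA-
            beta * h**2 / denominator)    # fraction present as H2A

for pH in (4.0, 8.0, 12.0):
    print(pH, [round(f, 3) for f in fractions(pH)])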
Speciation
The concentrations of species in equilibrium are usually calculated under the assumption that activity coefficients are either known or can be ignored. In this case, each equilibrium constant for the formation of a complex in a set of multiple equilibria can be defined as follows
α A + β B ... ⇌ AαBβ...;
The concentrations of species containing reagent A are constrained by a condition of mass-balance, that is, the total (or analytical) concentration, which is the sum of all species' concentrations, must be constant. There is one mass-balance equation for each reagent of the type
There are as many mass-balance equations as there are reagents, A, B..., so if the equilibrium constant values are known, there are n mass-balance equations in n unknowns, [A], [B]..., the so-called free reagent concentrations. Solution of these equations gives all the information needed to calculate the concentrations of all the species.
Thus, the importance of equilibrium constants lies in the fact that, once their values have been determined by experiment, they can be used to calculate the concentrations, known as the speciation, of mixtures that contain the relevant species.
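As a minimal sketch of such a speciation calculation, assume a single 1:1 complex M + L ⇌ ML with a known stability constant; the two mass-balance equations then reduce to one equation in the free concentration [M], which can be solved numerically (all values below are hypothetical).
from scipy.optimize import brentq

# One complex only: M + L <=> ML with stability constant K = [ML]/([M][L]).
# TM and TL are the total (analytical) concentrations; all values are hypothetical.
K, TM, TL = 1.0e4, 0.010, 0.015

def mass_balance(m):
    l = TL - TM + m               # follows from subtracting the two mass-balance equations
    return m + K * m * l - TM     # TM = [M] + [ML]

m_free = brentq(mass_balance, 1e-15, TM)    # free [M]
l_free = TL - TM + m_free                   # free [L]
print(m_free, l_free, K * m_free * l_free)  # free M, free L and the complex ML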
Determination
There are five main types of experimental data that are used for the determination of solution equilibrium constants. Potentiometric data obtained with a glass electrode are the most widely used with aqueous solutions. The others are spectrophotometric measurements, fluorescence (luminescence) measurements, NMR chemical shift measurements and calorimetric measurements; simultaneous measurement of K and ΔH for 1:1 adducts in biological systems is routinely carried out using isothermal titration calorimetry.
The experimental data will comprise a set of data points. At the i'th data point, the analytical concentrations of the reactants, TA(i), TB(i) etc. will be experimentally known quantities and there will be one or more measured quantities, yi, that depend in some way on the analytical concentrations and equilibrium constants. A general computational procedure has three main components.
Definition of a chemical model of the equilibria. The model consists of a list of reagents, A, B, etc. and the complexes formed from them, with stoichiometries ApBq... Known or estimated values of the equilibrium constants for the formation of all complexes must be supplied.
Calculation of the concentrations of all the chemical species in each solution. The free concentrations are calculated by solving the equations of mass-balance, and the concentrations of the complexes are calculated using the equilibrium constant definitions. A quantity corresponding to the observed quantity can then be calculated using physical principles such as the Nernst potential or Beer-Lambert law which relate the calculated quantity to the concentrations of the species.
Refinement of the equilibrium constants. Usually a non-linear least squares procedure is used, in which a weighted sum of squares, U, is minimized. The weights, wi, and quantities y may be vectors. Values of the equilibrium constants are refined in an iterative procedure.
Acid–base equilibria
Brønsted and Lowry characterized an acid–base equilibrium as involving a proton exchange reaction:
acid + base ⇌ conjugate base + conjugate acid.
An acid is a proton donor; the proton is transferred to the base, a proton acceptor, creating a conjugate acid. For aqueous solutions of an acid HA, the base is water; the conjugate base is A− and the conjugate acid is the solvated hydrogen ion. In solution chemistry, it is usual to use H+ as an abbreviation for the solvated hydrogen ion, regardless of the solvent. In aqueous solution H+ denotes a solvated hydronium ion.
The Brønsted–Lowry definition applies to other solvents, such as dimethyl sulfoxide: the solvent S acts as a base, accepting a proton and forming the conjugate acid SH+. A broader definition of acid dissociation includes hydrolysis, in which protons are produced by the splitting of water molecules. For example, boric acid, B(OH)3, acts as a weak acid, even though it is not a proton donor, because of the hydrolysis equilibrium
B(OH)3 + H2O ⇌ B(OH)4− + H+.
Similarly, metal ion hydrolysis causes hydrated metal ions, such as the hexaaqua ions of trivalent metals, to behave as weak acids:
[M(H2O)6]3+ ⇌ [M(H2O)5(OH)]2+ + H+.
Acid–base equilibria are important in a very wide range of applications, such as acid–base homeostasis, ocean acidification, pharmacology and analytical chemistry.
Host–guest equilibria
A host–guest complex, also known as a donor–acceptor complex, may be formed from a Lewis base, B, and a Lewis acid, A. The host may be either a donor or an acceptor. In biochemistry host–guest complexes are known as receptor-ligand complexes; they are formed primarily by non-covalent bonding. Many host–guest complexes have 1:1 stoichiometry, but many others have more complex structures. The general equilibrium can be written as
p A + q B ⇌ ApBq
The study of these complexes is important for supramolecular chemistry and molecular recognition. The objective of these studies is often to find systems with a high binding selectivity of a host (receptor) for a particular target molecule or ion, the guest or ligand. An application is the development of chemical sensors. Finding a drug which either blocks a receptor (an antagonist, which forms a strong complex with the receptor) or activates it (an agonist) is an important pathway to drug discovery.
Complexes of metals
The formation of a complex between a metal ion, M, and a ligand, L, is in fact usually a substitution reaction. For example, in aqueous solutions, metal ions will be present as aquo ions, so the reaction for the formation of the first complex could be written as
[M(H2O)n] + L ⇌ [M(H2O)n−1L] + H2O
However, since water is in vast excess, the concentration of water is usually assumed to be constant and is omitted from equilibrium constant expressions. Often, the metal and the ligand are in competition for protons. For the equilibrium
p M + q L + r H ⇌ MpLqHr
a stability constant can be defined as follows:
The definition can easily be extended to include any number of reagents. It includes hydroxide complexes because the concentration of the hydroxide ions is related to the concentration of hydrogen ions by the self-ionization of water, Kw = [H+][OH−].
Stability constants defined in this way are association constants. This can lead to some confusion, as pKa values are dissociation constants. In general purpose computer programs it is customary to define all constants as association constants. The relationship between the two types of constant is given in association and dissociation constants.
In biochemistry, an oxygen molecule can bind to an iron(II) atom in a heme prosthetic group in hemoglobin. The equilibrium is usually written, denoting hemoglobin by Hb, as
Hb + O2 ⇌ HbO2
but this representation is incomplete as the Bohr effect shows that the equilibrium concentrations are pH-dependent. A better representation would be
[HbH]+ + O2 ⇌ HbO2 + H+
as this shows that when hydrogen ion concentration increases the equilibrium is shifted to the left in accordance with Le Châtelier's principle. Hydrogen ion concentration can be increased by the presence of carbon dioxide, which behaves as a weak acid.
H2O + CO2 ⇌ HCO3− + H+
The iron atom can also bind to other molecules such as carbon monoxide. Cigarette smoke contains some carbon monoxide so the equilibrium
HbO2 + CO ⇌ HbCO + O2
is established in the blood of cigarette smokers.
Chelation therapy is based on the principle of using chelating ligands with a high binding selectivity for a particular metal to remove that metal from the human body.
Complexes with polyamino carboxylic acids find a wide range of applications. EDTA in particular is used extensively.
Redox equilibrium
A reduction–oxidation (redox) equilibrium can be handled in exactly the same way as any other chemical equilibrium. For example,
Fe2+ + Ce4+ ⇌ Fe3+ + Ce3+;
However, in the case of redox reactions it is convenient to split the overall reaction into two half-reactions. In this example
Fe3+ + e− ⇌ Fe2+
Ce4+ + e− ⇌ Ce3+
The standard free energy change, which is related to the equilibrium constant by ΔGo = −RT ln K,
can be split into two components,
The concentration of free electrons is effectively zero as the electrons are transferred directly from the reductant to the oxidant. The standard electrode potential, E0, for each half-reaction is related to the standard free energy change by ΔGo = −nFE0,
where n is the number of electrons transferred and F is the Faraday constant. Now, the free energy for an actual reaction is given by
ΔG = ΔGo + RT ln Q, where R is the gas constant and Q a reaction quotient. Strictly speaking Q is a quotient of activities, but it is common practice to use concentrations instead of activities. Therefore:
For any half-reaction, the redox potential of an actual mixture is given by the generalized expression
This is an example of the Nernst equation. The potential is known as a reduction potential. Standard electrode potentials are available in a table of values. Using these values, the actual electrode potential for a redox couple can be calculated as a function of the ratio of concentrations.
The equilibrium potential for a general redox half-reaction (See #Equilibrium constant above for an explanation of the symbols)
α A + β B... + n e− ⇌ σ S + τ T...
is given by
Use of this expression allows the effect of a species not involved in the redox reaction, such as the hydrogen ion in a half-reaction such as
MnO4− + 8 H+ + 5 e− ⇌ Mn2+ + 4 H2O
to be taken into account.
The equilibrium constant for a full redox reaction can be obtained from the standard redox potentials of the constituent half-reactions. At equilibrium the potential for the two half-reactions must be equal to each other and, of course, the number of electrons exchanged must be the same in the two half reactions.
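For example, using the standard relation log10 K = nFΔE0/(RT ln 10), a short Python sketch with illustrative standard potentials (roughly those of the Fe3+/Fe2+ and Ce4+/Ce3+ couples) gives the order of magnitude of K for the reaction above.
import math

F = 96485.0     # Faraday constant, C/mol
R = 8.314       # gas constant, J/(mol K)
T = 298.15      # temperature, K

def log10_K(E_oxidant, E_reductant, n=1):
    # Equilibrium constant of the overall reaction from the standard (reduction)
    # potentials of the oxidant's and reductant's half-reactions.
    return n * F * (E_oxidant - E_reductant) / (R * T * math.log(10))

# Fe2+ + Ce4+ <=> Fe3+ + Ce3+, with illustrative standard potentials in volts
print(log10_K(E_oxidant=1.44, E_reductant=0.77))   # about 11, i.e. K of order 1e11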
Redox equilibria play an important role in the electron transport chain. The various cytochromes in the chain have different standard redox potentials, each one adapted for a specific redox reaction. This allows, for example, atmospheric oxygen to be reduced to water in aerobic respiration. A distinct family of cytochromes, the cytochrome P450 oxidases, are involved in steroidogenesis and detoxification.
Solubility
When a solute forms a saturated solution in a solvent, the concentration of the solute, at a given temperature, is determined by the equilibrium constant at that temperature.
The activity of a pure substance in the solid state is one, by definition, so the expression simplifies to
If the solute does not dissociate the summation is replaced by a single term, but if dissociation occurs, as with ionic substances
For example, with Na2SO4, which dissolves according to Na2SO4 ⇌ 2 Na+ + SO42−, the solubility product is written as Ksp = [Na+]2[SO42−].
Concentrations, indicated by [...], are usually used in place of activities, but activity must be taken into account in the presence of another salt with no ions in common, the so-called salt effect. When another salt is present that has an ion in common, the common-ion effect comes into play, reducing the solubility of the primary solute.
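A minimal numerical sketch, assuming a hypothetical Ksp for a salt of the Na2SO4 type and ignoring activity coefficients, shows how the solubility product fixes the solubility and how a common ion suppresses it.
import math

# Hypothetical solubility product for a salt M2X of the Na2SO4 type: Ksp = [M+]**2 [X2-]
Ksp = 4.0e-5

s = (Ksp / 4.0) ** (1.0 / 3.0)            # in pure water: [M+] = 2s, [X2-] = s
print("solubility in water:", round(s, 4), "mol/L")

# Common-ion effect: with X2- already present at 0.10 mol/L the solubility falls;
# approximating [X2-] by 0.10 gives Ksp ~ (2s)**2 * 0.10.
s_common = math.sqrt(Ksp / (4.0 * 0.10))
print("solubility with 0.10 M common ion:", round(s_common, 4), "mol/L")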
Partition
When a solution of a substance in one solvent is brought into equilibrium with a second solvent that is immiscible with the first solvent, the dissolved substance may be partitioned between the two solvents. The ratio of concentrations in the two solvents is known as a partition coefficient or distribution coefficient. The partition coefficient is defined as the ratio of the analytical concentrations of the solute in the two phases. By convention the value is reported in logarithmic form.
The partition coefficient is defined at a specified temperature and, if applicable, pH of the aqueous phase. Partition coefficients are very important in pharmacology because they determine the extent to which a substance can pass from the blood (an aqueous solution) through a cell membrane, which behaves like an organic solvent. They are usually measured using water and octanol as the two solvents, yielding the so-called octanol-water partition coefficient. Many pharmaceutical compounds are weak acids or weak bases. Such a compound may exist with a different extent of protonation depending on pH and the acid dissociation constant. Because the organic phase has a low dielectric constant the species with no electrical charge will be the most likely one to pass from the aqueous phase to the organic phase. Even at pH 7–7.2, the range of biological pH values, the aqueous phase may support an equilibrium between more than one protonated form. log P is determined from the analytical concentration of the substance in the aqueous phase, that is, the sum of the concentration of the different species in equilibrium.
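The pH dependence described here can be sketched with the standard relation for a weak monoprotic acid, D = P/(1 + 10^(pH − pKa)), which assumes that only the neutral form enters the organic phase; the log P and pKa values below are hypothetical.
import math

def log_D(log_P, pKa, pH):
    # Distribution coefficient of a weak monoprotic acid, assuming only the
    # neutral (un-ionized) form partitions into the organic phase.
    return math.log10(10.0**log_P / (1.0 + 10.0**(pH - pKa)))

for pH in (2.0, 5.0, 7.4):                 # hypothetical acid: log P = 2.5, pKa = 4.5
    print(pH, round(log_D(2.5, 4.5, pH), 2))
# The measured distribution falls as the acid ionizes at higher pH.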
Solvent extraction is used extensively in separation and purification processes. In its simplest form a reaction is performed in an organic solvent and unwanted by-products are removed by extraction into water at a particular pH.
A metal ion may be extracted from an aqueous phase into an organic phase in which the salt is not soluble, by adding a ligand. The ligand, La−, forms a complex with the metal ion, Mb+, [MLx](b−ax)+ which has a strongly hydrophobic outer surface. If the complex has no electrical charge it will be extracted relatively easily into the organic phase. If the complex is charged, it is extracted as an ion pair. The additional ligand is not always required. For example, uranyl nitrate, UO2(NO3)2, is soluble in diethyl ether because the solvent itself acts as a ligand. This property was used in the past for separating uranium from other metals whose salts are not soluble in ether. Currently extraction into kerosene is preferred, using a ligand such as tri-n-butyl phosphate, TBP. In the PUREX process, which is commonly used in nuclear reprocessing, uranium(VI) is extracted from strong nitric acid as the electrically neutral complex [UO2(TBP)2(NO3)2]. The strong nitric acid provides a high concentration of nitrate ions which pushes the equilibrium in favour of the weak nitrato complex. Uranium is recovered by back-extraction (stripping) into weak nitric acid. Plutonium(IV) forms a similar complex, [Pu(NO3)4(TBP)2], and the plutonium in this complex can be reduced to separate it from uranium.
Another important application of solvent extraction is in the separation of the lanthanoids. This process also uses TBP and the complexes are extracted into kerosene. Separation is achieved because the stability constant for the formation of the TBP complex increases as the size of the lanthanoid ion decreases.
An instance of ion-pair extraction is in the use of a ligand to enable oxidation by potassium permanganate, KMnO4, in an organic solvent. KMnO4 is not soluble in organic solvents. When a ligand, such as a crown ether is added to an aqueous solution of KMnO4, it forms a hydrophobic complex with the potassium cation which allows the uncharged ion pair [KL]+[MnO4]− to be extracted into the organic solvent. See also: phase-transfer catalysis.
More complex partitioning problems (i.e. 3 or more phases present) can sometimes be handled with a fugacity capacity approach.
Chromatography
In chromatography substances are separated by partition between a stationary phase and a mobile phase. The analyte is dissolved in the mobile phase, and passes over the stationary phase. Separation occurs because of differing affinities of the analytes for the stationary phase. A distribution constant, Kd can be defined as
where as and am are the equilibrium activities in the stationary and mobile phases respectively. It can be shown that the rate of migration, v, is related to the distribution constant by
v = u/(1 + f·Kd), where u is the velocity of the mobile phase and f is a factor which depends on the volumes of the two phases. Thus, the higher the affinity of the solute for the stationary phase, the slower the migration rate.
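A small sketch of this relation, with an assumed mobile-phase velocity u and phase-volume factor f (both illustrative), shows how the migration rate drops as Kd increases.
# Migration rate v = u / (1 + f*Kd): u is the mobile-phase velocity and f the
# phase-volume factor, both set to illustrative values here.
def migration_rate(u, f, Kd):
    return u / (1.0 + f * Kd)

u, f = 2.0, 0.2
for Kd in (0.0, 1.0, 10.0, 100.0):
    print(Kd, round(migration_rate(u, f, Kd), 3))
# The stronger the retention on the stationary phase (larger Kd), the slower the solute moves.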
There is a wide variety of chromatographic techniques, depending on the nature of the stationary and mobile phases. When the stationary phase is solid, the analyte may form a complex with it. A water softener functions by selective complexation with a sulfonate ion exchange resin. Sodium ions form relatively weak complexes with the resin. When hard water is passed through the resin, the divalent ions of magnesium and calcium displace the sodium ions and are retained on the resin, R.
RNa + M2+ ⇌ RM+ + Na+
The water coming out of the column is relatively rich in sodium ions and poor in calcium and magnesium which are retained on the column. The column is regenerated by passing a strong solution of sodium chloride through it, so that the resin–sodium complex is again formed on the column. Ion-exchange chromatography utilizes a resin such as chelex 100 in which iminodiacetate residues, attached to a polymer backbone, form chelate complexes of differing strengths with different metal ions, allowing the ions such as Cu2+ and Ni2+ to be separated chromatographically.
Another example of complex formation is in chiral chromatography, which is used to separate enantiomers from each other. The stationary phase is itself chiral and forms complexes selectively with the enantiomers. In other types of chromatography with a solid stationary phase, such as thin-layer chromatography, the analyte is selectively adsorbed onto the solid.
In gas–liquid chromatography (GLC) the stationary phase is a liquid such as polydimethylsiloxane, coated on a glass tube. Separation is achieved because the various components in the gas have different solubility in the stationary phase. GLC can be used to separate literally hundreds of components in a gas mixture such as cigarette smoke or essential oils, such as lavender oil.
See also
Thermodynamic databases for pure substances
Notes
External links
Chemical Equilibrium Downloadable book
Physical chemistry
Articles containing video clips
Process simulation
Process simulation is used for the design, development, analysis, and optimization of technical processes such as chemical plants, chemical processes, environmental systems, power stations, complex manufacturing operations, biological processes, and similar technical functions.
Main principle
Process simulation is a model-based representation of chemical, physical, biological, and other technical processes and unit operations in software. Basic prerequisites for the model are chemical and physical properties of pure components and mixtures, of reactions, and of mathematical models which, in combination, allow the calculation of process properties by the software.
Process simulation software describes processes in flow diagrams where unit operations are positioned and connected by product or educt streams. The software solves the mass and energy balance to find a stable operating point for specified parameters. The goal of a process simulation is to find optimal conditions for a process. This is essentially an optimization problem which has to be solved in an iterative process.
In the example above the feed stream to the column is defined in terms of its chemical and physical properties. This includes the composition of individual molecular species in the stream, the overall mass flowrate, and the stream's pressure and temperature. For hydrocarbon systems the vapor-liquid equilibrium ratios (K-values), or the models that are used to define them, are specified by the user. The properties of the column are defined, such as the inlet pressure and the number of theoretical plates. The duties of the reboiler and overhead condenser are calculated by the model to achieve a specified composition or other parameter of the bottom and/or top product. The simulation calculates the chemical and physical properties of the product streams; each is assigned a unique number which is used in the mass and energy diagram.
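In highly simplified form, such a calculation can be sketched as a single isothermal flash: with a feed composition and fixed K-values (hypothetical numbers below), the Rachford–Rice equation is solved for the vapour fraction. Real simulators embed this inside far more elaborate thermodynamic and unit-operation models.
from scipy.optimize import brentq

z = [0.40, 0.35, 0.25]      # feed mole fractions (hypothetical)
K = [3.0, 1.1, 0.3]         # vapour-liquid equilibrium ratios at the flash T and P (hypothetical)

def rachford_rice(beta):
    # beta is the vapour fraction V/F; the root gives a consistent mass balance.
    return sum(zi * (Ki - 1.0) / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K))

beta = brentq(rachford_rice, 1e-9, 1.0 - 1e-9)
x = [zi / (1.0 + beta * (Ki - 1.0)) for zi, Ki in zip(z, K)]   # liquid composition
y = [Ki * xi for Ki, xi in zip(K, x)]                          # vapour composition
print(round(beta, 3), [round(v, 3) for v in x], [round(v, 3) for v in y])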
Process simulation uses models which introduce approximations and assumptions but allow the description of a property over a wide range of temperatures and pressures which might not be covered by available real data. Models also allow interpolation and extrapolation - within certain limits - and enable the search for conditions outside the range of known properties.
Modelling
The development of models for a better representation of real processes is the core of the further development of the simulation software. Model development is done through the principles of chemical engineering but also control engineering and for the improvement of mathematical simulation techniques. Process simulation is therefore a field where practitioners from chemistry, physics, computer science, mathematics, and engineering work together.
Efforts are made to develop new and improved models for the calculation of properties. This includes for example the description of
thermophysical properties like vapor pressures, viscosities, caloric data, etc. of pure components and mixtures
properties of different apparatus like reactors, distillation columns, pumps, etc.
chemical reactions and kinetics
environmental and safety-related data
There are two main types of models:
Simple equations and correlations where parameters are fitted to experimental data.
Predictive methods where properties are estimated.
The equations and correlations are normally preferred because they describe the property (almost) exactly. To obtain reliable parameters it is necessary to have experimental data which are usually obtained from factual data banks or, if no data are publicly available, from measurements.
Using predictive methods is more cost effective than experimental work and also than data from data banks. Despite this advantage predicted properties are normally only used in early stages of the process development to find first approximate solutions and to exclude false pathways because these estimation methods normally introduce higher errors than correlations obtained from real data.
Process simulation has encouraged the development of mathematical models in the fields of numerics and the solving of complex problems.
History
The history of process simulation is related to the development of computer science and of computer hardware and programming languages. Early implementations of partial aspects of chemical processes were introduced in the 1970s when suitable hardware and software (here mainly the programming languages FORTRAN and C) became available. The modelling of chemical properties began much earlier; notably the cubic equations of state and the Antoine equation were precursory developments of the 19th century.
Steady state and dynamic process simulation
Initially process simulation was used to simulate steady state processes. Steady-state models perform a mass and energy balance of a steady state process (a process in an equilibrium state) independent of time.
Dynamic simulation is an extension of steady-state process simulation whereby time-dependence is built into the models via derivative terms i.e. accumulation of mass and energy. The advent of dynamic simulation means that the time-dependent description, prediction and control of real processes in real time has become possible. This includes the description of starting up and shutting down a plant, changes of conditions during a reaction, holdups, thermal changes and more.
Dynamic simulations require increased calculation time and are mathematically more complex than a steady state simulation. A dynamic simulation can be seen as a repeated steady state simulation (based on a fixed time step) with constantly changing parameters.
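A toy example of this time-stepping view, with made-up parameters, integrates the holdup of a single tank through a step change in feed using a fixed time step; it only illustrates the accumulation term, and does not describe any particular simulator.
# Explicit time-stepping of a single tank holdup with a step change in feed.
dt = 1.0                   # time step, s
holdup = 10.0              # initial inventory, m3
outflow_coefficient = 0.05 # outflow proportional to holdup, 1/s

for step in range(120):
    t = step * dt
    feed = 0.4 if t < 60.0 else 0.7                        # m3/s, step change at t = 60 s
    holdup += dt * (feed - outflow_coefficient * holdup)   # accumulation term
    if step % 30 == 29:
        print(round(t + dt), round(holdup, 2))
# The holdup relaxes toward feed / outflow_coefficient, i.e. toward a new steady state.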
Dynamic simulation can be used in both an online and offline fashion. The online case is model predictive control, where the real-time simulation results are used to predict the changes that would occur for a control input change, and the control parameters are optimised based on the results. Offline process simulation can be used in the design, troubleshooting and optimisation of process plant as well as the conduct of case studies to assess the impacts of process modifications. Dynamic simulation is also used for operator training.
See also
Advanced Simulation Library
Computer simulation
List of chemical process simulators
Software Process simulation
References
Chemical process engineering
Simulation
Industrial design
Process engineering
Economic model
An economic model is a theoretical construct representing economic processes by a set of variables and a set of logical and/or quantitative relationships between them. The economic model is a simplified, often mathematical, framework designed to illustrate complex processes. Frequently, economic models posit structural parameters. A model may have various exogenous variables, and those variables may change to create various responses by economic variables. Methodological uses of models include investigation, theorizing, and fitting theories to the world.
Overview
In general terms, economic models have two functions: first as a simplification of and abstraction from observed data, and second as a means of selection of data based on a paradigm of econometric study.
Simplification is particularly important for economics given the enormous complexity of economic processes. This complexity can be attributed to the diversity of factors that determine economic activity; these factors include: individual and cooperative decision processes, resource limitations, environmental and geographical constraints, institutional and legal requirements and purely random fluctuations. Economists therefore must make a reasoned choice of which variables and which relationships between these variables are relevant and which ways of analyzing and presenting this information are useful.
Selection is important because the nature of an economic model will often determine what facts will be looked at and how they will be compiled. For example, inflation is a general economic concept, but to measure inflation requires a model of behavior, so that an economist can differentiate between changes in relative prices and changes in price that are to be attributed to inflation.
In addition to their professional academic interest, uses of models include:
Forecasting economic activity in a way in which conclusions are logically related to assumptions;
Proposing economic policy to modify future economic activity;
Presenting reasoned arguments to politically justify economic policy at the national level, to explain and influence company strategy at the level of the firm, or to provide intelligent advice for household economic decisions at the level of households.
Planning and allocation, in the case of centrally planned economies, and on a smaller scale in logistics and management of businesses.
In finance, predictive models have been used since the 1980s for trading (investment and speculation). For example, emerging market bonds were often traded based on economic models predicting the growth of the developing nation issuing them. Since the 1990s many long-term risk management models have incorporated economic relationships between simulated variables in an attempt to detect high-exposure future scenarios (often through a Monte Carlo method).
A model establishes an argumentative framework for applying logic and mathematics that can be independently discussed and tested and that can be applied in various instances. Policies and arguments that rely on economic models have a clear basis for soundness, namely the validity of the supporting model.
Economic models in current use do not pretend to be theories of everything economic; any such pretensions would immediately be thwarted by computational infeasibility and the incompleteness or lack of theories for various types of economic behavior. Therefore, conclusions drawn from models will be approximate representations of economic facts. However, properly constructed models can remove extraneous information and isolate useful approximations of key relationships. In this way more can be understood about the relationships in question than by trying to understand the entire economic process.
The details of model construction vary with type of model and its application, but a generic process can be identified. Generally, any modelling process has two steps: generating a model, then checking the model for accuracy (sometimes called diagnostics). The diagnostic step is important because a model is only useful to the extent that it accurately mirrors the relationships that it purports to describe. Creating and diagnosing a model is frequently an iterative process in which the model is modified (and hopefully improved) with each iteration of diagnosis and respecification. Once a satisfactory model is found, it should be double checked by applying it to a different data set.
Types of models
According to whether all the model variables are deterministic, economic models can be classified as stochastic or non-stochastic models; according to whether all the variables are quantitative, economic models are classified as discrete or continuous choice models; according to the model's intended purpose/function, it can be classified as quantitative or qualitative; according to the model's ambit, it can be classified as a general equilibrium model, a partial equilibrium model, or even a non-equilibrium model; according to the economic agent's characteristics, models can be classified as rational agent models, representative agent models etc.
Stochastic models are formulated using stochastic processes. They model economically observable values over time. Most of econometrics is based on statistics to formulate and test hypotheses about these processes or estimate parameters for them. A widely used class of simple econometric models, popularized by Tinbergen and later Wold, are autoregressive models, in which the stochastic process satisfies some relation between current and past values. Examples of these are autoregressive moving average models and related ones such as autoregressive conditional heteroskedasticity (ARCH) and GARCH models for the modelling of heteroskedasticity.
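A first-order autoregressive process, the simplest member of this family, can be simulated in a few lines of Python; the constant, persistence and noise parameters below are arbitrary.
import random

# y_t = c + phi * y_{t-1} + e_t, a first-order autoregressive process.
random.seed(0)
c, phi, sigma = 0.5, 0.8, 1.0
y, series = 0.0, []
for _ in range(500):
    y = c + phi * y + random.gauss(0.0, sigma)
    series.append(y)

print(round(sum(series) / len(series), 2), "sample mean vs",
      round(c / (1.0 - phi), 2), "unconditional mean")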
Non-stochastic models may be purely qualitative (for example, relating to social choice theory) or quantitative (involving rationalization of financial variables, for example with hyperbolic coordinates, and/or specific forms of functional relationships between variables). In some cases economic predictions of a model merely assert the direction of movement of economic variables, and so the functional relationships are used only in a qualitative sense: for example, if the price of an item increases, then the demand for that item will decrease. For such models, economists often use two-dimensional graphs instead of functions.
Qualitative models – although almost all economic models involve some form of mathematical or quantitative analysis, qualitative models are occasionally used. One example is qualitative scenario planning in which possible future events are played out. Another example is non-numerical decision tree analysis. Qualitative models often suffer from lack of precision.
At a more practical level, quantitative modelling is applied to many areas of economics and several methodologies have evolved more or less independently of each other. As a result, no overall model taxonomy is naturally available. We can nonetheless provide a few examples that illustrate some particularly relevant points of model construction.
An accounting model is one based on the premise that for every credit there is a debit. More symbolically, an accounting model expresses some principle of conservation in the form
algebraic sum of inflows = sinks − sources
This principle is certainly true for money and it is the basis for national income accounting. Accounting models are true by convention, that is any experimental failure to confirm them, would be attributed to fraud, arithmetic error or an extraneous injection (or destruction) of cash, which we would interpret as showing the experiment was conducted improperly.
Optimality and constrained optimization models – Other examples of quantitative models are based on principles such as profit or utility maximization. An example of such a model is given by the comparative statics of taxation on the profit-maximizing firm. The profit of a firm is given by Π(x) = x·p(x) − C(x) − t·x,
where p(x) is the price that a product commands in the market if it is supplied at the rate x, x·p(x) is the revenue obtained from selling the product, C(x) is the cost of bringing the product to market at the rate x, and t is the tax that the firm must pay per unit of the product sold.
The profit maximization assumption states that a firm will produce at the output rate x if that rate maximizes the firm's profit. Using differential calculus we can obtain conditions on x under which this holds. The first order maximization condition for x is p(x) + x·p′(x) − C′(x) − t = 0.
Regarding x as an implicitly defined function of t by this equation (see implicit function theorem), one concludes that the derivative of x with respect to t has the same sign as the second derivative of profit, 2p′(x) + x·p″(x) − C″(x),
which is negative if the second order conditions for a local maximum are satisfied.
Thus the profit maximization model predicts something about the effect of taxation on output, namely that output decreases with increased taxation. If the predictions of the model fail, we conclude that the profit maximization hypothesis was false; this should lead to alternate theories of the firm, for example based on bounded rationality.
Borrowing a notion apparently first used in economics by Paul Samuelson, this model of taxation and the predicted dependency of output on the tax rate, illustrates an operationally meaningful theorem; that is one requiring some economically meaningful assumption that is falsifiable under certain conditions.
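A numerical version of this taxation example, with assumed (purely illustrative) demand and cost functions, confirms the predicted comparative statics: the profit-maximizing output falls as the per-unit tax rises.
# Profit x*p(x) - C(x) - t*x with illustrative demand p(x) = 10 - x and
# cost C(x) = 2x + 0.5x^2, maximized by a coarse grid search for several tax rates.
def optimal_output(t):
    def profit(x):
        return x * (10.0 - x) - (2.0 * x + 0.5 * x * x) - t * x
    grid = [i / 1000.0 for i in range(10001)]
    return max(grid, key=profit)

for t in (0.0, 1.0, 2.0):
    print("tax", t, "-> output", round(optimal_output(t), 3))
# Output falls as the tax rises, as the comparative statics above predict.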
Aggregate models. Macroeconomics needs to deal with aggregate quantities such as output, the price level, the interest rate and so on. Now real output is actually a vector of goods and services, such as cars, passenger airplanes, computers, food items, secretarial services, home repair services etc. Similarly price is the vector of individual prices of goods and services. Models in which the vector nature of the quantities is maintained are used in practice, for example Leontief input–output models are of this kind. However, for the most part, these models are computationally much harder to deal with and harder to use as tools for qualitative analysis. For this reason, macroeconomic models usually lump together different variables into a single quantity such as output or price. Moreover, quantitative relationships between these aggregate variables are often parts of important macroeconomic theories. This process of aggregation and functional dependency between various aggregates usually is interpreted statistically and validated by econometrics. For instance, one ingredient of the Keynesian model is a functional relationship between consumption and national income: C = C(Y). This relationship plays an important role in Keynesian analysis.
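A minimal sketch of such an aggregate relationship, with an assumed linear consumption function and illustrative parameter values, solves the income identity Y = C(Y) + I + G for equilibrium income.
# Consumption function C(Y) = a + b*Y inside the income identity Y = C(Y) + I + G.
a, b = 50.0, 0.8           # autonomous consumption and marginal propensity to consume
I, G = 100.0, 150.0        # investment and government spending (illustrative)

Y = (a + I + G) / (1.0 - b)       # equilibrium income
print("income", Y, "consumption", a + b * Y, "multiplier", round(1.0 / (1.0 - b), 1))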
Problems with economic models
Most economic models rest on a number of assumptions that are not entirely realistic. For example, agents are often assumed to have perfect information, and markets are often assumed to clear without friction. Or, the model may omit issues that are important to the question being considered, such as externalities. Any analysis of the results of an economic model must therefore consider the extent to which these results may be compromised by inaccuracies in these assumptions, and a large literature has grown up discussing problems with economic models, or at least asserting that their results are unreliable.
History
One of the major problems addressed by economic models has been understanding economic growth. An early attempt to provide a technique to approach this came from the French physiocratic school in the eighteenth century. Among these economists, François Quesnay was known particularly for his development and use of tables he called Tableaux économiques. These tables have in fact been interpreted in more modern terminology as a Leontiev model, see the Phillips reference below.
All through the 18th century (that is, well before the founding of modern political economy, conventionally marked by Adam Smith's 1776 Wealth of Nations), simple probabilistic models were used to understand the economics of insurance. This was a natural extrapolation of the theory of gambling, and played an important role both in the development of probability theory itself and in the development of actuarial science. Many of the giants of 18th century mathematics contributed to this field. Around 1730, De Moivre addressed some of these problems in the 3rd edition of The Doctrine of Chances. Even earlier (1709), Nicolas Bernoulli studied problems related to savings and interest in the Ars Conjectandi. In 1730, Daniel Bernoulli studied "moral probability" in his book Mensura Sortis, where he introduced what would today be called "logarithmic utility of money" and applied it to gambling and insurance problems, including a solution of the paradoxical Saint Petersburg problem. All of these developments were summarized by Laplace in his Analytical Theory of Probabilities (1812). Thus, by the time David Ricardo came along he had a well-established mathematical basis to draw from.
Tests of macroeconomic predictions
In the late 1980s, the Brookings Institution compared 12 leading macroeconomic models available at the time. They compared the models' predictions for how the economy would respond to specific economic shocks (allowing the models to control for all the variability in the real world; this was a test of model vs. model, not a test against the actual outcome). Although the models simplified the world and started from stable, known common parameters, the various models gave significantly different answers. For instance, in calculating the impact of a monetary loosening on output some models estimated a 3% change in GDP after one year, and one gave almost no change, with the rest spread in between.
Partly as a result of such experiments, modern central bankers no longer have as much confidence that it is possible to 'fine-tune' the economy as they had in the 1960s and early 1970s. Modern policy makers tend to use a less activist approach, explicitly because they lack confidence that their models will actually predict where the economy is going, or the effect of any shock upon it. The new, more humble, approach sees danger in dramatic policy changes based on model predictions, because of several practical and theoretical limitations in current macroeconomic models; in addition to the theoretical pitfalls listed above, some problems specific to aggregate modelling are:
Limitations in model construction caused by difficulties in understanding the underlying mechanisms of the real economy. (Hence the profusion of separate models.)
The law of unintended consequences, on elements of the real economy not yet included in the model.
The time lag in both receiving data and the reaction of economic variables to policy makers attempts to 'steer' them (mostly through monetary policy) in the direction that central bankers want them to move. Milton Friedman has vigorously argued that these lags are so long and unpredictably variable that effective management of the macroeconomy is impossible.
The difficulty in correctly specifying all of the parameters (through econometric measurements) even if the structural model and data were perfect.
The fact that all the model's relationships and coefficients are stochastic, so that the error term becomes very large quickly, and the available snapshot of the input parameters is already out of date.
Modern economic models incorporate the reaction of the public and market to the policy maker's actions (through game theory), and this feedback is included in modern models (following the rational expectations revolution and Robert Lucas, Jr.'s Lucas critique of non-microfounded models). If the response to the decision maker's actions (and their credibility) must be included in the model then it becomes much harder to influence some of the variables simulated.
Comparison with models in other sciences
Complex systems specialist and mathematician David Orrell wrote on this issue in his book Apollo's Arrow and explained that the weather, human health and economics use similar methods of prediction (mathematical models). Their systems—the atmosphere, the human body and the economy—also have similar levels of complexity. He found that forecasts fail because the models suffer from two problems: (i) they cannot capture the full detail of the underlying system, so rely on approximate equations; (ii) they are sensitive to small changes in the exact form of these equations. This is because complex systems like the economy or the climate consist of a delicate balance of opposing forces, so a slight imbalance in their representation has big effects. Thus, predictions of things like economic recessions are still highly inaccurate, despite the use of enormous models running on fast computers.
Effects of deterministic chaos on economic models
Economic and meteorological simulations may share a fundamental limit to their predictive powers: chaos. Although the modern mathematical work on chaotic systems began in the 1970s the danger of chaos had been identified and defined in Econometrica as early as 1958:
"Good theorising consists to a large extent in avoiding assumptions ... [with the property that] a small change in what is posited will seriously affect the conclusions."
(William Baumol, Econometrica, 26; see: Economics on the Edge of Chaos).
It is straightforward to design economic models susceptible to butterfly effects of initial-condition sensitivity.
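A toy demonstration of such sensitivity, using the logistic map rather than any calibrated economic model, shows two trajectories that start a millionth apart and soon diverge completely.
# Two logistic-map trajectories, x_{t+1} = r*x_t*(1 - x_t), started 1e-6 apart.
r = 3.9
x, x_perturbed = 0.400000, 0.400001

for t in range(1, 41):
    x = r * x * (1.0 - x)
    x_perturbed = r * x_perturbed * (1.0 - x_perturbed)
    if t % 10 == 0:
        print(t, round(abs(x - x_perturbed), 6))
# The initial difference of one part in a million grows to order one within a few dozen steps.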
However, the econometric research program to identify which variables are chaotic (if any) has largely concluded that aggregate macroeconomic variables probably do not behave chaotically. This would mean that refinements to the models could ultimately produce reliable long-term forecasts. However, the validity of this conclusion has generated two challenges:
In 2004 Philip Mirowski challenged this view and those who hold it, saying that chaos in economics is suffering from a biased "crusade" against it by neo-classical economics in order to preserve their mathematical models.
The variables in finance may well be subject to chaos. Also in 2004, the University of Canterbury study Economics on the Edge of Chaos concludes that after noise is removed from S&P 500 returns, evidence of deterministic chaos is found.
More recently, chaos (or the butterfly effect) has been identified as less significant than previously thought to explain prediction errors. Rather, the predictive power of economics and meteorology would mostly be limited by the models themselves and the nature of their underlying systems (see Comparison with models in other sciences above).
Critique of hubris in planning
A key strand of free market economic thinking is that the market's invisible hand guides an economy to prosperity more efficiently than central planning using an economic model. One reason, emphasized by Friedrich Hayek, is the claim that many of the true forces shaping the economy can never be captured in a single plan. This is an argument that cannot be made through a conventional (mathematical) economic model because it says that there are critical systemic-elements that will always be omitted from any top-down analysis of the economy.
Examples of economic models
Cobb–Douglas model of production
Solow–Swan model of economic growth
Lucas islands model of money supply
Heckscher–Ohlin model of international trade
Black–Scholes model of option pricing
AD–AS model, a macroeconomic model of aggregate demand and aggregate supply
IS–LM model, a model of the relationship between interest rates and asset markets
Ramsey–Cass–Koopmans model of economic growth
Gordon–Loeb model for cyber security investments
See also
Economic methodology
Computational economics
Agent-based computational economics
Endogeneity
Financial model
Notes
References
. Defines model by analogy with maps, an idea borrowed from Baumol and Blinder. Discusses deduction within models, and logical derivation of one model from another. Chapter 9 compares the neoclassical school and the Austrian School, in particular in relation to falsifiability.
. One of the earliest studies on methodology of economics, analysing the postulate of rationality.
. A series of essays and papers analysing questions about how (and whether) models and theories in economics are empirically verified and the current status of positivism in economics.
. A thorough discussion of many quantitative models used in modern economic theory. Also a careful discussion of aggregation.
. This is a classic book carefully discussing comparative statics in microeconomics, though some dynamics is studied as well as some macroeconomic theory. This should not be confused with Samuelson's popular textbook.
External links
R. Frigg and S. Hartmann, Models in Science. Entry in the Stanford Encyclopedia of Philosophy.
H. Varian, How to build a model in your spare time. The author makes several unexpected suggestions: Look for a model in the real world, not in journals. Look at the literature later, not sooner.
Elmer G. Wiens: Classical & Keynesian AD-AS Model – An on-line, interactive model of the Canadian Economy.
IFs Economic Sub-Model : Online Global Model
Economic attractor
Conceptual modelling
PK/PD model
PK/PD modeling (pharmacokinetic/pharmacodynamic modeling) (alternatively abbreviated as PKPD or PK-PD modeling) is a technique that combines the two classical pharmacologic disciplines of pharmacokinetics and pharmacodynamics. It integrates a pharmacokinetic and a pharmacodynamic model component into one set of mathematical expressions that allows the description of the time course of effect intensity in response to administration of a drug dose. PK/PD modeling is related to the field of pharmacometrics.
Central to PK/PD models is the concentration-effect or exposure-response relationship. A variety of PK/PD modeling approaches exist to describe exposure-response relationships. PK/PD relationships can be described by simple equations such as a linear model, an Emax model or a sigmoid Emax model. However, if a delay is observed between the drug administration and the drug effect, a temporal dissociation needs to be taken into account and more complex models exist:
Direct vs Indirect link PK/PD models
Direct vs Indirect response PK/PD models
Time variant vs time invariant
Cell lifespan models
Complex response models
PK/PD modeling is important at each step of drug development and has shown its usefulness in many diseases. The Food and Drug Administration also provides guidance for industry recommending how exposure-response studies should be performed.
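For illustration, the simple Emax and sigmoid Emax exposure-response relationships mentioned above can be written down directly; all parameter values in this sketch are hypothetical.
# Simple Emax and sigmoid Emax exposure-response models.  E0 is the baseline effect,
# Emax the maximum effect, EC50 the concentration giving half-maximal effect and
# n the Hill coefficient; all values are hypothetical.
def emax(c, E0=0.0, Emax=100.0, EC50=5.0):
    return E0 + Emax * c / (EC50 + c)

def sigmoid_emax(c, E0=0.0, Emax=100.0, EC50=5.0, n=2.0):
    return E0 + Emax * c**n / (EC50**n + c**n)

for c in (1.0, 5.0, 20.0):
    print(c, round(emax(c), 1), round(sigmoid_emax(c), 1))
# At c = EC50 both models give half of Emax, by construction.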
References
Pharmacodynamics
Pharmacokinetics
Reaction mechanism
In chemistry, a reaction mechanism is the step-by-step sequence of elementary reactions by which an overall chemical reaction occurs.
A chemical mechanism is a theoretical conjecture that tries to describe in detail what takes place at each stage of an overall chemical reaction. The detailed steps of a reaction are not observable in most cases. The conjectured mechanism is chosen because it is thermodynamically feasible and has experimental support in isolated intermediates (see next section) or other quantitative and qualitative characteristics of the reaction. It also describes each reactive intermediate, activated complex, and transition state, which bonds are broken (and in what order), and which bonds are formed (and in what order). A complete mechanism must also explain the reason for the reactants and catalyst used, the stereochemistry observed in reactants and products, all products formed and the amount of each.
The electron or arrow pushing method is often used in illustrating a reaction mechanism; for example, see the illustration of the mechanism for benzoin condensation in the following examples section.
A reaction mechanism must also account for the order in which molecules react. Often what appears to be a single-step conversion is in fact a multistep reaction.
Reaction intermediates
Reaction intermediates are chemical species, often unstable and short-lived (though they can sometimes be isolated), which are not reactants or products of the overall chemical reaction, but are temporary products and/or reactants in the mechanism's reaction steps. Reaction intermediates are often free radicals or ions, and they are often confused with the transition state. The transition state is a fleeting, high-energy configuration that exists only at the peak of the energy barrier during a reaction, while a reaction intermediate is a relatively stable species that exists for a measurable time between steps in a reaction. Unlike the transition state, intermediates can sometimes be isolated or observed directly.
The kinetics (relative rates of the reaction steps and the rate equation for the overall reaction) are explained in terms of the energy needed for the conversion of the reactants to the proposed transition states (molecular states that correspond to maxima on the reaction coordinates, and to saddle points on the potential energy surface for the reaction).
Chemical kinetics
Information about the mechanism of a reaction is often provided by the use of chemical kinetics to determine the rate equation and the reaction order in each reactant.
Consider the following reaction for example:
CO + NO2 → CO2 + NO
In this case, experiments have determined that this reaction takes place according to the rate law rate = k[NO2]^2. This form suggests that the rate-determining step is a reaction between two molecules of NO2. A possible mechanism for the overall reaction that explains the rate law is:
2 NO2 → NO3 + NO (slow)
NO3 + CO → NO2 + CO2 (fast)
Each step is called an elementary step, and each has its own rate law and molecularity. The elementary steps should add up to the original reaction. (Meaning, if we were to cancel out all the molecules that appear on both sides of the reaction, we would be left with the original reaction.)
When determining the overall rate law for a reaction, the slowest step is the step that determines the reaction rate. Because the first step (in the above reaction) is the slowest step, it is the rate-determining step. Because it involves the collision of two NO2 molecules, it is a bimolecular reaction with a rate that obeys the rate law rate = k[NO2]^2.
Other reactions may have mechanisms of several consecutive steps. In organic chemistry, the reaction mechanism for the benzoin condensation, put forward in 1903 by A. J. Lapworth, was one of the first proposed reaction mechanisms.
A chain reaction is an example of a complex mechanism, in which the propagation steps form a closed cycle.
In a chain reaction, the intermediate produced in one step generates an intermediate in another step.
These intermediates are called chain carriers. The chain carriers are often radicals, but they can also be ions. In nuclear fission they are neutrons.
Chain reactions have several steps, which may include:
Chain initiation: this can be by thermolysis (heating the molecules) or photolysis (absorption of light) leading to the breakage of a bond.
Propagation: a chain carrier makes another carrier.
Branching: one carrier makes more than one carrier.
Retardation: a chain carrier may react with a product reducing the rate of formation of the product. It makes another chain carrier, but the product concentration is reduced.
Chain termination: radicals combine and the chain carriers are lost.
Inhibition: chain carriers are removed by processes other than termination, such as by forming radicals.
Even though all these steps can appear in one chain reaction, the minimum necessary ones are initiation, propagation, and termination.
An example of a simple chain reaction is the thermal decomposition of acetaldehyde (CH3CHO) to methane (CH4) and carbon monoxide (CO). The experimental reaction order is 3/2, which can be explained by a Rice-Herzfeld mechanism.
This reaction mechanism for acetaldehyde has four steps, with rate equations for each step:
Initiation : CH3CHO → •CH3 + •CHO (Rate=k1 [CH3CHO])
Propagation: CH3CHO + •CH3 → CH4 + CH3CO• (Rate=k2 [CH3CHO][•CH3])
Propagation: CH3CO• → •CH3 + CO (Rate=k3 [CH3CO•])
Termination: •CH3 + •CH3 → CH3CH3 (Rate=k4 [•CH3]^2)
For the overall reaction, the rates of change of the concentration of the intermediates •CH3 and CH3CO• are zero, according to the steady-state approximation, which is used to account for the rate laws of chain reactions.
d[•CH3]/dt = k1[CH3CHO] - k2[•CH3][CH3CHO] + k3[CH3CO•] - 2k4[•CH3]^2 = 0
and d[CH3CO•]/dt = k2[•CH3][CH3CHO] - k3[CH3CO•] = 0
The sum of these two equations is k1[CH3CHO] - 2k4[•CH3]^2 = 0. This may be solved to find the steady-state concentration of •CH3 radicals as [•CH3] = (k1 / 2k4)^(1/2) [CH3CHO]^(1/2).
It follows that the rate of formation of CH4 is d[CH4]/dt = k2[•CH3][CH3CHO] = k2 (k1 / 2k4)^(1/2) [CH3CHO]^(3/2)
Thus the mechanism explains the observed rate expression for the principal products CH4 and CO. The exact rate law may be even more complicated; there are also minor products such as acetone (CH3COCH3) and propanal (CH3CH2CHO).
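As a check on the steady-state reasoning above, the following sketch (not part of the original text) integrates the four elementary steps numerically and compares the instantaneous rate of CH4 formation with the 3/2-order expression. The rate-constant values are arbitrary assumptions chosen only so that initiation is slow and the radical steps are fast.

# Illustrative sketch: integrate the four-step Rice-Herzfeld mechanism and
# compare d[CH4]/dt with the steady-state law k2*(k1/(2*k4))**0.5*[CH3CHO]**1.5.
from scipy.integrate import solve_ivp

k1, k2, k3, k4 = 1.0e-7, 1.0e3, 1.0e5, 1.0e9   # assumed rate constants

def rhs(t, y):
    A, CH3, CH3CO, CH4 = y           # A = [CH3CHO], two radicals, product
    r1 = k1 * A                      # initiation
    r2 = k2 * A * CH3                # propagation: makes CH4 and CH3CO•
    r3 = k3 * CH3CO                  # propagation: makes CO, regenerates •CH3
    r4 = k4 * CH3 ** 2               # termination
    return [-r1 - r2,                # d[CH3CHO]/dt
            r1 - r2 + r3 - 2.0 * r4, # d[•CH3]/dt
            r2 - r3,                 # d[CH3CO•]/dt
            r2]                      # d[CH4]/dt

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-14)

A_end, CH3_end = sol.y[0, -1], sol.y[1, -1]
numeric_rate = k2 * CH3_end * A_end                            # k2[•CH3][CH3CHO]
steady_state_rate = k2 * (k1 / (2.0 * k4)) ** 0.5 * A_end ** 1.5
print(numeric_rate, steady_state_rate)                         # should agree closely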
Other experimental methods to determine mechanism
Many experiments that suggest the possible sequence of steps in a reaction mechanism have been designed, including:
measurement of the effect of temperature (Arrhenius equation) to determine the activation energy (a worked numerical sketch follows this list)
spectroscopic observation of reaction intermediates
determination of the stereochemistry of products, for example in nucleophilic substitution reactions
measurement of the effect of isotopic substitution on the reaction rate
for reactions in solution, measurement of the effect of pressure on the reaction rate to determine the volume change on formation of the activated complex
for reactions of ions in solution, measurement of the effect of ionic strength on the reaction rate
direct observation of the activated complex by pump-probe spectroscopy
infrared chemiluminescence to detect vibrational excitation in the products
electrospray ionization mass spectrometry.
crossover experiments.
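As noted in the first item of the list above, an activation energy can be determined from the temperature dependence of the rate constant. A minimal sketch follows, assuming synthetic k(T) data generated for Ea = 50 kJ/mol, fitted with the linearized Arrhenius equation ln k = ln A - Ea/(R*T).

# Illustrative sketch: estimating an activation energy from rate constants
# measured at several temperatures.  The k(T) values are synthetic.
import numpy as np

R = 8.314  # J/(mol*K)

# Assumed (synthetic) data: temperatures in K and rate constants in 1/s
T = np.array([300.0, 320.0, 340.0, 360.0, 380.0])
k = 1.0e10 * np.exp(-50_000.0 / (R * T))

# Linear fit of ln k against 1/T: slope = -Ea/R, intercept = ln A
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # activation energy, J/mol
A = np.exp(intercept)    # pre-exponential factor, 1/s

print(f"Ea ~ {Ea/1000:.1f} kJ/mol, A ~ {A:.2e} 1/s")  # recovers ~50 kJ/mol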
Theoretical modeling
A correct reaction mechanism is an important part of accurate predictive modeling. For many combustion and plasma systems, detailed mechanisms are not available or require development.
Even when information is available, identifying and assembling the relevant data from a variety of sources, reconciling discrepant values and extrapolating to different conditions can be a difficult process without expert help. Rate constants or thermochemical data are often not available in the literature, so computational chemistry techniques or group additivity methods must be used to obtain the required parameters.
Computational chemistry methods can also be used to calculate potential energy surfaces for reactions and determine probable mechanisms.
Molecularity
Molecularity in chemistry is the number of colliding molecular entities that are involved in a single reaction step.
A reaction step involving one molecular entity is called unimolecular.
A reaction step involving two molecular entities is called bimolecular.
A reaction step involving three molecular entities is called trimolecular or termolecular.
In general, reaction steps involving more than three molecular entities do not occur, because it is statistically improbable in terms of the Maxwell distribution to find such a transition state.
See also
Organic reactions by mechanism
Nucleophilic acyl substitution
Neighbouring group participation
Finkelstein reaction
Lindemann mechanism
Electrochemical reaction mechanism
Nucleophilic abstraction
References
L. G. Wade, Organic Chemistry, 7th ed., 2010
External links
Reaction mechanisms for combustion of hydrocarbons
Mechanism
Chemical kinetics
Chemical reaction engineering
Combustion
Hypothetical types of biochemistry | Hypothetical types of biochemistry are forms of biochemistry agreed to be scientifically viable but not proven to exist at this time. The kinds of living organisms currently known on Earth all use carbon compounds for basic structural and metabolic functions, water as a solvent, and DNA or RNA to define and control their form. If life exists on other planets or moons it may be chemically similar, though it is also possible that there are organisms with quite different chemistries, for instance involving other classes of carbon compounds, compounds of another element, or another solvent in place of water.
The possibility of life-forms being based on "alternative" biochemistries is the topic of an ongoing scientific discussion, informed by what is known about extraterrestrial environments and about the chemical behaviour of various elements and compounds. It is of interest in synthetic biology and is also a common subject in science fiction.
The element silicon has been much discussed as a hypothetical alternative to carbon. Silicon is in the same group as carbon on the periodic table and, like carbon, it is tetravalent. Hypothetical alternatives to water include ammonia, which, like water, is a polar molecule, and cosmically abundant; and non-polar hydrocarbon solvents such as methane and ethane, which are known to exist in liquid form on the surface of Titan.
Overview of hypothetical types of biochemistry
Shadow biosphere
A shadow biosphere is a hypothetical microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. Although life on Earth is relatively well-studied, the shadow biosphere may still remain unnoticed because the exploration of the microbial world targets primarily the biochemistry of the macro-organisms.
Alternative-chirality biomolecules
Perhaps the least unusual alternative biochemistry would be one with differing chirality of its biomolecules. In known Earth-based life, amino acids are almost universally of the L form and sugars are of the D form. Molecules using D amino acids or L sugars may be possible; molecules of such a chirality, however, would be incompatible with organisms using the opposing chirality molecules. Amino acids whose chirality is opposite to the norm are found on Earth, and these substances are generally thought to result from decay of organisms of normal chirality. However, physicist Paul Davies speculates that some of them might be products of "anti-chiral" life.
It is questionable, however, whether such a biochemistry would be truly alien. Although it would certainly be an alternative stereochemistry, molecules that are overwhelmingly found in one enantiomer throughout the vast majority of organisms can nonetheless often be found in another enantiomer in different (often basal) organisms such as in comparisons between members of Archaea and other domains, making it an open topic whether an alternative stereochemistry is truly novel.
Non-carbon-based biochemistries
On Earth, all known living things have a carbon-based structure and system. Scientists have speculated about the pros and cons of using elements other than carbon to form the molecular structures necessary for life, but no one has proposed a theory employing such atoms to form all the necessary structures. However, as Carl Sagan argued, it is very difficult to be certain whether a statement that applies to all life on Earth will turn out to apply to all life throughout the universe. Sagan used the term "carbon chauvinism" for such an assumption. He regarded silicon and germanium as conceivable alternatives to carbon (other plausible elements include but are not limited to palladium and titanium); but, on the other hand, he noted that carbon does seem more chemically versatile and is more abundant in the cosmos. Norman Horowitz devised the experiments to determine whether life might exist on Mars that were carried out by the Viking Lander of 1976, the first U.S. mission to successfully land a probe on the surface of Mars. Horowitz argued that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival on other planets. He considered that there was only a remote possibility that non-carbon life forms could exist with genetic information systems capable of self-replication and the ability to evolve and adapt.
Silicon biochemistry
The silicon atom has been much discussed as the basis for an alternative biochemical system, because silicon has many chemical similarities to carbon and is in the same group of the periodic table. Like carbon, silicon can create molecules that are sufficiently large to carry biological information.
However, silicon has several drawbacks as a carbon alternative. Carbon is ten times more cosmically abundant than silicon, and its chemistry appears naturally more complex. By 1998, astronomers had identified 84 carbon-containing molecules in the interstellar medium, but only 8 containing silicon, of which half also included carbon. Even though Earth and other terrestrial planets are exceptionally silicon-rich and carbon-poor (silicon is roughly 925 times more abundant in Earth's crust than carbon), terrestrial life bases itself on carbon. It may eschew silicon because silicon compounds are less varied, unstable in the presence of water, or block the flow of heat.
Relative to carbon, silicon has a much larger atomic radius, and forms much weaker covalent bonds to atoms — except oxygen and fluorine, with which it forms very strong bonds. Almost no multiple bonds to silicon are stable, although silicon does exhibit varied coordination number. Silanes, silicon analogues to the alkanes, react rapidly with water, and long-chain silanes spontaneously decompose. Consequently, most terrestrial silicon is "locked up" in silica, and not a wide variety of biogenic precursors.
Silicones, which alternate between silicon and oxygen atoms, are much more stable than silanes, and may even be more stable than the equivalent hydrocarbons in sulfuric acid-rich extraterrestrial environments. Alternatively, the weak bonds in silicon compounds may help maintain a rapid pace of life at cryogenic temperatures. Polysilanols, the silicon homologues to sugars, are among the few compounds soluble in liquid nitrogen.
All known silicon macromolecules are artificial polymers, and so "monotonous compared with the combinatorial universe of organic macromolecules". Even so, some Earth life uses biogenic silica: diatoms' silicate skeletons. A. G. Cairns-Smith hypothesized that silicate minerals in water played a crucial role in abiogenesis, in that biogenic carbon compounds formed around their crystal structures. Although not observed in nature, carbon–silicon bonds have been added to biochemistry under directed evolution (artificial selection): a cytochrome c protein from Rhodothermus marinus has been engineered to catalyze new carbon–silicon bonds between hydrosilanes and diazo compounds.
Other exotic element-based biochemistries
Boranes are dangerously explosive in Earth's atmosphere, but would be more stable in a reducing atmosphere. However, boron's low cosmic abundance makes it less likely as a base for life than carbon.
Various metals, together with oxygen, can form very complex and thermally stable structures rivaling those of organic compounds; the heteropoly acids are one such family. Some metal oxides are also similar to carbon in their ability to form both nanotube structures and diamond-like crystals (such as cubic zirconia). Titanium, aluminium, magnesium, and iron are all more abundant in the Earth's crust than carbon. Metal-oxide-based life could therefore be a possibility under certain conditions, including those (such as high temperatures) at which carbon-based life would be unlikely. The Cronin group at Glasgow University reported self-assembly of tungsten polyoxometalates into cell-like spheres. By modifying their metal oxide content, the spheres can acquire holes that act as porous membrane, selectively allowing chemicals in and out of the sphere according to size.
Sulfur is also able to form long-chain molecules, but suffers from the same high-reactivity problems as phosphorus and silanes. The biological use of sulfur as an alternative to carbon is purely hypothetical, especially because sulfur usually forms only linear chains rather than branched ones. (The biological use of sulfur as an electron acceptor is widespread and can be traced back 3.5 billion years on Earth, thus predating the use of molecular oxygen. Sulfur-reducing bacteria can utilize elemental sulfur instead of oxygen, reducing sulfur to hydrogen sulfide.)
Arsenic as an alternative to phosphorus
Arsenic, which is chemically similar to phosphorus, while poisonous for most life forms on Earth, is incorporated into the biochemistry of some organisms. Some marine algae incorporate arsenic into complex organic molecules such as arsenosugars and arsenobetaines. Fungi and bacteria can produce volatile methylated arsenic compounds. Arsenate reduction and arsenite oxidation have been observed in microbes (Chrysiogenes arsenatis). Additionally, some prokaryotes can use arsenate as a terminal electron acceptor during anaerobic growth and some can utilize arsenite as an electron donor to generate energy.
It has been speculated that the earliest life forms on Earth may have used arsenic biochemistry in place of phosphorus in the structure of their DNA. A common objection to this scenario is that arsenate esters are so much less stable to hydrolysis than corresponding phosphate esters that arsenic is poorly suited for this function.
The authors of a 2010 geomicrobiology study, supported in part by NASA, have postulated that a bacterium, named GFAJ-1, collected in the sediments of Mono Lake in eastern California, can employ such 'arsenic DNA' when cultured without phosphorus. They proposed that the bacterium may employ high levels of poly-β-hydroxybutyrate or other means to reduce the effective concentration of water and stabilize its arsenate esters. This claim was heavily criticized almost immediately after publication for the perceived lack of appropriate controls. Science writer Carl Zimmer contacted several scientists for an assessment: "I reached out to a dozen experts ... Almost unanimously, they think the NASA scientists have failed to make their case".
Other authors were unable to reproduce their results and showed that the study had issues with phosphate contamination, suggesting that the low amounts present could sustain extremophile lifeforms.
Alternatively, it was suggested that GFAJ-1 cells grow by recycling phosphate from degraded ribosomes, rather than by replacing it with arsenate.
Non-water solvents
In addition to carbon compounds, all currently known terrestrial life also requires water as a solvent. This has led to discussions about whether water is the only liquid capable of filling that role. The idea that an extraterrestrial life-form might be based on a solvent other than water has been taken seriously in recent scientific literature by the biochemist Steven Benner, and by the astrobiological committee chaired by John A. Baross. Solvents discussed by the Baross committee include ammonia, sulfuric acid, formamide, hydrocarbons, and (at temperatures much lower than Earth's) liquid nitrogen, or hydrogen in the form of a supercritical fluid.
Water as a solvent limits the forms biochemistry can take. For example, Steven Benner proposes the polyelectrolyte theory of the gene, which claims that for a genetic biopolymer such as DNA to function in water, it requires repeated ionic charges. If water is not required for life, these limits on genetic biopolymers are removed.
Carl Sagan once described himself as both a carbon chauvinist and a water chauvinist; however, on another occasion he said that he was a carbon chauvinist but "not that much of a water chauvinist".
He speculated on hydrocarbons, hydrofluoric acid, and ammonia as possible alternatives to water.
Some of the properties of water that are important for life processes include:
A complexity which leads to a large number of permutations of possible reaction paths including acid–base chemistry, H+ cations, OH− anions, hydrogen bonding, van der Waals bonding, dipole–dipole and other polar interactions, aqueous solvent cages, and hydrolysis. This complexity offers a large number of pathways for evolution to produce life; many other solvents have dramatically fewer possible reactions, which severely limits evolution.
Thermodynamic stability: the free energy of formation of liquid water is low enough (−237.24 kJ/mol) that water undergoes few reactions. Other solvents are highly reactive, particularly with oxygen.
Water does not combust in oxygen because it is already the combustion product of hydrogen with oxygen. Most alternative solvents are not stable in an oxygen-rich atmosphere, so it is highly unlikely that those liquids could support aerobic life.
A large temperature range over which it is liquid.
High solubility of oxygen and carbon dioxide at room temperature supporting the evolution of aerobic aquatic plant and animal life.
A high heat capacity (leading to higher environmental temperature stability).
Water is a room-temperature liquid leading to a large population of quantum transition states required to overcome reaction barriers. Cryogenic liquids (such as liquid methane) have exponentially lower transition state populations which are needed for life based on chemical reactions. This leads to chemical reaction rates which may be so slow as to preclude the development of any life based on chemical reactions.
Spectroscopic transparency allowing solar radiation to penetrate several meters into the liquid (or solid), greatly aiding the evolution of aquatic life.
A large heat of vaporization leading to stable lakes and oceans.
The ability to dissolve a wide variety of compounds.
The solid (ice) has lower density than the liquid, so ice floats on the liquid. This is why bodies of water freeze over but do not freeze solid (from the bottom up). If ice were denser than liquid water (as is true for nearly all other compounds), then large bodies of liquid would slowly freeze solid, which would not be conducive to the formation of life.
Water as a compound is cosmically abundant, although much of it is in the form of vapor or ice. Subsurface liquid water is considered likely or possible on several of the outer moons: Enceladus (where geysers have been observed), Europa, Titan, and Ganymede. Earth and Titan are the only worlds currently known to have stable bodies of liquid on their surfaces.
Not all properties of water are necessarily advantageous for life, however. For instance, water ice has a high albedo, meaning that it reflects a significant quantity of light and heat from the Sun. During ice ages, as reflective ice builds up over the surface of the water, the effects of global cooling are increased.
There are some properties that make certain compounds and elements much more favorable than others as solvents in a successful biosphere. The solvent must be able to exist in liquid equilibrium over a range of temperatures the planetary object would normally encounter. Because boiling points vary with the pressure, the question tends not to be whether the prospective solvent remains liquid, but at what pressure. For example, hydrogen cyanide has a narrow liquid-phase temperature range at 1 atmosphere, but in an atmosphere with the surface pressure of Venus (roughly 90 times that of Earth), it can indeed exist in liquid form over a wide temperature range.
Ammonia
The ammonia molecule (NH3), like the water molecule, is abundant in the universe, being a compound of hydrogen (the simplest and most common element) with another very common element, nitrogen. The possible role of liquid ammonia as an alternative solvent for life is an idea that goes back at least to 1954, when J. B. S. Haldane raised the topic at a symposium about life's origin.
Numerous chemical reactions are possible in an ammonia solution, and liquid ammonia has chemical similarities with water. Ammonia can dissolve most organic molecules at least as well as water does and, in addition, it is capable of dissolving many elemental metals. Haldane made the point that various common water-related organic compounds have ammonia-related analogs; for instance the ammonia-related amine group (−NH2) is analogous to the water-related hydroxyl group (−OH).
Ammonia, like water, can either accept or donate an H+ ion. When ammonia accepts an H+, it forms the ammonium cation (NH4+), analogous to hydronium (H3O+). When it donates an H+ ion, it forms the amide anion (NH2−), analogous to the hydroxide anion (OH−). Compared to water, however, ammonia is more inclined to accept an H+ ion, and less inclined to donate one; it is a stronger nucleophile. Ammonia added to water functions as Arrhenius base: it increases the concentration of the anion hydroxide. Conversely, using a solvent system definition of acidity and basicity, water added to liquid ammonia functions as an acid, because it increases the concentration of the cation ammonium. The carbonyl group (C=O), which is much used in terrestrial biochemistry, would not be stable in ammonia solution, but the analogous imine group (C=NH) could be used instead.
However, ammonia has some problems as a basis for life. The hydrogen bonds between ammonia molecules are weaker than those in water, causing ammonia's heat of vaporization to be half that of water, its surface tension to be a third, and reducing its ability to concentrate non-polar molecules through a hydrophobic effect. Gerald Feinberg and Robert Shapiro have questioned whether ammonia could hold prebiotic molecules together well enough to allow the emergence of a self-reproducing system. Ammonia is also flammable in oxygen and could not exist sustainably in an environment suitable for aerobic metabolism.
A biosphere based on ammonia would likely exist at temperatures or air pressures that are extremely unusual in relation to life on Earth. Life on Earth usually exists at temperatures between the melting point and boiling point of water (0 °C to 100 °C) at normal atmospheric pressure. When also held to normal pressure, ammonia's melting and boiling points are about −78 °C and −33 °C respectively. Because chemical reactions generally proceed more slowly at lower temperatures, ammonia-based life existing in this set of conditions might metabolize more slowly and evolve more slowly than life on Earth. On the other hand, lower temperatures could also enable living systems to use chemical species that would be too unstable at Earth temperatures to be useful.
A set of conditions where ammonia is liquid at Earth-like temperatures would involve it being at a much higher pressure. For example, at 60 atm ammonia melts at about −77 °C and boils at about 98 °C.
Ammonia and ammonia–water mixtures remain liquid at temperatures far below the freezing point of pure water, so such biochemistries might be well suited to planets and moons orbiting outside the water-based habitability zone. Such conditions could exist, for example, under the surface of Saturn's largest moon Titan.
Methane and other hydrocarbons
Methane (CH4) is a simple hydrocarbon: that is, a compound of two of the most common elements in the cosmos: hydrogen and carbon. It has a cosmic abundance comparable with ammonia. Hydrocarbons could act as a solvent over a wide range of temperatures, but would lack polarity. Isaac Asimov, the biochemist and science fiction writer, suggested in 1981 that poly-lipids could form a substitute for proteins in a non-polar solvent such as methane. Lakes composed of a mixture of hydrocarbons, including methane and ethane, have been detected on the surface of Titan by the Cassini spacecraft.
There is debate about the effectiveness of methane and other hydrocarbons as a solvent for life compared to water or ammonia. Water is a stronger solvent than the hydrocarbons, enabling easier transport of substances in a cell. However, water is also more chemically reactive and can break down large organic molecules through hydrolysis. A life-form whose solvent was a hydrocarbon would not face the threat of its biomolecules being destroyed in this way. Also, the water molecule's tendency to form strong hydrogen bonds can interfere with internal hydrogen bonding in complex organic molecules. Life with a hydrocarbon solvent could make more use of hydrogen bonds within its biomolecules. Moreover, the strength of hydrogen bonds within biomolecules would be appropriate to a low-temperature biochemistry.
Astrobiologist Chris McKay has argued, on thermodynamic grounds, that if life does exist on Titan's surface, using hydrocarbons as a solvent, it is likely also to use the more complex hydrocarbons as an energy source by reacting them with hydrogen, reducing ethane and acetylene to methane. Possible evidence for this form of life on Titan was identified in 2010 by Darrell Strobel of Johns Hopkins University: a greater abundance of molecular hydrogen in the upper atmospheric layers of Titan compared to the lower layers, arguing for a downward diffusion at a rate of roughly 10^25 molecules per second and disappearance of hydrogen near Titan's surface. As Strobel noted, his findings were in line with the effects Chris McKay had predicted if methanogenic life-forms were present. The same year, another study showed low levels of acetylene on Titan's surface, which were interpreted by Chris McKay as consistent with the hypothesis of organisms reducing acetylene to methane. While restating the biological hypothesis, McKay cautioned that other explanations for the hydrogen and acetylene findings are to be considered more likely: the possibilities of yet unidentified physical or chemical processes (e.g. a non-living surface catalyst enabling acetylene to react with hydrogen), or flaws in the current models of material flow. He noted that even a non-biological catalyst effective at 95 K would in itself be a startling discovery.
Azotosome
A hypothetical cell membrane termed an azotosome, capable of functioning in liquid methane in Titan conditions, was computer-modeled in an article published in February 2015. Composed of acrylonitrile, a small molecule containing carbon, hydrogen, and nitrogen, it is predicted to have stability and flexibility in liquid methane comparable to that of a phospholipid bilayer (the type of cell membrane possessed by all life on Earth) in liquid water. An analysis of data obtained using the Atacama Large Millimeter / submillimeter Array (ALMA), completed in 2017, confirmed substantial amounts of acrylonitrile in Titan's atmosphere. Later studies questioned whether acrylonitrile would be able to self-assemble into azotosomes.
Hydrogen fluoride
Hydrogen fluoride (HF), like water, is a polar molecule, and due to its polarity it can dissolve many ionic compounds. At atmospheric pressure, its melting point is −83.6 °C, and its boiling point is 19.5 °C; the difference between the two is a little more than 100 K. HF also makes hydrogen bonds with its neighbor molecules, as do water and ammonia. It has been considered as a possible solvent for life by scientists such as Peter Sneath and Carl Sagan.
HF is dangerous to the systems of molecules that Earth-life is made of, but certain other organic compounds, such as paraffin waxes, are stable with it. Like water and ammonia, liquid hydrogen fluoride supports an acid–base chemistry. Using a solvent system definition of acidity and basicity, nitric acid functions as a base when it is added to liquid HF.
However, hydrogen fluoride is cosmically rare, unlike water, ammonia, and methane.
Hydrogen sulfide
Hydrogen sulfide is the closest chemical analog to water, but is less polar and is a weaker inorganic solvent. Hydrogen sulfide is quite plentiful on Jupiter's moon Io and may be in liquid form a short distance below the surface; astrobiologist Dirk Schulze-Makuch has suggested it as a possible solvent for life there. On a planet with hydrogen sulfide oceans, the source of the hydrogen sulfide could come from volcanoes, in which case it could be mixed in with a bit of hydrogen fluoride, which could help dissolve minerals. Hydrogen sulfide life might use a mixture of carbon monoxide and carbon dioxide as their carbon source. They might produce and live on sulfur monoxide, which is analogous to oxygen (O2). Hydrogen sulfide, like hydrogen cyanide and ammonia, suffers from the small temperature range where it is liquid, though that, like that of hydrogen cyanide and ammonia, increases with increasing pressure.
Silicon dioxide and silicates
Silicon dioxide, also known as silica and quartz, is very abundant in the universe and has a large temperature range where it is liquid. However, its melting point is approximately 1,700 °C, so it would be impossible to make organic compounds at that temperature, because all of them would decompose. Silicates are similar to silicon dioxide and some have lower melting points than silica. Feinberg and Shapiro have suggested that molten silicate rock could serve as a liquid medium for organisms with a chemistry based on silicon, oxygen, and other elements such as aluminium.
Other solvents or cosolvents
Other solvents sometimes proposed:
Supercritical fluids: supercritical carbon dioxide and supercritical hydrogen.
Simple hydrogen compounds: hydrogen chloride.
More complex compounds: sulfuric acid, formamide, methanol.
Very-low-temperature fluids: liquid nitrogen and hydrogen.
High-temperature liquids: sodium chloride.
Sulfuric acid in liquid form is strongly polar. It remains liquid at higher temperatures than water, its liquid range being 10 °C to 337 °C at a pressure of 1 atm, although above 300 °C it slowly decomposes. Sulfuric acid is known to be abundant in the clouds of Venus, in the form of aerosol droplets. In a biochemistry that used sulfuric acid as a solvent, the alkene group (C=C), with two carbon atoms joined by a double bond, could function analogously to the carbonyl group (C=O) in water-based biochemistry.
A proposal has been made that life on Mars may exist and be using a mixture of water and hydrogen peroxide as its solvent.
A 61.2% (by mass) mix of water and hydrogen peroxide has a freezing point of −56.5 °C and tends to super-cool rather than crystallize. It is also hygroscopic, an advantage in a water-scarce environment.
Supercritical carbon dioxide has been proposed as a candidate for alternative biochemistry due to its ability to selectively dissolve organic compounds and assist the functioning of enzymes and because "super-Earth"- or "super-Venus"-type planets with dense high-pressure atmospheres may be common.
Other speculations
Non-green photosynthesizers
Physicists have noted that, although photosynthesis on Earth generally involves green plants, a variety of other-colored plants could also support photosynthesis, essential for most life on Earth, and that other colors might be preferred in places that receive a different mix of stellar radiation than Earth.
These studies indicate that blue plants would be unlikely; however yellow or red plants may be relatively common.
Variable environments
Many Earth plants and animals undergo major biochemical changes during their life cycles as a response to changing environmental conditions, for example, by having a spore or hibernation state that can be sustained for years or even millennia between more active life stages. Thus, it would be biochemically possible to sustain life in environments that are only periodically consistent with life as we know it.
For example, frogs in cold climates can survive for extended periods of time with most of their body water in a frozen state, whereas desert frogs in Australia can become inactive and dehydrate in dry periods, losing up to 75% of their fluids, yet return to life by rapidly rehydrating in wet periods. Either type of frog would appear biochemically inactive (i.e. not living) during dormant periods to anyone lacking a sensitive means of detecting low levels of metabolism.
Alanine world and hypothetical alternatives
The genetic code may have evolved during the transition from the RNA world to a protein world. The Alanine World Hypothesis postulates that the evolution of the genetic code (the so-called GC phase) started with only four basic amino acids: alanine, glycine, proline and ornithine (now arginine). The evolution of the genetic code ended with 20 proteinogenic amino acids. From a chemical point of view, most of them are alanine derivatives particularly suitable for the construction of α-helices and β-sheets, the basic secondary structural elements of modern proteins. Direct evidence of this is an experimental procedure in molecular biology known as alanine scanning.
A hypothetical "Proline World" would create a possible alternative life with the genetic code based on the proline chemical scaffold as the protein backbone. Similarly, a "Glycine World" and "Ornithine World" are also conceivable, but nature has chosen none of them. Evolution of life with Proline, Glycine, or Ornithine as the basic structure for protein-like polymers (foldamers) would lead to parallel biological worlds. They would have morphologically radically different body plans and genetics from the living organisms of the known biosphere.
Nonplanetary life
Dusty plasma-based
In 2007, Vadim N. Tsytovich and colleagues proposed that lifelike behaviors could be exhibited by dust particles suspended in a plasma, under conditions that might exist in space. Computer models showed that, when the dust became charged, the particles could self-organize into microscopic helical structures, and the authors offer "a rough sketch of a possible model of...helical grain structure reproduction".
Cosmic necklace-based
In 2020, Luis A. Anchordoqui and Eugene M. Chudnovsky of the City University of New York hypothesized that cosmic necklace-based life composed of magnetic monopoles connected by cosmic strings could evolve inside stars. This would be achieved by a stretching of cosmic strings due to the star's intense gravity, thus allowing it to take on more complex forms and potentially form structures similar to the RNA and DNA structures found within carbon-based life. As such, it is theoretically possible that such beings could eventually become intelligent and construct a civilization using the power generated by the star's nuclear fusion. Because this would consume part of the star's energy output, the luminosity would also fall. For this reason, it is thought that such life might exist inside stars observed to be cooling faster or appearing dimmer than current cosmological models predict.
Life on a neutron star
Frank Drake suggested in 1973 that intelligent life could inhabit neutron stars. Physical models in 1973 implied that Drake's creatures would be microscopic.
Scientists who have published on this topic
Scientists who have considered possible alternatives to carbon-water biochemistry include:
J. B. S. Haldane (1892–1964), a geneticist noted for his work on abiogenesis.
V. Axel Firsoff (1910–1981), British astronomer.
Isaac Asimov (1920–1992), biochemist and science fiction writer.
Fred Hoyle (1915–2001), astronomer and science fiction writer.
Norman Horowitz (1915–2005), Caltech geneticist who devised the first experiments carried out to detect life on Mars.
George C. Pimentel (1922–1989), American chemist, University of California, Berkeley.
Peter Sneath (1923–2011), microbiologist, author of the book Planets and Life.
Gerald Feinberg (1933–1992), physicist and Robert Shapiro (1935–2011), chemist, co-authors of the book Life Beyond Earth.
Carl Sagan (1934–1996), astronomer, science popularizer, and SETI proponent.
Jonathan Lunine (born 1959), American planetary scientist and physicist.
Robert Freitas (born 1952), specialist in nano-technology and nano-medicine.
John Baross (born 1940), oceanographer and astrobiologist, who chaired a committee of scientists under the United States National Research Council that published a report on life's limiting conditions in 2007.
See also
Abiogenesis
Astrobiology
Carbon chauvinism
Carbon-based life
Earliest known life forms
Extraterrestrial life
Hachimoji DNA
Iron–sulfur world hypothesis
Life origination beyond planets
Nexus for Exoplanet System Science
Non-cellular life
Non-proteinogenic amino acids
Nucleic acid analogues
Planetary habitability
Shadow biosphere
References
Further reading
External links
Astronomy FAQ
Ammonia-based life
Silicon-based life
Astrobiology
Science fiction themes
Biological hypotheses
Scientific speculation
SAMSON | SAMSON (Software for Adaptive Modeling and Simulation Of Nanosystems) is a computer software platform for molecular design being developed by OneAngstrom and previously by the NANO-D group at the French Institute for Research in Computer Science and Automation (INRIA).
SAMSON has a modular architecture that makes it suitable for different domains of nanoscience, including material science, life science, and drug design.
SAMSON Elements
SAMSON Elements are modules for SAMSON, developed with the SAMSON software development kit (SDK). SAMSON Elements help users perform tasks in SAMSON, including building new models, performing calculations, running interactive or offline simulations, and visualizing and interpreting results.
SAMSON Elements may contain different class types, including for example:
Apps – generic classes with a graphical user interface that extend the functions of SAMSON
Editors – classes that receive user interaction events to provide editing functions (e.g., model generation, structure deformation, etc.)
Models – classes that describe properties of nanosystems (see below)
Parsers – classes that may parse files to add content to SAMSON's data graph (see below)
SAMSON Elements expose their functions to SAMSON and other Elements through an introspection mechanism, and may thus be integrated and pipelined.
Modeling and simulation
SAMSON represents nanosystems using five categories of models:
Structural models – describe geometry and topology
Visual models – provide graphical representations
Dynamical models – describe dynamical degrees of freedom
Interaction models – describe energies and forces
Property models – describe traits that do not enter in the first four model categories
Simulators (potentially interactive ones) are used to build physically-based models, and predict properties.
Data graph
All models and simulators are integrated into a hierarchical, layered structure that form the SAMSON data graph. SAMSON Elements interact with each other and with the data graph to perform modeling and simulation tasks. A signals and slots mechanism makes it possible for data graph nodes to send events when they are updated, which makes it possible to develop e.g., adaptive simulation algorithms.
Node specification language
SAMSON has a node specification language (NSL) that users may employ to select data graph nodes based on their properties. Example NSL expressions include:
Hydrogen – select all hydrogens (short version: H)
atom.chainID > 2 – select all atoms with a chain ID strictly larger than 2 (short version: a.ci > 2)
Carbon in node.selected – select all carbons in the current selection (short version: C in n.s)
bond.order > 1.5 – select all bonds with order strictly larger than 1.5 (short version: b.o > 1.5)
node.type backbone – select all backbone nodes (short version: n.t bb)
O in node.type sidechain – select all oxygens in sidechain nodes (short version: O in n.t sc)
"CA" within 5A of S – select all nodes named CA that are within 5 angstrom of any sulfur atom (short version: "CA" w 5A of S)
node.type residue beyond 5A of node.selected – select all residue nodes beyond 5 angstrom of the current selection (short version: n.t r b 5A of n.s)
residue.secondaryStructure helix – select residue nodes in alpha helices (short version: r.ss h)
node.type sidechain having S – select sidechain nodes that have at least one sulfur atom (short version: n.t sc h S)
H linking O – select all hydrogens bonded to oxygen atoms (short version: H l O)
C or H – select atoms that are carbons or hydrogens
Features
SAMSON is developed in C++ and implements many features to ease developing SAMSON Elements, including:
Managed memory
Signals and slots
Serialization
Multilevel undo-redo
Introspection
Referencing
Unit system
Functors and predicate logic
SAMSON Element source code generators
SAMSON Connect
SAMSON, SAMSON Elements and the SAMSON Software Development Kit are distributed via the SAMSON Connect website. The site acts as a repository for the SAMSON Elements being uploaded by developers, and users of SAMSON choose and add Elements from SAMSON Connect.
See also
Comparison of software for molecular mechanics modeling
Gabedit
Jmol
Molden
Molecular design software
Molekel
PyMol
RasMol
UCSF Chimera
Visual Molecular Dynamics (VMD)
References
Computational chemistry software
Nanotechnology
Simulation software
Molecular orbital theory | In chemistry, molecular orbital theory (MO theory or MOT) is a method for describing the electronic structure of molecules using quantum mechanics. It was proposed early in the 20th century. MO theory explains the paramagnetic nature of O2, which valence bond theory cannot explain.
In molecular orbital theory, electrons in a molecule are not assigned to individual chemical bonds between atoms, but are treated as moving under the influence of the atomic nuclei in the whole molecule. Quantum mechanics describes the spatial and energetic properties of electrons as molecular orbitals that surround two or more atoms in a molecule and contain valence electrons between atoms.
Molecular orbital theory revolutionized the study of chemical bonding by approximating the states of bonded electrons—the molecular orbitals—as linear combinations of atomic orbitals (LCAO). These approximations are made by applying the density functional theory (DFT) or Hartree–Fock (HF) models to the Schrödinger equation.
Molecular orbital theory and valence bond theory are the foundational theories of quantum chemistry.
Linear combination of atomic orbitals (LCAO) method
In the LCAO method, each molecule has a set of molecular orbitals. It is assumed that the molecular orbital wave function ψj can be written as a simple weighted sum of the n constituent atomic orbitals χi, according to the following equation:
ψj = c1j χ1 + c2j χ2 + ... + cnj χn = Σi cij χi
One may determine cij coefficients numerically by substituting this equation into the Schrödinger equation and applying the variational principle. The variational principle is a mathematical technique used in quantum mechanics to build up the coefficients of each atomic orbital basis. A larger coefficient means that the orbital basis is composed more of that particular contributing atomic orbital—hence, the molecular orbital is best characterized by that type. This method of quantifying orbital contribution as a linear combination of atomic orbitals is used in computational chemistry. An additional unitary transformation can be applied on the system to accelerate the convergence in some computational schemes. Molecular orbital theory was seen as a competitor to valence bond theory in the 1930s, before it was realized that the two methods are closely related and that when extended they become equivalent.
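As a minimal illustration of the LCAO procedure, the sketch below solves the secular (generalized eigenvalue) problem HC = SCE for a two-orbital basis, such as the two 1s orbitals of H2. The Coulomb integral, resonance integral, and overlap values are arbitrary assumptions rather than quantities computed from the Schrödinger equation.

# Illustrative sketch: LCAO secular equations for a two-orbital model.
# alpha, beta and overlap below are assumed parameters, not ab initio values.
import numpy as np
from scipy.linalg import eigh

alpha, beta, overlap = -13.6, -6.0, 0.25   # assumed values (eV, eV, unitless)

H = np.array([[alpha, beta],
              [beta,  alpha]])             # Hamiltonian in the AO basis
S = np.array([[1.0,     overlap],
              [overlap, 1.0]])             # AO overlap matrix

# Generalized eigenvalue problem H C = S C E (Roothaan-type secular problem)
energies, coeffs = eigh(H, S)

for E, c in zip(energies, coeffs.T):
    print(f"E = {E:+.3f} eV, coefficients = {np.round(c, 3)}")
# Lowest MO: symmetric (bonding) combination, E = (alpha + beta)/(1 + overlap)
# Highest MO: antisymmetric (antibonding) combination, E = (alpha - beta)/(1 - overlap)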
There are three main requirements for atomic orbital combinations to be suitable as approximate molecular orbitals.
The atomic orbital combination must have the correct symmetry, which means that it must belong to the correct irreducible representation of the molecular symmetry group. Using symmetry adapted linear combinations, or SALCs, molecular orbitals of the correct symmetry can be formed.
Atomic orbitals must also overlap within space. They cannot combine to form molecular orbitals if they are too far away from one another.
Atomic orbitals must be at similar energy levels to combine into molecular orbitals: if the energy difference is large, the stabilization gained when the molecular orbitals form is small, so there is not enough reduction in the energy of the electrons to produce significant bonding.
History
Molecular orbital theory was developed in the years after valence bond theory had been established (1927), primarily through the efforts of Friedrich Hund, Robert Mulliken, John C. Slater, and John Lennard-Jones. MO theory was originally called the Hund-Mulliken theory. According to physicist and physical chemist Erich Hückel, the first quantitative use of molecular orbital theory was the 1929 paper of Lennard-Jones. This paper predicted a triplet ground state for the dioxygen molecule, which explained its paramagnetism before valence bond theory came up with its own explanation in 1931. The word orbital was introduced by Mulliken in 1932. By 1933, the molecular orbital theory had been accepted as a valid and useful theory.
Erich Hückel applied molecular orbital theory to unsaturated hydrocarbon molecules starting in 1931 with his Hückel molecular orbital (HMO) method for the determination of MO energies for pi electrons, which he applied to conjugated and aromatic hydrocarbons. This method provided an explanation of the stability of molecules with six pi-electrons such as benzene.
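A small sketch of the Hückel treatment of benzene's π system is given below. It uses only the ring connectivity, with energies written as E = α + xβ, so no numerical values of α and β are assumed (both are negative for bound electrons).

# Illustrative sketch: Hückel molecular orbital energies for benzene's pi system.
import numpy as np

n = 6  # six carbon atoms in the ring

# Hückel matrix in units of beta, with alpha taken as the energy zero:
# entry (i, j) is 1 if atoms i and j are bonded neighbours in the ring.
huckel = np.zeros((n, n))
for i in range(n):
    huckel[i, (i + 1) % n] = 1.0
    huckel[(i + 1) % n, i] = 1.0

x = np.linalg.eigvalsh(huckel)        # eigenvalues x, so E = alpha + x*beta
print(np.round(sorted(x, reverse=True), 3))
# -> [2, 1, 1, -1, -1, -2]: E = alpha+2*beta, alpha+beta (doubly degenerate),
#    alpha-beta (doubly degenerate), alpha-2*beta.  Since beta < 0, the six pi
#    electrons fill the three orbitals with x > 0, giving a total pi energy of
#    2*(alpha+2*beta) + 4*(alpha+beta) = 6*alpha + 8*beta.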
The first accurate calculation of a molecular orbital wavefunction was that made by Charles Coulson in 1938 on the hydrogen molecule. By 1950, molecular orbitals were completely defined as eigenfunctions (wave functions) of the self-consistent field Hamiltonian and it was at this point that molecular orbital theory became fully rigorous and consistent. This rigorous approach is known as the Hartree–Fock method for molecules although it had its origins in calculations on atoms. In calculations on molecules, the molecular orbitals are expanded in terms of an atomic orbital basis set, leading to the Roothaan equations. This led to the development of many ab initio quantum chemistry methods. In parallel, molecular orbital theory was applied in a more approximate manner using some empirically derived parameters in methods now known as semi-empirical quantum chemistry methods.
The success of Molecular Orbital Theory also spawned ligand field theory, which was developed during the 1930s and 1940s as an alternative to crystal field theory.
Types of orbitals
Molecular orbital (MO) theory uses a linear combination of atomic orbitals (LCAO) to represent molecular orbitals resulting from bonds between atoms. These are often divided into three types, bonding, antibonding, and non-bonding. A bonding orbital concentrates electron density in the region between a given pair of atoms, so that its electron density will tend to attract each of the two nuclei toward the other and hold the two atoms together. An anti-bonding orbital concentrates electron density "behind" each nucleus (i.e. on the side of each atom which is farthest from the other atom), and so tends to pull each of the two nuclei away from the other and actually weaken the bond between the two nuclei. Electrons in non-bonding orbitals tend to be associated with atomic orbitals that do not interact positively or negatively with one another, and electrons in these orbitals neither contribute to nor detract from bond strength.
Molecular orbitals are further divided according to the types of atomic orbitals they are formed from. Chemical substances will form bonding interactions if their orbitals become lower in energy when they interact with each other. Different bonding orbitals are distinguished that differ by electron configuration (electron cloud shape) and by energy levels.
The molecular orbitals of a molecule can be illustrated in molecular orbital diagrams.
Common bonding orbitals are sigma (σ) orbitals which are symmetric about the bond axis and pi (π) orbitals with a nodal plane along the bond axis. Less common are delta (δ) orbitals and phi (φ) orbitals with two and three nodal planes respectively along the bond axis. Antibonding orbitals are signified by the addition of an asterisk. For example, an antibonding pi orbital may be shown as π*.
Bond order
Bond order is the number of chemical bonds between a pair of atoms. The bond order of a molecule can be calculated by subtracting the number of electrons in anti-bonding orbitals from the number of electrons in bonding orbitals and dividing the result by two. A molecule is expected to be stable if it has a bond order larger than zero. It is adequate to consider only the valence electrons when determining the bond order: for atoms with principal quantum number n > 1, the bonding and anti-bonding MOs derived from the core 1s AOs hold equal numbers of electrons, so the core electrons make no net contribution to the bond order.
Bond Order = 1/2 [(Number of electrons in bonding MO) - (Number of electrons in anti-bonding MO)]
From the bond order, one can predict whether a bond between two atoms will form or not. Consider, for example, the He2 molecule: from the molecular orbital diagram, the bond order = 1/2 × (2 − 2) = 0, which means that no covalent bond forms between two He atoms, in agreement with experiment. The very weakly bound He2 dimer can nevertheless be detected in a molecular beam at very low temperature and pressure, and has a binding energy of approximately 0.001 J/mol.
The strength of a bond is also reflected in the bond order (BO). For example:
H2: BO = (2 − 0)/2 = 1; bond energy = 436 kJ/mol.
H2+: BO = (1 − 0)/2 = 1/2; bond energy = 171 kJ/mol.
Because the bond order of H2+ is smaller than that of H2, its bond should be weaker, which is observed experimentally and reflected in the bond energy.
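The bond-order bookkeeping above is simple enough to express directly; the following sketch applies the definition to the examples just discussed.

# Illustrative sketch: bond order from bonding and anti-bonding electron counts.
def bond_order(n_bonding: int, n_antibonding: int) -> float:
    return 0.5 * (n_bonding - n_antibonding)

# Valence-electron counts for the examples in the text:
print(bond_order(2, 0))   # H2   -> 1.0
print(bond_order(1, 0))   # H2+  -> 0.5
print(bond_order(2, 2))   # He2  -> 0.0 (no covalent bond expected)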
Overview
MOT provides a global, delocalized perspective on chemical bonding. In MO theory, any electron in a molecule may be found anywhere in the molecule, since quantum conditions allow electrons to travel under the influence of an arbitrarily large number of nuclei, as long as they are in eigenstates permitted by certain quantum rules. Thus, when excited with the requisite amount of energy through high-frequency light or other means, electrons can transition to higher-energy molecular orbitals. For instance, in the simple case of a hydrogen diatomic molecule, promotion of a single electron from a bonding orbital to an antibonding orbital can occur under UV radiation. This promotion weakens the bond between the two hydrogen atoms and can lead to photodissociation—the breaking of a chemical bond due to the absorption of light.
Molecular orbital theory is used to interpret ultraviolet-visible spectroscopy (UV-VIS). Changes to the electronic structure of molecules can be seen by the absorbance of light at specific wavelengths. Assignments can be made to these signals indicated by the transition of electrons moving from one orbital at a lower energy to a higher energy orbital. The molecular orbital diagram for the final state describes the electronic nature of the molecule in an excited state.
Although in MO theory some molecular orbitals may hold electrons that are more localized between specific pairs of molecular atoms, other orbitals may hold electrons that are spread more uniformly over the molecule. Thus, overall, bonding is far more delocalized in MO theory, which makes it more applicable to resonant molecules that have equivalent non-integer bond orders than valence bond theory. This makes MO theory more useful for the description of extended systems.
Robert S. Mulliken, who actively participated in the advent of molecular orbital theory, considers each molecule to be a self-sufficient unit. He asserts in his article: "Attempts to regard a molecule as consisting of specific atomic or ionic units held together by discrete numbers of bonding electrons or electron-pairs are considered as more or less meaningless, except as an approximation in special cases, or as a method of calculation [...]. A molecule is here regarded as a set of nuclei, around each of which is grouped an electron configuration closely similar to that of a free atom in an external field, except that the outer parts of the electron configurations surrounding each nucleus usually belong, in part, jointly to two or more nuclei."
An example is the MO description of benzene, C6H6, which is an aromatic hexagonal ring of six carbon atoms and three double bonds. In this molecule, 24 of the 30 total valence bonding electrons (24 coming from carbon atoms and 6 coming from hydrogen atoms) are located in 12 σ (sigma) bonding orbitals, which are located mostly between pairs of atoms (C-C or C-H), similarly to the electrons in the valence bond description. However, in benzene the remaining six bonding electrons are located in three π (pi) molecular bonding orbitals that are delocalized around the ring. Two of these electrons are in an MO that has equal orbital contributions from all six atoms. The other four electrons are in orbitals with vertical nodes at right angles to each other. As in the VB theory, all of these six delocalized π electrons reside in a larger space that exists above and below the ring plane. All carbon-carbon bonds in benzene are chemically equivalent. In MO theory this is a direct consequence of the fact that the three molecular π orbitals combine and evenly spread the extra six electrons over six carbon atoms.
In molecules such as methane, CH4, the eight valence electrons are found in four MOs that are spread out over all five atoms. It is possible to transform the MOs into four localized sp3 orbitals. Linus Pauling, in 1931, hybridized the carbon 2s and 2p orbitals so that they pointed directly at the hydrogen 1s basis functions and featured maximal overlap. However, the delocalized MO description is more appropriate for predicting ionization energies and the positions of spectral absorption bands. When methane is ionized, a single electron is taken from the valence MOs, which can come from the s bonding or the triply degenerate p bonding levels, yielding two ionization energies. In comparison, the explanation in valence bond theory is more complicated. When one electron is removed from an sp3 orbital, resonance is invoked between four valence bond structures, each of which has a single one-electron bond and three two-electron bonds. Triply degenerate T2 and A1 ionized states (CH4+) are produced from different linear combinations of these four structures. The difference in energy between the ionized and ground state gives the two ionization energies.
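The two valence ionization energies of methane can be illustrated with a small electronic-structure calculation. The sketch below assumes the PySCF package is available and uses Koopmans' theorem (ionization energy approximated as the negative of an occupied orbital energy); the geometry and basis set are illustrative choices, and any quantum chemistry code that exposes canonical MO energies would serve equally well.

```python
# Minimal sketch (assumes the PySCF package is installed): compute canonical
# Hartree-Fock MO energies of methane and read off the two valence ionization
# energies via Koopmans' theorem (IE ~ -orbital energy).
from pyscf import gto, scf

# Idealized tetrahedral CH4 geometry in Angstrom (C-H ~ 1.09 A); illustrative.
mol = gto.M(
    atom="""C  0.000  0.000  0.000
            H  0.629  0.629  0.629
            H -0.629 -0.629  0.629
            H -0.629  0.629 -0.629
            H  0.629 -0.629 -0.629""",
    basis="6-31g",
)
mf = scf.RHF(mol)
mf.kernel()

hartree_to_ev = 27.2114
occupied = mf.mo_energy[: mol.nelectron // 2]
# occupied[0] is the carbon 1s core level; occupied[1] is the a1 valence MO,
# and occupied[2:5] are the triply degenerate t2 valence MOs.
for i, e in enumerate(occupied):
    print(f"MO {i}: {e:.4f} Ha, Koopmans IE ~ {-e * hartree_to_ev:.1f} eV")
```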
As in benzene, in substances such as beta carotene, chlorophyll, or heme, some electrons in the π orbitals are spread out in molecular orbitals over long distances in a molecule, resulting in light absorption at lower energies (the visible spectrum), which accounts for the characteristic colours of these substances. This and other spectroscopic data for molecules are well explained in MO theory, with an emphasis on electronic states associated with multicenter orbitals, including mixing of orbitals premised on principles of orbital symmetry matching. The same MO principles also naturally explain some electrical phenomena, such as the high electrical conductivity in the planar direction of the hexagonal atomic sheets that exist in graphite, which results from continuous band overlap of half-filled p orbitals. MO theory recognizes that some electrons in the graphite atomic sheets are completely delocalized over arbitrary distances and reside in very large molecular orbitals that cover an entire graphite sheet; these electrons are as free to move, and therefore to conduct electricity in the sheet plane, as if they resided in a metal.
See also
Cis effect
Configuration interaction
Coupled cluster
Frontier molecular orbital theory
Ligand field theory (MO theory for transition metal complexes)
Møller–Plesset perturbation theory
Quantum chemistry computer programs
Semi-empirical quantum chemistry methods
Valence bond theory
References
External links
Molecular Orbital Theory - Purdue University
Molecular Orbital Theory - Sparknotes
Molecular Orbital Theory - Mark Bishop's Chemistry Site
Introduction to MO Theory - Queen Mary, London University
Molecular Orbital Theory - a related terms table
An introduction to Molecular Group Theory - Oxford University
Chemistry theories
Quantum chemistry
Chemical bonding
General chemistry
BALL
BALL (Biochemical Algorithms Library) is a C++ class framework and set of algorithms and data structures for molecular modelling and computational structural bioinformatics, a Python interface to this library, and a graphical user interface to BALL, the molecule viewer BALLView.
BALL has evolved from a commercial product into free-of-charge open-source software licensed under the GNU Lesser General Public License (LGPL). BALLView is licensed under the GNU General Public License (GPL).
BALL and BALLView have been ported to the operating systems Linux, macOS, Solaris, and Windows.
The molecule viewer BALLView, also developed by the BALL project team, is a C++ application built on BALL and Qt, with OpenGL and the real-time ray tracer RTFact as render back-ends. With either back-end, BALLView offers three-dimensional and stereoscopic visualization in several different modes and allows the algorithms of the BALL library to be applied directly to the loaded structures via its graphical user interface.
The BALL project is developed and maintained by groups at Saarland University, Mainz University, and University of Tübingen. Both the library and the viewer are used for education and research. BALL packages have been made available in the Debian project.
Key features
Interactive molecular drawing and conformational editing
Reading and writing of molecular file formats (PDB, MOL2, MOL, HIN, XYZ, KCF, SD, AC)
Reading secondary data sources e.g. (DCD, DSN6, GAMESS, JCAMP, SCWRL, TRR)
Generation of molecules from SMILES expressions and matching of SMILES and SMARTS expressions against molecules
Geometry optimization
Minimizer and molecular dynamics classes
Support for force fields (MMFF94, AMBER, CHARMM) for scoring and energy minimization
Python interface and scripting functionality
Plugin infrastructure (3D Space-Navigator)
Molecular graphics (3D, stereoscopic viewing)
Comprehensive documentation (wiki, code snippets, online class documentation, bug tracker)
Comprehensive regression tests
BALL project format for presentations and collaborative data exchange
NMR
Editable shortcuts
BALL library
BALL is a development framework for structural bioinformatics. Using BALL as a programming toolbox can greatly reduce application development time and helps ensure stability and correctness, because often error-prone reimplementation of complex algorithms is replaced with calls into a library that has been tested by many developers.
File import-export
BALL supports molecular file formats including PDB, MOL2, MOL, HIN, XYZ, KCF, SD, AC, and secondary data sources like DCD, DSN6, GAMESS, JCAMP, SCWRL, and TRR. Molecules can also be created using BALL's peptide builder, or based on SMILES expressions.
General structure analysis
Further preparation and structure validation is enabled by, e.g., Kekuliser-, Aromaticity-, Bondorder-, HBond-, and Secondary Structure processors. A Fragment Library automatically infers missing information, e.g., a protein's hydrogens or bonds. A Rotamer Library allows determining, assigning, and switching between a protein's most likely side chain conformations. BALL's Transformation processors guide generation of valid 3D structures. Its selection mechanism makes it possible to specify parts of a molecule by simple expressions (SMILES, SMARTS, element types). These selections can be used by all modelling classes, such as the processors or force fields.
Molecular mechanics
Implementations of the popular force fields CHARMM, Amber, and MMFF94 can be combined with BALL's minimizer and simulation classes (steepest descent, conjugate gradient, L-BFGS, and shifted L-VMM).
Python interface
SIP is used to automatically create Python classes for all relevant C++ classes in the BALL library to allow for the same class interfaces. The Python classes have the same name as the C++ classes, to aid in porting code that uses BALL from C++ to Python, and vice versa.
The Python interface is fully integrated into the viewer application BALLView and thus allows for direct visualization of results computed by python scripts. Also, BALLView can be operated from the scripting interface and recurring tasks can be automated.
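As a rough illustration of these mirrored interfaces, the hedged sketch below reads a PDB file, completes it with the fragment database, and scores it with the AMBER force field from Python. The class names (System, PDBFile, FragmentDB, AmberFF) follow the C++ classes described above, but the exact method names, call signatures, and the input file are assumptions that should be checked against the BALL class documentation.

```python
# Hedged sketch of the BALL Python interface; class and method names mirror
# the C++ library (System, PDBFile, FragmentDB, AmberFF) but exact signatures
# are assumptions -- consult the BALL class documentation before use.
from BALL import System, PDBFile, FragmentDB, AmberFF

system = System()
infile = PDBFile("protein.pdb")   # hypothetical input file
infile.read(system)
infile.close()

# Infer missing names, hydrogens, and bonds from the fragment database
# (assumed processor-style API).
fragment_db = FragmentDB("")
system.apply(fragment_db.normalize_names)
system.apply(fragment_db.add_hydrogens)
system.apply(fragment_db.build_bonds)

# Score the structure with the AMBER force field (assumed API).
force_field = AmberFF()
force_field.setup(system)
force_field.updateEnergy()
print("total AMBER energy:", force_field.getEnergy())
```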
BALLView
BALLView is BALL's standalone molecule modeling and visualization application. It is also a framework to develop molecular visualization functions.
BALLView offers standard visualization models for atoms, bonds, and surfaces, as well as grid-based visualization of, e.g., electrostatic potentials. A large part of the functionality of the BALL library can be applied directly to the loaded molecule in BALLView. BALLView supports several visualization and input methods, such as different stereo modes, the 3D Space Navigator, and VRPN-supported input devices.
At CeBIT 2009, BALLView was prominently presented as the first complete integration of real-time ray tracing technology into a molecular viewer and modeling tool.
See also
List of molecular graphics systems
List of free and open-source software packages
Comparison of software for molecular mechanics modeling
Molecular design software
Molecular graphics
Molecule editor
References
Further reading
External links
BALLView web page
Code Library
Gallery
Tutorials
C++ libraries
Computational chemistry software
Molecular modelling software
Chemistry software for Linux
Science software that uses Qt
Articles with example C++ code
Structural biology
Structural biology, as defined by the Journal of Structural Biology, deals with structural analysis of living material (formed, composed of, and/or maintained and refined by living cells) at every level of organization.
Early structural biologists throughout the 19th and early 20th centuries could study structures only to the limit of the naked eye's visual acuity, aided by magnifying glasses and light microscopes. In the 20th century, a variety of experimental techniques were developed to examine the 3D structures of biological molecules. The most prominent techniques are X-ray crystallography, nuclear magnetic resonance, and electron microscopy. The discovery of X-rays and their application to protein crystals revolutionized structural biology, as scientists could now obtain the three-dimensional structures of biological molecules in atomic detail. Likewise, NMR spectroscopy allowed information about protein structure and dynamics to be obtained. Finally, in the 21st century, electron microscopy saw a drastic revolution with the development of more coherent electron sources, aberration correction for electron microscopes, and reconstruction software that enabled the successful implementation of high-resolution cryo-electron microscopy, thereby permitting the study of individual proteins and molecular complexes in three dimensions at angstrom resolution.
With the development of these three techniques, the field of structural biology expanded and also became a branch of molecular biology, biochemistry, and biophysics concerned with the molecular structure of biological macromolecules (especially proteins, made up of amino acids, RNA or DNA, made up of nucleotides, and membranes, made up of lipids), how they acquire the structures they have, and how alterations in their structures affect their function. This subject is of great interest to biologists because macromolecules carry out most of the functions of cells, and it is only by coiling into specific three-dimensional shapes that they are able to perform these functions. This architecture, the "tertiary structure" of molecules, depends in a complicated way on each molecule's basic composition, or "primary structure." At lower resolutions, tools such as FIB-SEM tomography have allowed for greater understanding of cells and their organelles in 3-dimensions, and how each hierarchical level of various extracellular matrices contributes to function (for example in bone). In the past few years it has also become possible to predict highly accurate physical molecular models to complement the experimental study of biological structures. Computational techniques such as molecular dynamics simulations can be used in conjunction with empirical structure determination strategies to extend and study protein structure, conformation and function.
History
In 1912, Max von Laue directed X-rays at crystallized copper sulfate, generating a diffraction pattern. These experiments led to the development of X-ray crystallography and its use in exploring biological structures. In 1951, Rosalind Franklin and Maurice Wilkins used X-ray diffraction patterns to capture the first image of deoxyribonucleic acid (DNA). Francis Crick and James Watson modeled the double-helical structure of DNA using this same technique in 1953 and received the Nobel Prize in Physiology or Medicine along with Wilkins in 1962.
Pepsin crystals were the first protein crystals to be studied by X-ray diffraction; the first diffraction photographs of crystalline pepsin were obtained by John Desmond Bernal and Dorothy Crowfoot in 1934. The first tertiary protein structure, that of myoglobin, was published in 1958 by John Kendrew. During this time, modeling of protein structures was done using balsa wood or wire models. With the invention of modeling software such as CCP4 in the late 1970s, modeling is now done with computer assistance. Recent developments in the field have included the generation of X-ray free-electron lasers, allowing analysis of the dynamics and motion of biological molecules, and the use of structural biology in assisting synthetic biology.
In the late 1930s and early 1940s, the combination of work done by Isidor Rabi, Felix Bloch, and Edward Mills Purcell led to the development of nuclear magnetic resonance (NMR). Currently, solid-state NMR is widely used in the field of structural biology to determine the structure and dynamic nature of proteins (protein NMR).
In 1990, Richard Henderson produced the first three-dimensional, high resolution image of bacteriorhodopsin using cryogenic electron microscopy (cryo-EM). Since then, cryo-EM has emerged as an increasingly popular technique to determine three-dimensional, high resolution structures of biological images.
More recently, computational methods have been developed to model and study biological structures. For example, molecular dynamics (MD) is commonly used to analyze the dynamic movements of biological molecules. In 1975, the first simulation of a biological folding process using MD was published in Nature. Recently, protein structure prediction was significantly improved by a new machine learning method called AlphaFold. Some claim that computational approaches are starting to lead the field of structural biology research.
Techniques
Biomolecules are too small to see in detail even with the most advanced light microscopes. The methods that structural biologists use to determine their structures generally involve measurements on vast numbers of identical molecules at the same time. These methods include:
Mass spectrometry
Macromolecular crystallography
Neutron diffraction
Proteolysis
Nuclear magnetic resonance spectroscopy of proteins (NMR)
Electron paramagnetic resonance (EPR)
Cryogenic electron microscopy (cryoEM)
Electron crystallography and microcrystal electron diffraction
Multiangle light scattering
Small angle scattering
Ultrafast laser spectroscopy
Anisotropic terahertz microspectroscopy
Two-dimensional infrared spectroscopy
Dual-polarization interferometry and circular dichroism
Most often researchers use them to study the "native states" of macromolecules. But variations on these methods are also used to watch nascent or denatured molecules assume or reassume their native states. See protein folding.
A third approach that structural biologists take to understanding structure is bioinformatics: looking for patterns among the diverse sequences that give rise to particular shapes. Researchers can often deduce aspects of the structure of integral membrane proteins based on the membrane topology predicted by hydrophobicity analysis. See protein structure prediction.
Applications
Structural biologists have made significant contributions towards understanding the molecular components and mechanisms underlying human diseases. For example, cryo-EM and ssNMR have been used to study the aggregation of amyloid fibrils, which are associated with Alzheimer's disease, Parkinson's disease, and type II diabetes. In addition to amyloid proteins, scientists have used cryo-EM to produce high resolution models of tau filaments in the brain of Alzheimer's patients which may help develop better treatments in the future. Structural biology tools can also be used to explain interactions between pathogens and hosts. For example, structural biology tools have enabled virologists to understand how the HIV envelope allows the virus to evade human immune responses.
Structural biology is also an important component of drug discovery. Scientists can identify targets using genomics, study those targets using structural biology, and develop drugs that are suited for those targets. Specifically, ligand-NMR, mass spectrometry, and X-ray crystallography are commonly used techniques in the drug discovery process. For example, researchers have used structural biology to better understand Met, a protein encoded by a protooncogene that is an important drug target in cancer. Similar research has been conducted for HIV targets to treat people with AIDS. Researchers are also developing new antimicrobials for mycobacterial infections using structure-driven drug discovery.
See also
Primary structure
Secondary structure
Tertiary structure
Quaternary structure
Structural domain
Structural motif
Protein subunit
Molecular model
Cooperativity
Chaperonin
Structural genomics
Stereochemistry
Resolution (electron density)
Proteopedia The collaborative, 3D encyclopedia of proteins and other molecules.
Protein structure prediction
References
External links
Nature: Structural & Molecular Biology magazine website
Journal of Structural Biology
Structural Biology - The Virtual Library of Biochemistry, Molecular Biology and Cell Biology
Structural Biology in Europe
Learning Crystallography
Molecular biology
Protein structure
Biophysics
Systematic name
A systematic name is a name given in a systematic way to one unique group, organism, object or chemical substance, out of a specific population or collection. Systematic names are usually part of a nomenclature.
A semisystematic name or semitrivial name is a name that has at least one systematic part and at least one trivial part, such as a chemical vernacular name.
Creating systematic names can be as simple as assigning a prefix or a number to each object (in which case they are a type of numbering scheme), or as complex as encoding the complete structure of the object in the name. Many systems combine some information about the named object with an extra sequence number to make it into a unique identifier.
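As a minimal illustration of such a scheme, the sketch below combines a class prefix (information about the named object) with a running sequence number to produce unique identifiers; the prefix, separator, and padding width are arbitrary choices made only for the example.

```python
# Minimal sketch of a systematic numbering scheme: combine a class prefix
# with a sequence number to form a unique identifier. The prefix "MIN" and
# the 4-digit padding are illustrative choices, not any real standard.
from itertools import count

def make_namer(prefix: str, width: int = 4):
    counter = count(1)
    def next_name() -> str:
        return f"{prefix}-{next(counter):0{width}d}"
    return next_name

name_mineral = make_namer("MIN")
print(name_mineral())  # MIN-0001
print(name_mineral())  # MIN-0002
```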
Systematic names often co-exist with earlier common names assigned before the creation of any systematic naming system. For example, many common chemicals are still referred to by their common or trivial names, even by chemists.
In chemistry
In chemistry, a systematic name describes the chemical structure of a chemical substance, thus giving some information about its chemical properties.
The Compendium of Chemical Terminology published by the IUPAC defines systematic name as "a name composed wholly of specially coined or selected syllables, with or without numerical prefixes; e.g. pentane, oxazole." However, when trivial names have become part of chemical nomenclature, they can be the systematic name of a substance or part of it. Examples for some systematic names that have trivial origins are benzene (cyclohexatriene) or glycerol (trihydroxypropane).
Examples
There are standardized systematic or semi-systematic names for:
Chemical elements (following IUPAC guidelines)
Chemical nomenclature (following IUPAC guidelines)
Binomial nomenclature, initiated by Carl Linnaeus
Astronomical objects and entities (administered by the International Astronomical Union)
Genes (following HUGO Gene Nomenclature Committee procedures)
Proteins
Minerals (administered by the IMA)
Monoclonal antibodies
See also
Biological classification
Chemical element
Chemical compound
International scientific vocabulary
List of Latin and Greek words commonly used in systematic names
Name
Namespace
Naming convention
Numbering scheme
Retained name
References
External links
Naming organic compounds (archived)
Selected pages from IUPAC rules for naming inorganic compounds
Naming conventions
Taphonomy
Taphonomy is the study of how organisms decay and become fossilized or preserved in the paleontological record. The term taphonomy (from Greek táphos, 'burial', and nomos, 'law') was introduced to paleontology in 1940 by Soviet scientist Ivan Efremov to describe the study of the transition of remains, parts, or products of organisms from the biosphere to the lithosphere.
The term taphomorph is used to describe fossil structures that represent poorly-preserved, deteriorated remains of a mixture of taxonomic groups, rather than of a single one.
Description
Taphonomic phenomena are grouped into two phases: biostratinomy, events that occur between death of the organism and the burial; and diagenesis, events that occur after the burial. Since Efremov's definition, taphonomy has expanded to include the fossilization of organic and inorganic materials through both cultural and environmental influences. Taphonomy is now most widely defined as the study of what happens to objects after they leave the biosphere (living contexts), enter the lithosphere (buried contexts), and are subsequently recovered and studied.
This is a multidisciplinary concept and is used in slightly different contexts throughout different fields of study. Fields that employ the concept of taphonomy include:
Archaeobotany
Archaeology
Biology
Forensic science
Geoarchaeology
Geology
Paleoecology
Paleontology
Zooarchaeology
There are five main stages of taphonomy: disarticulation, dispersal, accumulation, fossilization, and mechanical alteration. The first stage, disarticulation, occurs as the organism decays and the bones are no longer held together by the flesh and tendons of the organism. Dispersal is the separation of pieces of an organism caused by natural events (e.g. floods, scavengers). Accumulation occurs when there is a buildup of organic and/or inorganic materials in one location (through scavengers or human behavior). When mineral-rich groundwater permeates organic materials and fills the empty spaces, a fossil is formed. The final stage of taphonomy is mechanical alteration; these are the processes that physically alter the remains (e.g. freeze-thaw, compaction, transport, burial). These stages are not only successive; they also interact. For example, chemical changes occur at every stage of the process because of bacteria. Changes begin as soon as the organism dies: enzymes are released that destroy the organic contents of the tissues, and mineralised tissues such as bone, enamel and dentin are a mixture of organic and mineral components. Moreover, organisms (plant or animal) most often die because they have been killed by a predator, and digestion modifies the composition not only of the flesh but also of the bones.
Research areas
Taphonomy has undergone an explosion of interest since the 1980s, with research focusing on certain areas.
Microbial, biogeochemical, and larger-scale controls on the preservation of different tissue types; in particular, exceptional preservation in Konzervat-lagerstätten. Covered within this field is the dominance of biological versus physical agents in the destruction of remains from all major taxonomic groups (plants, invertebrates, vertebrates).
Processes that concentrate biological remains; especially the degree to which different types of assemblages reflect the species composition and abundance of source faunas and floras.
Actualistic taphonomy uses the present to understand past taphonomic events. This is often done through controlled experiments, such as the role microbes play in fossilization, the effects of mammalian carnivores on bone, or the burial of bone in a water flume. Computer modeling is also used to explain taphonomic events. Studies on actualistic taphonomy gave rise to the discipline conservation paleobiology.
The spatio-temporal resolution and ecological fidelity of species assemblages, particularly the relatively minor role of out-of-habitat transport contrasted with the major effects of time-averaging.
The outlines of megabiases in the fossil record, including the evolution of new bauplans and behavioral capabilities, and by broad-scale changes in climate, tectonics, and geochemistry of Earth surface systems.
The Mars Science Laboratory mission objectives evolved from assessment of ancient Mars habitability to developing predictive models on taphonomy.
Paleontology
One motivation behind taphonomy is to understand biases present in the fossil record better. Fossils are ubiquitous in sedimentary rocks, yet paleontologists cannot draw the most accurate conclusions about the lives and ecology of the fossilized organisms without knowing about the processes involved in their fossilization. For example, if a fossil assemblage contains more of one type of fossil than another, one can infer either that the organism was present in greater numbers, or that its remains were more resistant to decomposition.
During the late twentieth century, taphonomic data began to be applied to other paleontological subfields such as paleobiology, paleoceanography, ichnology (the study of trace fossils) and biostratigraphy. By coming to understand the oceanographic and ethological implications of observed taphonomic patterns, paleontologists have been able to provide new and meaningful interpretations and correlations that would have otherwise remained obscure in the fossil record. In the marine environment, taphonomy, specifically aragonite loss, poses a major challenge in reconstructing past environments from the modern, notably in settings such as carbonate platforms.
Forensic science
Forensic taphonomy is a relatively new field that has increased in popularity in the past 15 years. It is a subfield of forensic anthropology focusing specifically on how taphonomic forces have altered criminal evidence.
There are two different branches of forensic taphonomy: biotaphonomy and geotaphonomy. Biotaphonomy looks at how the decomposition and/or destruction of the organism has happened. The main factors that affect this branch fall into three groups: environmental factors (external variables), individual factors (characteristics of the organism itself, such as body size and age), and cultural factors (culturally specific behaviors that affect decomposition, such as burial practices). Geotaphonomy studies how burial practices and the burial itself affect the surrounding environment. This includes soil disturbances and tool marks from digging the grave, disruption of plant growth and soil pH from the decomposing body, and the alteration of the land and water drainage from introducing an unnatural mass to the area.
This field is extremely important because it helps scientists use the taphonomic profile to help determine what happened to the remains at the time of death (perimortem) and after death (postmortem). This can make a huge difference when considering what can be used as evidence in a criminal investigation.
Archaeology
Taphonomy is an important study for archaeologists to better interpret archaeological sites. Since the archaeological record is often incomplete, taphonomy helps explain how it became incomplete. The methodology of taphonomy involves observing transformation processes in order to understand their impact on archaeological material and interpret patterns on real sites. This is mostly in the form of assessing how the deposition of the preserved remains of an organism (usually animal bones) has occurred to better understand a deposit.
Whether the deposition was a result of human, animals and/or the environment is often the goal of taphonomic study. Archaeologists typically separate natural from cultural processes when identifying evidence of human interaction with faunal remains. This is done by looking at human processes preceding artifact discard in addition to processes after artifact discard. Changes preceding discard include butchering, skinning, and cooking. Understanding these processes can inform archaeologists on tool use or how an animal was processed. When the artifact is deposited, abiotic and biotic modifications occur. These can include thermal alteration, rodent disturbances, gnaw marks, and the effects of soil pH to name a few.
While taphonomic methodology can be applied and used to study a variety of materials such as buried ceramics and lithics, its primary application in archaeology involves the examination of organic residues. Interpretation of the post-mortem, pre-, and post-burial histories of faunal assemblages is critical in determining their association with hominid activity and behaviour.
For instance, to distinguish the bone assemblages that are produced by humans from those of non humans, much ethnoarchaeological observation has been done on different human groups and carnivores, to ascertain if there is anything different in the accumulation and fragmentation of bones. This study has also come in the form of excavation of animal dens and burrows to study the discarded bones and experimental breakage of bones with and without stone tools.
Studies of this kind by C.K. Brain in South Africa have shown that bone fractures previously attributed to "killer man-apes" were in fact caused by the pressure of overlying rocks and earth in limestone caves. His research has also demonstrated that early hominins, for example australopithecines, were more likely preyed upon by carnivores rather than being hunters themselves, from cave sites such as Swartkrans in South Africa.
Outside of Africa Lewis Binford observed the effects of wolves and dogs on bones in Alaska and the American Southwest, differentiating the interference of humans and carnivores on bone remains by the number of bone splinters and the number of intact articular ends. He observed that animals gnaw and attack the articular ends first leaving mostly bone cylinders behind, therefore it can be assumed a deposit with a high number of bone cylinders and a low number of bones with articular ends intact is therefore probably the result of carnivore activity. In practice John Speth applied these criteria to the bones from the Garnsey site in New Mexico. The rarity of bone cylinders indicated that there had been minimal destruction by scavengers, and that the bone assemblage could be assumed to be wholly the result of human activity, butchering the animals for meat and marrow extraction.
One of the most important elements in this methodology is replication, to confirm the validity of results.
There are limitations to this kind of taphonomic study in archaeological deposits, as any analysis has to presume that processes in the past were the same as today, e.g. that living carnivores behave in a similar way to those in prehistoric times. There are wide variations among existing species, so determining the behavioural patterns of extinct species is sometimes hard to justify. Moreover, the differences between faunal assemblages produced by animals and by humans are not always so distinct: hyenas and humans display similar patterning in breakage and form similarly shaped fragments, as the ways in which a bone can break are limited. Since large bones survive better than plant remains, this has also created a bias towards big-game hunting rather than gathering when reconstructing prehistoric economies.
While all of archaeology studies taphonomy to some extent, certain subfields deal with it more than others. These include zooarchaeology, geoarchaeology, and paleoethnobotany.
Microbial mats
Modern experiments have been conducted on post-mortem invertebrates and vertebrates to understand how microbial mats and microbial activity influence the formation of fossils and the preservation of soft tissues. In these studies, microbial mats entomb animal carcasses in a sarcophagus of microbes, and this entombment delays decay. Entombed carcasses were observed to remain more intact than non-entombed counterparts, sometimes by years at a time. Microbial mats maintained and stabilized the articulation of the joints and the skeleton of post-mortem organisms, as seen in frog carcasses for up to 1080 days after coverage by the mats. The environment within the entombed carcasses is typically described as anoxic and acidic during the initial stage of decomposition. These conditions are perpetuated by the exhaustion of oxygen by aerobic bacteria within the carcass, creating an environment ideal for the preservation of soft tissues such as muscle tissue and brain tissue. The anoxic and acidic conditions created by the mats also inhibit the process of autolysis within the carcasses, delaying decay even further. Endogenous gut bacteria have also been described to aid the preservation of invertebrate soft tissue by delaying decay and stabilizing soft tissue structures. Gut bacteria form pseudomorphs replicating the form of soft tissues within the animal; these pseudomorphs are a possible explanation for the increased occurrence of preserved gut impressions among invertebrates. In the later stages of the prolonged decomposition of the carcasses, the environment within the sarcophagus shifts to more oxic and basic conditions, promoting biomineralization and the precipitation of calcium carbonate.
Microbial mats additionally play a role in the formation of molds and impressions of carcasses. These molds and impressions replicate and preserve the integument of animal carcasses; the degree to which this occurs has been demonstrated in frog skin preservation. The original morphology of the frog skin, including structures such as warts, was preserved for more than 1.5 years. The microbial mats also aided in the formation of the mineral gypsum embedded within the frog skin. The microbes that constitute the microbial mats, in addition to forming a sarcophagus, secrete exopolymeric substances (EPS) that drive biomineralization. The EPS provides a nucleation center for biomineralization. During later stages of decomposition, heterotrophic microbes degrade the EPS, facilitating the release of calcium ions into the environment and creating a Ca-enriched film. The degradation of the EPS and the formation of the Ca-rich film are suggested to aid the precipitation of calcium carbonate and further the process of biomineralization.
Taphonomic biases in the fossil record
Because of the very select processes that cause preservation, not all organisms have the same chance of being preserved. Any factor that affects the likelihood that an organism is preserved as a fossil is a potential source of bias. It is thus arguably the most important goal of taphonomy to identify the scope of such biases such that they can be quantified to allow correct interpretations of the relative abundances of organisms that make up a fossil biota. Some of the most common sources of bias are listed below.
Physical attributes of the organism itself
This perhaps represents the biggest source of bias in the fossil record. First and foremost, organisms that contain hard parts have a far greater chance of being represented in the fossil record than organisms consisting of soft tissue only. As a result, animals with bones or shells are overrepresented in the fossil record, and many plants are only represented by pollen or spores that have hard walls. Soft-bodied organisms may form 30% to 100% of the biota, but most fossil assemblages preserve none of this unseen diversity, which may exclude groups such as fungi and entire animal phyla from the fossil record. Many animals that moult, on the other hand, are overrepresented, as one animal may leave multiple fossils due to its discarded body parts. Among plants, wind-pollinated species produce far more pollen than animal-pollinated species, so the former are overrepresented relative to the latter.
Characteristics of the habitat
Most fossils form in conditions where material is deposited on the bottom of water bodies. Coastal areas are often prone to high rates of erosion, and rivers flowing into the sea may carry a high particulate load from inland. These sediments will eventually settle out, so organisms living in such environments have a much higher chance of being preserved as fossils after death than do those organisms living in non-depositing conditions. In continental environments, fossilization is likely in lakes and riverbeds that gradually fill in with organic and inorganic material. The organisms of such habitats are therefore also more likely to be represented in the fossil record than those living far from these aquatic environments, where burial by sediments is unlikely to occur.
Mixing of fossils from different places
A sedimentary deposit may have experienced a mixing of noncontemporaneous remains within single sedimentary units via physical or biological processes; i.e. a deposit could be ripped up and redeposited elsewhere, meaning that a deposit may contain a large number of fossils from another place (an allochthonous deposit, as opposed to the usual autochthonous). Thus, a question that is often asked of fossil deposits is to what extent does the fossil deposit record the true biota that originally lived there? Many fossils are obviously autochthonous, such as rooted fossils like crinoids, and many fossils are intrinsically obviously allochthonous, such as the presence of photoautotrophic plankton in a benthic deposit that must have sunk to be deposited. A fossil deposit may thus become biased towards exotic species (i.e. species not endemic to that area) when the sedimentology is dominated by gravity-driven surges, such as mudslides, or may become biased if there are very few endemic organisms to be preserved. This is a particular problem in palynology.
Temporal resolution
Because population turnover rates of individual taxa are much less than net rates of sediment accumulation, the biological remains of successive, noncontemporaneous populations of organisms may be admixed within a single bed, known as time-averaging. Because of the slow and episodic nature of the geologic record, two apparently contemporaneous fossils may have actually lived centuries, or even millennia, apart. Moreover, the degree of time-averaging in an assemblage may vary. The degree varies on many factors, such as tissue type, the habitat, the frequency of burial events and exhumation events, and the depth of bioturbation within the sedimentary column relative to net sediment accumulation rates. Like biases in spatial fidelity, there is a bias towards organisms that can survive reworking events, such as shells. An example of a more ideal deposit with respect to time-averaging bias would be a volcanic ash deposit, which captures an entire biota caught in the wrong place at the wrong time (e.g. the Silurian Herefordshire lagerstätte).
Gaps in time series
The geological record is very discontinuous, and deposition is episodic at all scales. At the largest scale, a sedimentological high-stand period may mean that no deposition may occur for millions of years and, in fact, erosion of the deposit may occur. Such a hiatus is called an unconformity. Conversely, a catastrophic event such as a mudslide may overrepresent a time period. At a shorter scale, scouring processes such as the formation of ripples and dunes and the passing of turbidity currents may cause layers to be removed. Thus the fossil record is biased towards periods of greatest sedimentation; periods of time that have less sedimentation are consequently less well represented in the fossil record.
A related problem is the slow changes that occur in the depositional environment of an area; a deposit may experience periods of poor preservation due to, for example, a lack of biomineralizing elements. This causes the taphonomic or diagenetic obliteration of fossils, producing gaps and condensation of the record.
Consistency in preservation over geologic time
Major shifts in intrinsic and extrinsic properties of organisms, including morphology and behaviour in relation to other organisms or shifts in the global environment, can cause secular or long-term cyclic changes in preservation (megabias).
Human biases
Much of the incompleteness of the fossil record is due to the fact that only a small amount of rock is ever exposed at the surface of the Earth, and not even most of that has been explored. Our fossil record relies on the small amount of exploration that has been done on this. Unfortunately, paleontologists, being human, can be very biased in their methods of collection, and this bias must be identified. Potential sources of bias include:
Search images: field experiments have shown that paleontologists working on, say fossil clams are better at collecting clams than anything else because their search image has been shaped to bias them in favour of clams.
Relative ease of extraction: fossils that are easy to obtain (such as many phosphatic fossils that are easily extracted en masse by dissolution in acid) are overabundant in the fossil record.
Taxonomic bias: fossils with easily discernible morphologies will be easy to distinguish as separate species, and will thus have an inflated abundance.
Preservation of biopolymers
The taphonomic pathways involved in relatively inert substances such as calcite (and to a lesser extent bone) are relatively obvious, as such body parts are stable and change little through time. However, the preservation of "soft tissue" is more interesting, as it requires more peculiar conditions. While usually only biomineralised material survives fossilisation, the preservation of soft tissue is not as rare as sometimes thought.
Both DNA and proteins are unstable, and rarely survive more than hundreds of thousands of years before degrading. Polysaccharides also have low preservation potential, unless they are highly cross-linked; this interconnection is most common in structural tissues, and renders them resistant to chemical decay. Such tissues include wood (lignin), spores and pollen (sporopollenin), the cuticles of plants (cutan) and animals, the cell walls of algae (algaenan), and potentially the polysaccharide layer of some lichens. This interconnectedness makes the chemicals less prone to chemical decay, and also means they are a poorer source of energy so less likely to be digested by scavenging organisms. After being subjected to heat and pressure, these cross-linked organic molecules typically "cook" and become kerogen or short (<17 C atoms) aliphatic/aromatic carbon molecules. Other factors affect the likelihood of preservation; for instance sclerotization renders the jaws of polychaetes more readily preserved than the chemically equivalent but non-sclerotized body cuticle. A peer-reviewed study in 2023 was the first to present an in-depth chemical description of how biological tissues and cells potentially preserve into the fossil record. This study generalized the chemistry underlying cell and tissue preservation to explain the phenomenon for potentially any cellular organism.
It was thought that only tough, cuticle type soft tissue could be preserved by Burgess Shale type preservation, but an increasing number of organisms are being discovered that lack such cuticle, such as the probable chordate Pikaia and the shellless Odontogriphus.
It is a common misconception that anaerobic conditions are necessary for the preservation of soft tissue; indeed much decay is mediated by sulfate reducing bacteria which can only survive in anaerobic conditions. Anoxia does, however, reduce the probability that scavengers will disturb the dead organism, and the activity of other organisms is undoubtedly one of the leading causes of soft-tissue destruction.
Plant cuticle is more prone to preservation if it contains cutan, rather than cutin.
Plants and algae produce the most preservable compounds, which are listed according to their preservation potential by Tegellaar (see reference).
Disintegration
The completeness of fossils was once thought to be a proxy for the energy of the environment, with stormier waters leaving less articulated carcasses. However, the dominant force actually seems to be predation, with scavengers more likely than rough waters to break up a fresh carcass before it is buried. Sediments cover smaller remains faster, so they are likely to be found fully articulated. However, erosion also tends to destroy smaller fossils more easily.
Distortion
Fossils, particularly those of vertebrates, are often distorted by subsequent movements of the surrounding sediment; this can include compression of the fossil along a particular axis, as well as shearing.
Significance
Taphonomic processes allow researchers of multiple fields to identify the past of natural and cultural objects. From the time of death or burial until excavation, taphonomy can aid in the understanding of past environments. When studying the past it is important to gain contextual information in order to have a solid understanding of the data. Often these findings can be used to better understand cultural or environmental shifts within the present day.
The term taphomorph is used to collectively describe fossil structures that represent poorly-preserved and deteriorated remains of various taxonomic groups, rather than of a single species. For example, the 579–560 million year old fossil Ediacaran assemblages from Avalonian locations in Newfoundland contain taphomorphs of a mixture of taxa which have collectively been named Ivesheadiomorphs. Originally interpreted as fossils of a single genus, Ivesheadia, they are now thought to be the deteriorated remains of various types of frondose organism. Similarly, Ediacaran fossils from England, once assigned to Blackbrookia, Pseudovendia and Shepshedia, are now all regarded as taphomorphs related to Charnia or Charniodiscus.
Fluvial taphonomy
Fluvial taphonomy is concerned with the decomposition of organisms in rivers. An organism may sink or float within a river, and it may be carried by the current near the surface or near the bottom. Organisms in terrestrial and fluvial environments do not undergo the same processes: a fluvial environment may be colder than a terrestrial one, and both the community of scavenging organisms and the abiotic conditions in a river differ from those on land. Organisms within a river may also be physically transported by the flow, which can additionally erode their surfaces. Overall, the processes an organism undergoes in a fluvial environment result in a slower rate of decomposition than on land.
See also
Beecher's Trilobite type preservation
Bitter Springs type preservation
Burgess Shale type preservation
Doushantuo type preservation
Ediacaran type preservation
Fossil record
Karen Chin
Lagerstätte
Permineralization
Petrifaction
Pseudofossil
Trace fossil
References
Further reading
External links
The Shelf and Slope Experimental Taphonomy Initiative is the first long-term large-scale deployment and re-collection of organism remains on the sea floor.
Journal of Taphonomy
Bioerosion Website at the College of Wooster
Comprehensive bioerosion bibliography compiled by Mark A. Wilson
Taphonomy
Minerals and the Origins of Life (Robert Hazen, NASA) (video, 60m, April 2014).
7th International Meeting on Taphonomy and Fossilization (Taphos 2014), at the Università degli studi di Ferrara, Italy, 10–13 September 2014
Archaeological science
Methods in archaeology
HSAB theory
HSAB is an acronym for "hard and soft (Lewis) acids and bases". HSAB is widely used in chemistry for explaining the stability of compounds, reaction mechanisms and pathways. It assigns the terms 'hard' or 'soft', and 'acid' or 'base' to chemical species. 'Hard' applies to species which are small, have high charge states (the charge criterion applies mainly to acids, to a lesser extent to bases), and are weakly polarizable. 'Soft' applies to species which are big, have low charge states and are strongly polarizable.
The theory is used in contexts where a qualitative, rather than quantitative, description would help in understanding the predominant factors which drive chemical properties and reactions. This is especially so in transition metal chemistry, where numerous experiments have been done to determine the relative ordering of ligands and transition metal ions in terms of their hardness and softness.
HSAB theory is also useful in predicting the products of metathesis reactions. In 2005 it was shown that even the sensitivity and performance of explosive materials can be explained on basis of HSAB theory.
Ralph Pearson introduced the HSAB principle in the early 1960s as an attempt to unify inorganic and organic reaction chemistry.
Theory
Essentially, the theory states that soft acids prefer to form bonds with soft bases, whereas hard acids prefer to form bonds with hard bases, all other factors being equal. It can also be said that hard acids bind strongly to hard bases and soft acids bind strongly to soft bases. The HSAB classification in the original work was largely based on equilibrium constants of Lewis acid/base reactions with a reference base for comparison.
Borderline cases are also identified: borderline acids are trimethylborane, sulfur dioxide, and the ferrous Fe2+, cobalt Co2+, caesium Cs+ and lead Pb2+ cations. Borderline bases are aniline, pyridine, nitrogen N2, and the azide, chloride, bromide, nitrate and sulfate anions.
Generally speaking, acids and bases interact and the most stable interactions are hard–hard (ionogenic character) and soft–soft (covalent character).
An attempt to quantify the 'softness' of a base consists in determining the equilibrium constant for the following equilibrium:
BH + CH3Hg+ ⇌ H+ + CH3HgB
where CH3Hg+ (methylmercury ion) is a very soft acid and H+ (proton) is a hard acid, which compete for B (the base to be classified).
Some examples illustrating the effectiveness of the theory:
Bulk metals are soft acids and are poisoned by soft bases such as phosphines and sulfides.
Hard solvents such as hydrogen fluoride, water and the protic solvents tend to dissolve strong solute bases such as fluoride and oxide anions. On the other hand, dipolar aprotic solvents such as dimethyl sulfoxide and acetone are soft solvents with a preference for solvating large anions and soft bases.
In coordination chemistry soft–soft and hard–hard interactions exist between ligands and metal centers.
Chemical hardness
In 1983 Pearson together with Robert Parr extended the qualitative HSAB theory with a quantitative definition of the chemical hardness (η) as being proportional to the second derivative of the total energy of a chemical system with respect to changes in the number of electrons at a fixed nuclear environment:

\eta = \frac{1}{2}\left(\frac{\partial^2 E}{\partial N^2}\right)_{v(r)}
The factor of one-half is arbitrary and often dropped as Pearson has noted.
An operational definition for the chemical hardness is obtained by applying a three-point finite difference approximation to the second derivative:

\eta = \frac{I - A}{2}

where I is the ionization potential and A the electron affinity. This expression implies that the chemical hardness is proportional to the band gap of a chemical system, when a gap exists.
The first derivative of the energy with respect to the number of electrons is equal to the chemical potential, μ, of the system,

\mu = \left(\frac{\partial E}{\partial N}\right)_{v(r)},

from which an operational definition for the chemical potential is obtained from a finite difference approximation to the first order derivative as

\mu = -\frac{I + A}{2}

which is equal to the negative of the electronegativity (χ) definition on the Mulliken scale: μ = −χ.
The hardness and Mulliken electronegativity are related as

2\eta = \frac{\partial \mu}{\partial N} = -\frac{\partial \chi}{\partial N},

and in this sense hardness is a measure for resistance to deformation or change. Likewise a value of zero denotes maximum softness, where softness is defined as the reciprocal of hardness.
In a compilation of hardness values, only that of the hydride anion deviates. Another discrepancy noted in the original 1983 article is the apparently higher hardness of Tl3+ compared to Tl+.
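As a worked example of the operational definitions above, the sketch below computes hardness, softness, chemical potential, and Mulliken electronegativity from an ionization potential I and electron affinity A; the numerical inputs are approximate literature values for atomic fluorine and iodine, used here only for illustration.

```python
# Worked sketch of the finite-difference definitions: hardness eta = (I - A)/2,
# chemical potential mu = -(I + A)/2, Mulliken electronegativity chi = -mu,
# and softness as the reciprocal of hardness. I and A are in eV; the values
# below are approximate literature values for atomic F and I, given as examples.
def hsab_descriptors(ionization_potential: float, electron_affinity: float):
    eta = (ionization_potential - electron_affinity) / 2.0   # hardness
    mu = -(ionization_potential + electron_affinity) / 2.0   # chemical potential
    chi = -mu                                                # Mulliken electronegativity
    softness = 1.0 / eta
    return eta, mu, chi, softness

for symbol, ip, ea in [("F", 17.42, 3.40), ("I", 10.45, 3.06)]:
    eta, mu, chi, s = hsab_descriptors(ip, ea)
    print(f"{symbol}: eta = {eta:.2f} eV (softness {s:.2f} 1/eV), chi = {chi:.2f} eV")
```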
Modifications
If the interaction between acid and base in solution results in an equilibrium mixture, the strength of the interaction can be quantified in terms of an equilibrium constant. An alternative quantitative measure is the heat (enthalpy) of formation of the Lewis acid–base adduct in a non-coordinating solvent. The ECW model is a quantitative model that describes and predicts the strength of Lewis acid–base interactions, −ΔH. The model assigns E and C parameters to many Lewis acids and bases. Each acid is characterized by an EA and a CA. Each base is likewise characterized by its own EB and CB. The E and C parameters refer, respectively, to the electrostatic and covalent contributions to the strength of the bonds that the acid and base will form. The equation is
−ΔH = EAEB + CACB + W
The W term represents a constant energy contribution for acid–base reactions such as the cleavage of a dimeric acid or base. The equation predicts a reversal of acid and base strengths. The graphical presentations of the equation show that there is no single order of Lewis base strengths or Lewis acid strengths. The ECW model accommodates the failure of single-parameter descriptions of acid–base interactions.
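To make the form of the equation concrete, the sketch below evaluates −ΔH for a single acid–base pair; the E, C, and W values are invented placeholders rather than tabulated ECW parameters for any real acid or base.

```python
# Hedged sketch of the ECW model: -dH = E_A*E_B + C_A*C_B + W.
# The parameter values below are invented placeholders, not tabulated ECW
# parameters for any real acid or base.
def ecw_enthalpy(e_a: float, c_a: float, e_b: float, c_b: float, w: float = 0.0) -> float:
    """Return -dH (in the units of the parameters) predicted by the ECW model."""
    return e_a * e_b + c_a * c_b + w

minus_dH = ecw_enthalpy(e_a=1.50, c_a=2.00, e_b=1.20, c_b=3.40, w=0.0)
print(f"-dH = {minus_dH:.2f}")
```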
A related method adopting the E and C formalism of Drago and co-workers quantitatively predicts the formation constants for complexes of many metal ions plus the proton with a wide range of unidentate Lewis acids in aqueous solution, and also offered insights into factors governing HSAB behavior in solution.
Another quantitative system has been proposed, in which Lewis acid strength toward Lewis base fluoride is based on gas-phase affinity for fluoride. Additional one-parameter base strength scales have been presented. However, it has been shown that to define the order of Lewis base strength (or Lewis acid strength) at least two properties must be considered. For Pearson's qualitative HSAB theory the two properties are hardness and strength while for Drago's quantitative ECW model the two properties are electrostatic and covalent .
Kornblum's rule
An application of HSAB theory is the so-called Kornblum's rule (after Nathan Kornblum) which states that in reactions with ambident nucleophiles (nucleophiles that can attack from two or more places), the more electronegative atom reacts when the reaction mechanism is SN1 and the less electronegative one in an SN2 reaction. This rule (established in 1954) predates HSAB theory but in HSAB terms its explanation is that in an SN1 reaction the carbocation (a hard acid) reacts with a hard base (high electronegativity) and that in an SN2 reaction tetravalent carbon (a soft acid) reacts with soft bases.
According to findings, electrophilic alkylations at free CN− occur preferentially at carbon, regardless of whether the SN1 or SN2 mechanism is involved and whether hard or soft electrophiles are employed. Preferred N attack, as postulated for hard electrophiles by the HSAB principle, could not be observed with any alkylating agent. Isocyano compounds are only formed with highly reactive electrophiles that react without an activation barrier because the diffusion limit is approached. It is claimed that the knowledge of absolute rate constants and not of the hardness of the reaction partners is needed to predict the outcome of alkylations of the cyanide ion.
Criticism
Reanalysis of a large number of the most typical ambident organic systems reveals that thermodynamic/kinetic control describes the reactivity of organic compounds perfectly, whereas the HSAB principle fails and should be abandoned in the rationalization of the ambident reactivity of organic compounds.
See also
Acid-base reaction
Oxophilicity
References
Acid–base chemistry
Inorganic chemistry
Staining
Staining is a technique used to enhance contrast in samples, generally at the microscopic level. Stains and dyes are frequently used in histology (microscopic study of biological tissues), in cytology (microscopic study of cells), and in the medical fields of histopathology, hematology, and cytopathology that focus on the study and diagnosis of diseases at the microscopic level. Stains may be used to define biological tissues (highlighting, for example, muscle fibers or connective tissue), cell populations (classifying different blood cells), or organelles within individual cells.
In biochemistry, it involves adding a class-specific (DNA, proteins, lipids, carbohydrates) dye to a substrate to qualify or quantify the presence of a specific compound. Staining and fluorescent tagging can serve similar purposes. Biological staining is also used to mark cells in flow cytometry, and to flag proteins or nucleic acids in gel electrophoresis. Light microscopes are used for viewing stained samples at high magnification, typically using bright-field or epi-fluorescence illumination.
Staining is not limited to only biological materials, since it can also be used to study the structure of other materials; for example, the lamellar structures of semi-crystalline polymers or the domain structures of block copolymers.
In vivo vs In vitro
In vivo staining (also called vital staining or intravital staining) is the process of dyeing living tissues. By causing certain cells or structures to take on contrasting colours, their form (morphology) or position within a cell or tissue can be readily seen and studied. The usual purpose is to reveal cytological details that might otherwise not be apparent; however, staining can also reveal where certain chemicals or specific chemical reactions are taking place within cells or tissues.
In vitro staining involves colouring cells or structures that have been removed from their biological context. Certain stains are often combined to reveal more details and features than a single stain alone. Combined with specific protocols for fixation and sample preparation, scientists and physicians can use these standard techniques as consistent, repeatable diagnostic tools. A counterstain is a stain that makes cells or structures more visible when they are not completely visible with the principal stain.
For example, crystal violet stains both Gram-positive and Gram-negative organisms. Treatment with alcohol removes the crystal violet colour from Gram-negative organisms only. Safranin is then used as a counterstain to colour the Gram-negative organisms that were decolourised by the alcohol.
While ex vivo, many cells continue to live and metabolize until they are "fixed". Some staining methods are based on this property. Those stains excluded by the living cells but taken up by the already dead cells are called vital stains (e.g. trypan blue or propidium iodide for eukaryotic cells). Those that enter and stain living cells are called supravital stains (e.g. New Methylene Blue and brilliant cresyl blue for reticulocyte staining). However, these stains are eventually toxic to the organism, some more so than others. Partly due to their toxic interaction inside a living cell, supravital stains may produce a characteristic pattern of staining different from that of an already fixed cell (e.g. a "reticulocyte" look versus diffuse "polychromasia"). To achieve the desired effects, the stains are used in very dilute solutions ranging from to (Howey, 2000). Note that many stains may be used in both living and fixed cells.
Preparation
The preparatory steps involved depend on the type of analysis planned. Some or all of the following procedures may be required.
Wet mounts are used to view live organisms and can be made using water and certain stains. The liquid is added to the slide before the addition of the organism and a coverslip is placed over the specimen in the water and stain to help contain it within the field of view.
Fixation, which may itself consist of several steps, aims to preserve the shape of the cells or tissue involved as much as possible. Sometimes heat fixation is used to kill, adhere, and alter the specimen so it accepts stains. Most chemical fixatives (chemicals causing fixation) generate chemical bonds between proteins and other substances within the sample, increasing their rigidity. Common fixatives include formaldehyde, ethanol, methanol, and/or picric acid. Pieces of tissue may be embedded in paraffin wax to increase their mechanical strength and stability and to make them easier to cut into thin slices.
Mordants are chemical agents which enable dyes to stain materials that would otherwise be unstainable.
Mordants are classified into two categories:
a) Basic mordants: react with acidic dyes, e.g. alum, ferrous sulfate, cetylpyridinium chloride.
b) Acidic mordants: react with basic dyes, e.g. picric acid, tannic acid.
Direct Staining: Carried out without mordant.
Indirect Staining: Staining with the aid of a mordant.
Permeabilization involves treatment of cells with (usually) a mild surfactant. This treatment dissolves cell membranes, and allows larger dye molecules into the cell's interior.
Mounting usually involves attaching the samples to a glass microscope slide for observation and analysis. In some cases, cells may be grown directly on a slide. For samples of loose cells (as with a blood smear or a pap smear) the sample can be directly applied to a slide. For larger pieces of tissue, thin sections (slices) are made using a microtome; these slices can then be mounted and inspected.
Standardization
Most of the dyes commonly used in microscopy are available as BSC-certified stains. This means that samples of the manufacturer's batch have been tested by an independent body, the Biological Stain Commission (BSC), and found to meet or exceed certain standards of purity, dye content, and performance in staining techniques, ensuring more accurately performed experiments and more reliable results. These standards are published in the commission's journal Biotechnic & Histochemistry. Many dyes are inconsistent in composition from one supplier to another; the use of BSC-certified stains eliminates this source of unexpected results.
Some vendors sell stains "certified" by themselves rather than by the Biological Stain Commission. Such products may or may not be suitable for diagnostic and other applications.
Negative staining
A simple staining method for bacteria that is usually successful, even when the positive staining methods fail, is to use a negative stain. This can be achieved by smearing the sample onto the slide and then applying nigrosin (a black synthetic dye) or India ink (an aqueous suspension of carbon particles). After drying, the microorganisms may be viewed in bright field microscopy as lighter inclusions well-contrasted against the dark environment surrounding them. Negative staining is able to stain the background instead of the organisms because the cell wall of microorganisms typically has a negative charge which repels the negatively charged stain. The dyes used in negative staining are acidic. Note: negative staining is a mild technique that may not destroy the microorganisms, and is therefore unsuitable for studying pathogens.
Positive staining
Unlike negative staining, positive staining uses basic dyes to color the specimen against a bright background. While chromophore is used for both negative and positive staining alike, the type of chromophore used in this technique is a positively charged ion instead of a negative one. The negatively charged cell wall of many microorganisms attracts the positively charged chromophore which causes the specimen to absorb the stain giving it the color of the stain being used. Positive staining is more commonly used than negative staining in microbiology. The different types of positive staining are listed below.
Simple versus differential
Simple Staining is a technique that only uses one type of stain on a slide at a time. Because only one stain is being used, the specimens (for positive stains) or background (for negative stains) will be one color. Therefore, simple stains are typically used for viewing only one organism per slide. Differential staining uses multiple stains per slide. Based on the stains being used, organisms with different properties will appear different colors allowing for categorization of multiple specimens. Differential staining can also be used to color different organelles within one organism which can be seen in endospore staining.
Types
Techniques
Gram
Gram staining is used to determine Gram status, classifying bacteria broadly based on the composition of their cell wall. Gram staining uses crystal violet to stain cell walls, iodine as a mordant, and a fuchsin or safranin counterstain to mark all bacteria. Gram status helps divide specimens of bacteria into two groups, generally representative of their underlying phylogeny. This characteristic, in combination with other techniques, makes it a useful tool in clinical microbiology laboratories, where it can be important in the early selection of appropriate antibiotics.
On most Gram-stained preparations, Gram-negative organisms appear red or pink due to their counterstain. Because of their higher lipid content, alcohol treatment increases the porosity of the Gram-negative cell wall, allowing the CV-I complex (crystal violet-iodine) to be washed out; thus, the primary stain is not retained. In addition, in contrast to most Gram-positive bacteria, Gram-negative bacteria have only a few layers of peptidoglycan and a secondary cell membrane made primarily of lipopolysaccharide.
Endospore
Endospore staining is used to identify the presence or absence of endospores, which make bacteria very difficult to kill. Bacterial spores have proven difficult to stain as they are not permeable to aqueous dye reagents. Endospore staining is particularly useful for identifying endospore-forming bacterial pathogens such as Clostridium difficile. Prior to the development of more efficient methods, this stain was performed using the Wirtz method with heat fixation and a counterstain: malachite green was combined with a dilute solution of carbol fuchsin, and fixing the bacteria in osmic acid ensured that the dyes did not blend. Revised staining methods have significantly decreased the time needed to prepare these stains. The revision substitutes aqueous safranin for carbol fuchsin and uses a diluted 5% formulation of malachite green, applied as before with heat fixation, rinsing, and blotting dry before examination. Upon examination, all endospore-forming bacteria are stained green, while all other cells appear red.
Ziehl-Neelsen
A Ziehl–Neelsen stain is an acid-fast stain used to stain species of Mycobacterium, such as Mycobacterium tuberculosis, that do not stain with standard laboratory staining procedures such as Gram staining.
This stain uses red-coloured carbol fuchsin to stain the bacteria and a counterstain such as methylene blue.
Haematoxylin and eosin (H&E)
Haematoxylin and eosin staining is frequently used in histology to examine thin tissue sections. Haematoxylin stains cell nuclei blue, while eosin stains cytoplasm, connective tissue and other extracellular substances pink or red. Eosin is strongly absorbed by red blood cells, colouring them bright red. In a skillfully made H&E preparation the red blood cells are almost orange, and collagen and cytoplasm (especially muscle) acquire different shades of pink.
Papanicolaou
Papanicolaou staining, or Pap staining, was developed as a replacement for fine needle aspiration cytology (FNAC) in hopes of decreasing staining times and cost without compromising quality. This stain is a frequently used method for examining cell samples from a variety of tissue types in various organs, and it has undergone several modifications in order to become a "suitable alternative" to FNAC. This transition stemmed from scientists' appreciation of wet-fixed smears, which preserve the structure of the nuclei, as opposed to the opaque appearance of air-dried Romanowsky smears. This led to the creation of a hybrid wet-fixed and air-dried method known as the ultrafast Papanicolaou stain. This modification uses nasal saline to rehydrate cells, increasing cell transparency, and alcoholic formalin to enhance the colours of the nuclei. The Papanicolaou stain is now used in place of cytological staining in all organ types due to its improved morphological quality, decreased staining time, and decreased cost. It is frequently used to stain Pap smear specimens. It uses a combination of haematoxylin, Orange G, eosin Y, Light Green SF yellowish, and sometimes Bismarck Brown Y.
PAS
Periodic acid-Schiff is a histology special stain used to mark carbohydrates (glycogen, glycoprotein, proteoglycans). PAS is commonly used on liver tissue, where glycogen deposits form, in an effort to distinguish different types of glycogen storage disease. PAS is important because it can detect glycogen granules found in tumors of the ovaries and pancreas of the endocrine system, as well as in the bladder and kidneys of the renal system. Basement membranes can also show up in a PAS stain, which can be important when diagnosing renal disease. Due to the high volume of carbohydrates within the cell wall of hyphae and yeast forms of fungi, the periodic acid-Schiff stain can help locate these species inside tissue samples of the human body.
Masson
Masson's trichrome is (as the name implies) a three-colour staining protocol. The recipe has evolved from Masson's original technique for different specific applications, but all are well-suited to distinguish cells from surrounding connective tissue. Most recipes produce red keratin and muscle fibers, blue or green staining of collagen and bone, light red or pink staining of cytoplasm, and black cell nuclei.
Romanowsky
The Romanowsky stains produce a polychrome staining effect and are based on a combination of eosin plus (chemically reduced eosin) and demethylated methylene blue (containing its oxidation products azure A and azure B). This stain develops varying colours for all cell structures (the "Romanowsky–Giemsa effect") and is thus used in staining neutrophil polymorphs and cell nuclei. Common variants include Wright's stain, Jenner's stain, May-Grunwald stain, Leishman stain and Giemsa stain.
All are used to examine blood or bone marrow samples. They are preferred over H&E for inspection of blood cells because different types of leukocytes (white blood cells) can be readily distinguished. All are also suited to examination of blood to detect blood-borne parasites such as malaria.
Silver
Silver staining is the use of silver to stain histologic sections. This kind of staining is important in the demonstration of proteins (for example type III collagen) and DNA. It is used to show both substances inside and outside cells. Silver staining is also used in temperature gradient gel electrophoresis.
Argentaffin cells reduce silver solution to metallic silver after formalin fixation. This method was discovered by the Italian scientist Camillo Golgi, using a reaction between silver nitrate and potassium dichromate that precipitates silver chromate in some cells (see Golgi's method). Argyrophilic cells reduce silver solution to metallic silver after being exposed to a stain that contains a reductant, such as hydroquinone or formalin.
Sudan
Sudan staining utilizes Sudan dyes to stain sudanophilic substances, often including lipids. Sudan III, Sudan IV, Oil Red O, Osmium tetroxide, and Sudan Black B are often used. Sudan staining is often used to determine the level of fecal fat in diagnosing steatorrhea.
Wirtz-Conklin
The Wirtz-Conklin stain is a special technique designed for staining true endospores, using malachite green dye as the primary stain and safranin as the counterstain. Once stained, endospores do not decolourize. The addition of heat during the staining process is essential: heat helps open the spore's membrane so the dye can enter. The main purpose of this stain is to show germination of bacterial spores. If germination is taking place, the spore will turn green due to the malachite green and the surrounding cell will be red from the safranin. This stain can also help determine the orientation of the spore within the bacterial cell: terminal (at the tip), subterminal (between the tip and the middle), or central (in the middle of the cell).
Collagen hybridizing peptide
Collagen hybridizing peptide (CHP) staining allows for an easy, direct way to stain denatured collagens of any type (Type I, II, IV, etc.), regardless of whether they were damaged or degraded via enzymatic, mechanical, chemical, or thermal means. They work by refolding into the collagen triple helix with the available single strands in the tissue. CHPs can be visualized by a simple fluorescence microscope.
Common biological stains
Different stains react or concentrate in different parts of a cell or tissue, and these properties are used to advantage to reveal specific parts or areas. Some of the most common biological stains are listed below. Unless otherwise marked, all of these dyes may be used with fixed cells and tissues; vital dyes (suitable for use with living organisms) are noted.
Acridine orange
Acridine orange (AO) is a nucleic acid selective fluorescent cationic dye useful for cell cycle determination. It is cell-permeable, and interacts with DNA and RNA by intercalation or electrostatic attractions. When bound to DNA, it is very similar spectrally to fluorescein. Like fluorescein, it is also useful as a non-specific stain for backlighting conventionally stained cells on the surface of a solid sample of tissue (fluorescence backlighted staining).
Bismarck brown
Bismarck brown (also Bismarck brown Y or Manchester brown) imparts a yellow colour to acid mucins and an intense brown colour to mast cells. One drawback of this stain is that it obscures any other structures surrounding it and lowers the quality of the contrast, so it has to be paired with other stains in order to be useful. Complementary stains used alongside Bismarck brown include haematoxylin and toluidine blue, which provide better contrast within the histology sample.
Carmine
Carmine is an intensely red dye used to stain glycogen, while Carmine alum is a nuclear stain. Carmine stains require the use of a mordant, usually aluminum.
Coomassie blue
Coomassie brilliant blue nonspecifically stains proteins a strong blue colour. It is often used in gel electrophoresis.
Cresyl violet
Cresyl violet stains the acidic components of the neuronal cytoplasm a violet colour, specifically nissl bodies. Often used in brain research.
Crystal violet
Crystal violet, when combined with a suitable mordant, stains cell walls purple. Crystal violet is the stain used in Gram staining.
DAPI
DAPI is a fluorescent nuclear stain, excited by ultraviolet light and showing strong blue fluorescence when bound to DNA. DAPI binds to AT-rich repeats of chromosomes. DAPI is not visible with regular transmission microscopy. It may be used in living or fixed cells. DAPI-stained cells are especially appropriate for cell counting.
Eosin
Eosin is most often used as a counterstain to haematoxylin, imparting a pink or red colour to cytoplasmic material, cell membranes, and some extracellular structures. It also imparts a strong red colour to red blood cells. Eosin may also be used as a counterstain in some variants of Gram staining, and in many other protocols. There are actually two very closely related compounds commonly referred to as eosin. Most often used is eosin Y (also known as eosin Y ws or eosin yellowish); it has a very slightly yellowish cast. The other eosin compound is eosin B (eosin bluish or imperial red); it has a very faint bluish cast. The two dyes are interchangeable, and the use of one or the other is more a matter of preference and tradition.
Ethidium bromide
Ethidium bromide intercalates and stains DNA, providing a fluorescent red-orange stain. Although it will not stain healthy cells, it can be used to identify cells that are in the final stages of apoptosis – such cells have much more permeable membranes. Consequently, ethidium bromide is often used as a marker for apoptosis in cell populations and to locate bands of DNA in gel electrophoresis. The stain may also be used in conjunction with acridine orange (AO) in viable cell counting. This EB/AO combined stain causes live cells to fluoresce green whilst apoptotic cells retain the distinctive red-orange fluorescence.
Acid fuchsin
Acid fuchsine may be used to stain collagen, smooth muscle, or mitochondria.
Acid fuchsin is used as the nuclear and cytoplasmic stain in Mallory's trichrome method. Acid fuchsin stains cytoplasm in some variants of Masson's trichrome. In Van Gieson's picro-fuchsine, acid fuchsin imparts its red colour to collagen fibres. Acid fuchsin is also a traditional stain for mitochondria (Altmann's method).
Haematoxylin
Haematoxylin (hematoxylin in North America) is a nuclear stain. Used with a mordant, haematoxylin stains nuclei blue-violet or brown. It is most often used with eosin in the H&E stain (haematoxylin and eosin) staining, one of the most common procedures in histology.
Hoechst stains
Hoechst is a bis-benzimidazole derivative compound that binds to the minor groove of DNA. Often used in fluorescence microscopy for DNA staining, Hoechst stains appear yellow when dissolved in aqueous solutions and emit blue light under UV excitation. There are two major types of Hoechst: Hoechst 33258 and Hoechst 33342. The two compounds are functionally similar, but with a slight difference in structure. Hoechst 33258 contains a terminal hydroxyl group and is thus more soluble in aqueous solution; however, this characteristic reduces its ability to penetrate the plasma membrane. Hoechst 33342 contains an ethyl substitution on the terminal hydroxyl group (i.e. an ethyl ether group), making it more hydrophobic for easier plasma membrane passage.
Iodine
Iodine is used in chemistry as an indicator for starch. When starch is mixed with iodine in solution, an intensely dark blue colour develops, representing a starch/iodine complex. Starch is a substance common to most plant cells and so a weak iodine solution will stain starch present in the cells. Iodine is one component in the staining technique known as Gram staining, used in microbiology. Used as a mordant in Gram's staining, iodine enhances the entrance of the dye through the pores present in the cell wall/membrane.
Lugol's solution or Lugol's iodine (IKI) is a brown solution that turns black in the presence of starches and can be used as a cell stain, making the cell nuclei more visible.
Used with common vinegar (acetic acid), Lugol's solution is used to identify pre-cancerous and cancerous changes in cervical and vaginal tissues during "Pap smear" follow up examinations in preparation for biopsy. The acetic acid causes the abnormal cells to blanch white, while the normal tissues stain a mahogany brown from the iodine.
Malachite green
Malachite green (also known as diamond green B or victoria green B) can be used as a blue-green counterstain to safranin in the Gimenez staining technique for bacteria. It can also be used to directly stain spores.
Methyl green
Methyl green is used commonly with bright-field, as well as fluorescence microscopes to dye the chromatin of cells so that they are more easily viewed.
Methylene blue
Methylene blue is used to stain animal cells, such as human cheek cells, to make their nuclei more observable. Also used to stain blood films in cytology.
Neutral red
Neutral red (or toluylene red) stains Nissl substance red. It is usually used as a counterstain in combination with other dyes.
Nile blue
Nile blue (or Nile blue A) stains nuclei blue. It may be used with living cells.
Nile red
Nile red (also known as Nile blue oxazone) is formed by boiling Nile blue with sulfuric acid. This produces a mix of Nile red and Nile blue. Nile red is a lipophilic stain; it will accumulate in lipid globules inside cells, staining them red. Nile red can be used with living cells. It fluoresces strongly when partitioned into lipids, but practically not at all in aqueous solution.
Osmium tetroxide (formal name: osmium tetraoxide)
Osmium tetraoxide is used in optical microscopy to stain lipids. It dissolves in fats, and is reduced by organic materials to elemental osmium, an easily visible black substance.
Propidium iodide
Propidium iodide is a fluorescent intercalating agent that can be used to stain cells. Propidium iodide is used as a DNA stain in flow cytometry to evaluate cell viability or DNA content in cell cycle analysis, or in microscopy to visualise the nucleus and other DNA-containing organelles. Propidium iodide cannot cross the membrane of live cells, making it useful to differentiate necrotic, apoptotic and healthy cells. PI also binds to RNA, necessitating treatment with nucleases to distinguish between RNA and DNA staining.
Rhodamine
Rhodamine is a protein specific fluorescent stain commonly used in fluorescence microscopy.
Safranine
Safranine (or Safranine O) is a red cationic dye. It binds to nuclei (DNA) and other tissue polyanions, including glycosaminoglycans in cartilage and mast cells, and components of lignin and plastids in plant tissues. Safranine should not be confused with saffron, an expensive natural dye that is used in some methods to impart a yellow colour to collagen, to contrast with blue and red colours imparted by other dyes to nuclei and cytoplasm in animal (including human) tissues.
The incorrect spelling "safranin" is in common use. The -ine ending is appropriate for safranine O because this dye is an amine.
Stainability of tissues
Tissues which take up stains are called chromatic. Chromosomes were so named because of their ability to absorb a violet stain.
Positive affinity for a specific stain may be designated by the suffix -philic. For example, tissues that stain with an azure stain may be referred to as azurophilic. This may also be used for more generalized staining properties, such as acidophilic for tissues that stain by acidic stains (most notably eosin), basophilic when staining in basic dyes, and amphophilic when staining with either acid or basic dyes. In contrast, chromophobic tissues do not take up coloured dye readily.
Electron microscopy
As in light microscopy, stains can be used to enhance contrast in transmission electron microscopy. Electron-dense compounds of heavy metals are typically used.
Phosphotungstic acid
Phosphotungstic acid is a common negative stain for viruses, nerves, polysaccharides, and other biological tissue materials. It is mostly used as a 0.5–2% aqueous solution at approximately neutral pH. Phosphotungstic acid is electron-dense and stains the background surrounding the specimen dark while leaving the specimen itself light; this is the reverse of the usual positive staining technique, in which the specimen is dark and the background remains light.
Osmium tetroxide
Osmium tetroxide is used in optical microscopy to stain lipids. It dissolves in fats, and is reduced by organic materials to elemental osmium, an easily visible black substance. Because it is a heavy metal that absorbs electrons, it is perhaps the most common stain used for morphology in biological electron microscopy. It is also used for the staining of various polymers for the study of their morphology by TEM. Osmium tetroxide is very volatile and extremely toxic. It is a strong oxidizing agent, as the osmium has an oxidation number of +8. It aggressively oxidizes many materials, leaving behind a deposit of non-volatile osmium in a lower oxidation state.
Ruthenium tetroxide
Ruthenium tetroxide is equally volatile and even more aggressive than osmium tetraoxide and able to stain even materials that resist the osmium stain, e.g. polyethylene.
Other chemicals used in electron microscopy staining include:
ammonium molybdate, cadmium iodide, carbohydrazide, ferric chloride, hexamine, indium trichloride, lanthanum(III) nitrate, lead acetate, lead citrate, lead(II) nitrate, periodic acid, phosphomolybdic acid, potassium ferricyanide, potassium ferrocyanide, ruthenium red, silver nitrate, silver proteinate, sodium chloroaurate, thallium nitrate, thiosemicarbazide, uranyl acetate, uranyl nitrate, and vanadyl sulfate.
See also
Biological Stain Commission: Third-party quality control and certification of stains
Cytology: the study of cells
Histology: the study of tissues
Immunohistochemistry: the use of antisera to label specific antigens
Ruthenium(II) tris(bathophenanthroline disulfonate), a protein dye.
Vital stain: stains that do not kill cells
PAGE: separation of protein molecules
Barium enema - a type of in vivo stain that creates contrast in the x-ray part of the light spectrum
Diaphonization
References
Further reading
External links
The Biological Stain commission is an independent non-profit company that has been testing dyes since the early 1920s and issuing Certificates of approval for batches of dyes that meet internationally recognized standards.
StainsFile Reference for dyes and staining techniques.
Vital Staining for Protozoa and Related Temporary Mounting Techniques ~ Howey, 2000
Speaking of Fixation: Part 1 and Part 2 – by M. Halit Umar
Photomicrographs of Histology Stains
Frequently asked questions in staining exercises at Sridhar Rao P.N's home page
dyes
pigments
Staining dyes
Scientific techniques
Biological techniques and tools | 0.783106 | 0.995373 | 0.779482 |
Protein biosynthesis | Protein biosynthesis (or protein synthesis) is a core biological process, occurring inside cells, balancing the loss of cellular proteins (via degradation or export) through the production of new proteins. Proteins perform a number of critical functions as enzymes, structural proteins or hormones. Protein synthesis is a very similar process for both prokaryotes and eukaryotes but there are some distinct differences.
Protein synthesis can be divided broadly into two phases: transcription and translation. During transcription, a section of DNA encoding a protein, known as a gene, is converted into a template molecule called messenger RNA (mRNA). This conversion is carried out by enzymes, known as RNA polymerases, in the nucleus of the cell. In eukaryotes, this mRNA is initially produced in a premature form (pre-mRNA) which undergoes post-transcriptional modifications to produce mature mRNA. The mature mRNA is exported from the cell nucleus via nuclear pores to the cytoplasm of the cell for translation to occur. During translation, the mRNA is read by ribosomes which use the nucleotide sequence of the mRNA to determine the sequence of amino acids. The ribosomes catalyze the formation of covalent peptide bonds between the encoded amino acids to form a polypeptide chain.
Following translation the polypeptide chain must fold to form a functional protein; for example, to function as an enzyme the polypeptide chain must fold correctly to produce a functional active site. To adopt a functional three-dimensional shape, the polypeptide chain must first form a series of smaller underlying structures called secondary structures. The polypeptide chain in these secondary structures then folds to produce the overall 3D tertiary structure. Once correctly folded, the protein can undergo further maturation through different post-translational modifications, which can alter the protein's ability to function, its location within the cell (e.g. cytoplasm or nucleus) and its ability to interact with other proteins.
Protein biosynthesis has a key role in disease as changes and errors in this process, through underlying DNA mutations or protein misfolding, are often the underlying causes of a disease. DNA mutations change the subsequent mRNA sequence, which then alters the mRNA encoded amino acid sequence. Mutations can cause the polypeptide chain to be shorter by generating a stop sequence which causes early termination of translation. Alternatively, a mutation in the mRNA sequence changes the specific amino acid encoded at that position in the polypeptide chain. This amino acid change can impact the protein's ability to function or to fold correctly. Misfolded proteins have a tendency to form dense protein clumps, which are often implicated in diseases, particularly neurological disorders including Alzheimer's and Parkinson's disease.
Transcription
Transcription occurs in the nucleus using DNA as a template to produce mRNA. In eukaryotes, this mRNA molecule is known as pre-mRNA as it undergoes post-transcriptional modifications in the nucleus to produce a mature mRNA molecule. However, in prokaryotes post-transcriptional modifications are not required so the mature mRNA molecule is immediately produced by transcription.
Initially, an enzyme known as a helicase acts on the molecule of DNA. DNA has an antiparallel, double helix structure composed of two complementary polynucleotide strands, held together by hydrogen bonds between the base pairs. The helicase disrupts the hydrogen bonds, causing a region of DNA (corresponding to a gene) to unwind, separating the two DNA strands and exposing a series of bases. Despite DNA being a double-stranded molecule, only one of the strands acts as a template for pre-mRNA synthesis; this strand is known as the template strand. The other DNA strand (which is complementary to the template strand) is known as the coding strand.
Both DNA and RNA have intrinsic directionality, meaning there are two distinct ends of the molecule. This property of directionality is due to the asymmetrical underlying nucleotide subunits, with a phosphate group on one side of the pentose sugar and a base on the other. The five carbons in the pentose sugar are numbered from 1' (where ' means prime) to 5'. Therefore, the phosphodiester bonds connecting the nucleotides are formed by joining the hydroxyl group on the 3' carbon of one nucleotide to the phosphate group on the 5' carbon of another nucleotide. Hence, the coding strand of DNA runs in a 5' to 3' direction and the complementary, template DNA strand runs in the opposite direction from 3' to 5'.
The enzyme RNA polymerase binds to the exposed template strand and reads the gene in the 3' to 5' direction. Simultaneously, the RNA polymerase synthesizes a single strand of pre-mRNA in the 5' to 3' direction by catalysing the formation of phosphodiester bonds between activated nucleotides (free in the nucleus) that are capable of complementary base pairing with the template strand. Behind the moving RNA polymerase the two strands of DNA rejoin, so only about 12 base pairs of DNA are exposed at one time. RNA polymerase builds the pre-mRNA molecule at a rate of about 20 nucleotides per second, enabling the production of thousands of pre-mRNA molecules from the same gene in an hour. Despite the fast rate of synthesis, the RNA polymerase enzyme contains its own proofreading mechanism, which allows it to remove incorrect nucleotides (those not complementary to the template strand of DNA) from the growing pre-mRNA molecule through an excision reaction. When RNA polymerase reaches a specific DNA sequence that terminates transcription, it detaches and pre-mRNA synthesis is complete.
The pre-mRNA molecule synthesized is complementary to the template DNA strand and shares the same nucleotide sequence as the coding DNA strand. However, there is one crucial difference in the nucleotide composition of DNA and mRNA molecules. DNA is composed of the bases: guanine, cytosine, adenine and thymine (G, C, A and T). RNA is also composed of four bases: guanine, cytosine, adenine and uracil. In RNA molecules, the DNA base thymine is replaced by uracil which is able to base pair with adenine. Therefore, in the pre-mRNA molecule, all complementary bases which would be thymine in the coding DNA strand are replaced by uracil.
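To make these pairing rules concrete, here is a minimal, purely illustrative Python sketch (the sequence is hypothetical and the function is not part of any standard library): it walks along a template strand given in the 3' to 5' direction and emits the complementary pre-mRNA in the 5' to 3' direction, pairing A with U, T with A, G with C and C with G.

```python
# Illustrative sketch of transcription: build mRNA (5'->3') from a
# template DNA strand read 3'->5'. The sequence below is hypothetical.
TEMPLATE_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5):
    """Return the mRNA sequence (written 5'->3') for a template strand
    given in the 3'->5' direction."""
    return "".join(TEMPLATE_TO_MRNA[base] for base in template_3_to_5)

template = "TACGGCTAAATT"        # template strand, read 3'->5'
mrna = transcribe(template)      # "AUGCCGAUUUAA", written 5'->3'
print(mrna)
```

The resulting transcript has the same sequence as the coding strand of the DNA, with uracil in place of thymine.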
Post-transcriptional modifications
Once transcription is complete, the pre-mRNA molecule undergoes post-transcriptional modifications to produce a mature mRNA molecule.
There are 3 key steps within post-transcriptional modifications:
Addition of a 5' cap to the 5' end of the pre-mRNA molecule
Addition of a 3' poly(A) tail to the 3' end of the pre-mRNA molecule
Removal of introns via RNA splicing
The 5' cap is added to the 5' end of the pre-mRNA molecule and is composed of a guanine nucleotide modified through methylation. The purpose of the 5' cap is to prevent break down of mature mRNA molecules before translation, the cap also aids binding of the ribosome to the mRNA to start translation and enables mRNA to be differentiated from other RNAs in the cell. In contrast, the 3' Poly(A) tail is added to the 3' end of the mRNA molecule and is composed of 100-200 adenine bases. These distinct mRNA modifications enable the cell to detect that the full mRNA message is intact if both the 5' cap and 3' tail are present.
This modified pre-mRNA molecule then undergoes the process of RNA splicing. Genes are composed of a series of introns and exons; introns are nucleotide sequences which do not encode a protein, while exons are nucleotide sequences that directly encode a protein. Introns and exons are present in both the underlying DNA sequence and the pre-mRNA molecule, so to produce a mature mRNA molecule encoding a protein, splicing must occur. During splicing, the intervening introns are removed from the pre-mRNA molecule by a multi-protein complex known as a spliceosome (composed of over 150 proteins and RNA). This mature mRNA molecule is then exported into the cytoplasm through nuclear pores in the envelope of the nucleus.
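The following minimal Python sketch illustrates only the sequence-level outcome of splicing, not the spliceosome's chemistry: given hypothetical intron coordinates, it removes the introns and joins the remaining exons into a mature message.

```python
# Illustrative sketch of RNA splicing: remove introns (given as
# half-open [start, end) index ranges) and join the exons.
# The sequence and intron coordinates below are hypothetical.
def splice(pre_mrna, introns):
    mature = []
    position = 0
    for start, end in sorted(introns):
        mature.append(pre_mrna[position:start])   # keep the exon before the intron
        position = end                            # skip over the intron
    mature.append(pre_mrna[position:])            # keep the final exon
    return "".join(mature)

pre_mrna = "AUGGCGGUAAGUAGCAGCCUAA"   # exon 1 + intron + exon 2 (hypothetical)
print(splice(pre_mrna, [(6, 14)]))    # -> "AUGGCGCAGCCUAA"
```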
Translation
During translation, ribosomes synthesize polypeptide chains from mRNA template molecules. In eukaryotes, translation occurs in the cytoplasm of the cell, where the ribosomes are located either free floating or attached to the endoplasmic reticulum. In prokaryotes, which lack a nucleus, the processes of both transcription and translation occur in the cytoplasm.
Ribosomes are complex molecular machines, made of a mixture of protein and ribosomal RNA, arranged into two subunits (a large and a small subunit), which surround the mRNA molecule. The ribosome reads the mRNA molecule in a 5'-3' direction and uses it as a template to determine the order of amino acids in the polypeptide chain. To translate the mRNA molecule, the ribosome uses small molecules, known as transfer RNAs (tRNA), to deliver the correct amino acids to the ribosome. Each tRNA is composed of 70-80 nucleotides and adopts a characteristic cloverleaf structure due to the formation of hydrogen bonds between the nucleotides within the molecule. There are around 60 different types of tRNAs, each tRNA binds to a specific sequence of three nucleotides (known as a codon) within the mRNA molecule and delivers a specific amino acid.
The ribosome initially attaches to the mRNA at the start codon (AUG) and begins to translate the molecule. The mRNA nucleotide sequence is read in triplets; three adjacent nucleotides in the mRNA molecule correspond to a single codon. Each tRNA has an exposed sequence of three nucleotides, known as the anticodon, which is complementary in sequence to a specific codon that may be present in mRNA. For example, the first codon encountered is the start codon, composed of the nucleotides AUG. The correct tRNA, with the complementary anticodon (the three-nucleotide sequence UAC), binds to the mRNA within the ribosome. This tRNA delivers the amino acid corresponding to the mRNA codon; in the case of the start codon, this is the amino acid methionine. The next codon (adjacent to the start codon) is then bound by the correct tRNA with the complementary anticodon, delivering the next amino acid to the ribosome. The ribosome then uses its peptidyl transferase enzymatic activity to catalyze the formation of the covalent peptide bond between the two adjacent amino acids.
The ribosome then moves along the mRNA molecule to the third codon. The ribosome releases the first tRNA molecule, as only two tRNA molecules can be held by a single ribosome at one time. The next tRNA, with the anticodon complementary to the third codon, is selected, delivering the next amino acid to the ribosome, which is covalently joined to the growing polypeptide chain. This process continues with the ribosome moving along the mRNA molecule, adding up to 15 amino acids per second to the polypeptide chain. Behind the first ribosome, up to 50 additional ribosomes can bind to the mRNA molecule, forming a polysome; this enables simultaneous synthesis of multiple identical polypeptide chains. Termination of the growing polypeptide chain occurs when the ribosome encounters a stop codon (UAA, UAG, or UGA) in the mRNA molecule. When this occurs, no tRNA can recognise it, and a release factor induces the release of the complete polypeptide chain from the ribosome. Har Gobind Khorana, an Indian-born scientist, decoded the RNA codons for about 20 amino acids; he was awarded the Nobel Prize in 1968, along with two other scientists, for this work.
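The decoding logic described above can be summarised in a short, illustrative Python sketch. It uses only a handful of entries from the standard genetic code (the full table has 64 codons), assumes the mRNA contains a start codon, and ignores tRNA selection and ribosome mechanics entirely.

```python
# Illustrative sketch of translation: read an mRNA in codons (triplets)
# from the start codon AUG until a stop codon is reached. Only a small
# subset of the 64-codon genetic code is included here.
CODON_TABLE = {
    "AUG": "Met", "CCG": "Pro", "AUU": "Ile",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    start = mrna.find("AUG")                  # ribosome initiates at the start codon
    peptide = []
    for i in range(start, len(mrna) - 2, 3):  # step through the message in triplets
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":              # a release factor ends translation here
            break
        peptide.append(amino_acid)
    return peptide

print(translate("AUGCCGAUUUAA"))   # ['Met', 'Pro', 'Ile']
```

Feeding in the transcript produced by the transcription sketch above ("AUGCCGAUUUAA") yields the tripeptide Met-Pro-Ile before the stop codon UAA terminates translation.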
Protein folding
Once synthesis of the polypeptide chain is complete, the polypeptide chain folds to adopt a specific structure which enables the protein to carry out its functions. The basic form of protein structure is known as the primary structure, which is simply the polypeptide chain i.e. a sequence of covalently bonded amino acids. The primary structure of a protein is encoded by a gene. Therefore, any changes to the sequence of the gene can alter the primary structure of the protein and all subsequent levels of protein structure, ultimately changing the overall structure and function.
The primary structure of a protein (the polypeptide chain) can then fold or coil to form the secondary structure of the protein. The most common types of secondary structure are the alpha helix and the beta sheet; these are small structures produced by hydrogen bonds forming within the polypeptide chain. This secondary structure then folds to produce the tertiary structure of the protein. The tertiary structure is the protein's overall 3D structure, which is made of different secondary structures folding together. In the tertiary structure, key protein features, e.g. the active site, are folded and formed, enabling the protein to function. Finally, some proteins may adopt a complex quaternary structure. Most proteins are made of a single polypeptide chain; however, some proteins are composed of multiple polypeptide chains (known as subunits) which fold and interact to form the quaternary structure. Hence, the overall protein is a multi-subunit complex composed of multiple folded polypeptide chain subunits, e.g. haemoglobin.
Post-translation events
There are events that follow protein biosynthesis such as proteolysis and protein-folding. Proteolysis refers to the cleavage of proteins by proteases and the breakdown of proteins into amino acids by the action of enzymes.
Post-translational modifications
When protein folding into the mature, functional 3D state is complete, it is not necessarily the end of the protein maturation pathway. A folded protein can still undergo further processing through post-translational modifications. There are over 200 known types of post-translational modification; these modifications can alter protein activity, the ability of the protein to interact with other proteins, and where the protein is found within the cell, e.g. in the cell nucleus or cytoplasm. Through post-translational modifications, the diversity of proteins encoded by the genome is expanded by 2 to 3 orders of magnitude.
There are four key classes of post-translational modification:
Cleavage
Addition of chemical groups
Addition of complex molecules
Formation of intramolecular bonds
Cleavage
Cleavage of proteins is an irreversible post-translational modification carried out by enzymes known as proteases. These proteases are often highly specific and cause hydrolysis of a limited number of peptide bonds within the target protein. The resulting shortened protein has an altered polypeptide chain with different amino acids at the start and end of the chain. This post-translational modification often alters the protein's function; the protein can be inactivated or activated by the cleavage and can display new biological activities.
Addition of chemical groups
Following translation, small chemical groups can be added onto amino acids within the mature protein structure. Examples of processes which add chemical groups to the target protein include methylation, acetylation and phosphorylation.
Methylation is the reversible addition of a methyl group onto an amino acid catalyzed by methyltransferase enzymes. Methylation occurs on at least 9 of the 20 common amino acids, however, it mainly occurs on the amino acids lysine and arginine. One example of a protein which is commonly methylated is a histone. Histones are proteins found in the nucleus of the cell. DNA is tightly wrapped round histones and held in place by other proteins and interactions between negative charges in the DNA and positive charges on the histone. A highly specific pattern of amino acid methylation on the histone proteins is used to determine which regions of DNA are tightly wound and unable to be transcribed and which regions are loosely wound and able to be transcribed.
Histone-based regulation of DNA transcription is also modified by acetylation. Acetylation is the reversible covalent addition of an acetyl group onto a lysine amino acid by the enzyme acetyltransferase. The acetyl group is removed from a donor molecule known as acetyl coenzyme A and transferred onto the target protein. Histones undergo acetylation on their lysine residues by enzymes known as histone acetyltransferase. The effect of acetylation is to weaken the charge interactions between the histone and DNA, thereby making more genes in the DNA accessible for transcription.
The final, prevalent post-translational chemical group modification is phosphorylation. Phosphorylation is the reversible, covalent addition of a phosphate group to specific amino acids (serine, threonine and tyrosine) within the protein. The phosphate group is removed from the donor molecule ATP by a protein kinase and transferred onto the hydroxyl group of the target amino acid, this produces adenosine diphosphate as a byproduct. This process can be reversed and the phosphate group removed by the enzyme protein phosphatase. Phosphorylation can create a binding site on the phosphorylated protein which enables it to interact with other proteins and generate large, multi-protein complexes. Alternatively, phosphorylation can change the level of protein activity by altering the ability of the protein to bind its substrate.
Addition of complex molecules
Post-translational modifications can incorporate more complex, large molecules into the folded protein structure. One common example of this is glycosylation, the addition of a polysaccharide molecule, which is widely considered to be the most common post-translational modification.
In glycosylation, a polysaccharide molecule (known as a glycan) is covalently added to the target protein by glycosyltransferases enzymes and modified by glycosidases in the endoplasmic reticulum and Golgi apparatus. Glycosylation can have a critical role in determining the final, folded 3D structure of the target protein. In some cases glycosylation is necessary for correct folding. N-linked glycosylation promotes protein folding by increasing solubility and mediates the protein binding to protein chaperones. Chaperones are proteins responsible for folding and maintaining the structure of other proteins.
There are broadly two types of glycosylation, N-linked glycosylation and O-linked glycosylation. N-linked glycosylation starts in the endoplasmic reticulum with the addition of a precursor glycan. The precursor glycan is modified in the Golgi apparatus to produce complex glycan bound covalently to the nitrogen in an asparagine amino acid. In contrast, O-linked glycosylation is the sequential covalent addition of individual sugars onto the oxygen in the amino acids serine and threonine within the mature protein structure.
Formation of covalent bonds
Many proteins produced within the cell are secreted outside the cell to function as extracellular proteins. Extracellular proteins are exposed to a wide variety of conditions. To stabilize the 3D protein structure, covalent bonds are formed either within the protein or between the different polypeptide chains in the quaternary structure. The most prevalent type is the disulfide bond (also known as a disulfide bridge). A disulfide bond is formed between two cysteine amino acids using the sulphur-containing chemical groups in their side chains, known as thiol functional groups. Disulfide bonds act to stabilize the pre-existing structure of the protein. They are formed in an oxidation reaction between two thiol groups and therefore need an oxidizing environment to form. As a result, disulfide bonds are typically formed in the oxidizing environment of the endoplasmic reticulum, catalyzed by enzymes called protein disulfide isomerases. Disulfide bonds are rarely formed in the cytoplasm, as it is a reducing environment.
Role of protein synthesis in disease
Many diseases are caused by mutations in genes, due to the direct connection between the DNA nucleotide sequence and the amino acid sequence of the encoded protein. Changes to the primary structure of the protein can result in the protein mis-folding or malfunctioning. Mutations within a single gene have been identified as a cause of multiple diseases, including sickle cell disease, known as single gene disorders.
Sickle cell disease
Sickle cell disease is a group of diseases caused by a mutation in a subunit of hemoglobin, a protein found in red blood cells responsible for transporting oxygen. The most dangerous of the sickle cell diseases is known as sickle cell anemia. Sickle cell anemia is the most common homozygous recessive single gene disorder, meaning the affected individual must carry a mutation in both copies of the affected gene (one inherited from each parent) to experience the disease. Hemoglobin has a complex quaternary structure and is composed of four polypeptide subunits: two A subunits and two B subunits. Patients with sickle cell anemia have a missense or substitution mutation in the gene encoding the hemoglobin B subunit polypeptide chain. A missense mutation means the nucleotide mutation alters the overall codon triplet such that a different amino acid is paired with the new codon. In the case of sickle cell anemia, the most common missense mutation is a single nucleotide mutation from thymine to adenine in the hemoglobin B subunit gene. This changes codon 6 from encoding the amino acid glutamic acid to encoding valine.
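As a concrete illustration of how this single base change propagates to the protein, the minimal Python sketch below shows the mRNA-level consequence: the codon GAG (glutamic acid) becomes GUG (valine). The codon assignments are taken from the standard genetic code; the rest of the snippet is purely illustrative.

```python
# Illustrative sketch of the sickle-cell missense mutation at the mRNA
# level: a single base change turns GAG (glutamic acid) into GUG (valine).
# Codon assignments are from the standard genetic code.
CODONS = {"GAG": "Glu (glutamic acid)", "GUG": "Val (valine)"}

normal_codon = "GAG"
mutant_codon = normal_codon[0] + "U" + normal_codon[2]   # single-base change at the middle position

print(normal_codon, "->", CODONS[normal_codon])   # GAG -> Glu (glutamic acid)
print(mutant_codon, "->", CODONS[mutant_codon])   # GUG -> Val (valine)
```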
This change in the primary structure of the hemoglobin B subunit polypeptide chain alters the functionality of the hemoglobin multi-subunit complex in low oxygen conditions. When red blood cells unload oxygen into the tissues of the body, the mutated haemoglobin protein starts to stick together to form a semi-solid structure within the red blood cell. This distorts the shape of the red blood cell, resulting in the characteristic "sickle" shape, and reduces cell flexibility. This rigid, distorted red blood cell can accumulate in blood vessels creating a blockage. The blockage prevents blood flow to tissues and can lead to tissue death which causes great pain to the individual.
Cancer
Cancers form as a result of gene mutations as well as improper protein translation. In addition to proliferating abnormally, cancer cells disrupt the regulation of pro-apoptotic and anti-apoptotic genes and proteins. Many cancer cells carry a mutation in the signaling protein Ras, which functions as an on/off signal transducer in cells. In cancer cells, the Ras protein becomes persistently active, promoting the proliferation of the cell due to the absence of any regulation. Additionally, many cancer cells carry two mutant copies of the regulator gene p53, which acts as a gatekeeper for damaged genes and initiates apoptosis in malignant cells. In its absence, the cell cannot initiate apoptosis or signal for other cells to destroy it.
As the tumor cells proliferate, they either remain confined to one area, in which case the tumor is called benign, or become malignant cells that migrate to other areas of the body. Often, these malignant cells secrete proteases that break apart the extracellular matrix of tissues. This allows the cancer to enter its terminal stage, called metastasis, in which the cells enter the bloodstream or the lymphatic system and travel to a new part of the body.
See also
Central dogma of molecular biology
Genetic code
References
External links
A more advanced video detailing the different types of post-translational modifications and their chemical structures
A useful video visualising the process of converting DNA to protein via transcription and translation
Video visualising the process of protein folding from the non-functional primary structure to a mature, folded 3D protein structure with reference to the role of mutations and protein mis-folding in disease
Gene expression
Proteins
Biosynthesis
Metabolism | 0.782534 | 0.996062 | 0.779453 |
Reductionism | Reductionism is any of several related philosophical ideas regarding the associations between phenomena which can be described in terms of simpler or more fundamental phenomena. It is also described as an intellectual and philosophical position that interprets a complex system as the sum of its parts.
Definitions
The Oxford Companion to Philosophy suggests that reductionism is "one of the most used and abused terms in the philosophical lexicon" and suggests a three-part division:
Ontological reductionism: a belief that the whole of reality consists of a minimal number of parts.
Methodological reductionism: the scientific attempt to provide an explanation in terms of ever-smaller entities.
Theory reductionism: the suggestion that a newer theory does not replace or absorb an older one, but reduces it to more basic terms. Theory reduction itself is divisible into three parts: translation, derivation, and explanation.
Reductionism can be applied to any phenomenon, including objects, problems, explanations, theories, and meanings.
For the sciences, application of methodological reductionism attempts explanation of entire systems in terms of their individual, constituent parts and their interactions. For example, the temperature of a gas is reduced to nothing beyond the average kinetic energy of its molecules in motion. Thomas Nagel and others speak of 'psychophysical reductionism' (the attempted reduction of psychological phenomena to physics and chemistry), and 'physico-chemical reductionism' (the attempted reduction of biology to physics and chemistry). In a very simplified and sometimes contested form, reductionism is said to imply that a system is nothing but the sum of its parts.
However, a more nuanced opinion is that a system is composed entirely of its parts, but the system will have features that none of the parts have (which, in essence is the basis of emergentism). "The point of mechanistic explanations is usually showing how the higher level features arise from the parts."
Other definitions are used by other authors. For example, what John Polkinghorne terms 'conceptual' or 'epistemological' reductionism is the definition provided by Simon Blackburn and by Jaegwon Kim: that form of reductionism which concerns a program of replacing the facts or entities involved in one type of discourse with other facts or entities from another type, thereby providing a relationship between them. Richard Jones distinguishes ontological and epistemological reductionism, arguing that many ontological and epistemological reductionists affirm the need for different concepts for different degrees of complexity while affirming a reduction of theories.
The idea of reductionism can be expressed by "levels" of explanation, with higher levels reducible if need be to lower levels. This use of levels of understanding in part expresses our human limitations in remembering detail. However, "most philosophers would insist that our role in conceptualizing reality [our need for a hierarchy of "levels" of understanding] does not change the fact that different levels of organization in reality do have different 'properties'."
Reductionism does not preclude the existence of what might be termed emergent phenomena, but it does imply the ability to understand those phenomena completely in terms of the processes from which they are composed. This reductionist understanding is very different from ontological or strong emergentism, which holds that what emerges in "emergence" is more than the sum of the processes from which it emerges, either in the ontological sense or in the epistemological sense respectively.
Ontological reductionism
Richard Jones divides ontological reductionism into two: the reductionism of substances (e.g., the reduction of mind to matter) and the reduction of the number of structures operating in nature (e.g., the reduction of one physical force to another). This permits scientists and philosophers to affirm the former while being anti-reductionists regarding the latter.
Nancey Murphy has claimed that there are two species of ontological reductionism: one that claims that wholes are nothing more than their parts; and atomist reductionism, claiming that wholes are not "really real". She admits that the phrase "really real" is apparently senseless but she has tried to explicate the supposed difference between the two.
Ontological reductionism denies the idea of ontological emergence, and claims that emergence is an epistemological phenomenon that only exists through analysis or description of a system, and does not exist fundamentally.
In some scientific disciplines, ontological reductionism takes two forms: token-identity theory and type-identity theory. In this case, "token" refers to a biological process.
Token ontological reductionism is the idea that every item that exists is a sum item. For perceivable items, it affirms that every perceivable item is a sum of items with a lesser degree of complexity. Token ontological reduction of biological things to chemical things is generally accepted.
Type ontological reductionism is the idea that every type of item is a sum type of item, and that every perceivable type of item is a sum of types of items with a lesser degree of complexity. Type ontological reduction of biological things to chemical things is often rejected.
Michael Ruse has criticized ontological reductionism as an improper argument against vitalism.
Methodological reductionism
In a biological context, methodological reductionism means attempting to explain all biological phenomena in terms of their underlying biochemical and molecular processes.
In religion
Anthropologists Edward Burnett Tylor and James George Frazer employed some religious reductionist arguments.
Theory reductionism
Theory reduction is the process by which a more general theory absorbs a special theory. It can be further divided into translation, derivation, and explanation. For example, both Kepler's laws of the motion of the planets and Galileo's theories of motion formulated for terrestrial objects are reducible to Newtonian theories of mechanics because all the explanatory power of the former is contained within the latter. Furthermore, the reduction is considered beneficial because Newtonian mechanics is a more general theory—that is, it explains more events than Galileo's or Kepler's. Besides scientific theories, theory reduction more generally can be the process by which one explanation subsumes another.
In mathematics
In mathematics, reductionism can be interpreted as the philosophy that all mathematics can (or ought to) be based on a common foundation, which for modern mathematics is usually axiomatic set theory. Ernst Zermelo was one of the major advocates of such an opinion; he also developed much of axiomatic set theory. It has been argued that the generally accepted method of justifying mathematical axioms by their usefulness in common practice can potentially weaken Zermelo's reductionist claim.
Jouko Väänänen has argued for second-order logic as a foundation for mathematics instead of set theory, whereas others have argued for category theory as a foundation for certain aspects of mathematics.
The incompleteness theorems of Kurt Gödel, published in 1931, caused doubt about the attainability of an axiomatic foundation for all of mathematics. Any such foundation would have to include axioms powerful enough to describe the arithmetic of the natural numbers (a subset of all mathematics). Yet Gödel proved that, for any consistent recursively enumerable axiomatic system powerful enough to describe the arithmetic of the natural numbers, there are (model-theoretically) true propositions about the natural numbers that cannot be proved from the axioms. Such propositions are known as formally undecidable propositions. For example, the continuum hypothesis is undecidable in the Zermelo–Fraenkel set theory as shown by Cohen.
In science
Reductionist thinking and methods form the basis for many of the well-developed topics of modern science, including much of physics, chemistry and molecular biology. Classical mechanics in particular is seen as a reductionist framework. For instance, we understand the solar system in terms of its components (the sun and the planets) and their interactions. Statistical mechanics can be considered as a reconciliation of macroscopic thermodynamic laws with the reductionist method of explaining macroscopic properties in terms of microscopic components, although it has been argued that reduction in physics 'never goes all the way in practice'.
In computer science
The role of reduction in computer science can be thought of as a precise and unambiguous mathematical formalization of the philosophical idea of "theory reductionism". In a general sense, a problem (or set) is said to be reducible to another problem (or set) if there is a computable/feasible method to translate the questions of the former into the latter, so that, if one knows how to computably/feasibly solve the latter problem, then one can computably/feasibly solve the former. Thus, the latter problem is at least as "hard" to solve as the former.
Reduction in theoretical computer science is pervasive both in the mathematical abstract foundations of computation and in the real-world performance or capability analysis of algorithms. More specifically, reduction is a foundational and central concept, not only in the realm of mathematical logic and abstract computation in computability (or recursive) theory, where it assumes the form of e.g. Turing reduction, but also in the realm of real-world computation in time (or space) complexity analysis of algorithms, where it assumes the form of e.g. polynomial-time reduction.
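As a concrete illustration of this idea (not drawn from the works discussed here), the following Python sketch performs a classic polynomial-time many-one reduction: deciding whether a graph has an independent set of size k is translated into deciding whether the complement graph has a clique of size k. The function names and the brute-force clique solver are illustrative assumptions, not a standard library API.

```python
from itertools import combinations

def complement(vertices, edges):
    """Return the edge set of the complement graph."""
    present = {frozenset(e) for e in edges}
    return {frozenset(pair) for pair in combinations(vertices, 2)
            if frozenset(pair) not in present}

def has_clique(vertices, edges, k):
    """Brute-force clique check (exponential; for illustration only)."""
    edge_set = {frozenset(e) for e in edges}
    return any(all(frozenset(pair) in edge_set for pair in combinations(subset, 2))
               for subset in combinations(vertices, k))

def has_independent_set(vertices, edges, k):
    """The reduction: an independent set of size k in G is exactly
    a clique of size k in the complement of G."""
    return has_clique(vertices, complement(vertices, edges), k)

# A 4-cycle has an independent set of size 2 but not of size 3.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(has_independent_set(V, E, 2))  # True
print(has_independent_set(V, E, 3))  # False
```

Only the translation step (taking the complement) needs to be feasible for the reduction to be meaningful; the brute-force solver here merely stands in for whatever method solves the target problem.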
Criticism
Free will
Philosophers of the Enlightenment worked to insulate human free will from reductionism. Descartes separated the material world of mechanical necessity from the world of mental free will. German philosophers introduced the concept of the "noumenal" realm that is not governed by the deterministic laws of "phenomenal" nature, where every event is completely determined by chains of causality. The most influential formulation was by Immanuel Kant, who distinguished between the causal deterministic framework the mind imposes on the world—the phenomenal realm—and the world as it exists for itself, the noumenal realm, which, as he believed, included free will. To insulate theology from reductionism, 19th century post-Enlightenment German theologians, especially Friedrich Schleiermacher and Albrecht Ritschl, used the Romantic method of basing religion on the human spirit, so that it is a person's feeling or sensibility about spiritual matters that comprises religion.
Causation
Most common philosophical understandings of causation involve reducing it to some collection of non-causal facts. Opponents of these reductionist views have given arguments that the non-causal facts in question are insufficient to determine the causal facts.
Alfred North Whitehead's metaphysics opposed reductionism. He refers to this as the "fallacy of misplaced concreteness". His scheme was to frame a rational, general understanding of phenomena, derived from our reality.
In science
An alternative term for ontological reductionism is fragmentalism, often used in a pejorative sense. In cognitive psychology, George Kelly developed "constructive alternativism" as a form of personal construct psychology and an alternative to what he considered "accumulative fragmentalism". For this theory, knowledge is seen as the construction of successful mental models of the exterior world, rather than the accumulation of independent "nuggets of truth". Others argue that inappropriate use of reductionism limits our understanding of complex systems. In particular, ecologist Robert Ulanowicz says that science must develop techniques to study ways in which larger scales of organization influence smaller ones, and also ways in which feedback loops create structure at a given level, independently of details at a lower level of organization. He advocates and uses information theory as a framework to study propensities in natural systems. The limits of the application of reductionism are claimed to be especially evident at levels of organization with greater complexity, including living cells, biological neural networks, ecosystems, society, and other systems formed from assemblies of large numbers of diverse components linked by multiple feedback loops.
See also
Antireductionism
Eliminative materialism
Emergentism
Further facts
Materialism
Multiple realizability
Physicalism
Technological determinism
References
Further reading
Churchland, Patricia (1986), Neurophilosophy: Toward a Unified Science of the Mind-Brain. MIT Press.
Dawkins, Richard (1976), The Selfish Gene. Oxford University Press; 2nd edition, December 1989.
Dennett, Daniel C. (1995) Darwin's Dangerous Idea. Simon & Schuster.
Descartes (1637), Discourses, Part V.
Dupre, John (1993), The Disorder of Things. Harvard University Press.
Galison, Peter and David J. Stump, eds. (1996), The Disunity of the Sciences: Boundaries, Contexts, and Power. Stanford University Press.
Jones, Richard H. (2013), Analysis & the Fullness of Reality: An Introduction to Reductionism & Emergence. Jackson Square Books.
Laughlin, Robert (2005), A Different Universe: Reinventing Physics from the Bottom Down. Basic Books.
Nagel, Ernest (1961), The Structure of Science. New York.
Pinker, Steven (2002), The Blank Slate: The Modern Denial of Human Nature. Viking Penguin.
Ruse, Michael (1988), Philosophy of Biology. Albany, NY.
Rosenberg, Alexander (2006), Darwinian Reductionism or How to Stop Worrying and Love Molecular Biology. University of Chicago Press.
Scerri, Eric. The reduction of chemistry to physics has become a central aspect of the philosophy of chemistry; see several articles by this author.
Weinberg, Steven (1992), Dreams of a Final Theory: The Scientist's Search for the Ultimate Laws of Nature, Pantheon Books.
Weinberg, Steven (2002) describes what he terms the culture war among physicists in his review of A New Kind of Science.
Capra, Fritjof (1982), The Turning Point.
Lopez, F., Il pensiero olistico di Ippocrate. Riduzionismo, antiriduzionismo, scienza della complessità nel trattato sull'Antica Medicina, vol. IIA, Ed. Pubblisfera, Cosenza Italy 2008.
Maureen L Pope, Personal construction of formal knowledge, Humanities Social Science and Law, 13.4, December, 1982, pp. 3–14
Tara W. Lumpkin, Perceptual Diversity: Is Polyphasic Consciousness Necessary for Global Survival? December 28, 2006, bioregionalanimism.com
Vandana Shiva, 1995, Monocultures, Monopolies and the Masculinisation of Knowledge. International Development Research Centre (IDRC) Reports: Gender Equity. 23: 15–17. Gender and Equity (v. 23, no. 2, July 1995)
The Anti-Realist Side of the Debate: A Theory's Predictive Success does not Warrant Belief in the Unobservable Entities it Postulates Andre Kukla and Joel Walmsley.
External links
Alyssa Ney, "Reductionism" in: Internet Encyclopedia of Philosophy.
Ingo Brigandt and Alan Love, "Reductionism in Biology" in: The Stanford Encyclopedia of Philosophy.
John Dupré: The Disunity of Science—an interview at the Galilean Library covering criticisms of reductionism.
Monica Anderson: Reductionism Considered Harmful
Reduction and Emergence in Chemistry, Internet Encyclopedia of Philosophy.
Metatheory of science
Metaphysical theories
Sociological theories
Analytic philosophy
Epistemology of science
Cognition
Epistemological theories
Emergence | 0.782191 | 0.996425 | 0.779394 |
X-ray crystallography | X-ray crystallography is the experimental science of determining the atomic and molecular structure of a crystal, in which the crystalline structure causes a beam of incident X-rays to diffract in specific directions. By measuring the angles and intensities of the X-ray diffraction, a crystallographer can produce a three-dimensional picture of the density of electrons within the crystal and the positions of the atoms, as well as their chemical bonds, crystallographic disorder, and other information.
X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms, the lengths and types of chemical bonds, and the atomic-scale differences between various materials, especially minerals and alloys. The method has also revealed the structure and function of many biological molecules, including vitamins, drugs, proteins and nucleic acids such as DNA. X-ray crystallography is still the primary method for characterizing the atomic structure of materials and in differentiating materials that appear similar in other experiments. X-ray crystal structures can also help explain unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases.
Modern work involves a number of steps all of which are important. The preliminary steps include preparing good quality samples, careful recording of the diffracted intensities, and processing of the data to remove artifacts. A variety of different methods are then used to obtain an estimate of the atomic structure, generically called direct methods. With an initial estimate further computational techniques such as those involving difference maps are used to complete the structure. The final step is a numerical refinement of the atomic positions against the experimental data, sometimes assisted by ab-initio calculations. In almost all cases new structures are deposited in databases available to the international community.
History
Crystals, though long admired for their regularity and symmetry, were not investigated scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu de Nive Sexangula (A New Year's Gift of Hexagonal Snow) (1611) that the hexagonal symmetry of snowflake crystals was due to a regular packing of spherical water particles. The Danish scientist Nicolas Steno (1669) pioneered experimental investigations of crystal symmetry. Steno showed that the angles between the faces are the same in every exemplar of a particular type of crystal. René Just Haüy (1784) discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size. Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices which remain in use for identifying crystal faces. Haüy's study led to the idea that crystals are a regular three-dimensional array (a Bravais lattice) of atoms and molecules; a single unit cell is repeated indefinitely along three principal directions. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johan Hessel, Auguste Bravais, Evgraf Fedorov, Arthur Schönflies and (belatedly) William Barlow (1894). Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography; however, the available data were too scarce in the 1880s to accept his models as conclusive.
Wilhelm Röntgen discovered X-rays in 1895. Physicists were uncertain of the nature of X-rays, but suspected that they were waves of electromagnetic radiation. The Maxwell theory of electromagnetic radiation was well accepted, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Barkla created the X-ray notation for sharp spectral lines: in 1909 he noted two separate energies, at first naming them "A" and "B"; then, supposing that there might be lines prior to "A", he started an alphabetic numbering beginning with "K". Single-slit experiments in the laboratory of Arnold Sommerfeld suggested that X-rays had a wavelength of about 1 angstrom. X-rays are not only waves but also have particle properties, which led Sommerfeld to coin the name Bremsstrahlung for the continuous spectra formed when electrons bombard a material. Albert Einstein introduced the photon concept in 1905, but it was not broadly accepted until 1922, when Arthur Compton confirmed it by the scattering of X-rays from electrons. The particle-like properties of X-rays, such as their ionization of gases, had prompted William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation. Bragg's view proved unpopular and the observation of X-ray diffraction by Max von Laue in 1912 confirmed that X-rays are a form of electromagnetic radiation.
The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed, and suggested that X-rays might have a wavelength comparable to the unit-cell spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction on a photographic plate. After being developed, the plate showed a large number of well-defined spots arranged in a pattern of intersecting circles around the spot produced by the central beam. The results were presented to the Bavarian Academy of Sciences and Humanities in June 1912 as "Interferenz-Erscheinungen bei Röntgenstrahlen" (Interference phenomena in X-rays). Von Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914.
After Von Laue's pioneering research, the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg. In 1912–1913, the younger Bragg developed Bragg's law, which connects the scattering with evenly spaced planes within a crystal. The Braggs, father and son, shared the 1915 Nobel Prize in Physics for their work in crystallography. The earliest structures were generally simple; as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated arrangements of atoms.
The earliest structures were simple inorganic crystals and minerals, but even these revealed fundamental laws of physics and chemistry. The first atomic-resolution structure to be "solved" (i.e., determined) in 1914 was that of table salt. The distribution of electrons in the table-salt structure showed that crystals are not necessarily composed of covalently bonded molecules, and proved the existence of ionic compounds. The structure of diamond was solved in the same year, proving the tetrahedral arrangement of its chemical bonds and showing that the length of C–C single bond was about 1.52 angstroms. Other early structures included copper, calcium fluoride (CaF2, also known as fluorite), calcite (CaCO3) and pyrite (FeS2) in 1914; spinel (MgAl2O4) in 1915; the rutile and anatase forms of titanium dioxide (TiO2) in 1916; pyrochroite (Mn(OH)2) and, by extension, brucite (Mg(OH)2) in 1919. Also in 1919, sodium nitrate (NaNO3) and caesium dichloroiodide (CsICl2) were determined by Ralph Walter Graystone Wyckoff, and the wurtzite (hexagonal ZnS) structure was determined in 1920.
The structure of graphite was solved in 1916 by the related method of powder diffraction, which was developed by Peter Debye and Paul Scherrer and, independently, by Albert Hull in 1917. The structure of graphite was determined from single-crystal diffraction in 1924 by two groups independently. Hull also used the powder method to determine the structures of various metals, such as iron and magnesium.
Contributions in different areas
Chemistry
X-ray crystallography has led to a better understanding of chemical bonds and non-covalent interactions. The initial studies revealed the typical radii of atoms, and confirmed many theoretical models of chemical bonding, such as the tetrahedral bonding of carbon in the diamond structure, the octahedral bonding of metals observed in ammonium hexachloroplatinate (IV), and the resonance observed in the planar carbonate group and in aromatic molecules. Kathleen Lonsdale's 1928 structure of hexamethylbenzene established the hexagonal symmetry of benzene and showed a clear difference in bond length between the aliphatic C–C bonds and aromatic C–C bonds; this finding led to the idea of resonance between chemical bonds, which had profound consequences for the development of chemistry. Her conclusions were anticipated by William Henry Bragg, who published models of naphthalene and anthracene in 1921 based on other molecules, an early form of molecular replacement.
The first structure of an organic compound, hexamethylenetetramine, was solved in 1923. This was rapidly followed by several studies of different long-chain fatty acids, which are an important component of biological membranes. In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine, a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme, corrin and chlorophyll.
In the 1920s, Victor Moritz Goldschmidt and later Linus Pauling developed rules for eliminating chemically unlikely structures and for determining the relative sizes of atoms. These rules led to the structure of brookite (1928) and an understanding of the relative stability of the rutile, brookite and anatase forms of titanium dioxide.
The distance between two bonded atoms is a sensitive measure of the bond strength and its bond order; thus, X-ray crystallographic studies have led to the discovery of even more exotic types of bonding in inorganic chemistry, such as metal-metal double bonds, metal-metal quadruple bonds, and three-center, two-electron bonds. X-ray crystallography—or, strictly speaking, an inelastic Compton scattering experiment—has also provided evidence for the partly covalent character of hydrogen bonds. In the field of organometallic chemistry, the X-ray structure of ferrocene initiated scientific studies of sandwich compounds, while that of Zeise's salt stimulated research into "back bonding" and metal-pi complexes. Finally, X-ray crystallography had a pioneering role in the development of supramolecular chemistry, particularly in clarifying the structures of the crown ethers and the principles of host–guest chemistry.
Materials science and mineralogy
The application of X-ray crystallography to mineralogy began with the structure of garnet, which was determined in 1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that, as the Si/O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray crystallography to metallurgy also occurred in the mid-1920s. Most notably, Linus Pauling's structure of the alloy Mg2Sn led to his theory of the stability and structure of complex ionic crystals. Many complicated inorganic and organometallic systems have been analyzed using single-crystal methods, such as fullerenes, metalloporphyrins, and other complicated compounds. Single-crystal diffraction is also used in the pharmaceutical industry. The Cambridge Structural Database contains over 1,000,000 structures as of June 2019; most of these structures were determined by X-ray crystallography.
On October 17, 2012, the Curiosity rover on the planet Mars at "Rocknest" performed the first X-ray diffraction analysis of Martian soil. The results from the rover's CheMin analyzer revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian soil in the sample was similar to the "weathered basaltic soils" of Hawaiian volcanoes.
Biological macromolecular crystallography
X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin, who solved the structures of cholesterol (1937), penicillin (1946) and vitamin B12 (1956), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years.
Crystal structures of proteins (which are irregular and hundreds of times larger than cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm whale myoglobin by Sir John Cowdery Kendrew, for which he shared the Nobel Prize in Chemistry with Max Perutz in 1962. Since that success, over 130,000 X-ray crystal structures of proteins, nucleic acids and other biological molecules have been determined. The nearest competing method in number of structures analyzed is nuclear magnetic resonance (NMR) spectroscopy, which has resolved less than one tenth as many. Crystallography can solve structures of arbitrarily large molecules, whereas solution-state NMR is restricted to relatively small ones (less than 70 kDa). X-ray crystallography is used routinely to determine how a pharmaceutical drug interacts with its protein target and what changes might improve it. However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other denaturants to solubilize them in isolation, and such detergents often interfere with crystallization. Membrane proteins are a large component of the genome, and include many proteins of great physiological importance, such as ion channels and receptors. Helium cryogenics are used to prevent radiation damage in protein crystals.
Methods
Overview
Two limiting cases of X-ray crystallography—"small-molecule" (which includes continuous inorganic solids) and "macromolecular" crystallography—are often used. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit; such crystal structures are usually so well resolved that the atoms can be discerned as isolated "blobs" of electron density. In contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved; the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses and proteins with hundreds of thousands of atoms, through improved crystallographic imaging and technology.
The technique of single-crystal X-ray crystallography has three basic steps. The first—and often most difficult—step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning.
In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength (monochromatic X-rays), producing the regular pattern of reflections. The angles and intensities of diffracted X-rays are measured, with each compound having a unique diffraction pattern. As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflections.
In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement—now called a crystal structure—is usually stored in a public database.
Crystallization
Although crystallography can be used to characterize the disorder in an impure or irregular crystal, crystallography generally requires a pure crystal of high regularity to solve the structure of a complicated arrangement of atoms. Pure, regular crystals can sometimes be obtained from natural or synthetic materials, such as samples of metals, minerals or other macroscopic materials. The regularity of such crystals can sometimes be improved with macromolecular crystal annealing and other methods. However, in many cases, obtaining a diffraction-quality crystal is the chief barrier to solving its atomic-resolution structure.
Small-molecule and macromolecular crystallography differ in the range of possible techniques used to produce diffraction-quality crystals. Small molecules generally have few degrees of conformational freedom, and may be crystallized by a wide range of methods, such as chemical vapor deposition and recrystallization. By contrast, macromolecules generally have many degrees of freedom and their crystallization must be carried out so as to maintain a stable structure. For example, proteins and larger RNA molecules cannot be crystallized if their tertiary structure has been unfolded; therefore, the range of crystallization conditions is restricted to solution conditions in which such molecules remain folded.
Protein crystals are almost always grown in solution. The most common approach is to lower the solubility of the component molecules very gradually; if this is done too quickly, the molecules will precipitate from solution, forming a useless dust or amorphous gel on the bottom of the container. Crystal growth in solution is characterized by two steps: nucleation of a microscopic crystallite (possibly having only 100 molecules), followed by growth of that crystallite, ideally to a diffraction-quality crystal. The solution conditions that favor the first step (nucleation) are not always the same conditions that favor the second step (subsequent growth). The solution conditions should disfavor the first step (nucleation) but favor the second (growth), so that only one large crystal forms per droplet. If nucleation is favored too much, a shower of small crystallites will form in the droplet, rather than one large crystal; if favored too little, no crystal will form whatsoever. Other approaches involve crystallizing proteins under oil, where aqueous protein solutions are dispensed under liquid oil, and water evaporates through the layer of oil. Different oils have different evaporation permeabilities, therefore yielding different rates of concentration change for different precipitant/protein mixtures.
It is difficult to predict good conditions for nucleation or growth of well-ordered crystals. In practice, favorable conditions are identified by screening; a very large batch of the molecules is prepared, and a wide variety of crystallization solutions are tested. Hundreds, even thousands, of solution conditions are generally tried before finding the successful one. The various conditions can use one or more physical mechanisms to lower the solubility of the molecule; for example, some may change the pH, some contain salts of the Hofmeister series or chemicals that lower the dielectric constant of the solution, and still others contain large polymers such as polyethylene glycol that drive the molecule out of solution by entropic effects. It is also common to try several temperatures for encouraging crystallization, or to gradually lower the temperature so that the solution becomes supersaturated. These methods require large amounts of the target molecule, as they use high concentration of the molecule(s) to be crystallized. Due to the difficulty in obtaining such large quantities (milligrams) of crystallization-grade protein, robots have been developed that are capable of accurately dispensing crystallization trial drops that are in the order of 100 nanoliters in volume. This means that 10-fold less protein is used per experiment when compared to crystallization trials set up by hand (in the order of 1 microliter).
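As a toy illustration of the screening idea just described (purely hypothetical numbers, not a recommended screen), crystallization trials are often laid out as a factorial grid over a few solution variables:

```python
from itertools import product

# Illustrative screening ranges (assumed values for the sketch).
ph_values = [4.5, 5.5, 6.5, 7.5, 8.5]
peg_percent = [5, 10, 15, 20, 25]   # % (w/v) polyethylene glycol precipitant
protein_mg_ml = [5, 10]             # protein concentration in the drop

screen = [{"pH": ph, "PEG_percent": peg, "protein_mg_ml": prot}
          for ph, peg, prot in product(ph_values, peg_percent, protein_mg_ml)]

print(len(screen), "conditions; first condition:", screen[0])
# With a dispensing robot at ~100 nL per drop, this 50-condition grid uses
# only about 5 microliters of protein solution in total.
```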
Several factors are known to inhibit crystallization. The growing crystals are generally held at a constant temperature and protected from shocks or vibrations that might disturb their crystallization. Impurities in the molecules or in the crystallization solutions are often inimical to crystallization. Conformational flexibility in the molecule also tends to make crystallization less likely, due to entropy. Molecules that tend to self-assemble into regular helices are often unwilling to assemble into crystals. Crystals can be marred by twinning, which can occur when a unit cell can pack equally favorably in multiple orientations; although recent advances in computational methods may allow solving the structure of some twinned crystals. Having failed to crystallize a target molecule, a crystallographer may try again with a slightly modified version of the molecule; even small changes in molecular properties can lead to large differences in crystallization behavior.
Data collection
Mounting the crystal
The crystal is mounted for measurements so that it may be held in the X-ray beam and rotated. There are several methods of mounting. In the past, crystals were loaded into glass capillaries with the crystallization solution (the mother liquor). Crystals of small molecules are typically attached with oil or glue to a glass fiber or a loop, which is made of nylon or plastic and attached to a solid rod. Protein crystals are scooped up by a loop, then flash-frozen with liquid nitrogen. This freezing reduces the radiation damage of the X-rays, as well as thermal motion (the Debye-Waller effect). However, untreated protein crystals often crack if flash-frozen; therefore, they are generally pre-soaked in a cryoprotectant solution before freezing. This pre-soak may itself cause the crystal to crack, ruining it for crystallography. Generally, successful cryo-conditions are identified by trial and error.
The capillary or loop is mounted on a goniometer, which allows it to be positioned accurately within the X-ray beam and rotated. Since both the crystal and the beam are often very small, the crystal must be centered within the beam to within ~25 micrometers accuracy, which is aided by a camera focused on the crystal. The most common type of goniometer is the "kappa goniometer", which offers three angles of rotation: the ω angle, which rotates about an axis perpendicular to the beam; the κ angle, about an axis at ~50° to the ω axis; and, finally, the φ angle about the loop/capillary axis. When the κ angle is zero, the ω and φ axes are aligned. The κ rotation allows for convenient mounting of the crystal, since the arm in which the crystal is mounted may be swung out towards the crystallographer. The oscillations carried out during data collection (mentioned below) involve the ω axis only. An older type of goniometer is the four-circle goniometer, and its relatives such as the six-circle goniometer.
Recording the reflections
The relative intensities of the reflections provide the information needed to determine the arrangement of molecules within the crystal in atomic detail. The intensities of these reflections may be recorded with photographic film, an area detector (such as a pixel detector) or with a charge-coupled device (CCD) image sensor. The peaks at small angles correspond to low-resolution data, whereas those at high angles represent high-resolution data; thus, an upper limit on the eventual resolution of the structure can be determined from the first few images. Some measures of diffraction quality can be determined at this point, such as the mosaicity of the crystal and its overall disorder, as observed in the peak widths. Some pathologies of the crystal that would render it unfit for solving the structure can also be diagnosed quickly at this point.
One set of spots is insufficient to reconstruct the whole crystal; it represents only a small slice of the full three dimensional set. To collect all the necessary information, the crystal must be rotated step-by-step through 180°, with an image recorded at every step; actually, slightly more than 180° is required to cover reciprocal space, due to the curvature of the Ewald sphere. However, if the crystal has a higher symmetry, a smaller angular range such as 90° or 45° may be recorded. The rotation axis should be changed at least once, to avoid developing a "blind spot" in reciprocal space close to the rotation axis. It is customary to rock the crystal slightly (by 0.5–2°) to catch a broader region of reciprocal space.
Multiple data sets may be necessary for certain phasing methods. For example, multi-wavelength anomalous dispersion phasing requires that the scattering be recorded at least three (and usually four, for redundancy) wavelengths of the incoming X-ray radiation. A single crystal may degrade too much during the collection of one data set, owing to radiation damage; in such cases, data sets on multiple crystals must be taken.
Crystal symmetry, unit cell, and image scaling
The recorded series of two-dimensional diffraction patterns, each corresponding to a different crystal orientation, is converted into a three-dimensional set. Data processing begins with indexing the reflections. This means identifying the dimensions of the unit cell and which image peak corresponds to which position in reciprocal space. A byproduct of indexing is to determine the symmetry of the crystal, i.e., its space group. Some space groups can be eliminated from the beginning. For example, reflection symmetries cannot be observed in chiral molecules; thus, only 65 space groups of 230 possible are allowed for protein molecules, which are almost always chiral. Indexing is generally accomplished using an autoindexing routine. Having assigned symmetry, the data is then integrated. This converts the hundreds of images containing the thousands of reflections into a single file, consisting of (at the very least) records of the Miller index of each reflection, and an intensity for each reflection (at this stage the file often also includes error estimates and measures of partiality, i.e., what part of a given reflection was recorded on that image).
A full data set may consist of hundreds of separate images taken at different orientations of the crystal. These have to be merged and scaled: peaks that appear in two or more images are identified and combined (merging), and the images are placed on a consistent intensity scale (scaling). Optimizing the intensity scale is critical because the relative intensity of the peaks is the key information from which the structure is determined. The repetitive technique of crystallographic data collection and the often high symmetry of crystalline materials cause the diffractometer to record many symmetry-equivalent reflections multiple times. This allows calculating the symmetry-related R-factor, a reliability index based upon how similar the measured intensities of symmetry-equivalent reflections are, thus assessing the quality of the data.
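A minimal sketch of the merging statistic described above (commonly reported as Rmerge or Rsym): repeated measurements of symmetry-equivalent reflections are grouped by their reduced Miller index and compared with the group mean. The data layout and function name are illustrative assumptions, not the format of any particular processing package.

```python
from collections import defaultdict

def r_merge(measurements):
    """measurements: iterable of (miller_index, intensity) pairs, where
    symmetry-equivalent reflections share the same reduced Miller index.
    Returns sum_hkl sum_i |I_i - <I>_hkl| / sum_hkl sum_i I_i."""
    groups = defaultdict(list)
    for hkl, intensity in measurements:
        groups[hkl].append(intensity)
    numerator = denominator = 0.0
    for intensities in groups.values():
        mean = sum(intensities) / len(intensities)
        numerator += sum(abs(i - mean) for i in intensities)
        denominator += sum(intensities)
    return numerator / denominator

# Two reflections, each measured several times on different images.
data = [((1, 0, 0), 1020.0), ((1, 0, 0), 980.0), ((1, 0, 0), 1005.0),
        ((2, 1, 0), 510.0),  ((2, 1, 0), 495.0)]
print(round(r_merge(data), 4))  # a small value indicates internally consistent data
```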
Initial phasing
The intensity of each diffraction 'spot' is proportional to the modulus squared of the structure factor. The structure factor is a complex number containing information relating to both the amplitude and phase of a wave. In order to obtain an interpretable electron density map, both amplitude and phase must be known (an electron density map allows a crystallographer to build a starting model of the molecule). The phase cannot be directly recorded during a diffraction experiment: this is known as the phase problem. Initial phase estimates can be obtained in a variety of ways:
Ab initio phasing or direct methods – This is usually the method of choice for small molecules (<1000 non-hydrogen atoms), and has been used successfully to solve the phase problems for small proteins. If the resolution of the data is better than 1.4 Å (140 pm), direct methods can be used to obtain phase information, by exploiting known phase relationships between certain groups of reflections.
Molecular replacement – if a related structure is known, it can be used as a search model in molecular replacement to determine the orientation and position of the molecules within the unit cell. The phases obtained this way can be used to generate electron density maps.
Anomalous X-ray scattering (MAD or SAD phasing) – the X-ray wavelength may be scanned past an absorption edge of an atom, which changes the scattering in a known way. By recording full sets of reflections at three different wavelengths (far below, far above and in the middle of the absorption edge) one can solve for the substructure of the anomalously diffracting atoms and hence the structure of the whole molecule. The most popular method of incorporating anomalous scattering atoms into proteins is to express the protein in a methionine auxotroph (a host incapable of synthesizing methionine) in a medium rich in selenomethionine, which contains selenium atoms. A multi-wavelength anomalous dispersion (MAD) experiment can then be conducted around the absorption edge, which should then yield the position of any methionine residues within the protein, providing initial phases.
Heavy atom methods (multiple isomorphous replacement) – If electron-dense metal atoms can be introduced into the crystal, direct methods or Patterson-space methods can be used to determine their location and to obtain initial phases. Such heavy atoms can be introduced either by soaking the crystal in a heavy atom-containing solution, or by co-crystallization (growing the crystals in the presence of a heavy atom). As in multi-wavelength anomalous dispersion phasing, the changes in the scattering amplitudes can be interpreted to yield the phases. Although this is the original method by which protein crystal structures were solved, it has largely been superseded by multi-wavelength anomalous dispersion phasing with selenomethionine.
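The phase problem described above can be made concrete with a toy structure-factor calculation: each reflection's structure factor F(hkl) = Σj fj exp(2πi(hxj + kyj + lzj)) is a complex number, but the detector records only the intensity |F|², so the phase is lost. The atom list and constant scattering factors below are illustrative assumptions.

```python
import cmath

def structure_factor(atoms, h, k, l):
    """atoms: list of (f_j, x, y, z) with fractional coordinates x, y, z.
    Returns the complex structure factor F(hkl)."""
    return sum(f * cmath.exp(2j * cmath.pi * (h * x + k * y + l * z))
               for f, x, y, z in atoms)

# A toy two-atom unit cell (scattering factors treated as constants).
atoms = [(8.0, 0.0, 0.0, 0.0),      # an oxygen-like atom at the origin
         (6.0, 0.25, 0.25, 0.25)]   # a carbon-like atom at (1/4, 1/4, 1/4)

F = structure_factor(atoms, 1, 1, 1)
intensity = abs(F) ** 2     # what the experiment measures
phase = cmath.phase(F)      # what the experiment cannot measure directly
print(intensity, phase)
```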
Model building and phase refinement
Having obtained initial phases, an initial model can be built. The atomic positions in the model and their respective Debye-Waller factors (or B-factors, accounting for the thermal motion of the atom) can be refined to fit the observed diffraction data, ideally yielding a better set of phases. A new model can then be fit to the new electron density map and successive rounds of refinement are carried out. This iterative process continues until the correlation between the diffraction data and the model is maximized. The agreement is measured by an R-factor defined as
R = Σ | |Fobs| − |Fcalc| | / Σ |Fobs|,
where F is the structure factor, Fobs is the observed structure-factor amplitude and Fcalc is the amplitude calculated from the model, with the sums running over all measured reflections. A similar quality criterion is Rfree, which is calculated from a subset (~10%) of reflections that were not included in the structure refinement. Both R factors depend on the resolution of the data. As a rule of thumb, Rfree should be approximately the resolution in angstroms divided by 10; thus, a data-set with 2 Å resolution should yield a final Rfree ~ 0.2. Chemical bonding features such as stereochemistry, hydrogen bonding and distribution of bond lengths and angles are complementary measures of the model quality. In iterative model building, it is common to encounter phase bias or model bias: because phase estimations come from the model, each round of map calculation tends to show density wherever the model has density, regardless of whether there truly is density there. This problem can be mitigated by maximum-likelihood weighting and checking using omit maps.
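A minimal sketch of the R-factor defined above, together with Rfree computed on a held-out subset of reflections. For simplicity every tenth reflection is flagged as "free" here, standing in for the random selection used in practice; the synthetic amplitudes are illustrative.

```python
def r_factor(f_obs, f_calc):
    """R = sum ||Fobs| - |Fcalc|| / sum |Fobs| over the given reflections."""
    numerator = sum(abs(abs(o) - abs(c)) for o, c in zip(f_obs, f_calc))
    denominator = sum(abs(o) for o in f_obs)
    return numerator / denominator

def r_work_and_free(f_obs, f_calc):
    """Split reflections into a working set and a ~10% test ("free") set,
    then compute R_work and R_free separately."""
    free_flags = [i % 10 == 0 for i in range(len(f_obs))]
    work = [(o, c) for o, c, flag in zip(f_obs, f_calc, free_flags) if not flag]
    test = [(o, c) for o, c, flag in zip(f_obs, f_calc, free_flags) if flag]
    r_work = r_factor([o for o, _ in work], [c for _, c in work])
    r_free = r_factor([o for o, _ in test], [c for _, c in test])
    return r_work, r_free

# Synthetic amplitudes with a small systematic error in the "calculated" values.
f_obs = [100.0, 80.0, 60.0, 40.0, 20.0] * 20
f_calc = [f * (1 + 0.05 * ((i % 3) - 1)) for i, f in enumerate(f_obs)]
print(r_work_and_free(f_obs, f_calc))
```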
It may not be possible to observe every atom in the asymmetric unit. In many cases, crystallographic disorder smears the electron density map. Weakly scattering atoms such as hydrogen are routinely invisible. It is also possible for a single atom to appear multiple times in an electron density map, e.g., if a protein sidechain has multiple (<4) allowed conformations. In still other cases, the crystallographer may detect that the covalent structure deduced for the molecule was incorrect, or changed. For example, proteins may be cleaved or undergo post-translational modifications that were not detected prior to the crystallization.
Disorder
A common challenge in refinement of crystal structures results from crystallographic disorder. Disorder can take many forms but in general involves the coexistence of two or more species or conformations. Failure to recognize disorder results in flawed interpretation. Pitfalls from improper modeling of disorder are illustrated by the discounted hypothesis of bond stretch isomerism. Disorder is modelled with respect to the relative population of the components, often only two, and their identity. In structures of large molecules and ions, solvent and counterions are often disordered.
Applied computational data analysis
The use of computational methods for powder X-ray diffraction data analysis is now widespread. It typically compares the experimental data to the simulated diffractogram of a model structure, taking into account the instrumental parameters, and refines the structural or microstructural parameters of the model using a least-squares minimization algorithm. Most available tools allowing phase identification and structural refinement are based on the Rietveld method; some of them are open and free software such as FullProf Suite, Jana2006, MAUD, Rietan and GSAS, while others are available under commercial licenses such as Diffrac.Suite TOPAS and Match!. Most of these tools also allow Le Bail refinement (also referred to as profile matching), that is, refinement of the cell parameters based on the Bragg peak positions and peak profiles, without taking into account the crystallographic structure itself. More recent tools allow the refinement of both structural and microstructural data, such as the FAULTS program included in the FullProf Suite, which allows the refinement of structures with planar defects (e.g. stacking faults, twinning, intergrowths).
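As a greatly simplified analogue of the least-squares refinement performed by Rietveld-type programs (real refinements fit full structural models and instrumental profiles), the sketch below fits a single Gaussian Bragg peak to simulated powder data with SciPy. The peak parameters, noise level and variable names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_peak(two_theta, height, center, width, background):
    """A single Gaussian Bragg peak on a flat background."""
    return height * np.exp(-0.5 * ((two_theta - center) / width) ** 2) + background

# Simulated powder pattern: one peak near 2-theta = 30 degrees plus noise.
rng = np.random.default_rng(0)
two_theta = np.linspace(25.0, 35.0, 200)
observed = (gaussian_peak(two_theta, 1000.0, 30.05, 0.12, 50.0)
            + rng.normal(0.0, 10.0, two_theta.size))

# Least-squares refinement of peak height, position, width and background.
initial_guess = (800.0, 30.0, 0.2, 40.0)
params, _ = curve_fit(gaussian_peak, two_theta, observed, p0=initial_guess)
print(dict(zip(("height", "center", "width", "background"), params.round(3))))
```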
Deposition of the structure
Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the Cambridge Structural Database (for small molecules), the Inorganic Crystal Structure Database (ICSD) (for inorganic compounds) or the Protein Data Bank (for protein and sometimes nucleic acids). Many structures obtained in private commercial ventures to crystallize medicinally relevant proteins are not deposited in public crystallographic databases.
Contribution of women to X-ray crystallography
A number of women were pioneers in X-ray crystallography at a time when they were excluded from most other branches of physical science.
Kathleen Lonsdale was a research student of William Henry Bragg, who had 11 women research students out of a total of 18. She is known for both her experimental and theoretical work. Lonsdale joined his crystallography research team at the Royal Institution in London in 1923, and after getting married and having children, went back to work with Bragg as a researcher. She confirmed the structure of the benzene ring, carried out studies of diamond, was one of the first two women to be elected to the Royal Society in 1945, and in 1949 was appointed the first female tenured professor of chemistry and head of the Department of crystallography at University College London. Lonsdale always advocated greater participation of women in science and said in 1970: "Any country that wants to make full use of all its potential scientists and technologists could do so, but it must not expect to get the women quite so simply as it gets the men.... It is utopian, then, to suggest that any country that really wants married women to return to a scientific career, when her children no longer need her physical presence, should make special arrangements to encourage her to do so?". During this period, Lonsdale began a collaboration with William T. Astbury on a set of 230 space group tables which was published in 1924 and became an essential tool for crystallographers.
In 1932 Dorothy Hodgkin joined the laboratory of the physicist John Desmond Bernal, who was a former student of Bragg, in Cambridge, UK. She and Bernal took the first X-ray photographs of crystalline proteins. Hodgkin also played a role in the foundation of the International Union of Crystallography. She was awarded the Nobel Prize in Chemistry in 1964 for her work using X-ray techniques to study the structures of penicillin, insulin and vitamin B12. Her work on penicillin began in 1942 during the war and on vitamin B12 in 1948. While her group slowly grew, their predominant focus was on the X-ray analysis of natural products. She is the only British woman ever to have won a Nobel Prize in a science subject.
Rosalind Franklin took the X-ray photograph of a DNA fibre that proved key to James Watson and Francis Crick's discovery of the double helix, for which they both won the Nobel Prize for Physiology or Medicine in 1962. Watson revealed in his autobiographic account of the discovery of the structure of DNA, The Double Helix, that he had used Franklin's X-ray photograph without her permission. Franklin died of cancer in her 30s, before Watson received the Nobel Prize. Franklin also carried out important structural studies of carbon in coal and graphite, and of plant and animal viruses.
Isabella Karle of the United States Naval Research Laboratory developed an experimental approach to the mathematical theory of crystallography. Her work improved the speed and accuracy of chemical and biomedical analysis. Yet only her husband Jerome shared the 1985 Nobel Prize in Chemistry with Herbert Hauptman, "for outstanding achievements in the development of direct methods for the determination of crystal structures". Other prize-giving bodies have showered Isabella with awards in her own right.
Women have written many textbooks and research papers in the field of X-ray crystallography. For many years Lonsdale edited the International Tables for Crystallography, which provide information on crystal lattices, symmetry, and space groups, as well as mathematical, physical and chemical data on structures. Olga Kennard of the University of Cambridge, founded and ran the Cambridge Crystallographic Data Centre, an internationally recognized source of structural data on small molecules, from 1965 until 1997. Jenny Pickworth Glusker, a British scientist, co-authored Crystal Structure Analysis: A Primer, first published in 1971 and as of 2010 in its third edition. Eleanor Dodson, an Australian-born biologist, who began as Dorothy Hodgkin's technician, was the main instigator behind CCP4, the collaborative computing project that currently shares more than 250 software tools with protein crystallographers worldwide.
Nobel Prizes involving X-ray crystallography
See also
Beevers–Lipson strip
Bragg diffraction
Crystallographic database
Crystallographic point groups
Difference density map
Electron diffraction
Energy-dispersive X-ray diffraction
Flack parameter
Grazing incidence diffraction
Henderson limit
International Year of Crystallography
Multipole density formalism
Neutron diffraction
Powder diffraction
Ptychography
Scherrer equation
Small angle X-ray scattering (SAXS)
Structure determination
Ultrafast x-ray
Wide angle X-ray scattering (WAXS)
X-ray diffraction
Notes
References
Further reading
International Tables for Crystallography
Bound collections of articles
Textbooks
Applied computational data analysis
Historical
External links
Tutorials
Learning Crystallography
Simple, non technical introduction
The Crystallography Collection, video series from the Royal Institution
"Small Molecule Crystalization" (PDF) at Illinois Institute of Technology website
International Union of Crystallography
Crystallography 101
Interactive structure factor tutorial, demonstrating properties of the diffraction pattern of a 2D crystal.
Picturebook of Fourier Transforms, illustrating the relationship between crystal and diffraction pattern in 2D.
Lecture notes on X-ray crystallography and structure determination
Online lecture on Modern X-ray Scattering Methods for Nanoscale Materials Analysis by Richard J. Matyi
Interactive Crystallography Timeline from the Royal Institution
Primary databases
Crystallography Open Database (COD)
Protein Data Bank (PDB)
Nucleic Acid Databank (NDB)
Cambridge Structural Database (CSD)
Inorganic Crystal Structure Database (ICSD)
Biological Macromolecule Crystallization Database (BMCD)
Derivative databases
PDBsum
Proteopedia – the collaborative, 3D encyclopedia of proteins and other molecules
RNABase
HIC-Up database of PDB ligands
Structural Classification of Proteins database
CATH Protein Structure Classification
List of transmembrane proteins with known 3D structure
Orientations of Proteins in Membranes database
Structural validation
MolProbity structural validation suite
ProSA-web
NQ-Flipper (check for unfavorable rotamers of Asn and Gln residues)
DALI server (identifies proteins similar to a given protein)
Laboratory techniques in condensed matter physics
Crystallography
Diffraction
Materials science
Protein structure
Protein methods
Protein imaging
Synchrotron-related techniques
Articles containing video clips
Crystallography | 0.781554 | 0.9972 | 0.779366 |
Isomorphism | In mathematics, an isomorphism is a structure-preserving mapping between two structures of the same type that can be reversed by an inverse mapping. Two mathematical structures are isomorphic if an isomorphism exists between them. The word isomorphism is derived from the Ancient Greek: ἴσος isos "equal", and μορφή morphe "form" or "shape".
The interest in isomorphisms lies in the fact that two isomorphic objects have the same properties (excluding further information such as additional structure or names of objects). Thus isomorphic structures cannot be distinguished from the point of view of structure only, and may be identified. In mathematical jargon, one says that two objects are the same up to an isomorphism.
An automorphism is an isomorphism from a structure to itself. An isomorphism between two structures is a canonical isomorphism (a canonical map that is an isomorphism) if there is only one isomorphism between the two structures (as is the case for solutions of a universal property), or if the isomorphism is much more natural (in some sense) than other isomorphisms. For example, for every prime number p, all fields with p elements are canonically isomorphic, with a unique isomorphism. The isomorphism theorems provide canonical isomorphisms that are not unique.
The term is mainly used for algebraic structures. In this case, mappings are called homomorphisms, and a homomorphism is an isomorphism if and only if it is bijective.
In various areas of mathematics, isomorphisms have received specialized names, depending on the type of structure under consideration. For example:
An isometry is an isomorphism of metric spaces.
A homeomorphism is an isomorphism of topological spaces.
A diffeomorphism is an isomorphism of spaces equipped with a differential structure, typically differentiable manifolds.
A symplectomorphism is an isomorphism of symplectic manifolds.
A permutation is an automorphism of a set.
In geometry, isomorphisms and automorphisms are often called transformations, for example rigid transformations, affine transformations, projective transformations.
Category theory, which can be viewed as a formalization of the concept of mapping between structures, provides a language that may be used to unify the approach to these different aspects of the basic idea.
Examples
Logarithm and exponential
Let (R⁺, ×) be the multiplicative group of positive real numbers, and let (R, +) be the additive group of real numbers.
The logarithm function log : R⁺ → R satisfies log(xy) = log(x) + log(y) for all x, y ∈ R⁺, so it is a group homomorphism. The exponential function exp : R → R⁺ satisfies exp(x + y) = exp(x) exp(y) for all x, y ∈ R, so it too is a homomorphism.
The identities log(exp(x)) = x and exp(log(y)) = y show that log and exp are inverses of each other. Since log is a homomorphism that has an inverse that is also a homomorphism, log is an isomorphism of groups.
The log function is an isomorphism which translates multiplication of positive real numbers into addition of real numbers. This facility makes it possible to multiply real numbers using a ruler and a table of logarithms, or using a slide rule with a logarithmic scale.
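A small numerical check of the homomorphism properties just described (floating-point arithmetic, so equality is tested only approximately); the sample values are arbitrary.

```python
import math

x, y = 2.5, 7.0

# log turns multiplication into addition ...
assert math.isclose(math.log(x * y), math.log(x) + math.log(y))

# ... and exp, its inverse, turns addition back into multiplication.
a, b = math.log(x), math.log(y)
assert math.isclose(math.exp(a + b), math.exp(a) * math.exp(b))

# The two maps are mutually inverse.
assert math.isclose(math.exp(math.log(x)), x)
assert math.isclose(math.log(math.exp(a)), a)

print("log and exp behave as mutually inverse homomorphisms on these samples")
```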
Integers modulo 6
Consider the group (Z6, +), the integers from 0 to 5 with addition modulo 6. Also consider the group (Z2 × Z3, +), the ordered pairs (x, y) where the x coordinates can be 0 or 1, and the y coordinates can be 0, 1, or 2, where addition in the x-coordinate is modulo 2 and addition in the y-coordinate is modulo 3.
These structures are isomorphic under addition, under the following scheme:
(0, 0) ↦ 0, (1, 1) ↦ 1, (0, 2) ↦ 2, (1, 0) ↦ 3, (0, 1) ↦ 4, (1, 2) ↦ 5,
or in general (a, b) ↦ (3a + 4b) mod 6.
For example, (1, 1) + (1, 0) = (0, 1), which translates in the other system as 1 + 3 = 4.
Even though these two groups "look" different in that the sets contain different elements, they are indeed isomorphic: their structures are exactly the same. More generally, the direct product of two cyclic groups Zm and Zn is isomorphic to Zmn if and only if m and n are coprime, per the Chinese remainder theorem.
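The isomorphism above can be checked exhaustively in a few lines of Python; the map used is the (a, b) ↦ (3a + 4b) mod 6 scheme tabulated above.

```python
from itertools import product

def phi(a, b):
    """Map an element (a, b) of Z2 x Z3 to Z6."""
    return (3 * a + 4 * b) % 6

pairs = list(product(range(2), range(3)))

# phi is a bijection from Z2 x Z3 onto Z6 ...
assert sorted(phi(a, b) for a, b in pairs) == list(range(6))

# ... and it preserves the group operation:
# phi((a1, b1) + (a2, b2)) == (phi(a1, b1) + phi(a2, b2)) mod 6.
for (a1, b1), (a2, b2) in product(pairs, repeat=2):
    componentwise_sum = ((a1 + a2) % 2, (b1 + b2) % 3)
    assert phi(*componentwise_sum) == (phi(a1, b1) + phi(a2, b2)) % 6

print("Z2 x Z3 and Z6 are isomorphic via (a, b) -> (3a + 4b) mod 6")
```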
Relation-preserving isomorphism
If one object consists of a set X with a binary relation R and the other object consists of a set Y with a binary relation S, then an isomorphism from X to Y is a bijective function f : X → Y such that f(u) S f(v) if and only if u R v.
S is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, well-order, strict weak order, total preorder (weak order), an equivalence relation, or a relation with any other special properties, if and only if R is.
For example, if R is an ordering ≤ on X and S an ordering ≤' on Y, then an isomorphism from X to Y is a bijective function f: X → Y such that f(u) ≤' f(v) if and only if u ≤ v.
Such an isomorphism is called an order isomorphism or (less commonly) an isotone isomorphism.
If X = Y, then this is a relation-preserving automorphism.
Applications
In algebra, isomorphisms are defined for all algebraic structures. Some are more specifically studied; for example:
Linear isomorphisms between vector spaces; they are specified by invertible matrices.
Group isomorphisms between groups; the classification of isomorphism classes of finite groups is an open problem.
Ring isomorphism between rings.
Field isomorphisms are the same as ring isomorphism between fields; their study, and more specifically the study of field automorphisms is an important part of Galois theory.
Just as the automorphisms of an algebraic structure form a group, the isomorphisms between two algebras sharing a common structure form a heap. Letting a particular isomorphism identify the two structures turns this heap into a group.
In mathematical analysis, the Laplace transform is an isomorphism mapping hard differential equations into easier algebraic equations.
In graph theory, an isomorphism between two graphs G and H is a bijective map f from the vertices of G to the vertices of H that preserves the "edge structure" in the sense that there is an edge from vertex u to vertex v in G if and only if there is an edge from f(u) to f(v) in H. See graph isomorphism; a small checking sketch appears after this list.
In mathematical analysis, an isomorphism between two Hilbert spaces is a bijection preserving addition, scalar multiplication, and inner product.
In early theories of logical atomism, the formal relationship between facts and true propositions was theorized by Bertrand Russell and Ludwig Wittgenstein to be isomorphic. An example of this line of thinking can be found in Russell's Introduction to Mathematical Philosophy.
In cybernetics, the good regulator or Conant–Ashby theorem is stated "Every good regulator of a system must be a model of that system". Whether regulated or self-regulating, an isomorphism is required between the regulator and processing parts of the system.
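The graph-theory item above can be made concrete with a short sketch, added here for illustration; the example graphs, the mapping f, and the helper function are all made up for this demonstration.

# A bijection f on vertices is a graph isomorphism iff it maps the edge set
# of G exactly onto the edge set of H (edges preserved in both directions).
def is_graph_isomorphism(f, g_vertices, g_edges, h_vertices, h_edges):
    if sorted(f[v] for v in g_vertices) != sorted(h_vertices):
        return False  # f must be a bijection from V(G) onto V(H)
    mapped = {frozenset({f[u], f[v]}) for u, v in g_edges}
    return mapped == set(h_edges)  # edge sets correspond exactly

G_V, G_E = [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]   # a 4-cycle
H_V = ["a", "b", "c", "d"]
H_E = [frozenset(p) for p in [("a", "c"), ("c", "b"), ("b", "d"), ("d", "a")]]
f = {1: "a", 2: "c", 3: "b", 4: "d"}
print(is_graph_isomorphism(f, G_V, G_E, H_V, H_E))   # True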
Category theoretic view
In category theory, given a category C, an isomorphism is a morphism f: a → b that has an inverse morphism g: b → a, that is, fg = 1_b and gf = 1_a. For example, a bijective linear map is an isomorphism between vector spaces, and a bijective continuous function whose inverse is also continuous is an isomorphism between topological spaces, called a homeomorphism.
Two categories C and D are isomorphic if there exist functors F: C → D and G: D → C which are mutually inverse to each other, that is, GF = 1_C (the identity functor on C) and FG = 1_D (the identity functor on D).
Isomorphism vs. bijective morphism
In a concrete category (roughly, a category whose objects are sets (perhaps with extra structure) and whose morphisms are structure-preserving functions), such as the category of topological spaces or categories of algebraic objects (like the category of groups, the category of rings, and the category of modules), an isomorphism must be bijective on the underlying sets. In algebraic categories (specifically, categories of varieties in the sense of universal algebra), an isomorphism is the same as a homomorphism which is bijective on underlying sets. However, there are concrete categories in which bijective morphisms are not necessarily isomorphisms (such as the category of topological spaces).
Relation to equality
Although there are cases where isomorphic objects can be considered equal, one must distinguish equality and isomorphism. Equality is when two objects are the same, and therefore everything that is true about one object is true about the other. On the other hand, isomorphisms are related to some structure, and two isomorphic objects share only the properties that are related to this structure.
For example, the sets A = {x ∈ Z | x^2 < 2} and B = {−1, 0, 1}
are equal; they are merely different representations—the first an intensional one (in set builder notation), and the second extensional (by explicit enumeration)—of the same subset of the integers. By contrast, the sets {A, B, C} and {1, 2, 3} are not equal since they do not have the same elements. They are isomorphic as sets, but there are many choices (in fact 6) of an isomorphism between them: one isomorphism is
A ↦ 1, B ↦ 2, C ↦ 3,
while another is
A ↦ 3, B ↦ 2, C ↦ 1,
and no one isomorphism is intrinsically better than any other. On this view and in this sense, these two sets are not equal because one cannot consider them identical: one can choose an isomorphism between them, but that is a weaker claim than identity—and valid only in the context of the chosen isomorphism.
Also, integers and even numbers are isomorphic as ordered sets and abelian groups (for addition), but cannot be considered equal sets, since one is a proper subset of the other.
On the other hand, when sets (or other mathematical objects) are defined only by their properties, without considering the nature of their elements, one often considers them to be equal. This is generally the case with solutions of universal properties.
For example, the rational numbers are usually defined as equivalence classes of pairs of integers, although nobody thinks of a rational number as a set (equivalence class). The universal property of the rational numbers is essentially that they form a field that contains the integers and does not contain any proper subfield. It results that given two fields with these properties, there is a unique field isomorphism between them. This allows identifying these two fields, since every property of one of them can be transferred to the other through the isomorphism. For example the real numbers that are obtained by dividing two integers (inside the real numbers) form the smallest subfield of the real numbers. There is thus a unique isomorphism from the rational numbers (defined as equivalence classes of pairs) to the quotients of two real numbers that are integers. This allows identifying these two sorts of rational numbers.
See also
Bisimulation
Equivalence relation
Heap (mathematics)
Isometry
Isomorphism class
Isomorphism theorem
Universal property
Coherent isomorphism
Balanced category
Notes
References
Further reading
External links
Morphisms
Equivalence (mathematics)
Forensic toxicology
Forensic toxicology is a multidisciplinary field that combines the principles of toxicology with expertise in disciplines such as analytical chemistry, pharmacology and clinical chemistry to aid medical or legal investigation of death, poisoning, and drug use. The paramount focus for forensic toxicology is not the legal implications of the toxicological investigation or the methodologies employed, but rather the acquisition and accurate interpretation of results. Toxicological analyses can encompass a wide array of samples. In the course of an investigation, a forensic toxicologist must consider the context of an investigation, in particular any physical symptoms recorded, and any evidence collected at a crime scene that may narrow the search, such as pill bottles, powders, trace residue, and any available chemicals. Armed with this contextual information and samples to examine, the forensic toxicologist is tasked with identifying the specific toxic substances present, quantifying their concentrations, and assessing their likely impact on the individual involved.
In the United States, forensic toxicology comprises three distinct disciplines: Postmortem toxicology, Human Performance toxicology, and Forensic Drug Testing (FDT). Postmortem toxicology involves analyzing biological specimens obtained during an autopsy to identify the impact of drugs, alcohol, and poisons. A broad array of biological specimens, including blood, urine, gastric contents, oral fluids, hair, and tissues, may undergo analysis. Forensic toxicologists collaborate with pathologists, medical examiners, and coroners to ascertain the cause and manner of death. Human Performance toxicology examines the dose-response relationship between drugs present in the body and their effects. This field plays a pivotal role in shaping and implementing laws related to activities such as driving under the influence of alcohol or drugs. Lastly, Forensic Drug Testing (FDT) pertains to detecting drug use in contexts such as the workplace, sport doping, drug-related probation, and screenings for new job applicants.
Identifying the ingested substance is frequently challenging due to the body's natural processes (as outlined in ADME). It is uncommon for a chemical to persist in its original form once inside the body. For instance, heroin rapidly undergoes metabolism, ultimately converting to morphine. Consequently, a thorough examination of factors such as injection marks and chemical purity becomes imperative for an accurate diagnosis. Additionally, the substance might undergo dilution as it disperses throughout the body. Unlike a regulated dose of a drug, which may contain grams or milligrams of the active constituent, an individual sample under investigation may only consist of micrograms or nanograms.
How certain substances affect the body
Alcohol
Alcohol gains access to the central nervous system by entering the bloodstream through the lining of the stomach and small intestine. Subsequently, it traverses the blood-brain barrier via the circulatory system. The absorbed alcohol can diminish reflexes, disrupt nerve impulses, prolong muscle responses, and impact various other physiological functions throughout the body.
Marijuana
Similar to alcohol, marijuana is absorbed into the bloodstream and crosses the blood-brain barrier. Notably, the THC released from marijuana binds to the CB-1 cannabinoid receptors, inducing various effects. These effects encompass mood changes, altered perception of time, and heightened sensitivity, among others.
Cocaine
Cocaine, in contrast to marijuana or alcohol, is a powerful stimulant. Upon entering the bloodstream, it rapidly reaches the brain within minutes, causing a significant surge in dopamine levels. The effects of cocaine are intense but short-lived, typically lasting about 30 minutes. The primary method of administration is through nasal insufflation (snorting), although it can also be smoked in crystal rock form. The rapid increase in dopamine levels during use contributes to a pronounced and challenging comedown, often prompting individuals to seek higher doses in subsequent use to achieve the same effects as experienced previously. This pattern can contribute to the development of addiction. The effects of cocaine use include increased energy and euphoria, accompanied by potential negative effects such as paranoia, rapid heart rate, and anxiety, among others.
Examples
Urine
A urine sample, originating from the bladder, can be obtained voluntarily from a living subject or collected post-mortem. Notably, urine is less prone to viral infections such as HIV or Hepatitis B in comparison to blood samples. Many drugs exhibit higher concentrations and more prolonged detection in urine compared to blood. The collection of urine samples is a non-invasive process that does not necessitate professional assistance. While urine is commonly used for qualitative analysis, it does not provide indications of impairment, since the presence of drugs in urine merely signifies prior exposure. The duration of drug detection in urine varies; for instance, alcohol is detectable for 7–12 hours, cocaine metabolites for 2–4 days, and morphine for 48–74 hours. Marijuana, a substance with variable detection times depending on usage patterns, can be detected for 3 days after a single use, 5–7 days for moderate use (four times per week), 10–15 days for daily use, and less than 30 days for long-term heavy use, contingent upon frequency and intensity of consumption.
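Purely as an illustration, the approximate windows quoted above can be collected into a small lookup table; the helper below is hypothetical and the figures are simply the article's.

# Approximate detection windows in urine, taken from the figures above.
DETECTION_WINDOW_HOURS = {
    "alcohol": (7, 12),
    "cocaine metabolites": (2 * 24, 4 * 24),
    "morphine": (48, 74),
    "cannabis (single use)": (0, 3 * 24),
    "cannabis (daily use)": (10 * 24, 15 * 24),
}

def still_detectable(substance, hours_since_use):
    low, high = DETECTION_WINDOW_HOURS[substance]
    return hours_since_use <= high  # compare against the upper bound of the quoted window

print(still_detectable("cocaine metabolites", 36))  # True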
Blood
A small volume of blood is usually sufficient to screen for and confirm most common toxic substances. A blood sample provides the toxicologist with a profile of the substance that the subject was influenced by at the time of collection; for this reason, it is the sample of choice for measuring blood alcohol content in drunk driving cases.
Hair
Hair is capable of recording medium to long-term or high dosage substance abuse. Chemicals in the bloodstream may be transferred to the growing hair and stored in the follicle, providing a rough timeline of drug intake events. Head hair grows at a rate of approximately 1 to 1.5 cm a month, and so cross sections from different sections of the hair shaft can give estimates as to when a substance was ingested. Drug incorporation into hair is not uniform across the population: the darker and coarser the hair, the more drug will be found in it. If two people consumed the same amount of drugs, the person with darker and coarser hair will have more drug in their hair than the lighter-haired person when tested. This raises issues of possible racial bias in substance tests with hair samples. Hair samples are analyzed using enzyme-linked immunosorbent assay (ELISA). In ELISA, an antigen must be immobilized on a solid surface and then complexed with an antibody that is linked to an enzyme.
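Using the growth rate quoted above (about 1 to 1.5 cm per month), a rough ingestion window can be estimated from how far along the hair shaft a drug-positive segment lies. The sketch below is only an illustration; the 3 cm segment distance is a made-up example.

# Estimate when a substance was ingested from the position of a positive
# hair segment, using the ~1-1.5 cm/month growth rate quoted above.
def ingestion_window_months(distance_from_scalp_cm,
                            growth_cm_per_month=(1.0, 1.5)):
    slow, fast = growth_cm_per_month
    # Faster growth puts the same distance closer in time, slower growth further back.
    return distance_from_scalp_cm / fast, distance_from_scalp_cm / slow

earliest, latest = ingestion_window_months(3.0)   # a 3 cm segment (illustrative)
print(f"roughly {earliest:.1f} to {latest:.1f} months before collection")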
Bone Marrow
Bone marrow can be used for testing, although this depends on the quality and availability of the bones. So far there is no evidence that certain bones are better suited than others for testing, but extracting bone marrow from larger bones is easier than from smaller ones. Forensic toxicologists often use bone marrow to determine which poisons were involved, which can include cocaine or ethanol. Ethanol in particular is one of the most abused drugs worldwide, with alcohol consumption and abuse being a leading cause of death; suicides, car crashes, and a variety of crimes are often committed under severe alcohol influence. Ethanol determination in post-mortem bone marrow allows forensic toxicologists to estimate the ethanol level a person had at the time of death, using the metabolic rate at which ethanol is broken down.
Other
Other bodily fluids and organs may provide samples, particularly samples collected during an autopsy. A common autopsy sample is the gastric contents of the deceased, which can be useful for detecting undigested pills or liquids that were ingested prior to death. In highly decomposed bodies, traditional samples may no longer be available. The vitreous humour from the eye may be used, as the fibrous layer of the eyeball and the eye socket of the skull protects the sample from trauma and adulteration. Other common organs used for toxicology are the brain, liver, and spleen.
Detection and classification
Detection of drugs and pharmaceuticals in biological samples is usually done by an initial screening and then a confirmation of the compound(s), which may include a quantitation of the compound(s). The screening and confirmation are usually, but not necessarily, done with different analytical methods. Every analytical method used in forensic toxicology should be carefully tested by performing a validation of the method to ensure correct and indisputable results at all times. The choice of method for testing is highly dependent on what kind of substance one expects to find and the material on which the testing is performed. Customarily, a classification scheme is utilized that places poisons in categories such as: corrosive agents, gases and volatile agents, metallic poisons, non-volatile organic agents, and miscellaneous.
Immunoassays
Immunoassays use antibodies, which must be specific to the substance of interest, to detect drugs and other substances in a drawn blood sample. Immunoassay is the most common drug screening technique. For a targeted drug, the test reports a positive or negative result, and there are four possible outcomes: a true positive, a false negative, a false positive, and a true negative.
Gas chromatography-mass spectrometry
Gas chromatography-mass spectrometry (GC-MS) is a widely used analytical technique for the detection of volatile compounds. Ionization techniques most frequently used in forensic toxicology include electron ionization (EI) or chemical ionization (CI), with EI being preferred in forensic analysis due to its detailed mass spectra and its large library of spectra. However, chemical ionization can provide greater sensitivity for certain compounds that have high electron affinity functional groups.
Liquid chromatography-mass spectrometry
Liquid chromatography-mass spectrometry (LC-MS) has the capability to analyze compounds that are polar and less volatile. Derivatization is not required for these analytes as it would be in GC-MS, which simplifies sample preparation. As an alternative to immunoassay screening, which generally requires confirmation with another technique, LC-MS offers greater selectivity and sensitivity. This subsequently reduces the possibility of a false negative result that has been recorded in immunoassay drug screening with synthetic cathinones and cannabinoids. A disadvantage of LC-MS in comparison to other analytical techniques such as GC-MS is the high instrumentation cost. However, recent advances in LC-MS have led to higher resolution and sensitivity which assists in the evaluation of spectra to identify forensic analytes.
Detection of metals
The compounds suspected of containing a metal are traditionally analyzed by the destruction of the organic matrix by chemical or thermal oxidation. This leaves the metal to be identified and quantified in the inorganic residue, and it can be detected using such methods as the Reinsch test, emission spectroscopy or X-ray diffraction. Unfortunately, while this identifies the metals present it removes the original compound, and so hinders efforts to determine what may have been ingested. The toxic effects of various metallic compounds can vary considerably.
See also
Arsenic poisoning
Drug test
References
External links
Toxicology
Immunochemistry
Immunochemistry is the study of the chemistry of the immune system. This involves the study of the properties, functions, interactions and production of the chemical components (antibodies/immunoglobulins, toxins, epitopes of proteins like CD4, antitoxins, cytokines/chemokines, antigens) of the immune system. It also includes immune responses and the determination of immune materials/products by immunochemical assays.
In addition, immunochemistry is the study of the identities and functions of the components of the immune system. Immunochemistry is also used to describe the application of immune system components, in particular antibodies, to chemically labelled antigen molecules for visualization.
Various methods in immunochemistry have been developed and refined, and used in scientific study, from virology to molecular evolution. Immunochemical techniques include: enzyme-linked immunosorbent assay, immunoblotting (e.g., Western blot assay), precipitation and agglutination reactions, immunoelectrophoresis, immunophenotyping, immunochromatographic assay, and flow cytometry.
One of the earliest examples of immunochemistry is the Wassermann test to detect syphilis. Svante Arrhenius was also one of the pioneers in the field; he published Immunochemistry in 1907, which described the application of the methods of physical chemistry to the study of the theory of toxins and antitoxins.
Immunochemistry is also studied from the aspect of using antibodies to label epitopes of interest in cells (immunocytochemistry) or tissues (immunohistochemistry).
References
Branches of immunology
Electron transport chain
An electron transport chain (ETC) is a series of protein complexes and other molecules that transfer electrons from electron donors to electron acceptors via redox reactions (both reduction and oxidation occurring simultaneously) and couple this electron transfer with the transfer of protons (H+ ions) across a membrane. Many of the enzymes in the electron transport chain are embedded within the membrane.
The flow of electrons through the electron transport chain is an exergonic process. The energy from the redox reactions creates an electrochemical proton gradient that drives the synthesis of adenosine triphosphate (ATP). In aerobic respiration, the flow of electrons terminates with molecular oxygen as the final electron acceptor. In anaerobic respiration, other electron acceptors are used, such as sulfate.
In an electron transport chain, the redox reactions are driven by the difference in the Gibbs free energy of reactants and products. The free energy released when a higher-energy electron donor and acceptor convert to lower-energy products, while electrons are transferred from a lower to a higher redox potential, is used by the complexes in the electron transport chain to create an electrochemical gradient of ions. It is this electrochemical gradient that drives the synthesis of ATP via coupling with oxidative phosphorylation with ATP synthase.
In eukaryotic organisms, the electron transport chain, and site of oxidative phosphorylation, is found on the inner mitochondrial membrane. The energy released by reactions of oxygen and reduced compounds such as cytochrome c and (indirectly) NADH and FADH2 is used by the electron transport chain to pump protons into the intermembrane space, generating the electrochemical gradient over the inner mitochondrial membrane. In photosynthetic eukaryotes, the electron transport chain is found on the thylakoid membrane. Here, light energy drives electron transport through a proton pump and the resulting proton gradient causes subsequent synthesis of ATP. In bacteria, the electron transport chain can vary between species but it always constitutes a set of redox reactions that are coupled to the synthesis of ATP through the generation of an electrochemical gradient and oxidative phosphorylation through ATP synthase.
Mitochondrial electron transport chains
Most eukaryotic cells have mitochondria, which produce ATP from reactions of oxygen with products of the citric acid cycle, fatty acid metabolism, and amino acid metabolism. At the inner mitochondrial membrane, electrons from NADH and FADH2 pass through the electron transport chain to oxygen, which provides the energy driving the process as it is reduced to water. The electron transport chain comprises an enzymatic series of electron donors and acceptors. Each electron donor will pass electrons to an acceptor of higher redox potential, which in turn donates these electrons to another acceptor, a process that continues down the series until electrons are passed to oxygen, the terminal electron acceptor in the chain. Each reaction releases energy because a higher-energy donor and acceptor convert to lower-energy products. Via the transferred electrons, this energy is used to generate a proton gradient across the mitochondrial membrane by "pumping" protons into the intermembrane space, producing a state of higher free energy that has the potential to do work. This entire process is called oxidative phosphorylation since ADP is phosphorylated to ATP by using the electrochemical gradient that the redox reactions of the electron transport chain have established driven by energy-releasing reactions of oxygen.
Mitochondrial redox carriers
Energy associated with the transfer of electrons down the electron transport chain is used to pump protons from the mitochondrial matrix into the intermembrane space, creating an electrochemical proton gradient (ΔpH) across the inner mitochondrial membrane. This proton gradient is largely but not exclusively responsible for the mitochondrial membrane potential (ΔΨ). It allows ATP synthase to use the flow of H+ through the enzyme back into the matrix to generate ATP from adenosine diphosphate (ADP) and inorganic phosphate. Complex I (NADH coenzyme Q reductase; labeled I) accepts electrons from the Krebs cycle electron carrier nicotinamide adenine dinucleotide (NADH), and passes them to coenzyme Q (ubiquinone; labeled Q), which also receives electrons from Complex II (succinate dehydrogenase; labeled II). Q passes electrons to Complex III (cytochrome bc1 complex; labeled III), which passes them to cytochrome c (cyt c). Cyt c passes electrons to Complex IV (cytochrome c oxidase; labeled IV).
Four membrane-bound complexes have been identified in mitochondria. Each is an extremely complex transmembrane structure that is embedded in the inner membrane. Three of them are proton pumps. The structures are electrically connected by lipid-soluble electron carriers and water-soluble electron carriers. The overall electron transport chain can be summarized as follows:
NADH, H+ → Complex I → Q → Complex III → cytochrome c → Complex IV → H2O
↑
Complex II
↑
Succinate
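As a worked summary (standard textbook stoichiometry, added here for clarity rather than taken from this article), the net reaction for a pair of electrons entering from NADH, together with commonly cited proton counts, is:

NADH + H+ + 1/2 O2 -> NAD+ + H2O

with roughly 4 H+ translocated by Complex I, 4 H+ by Complex III and 2 H+ by Complex IV per NADH oxidized, i.e. about 10 H+ moved into the intermembrane space per NADH.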
Complex I
In Complex I (NADH ubiquinone oxidoreductase, Type I NADH dehydrogenase, or mitochondrial complex I), two electrons are removed from NADH and transferred to a lipid-soluble carrier, ubiquinone (Q). The reduced product, ubiquinol (QH2), freely diffuses within the membrane, and Complex I translocates four protons (H+) across the membrane, thus producing a proton gradient. Complex I is one of the main sites at which premature electron leakage to oxygen occurs, thus being one of the main sites of production of superoxide.
The pathway of electrons is as follows:
NADH is oxidized to NAD+, by reducing flavin mononucleotide to FMNH2 in one two-electron step. FMNH2 is then oxidized in two one-electron steps, through a semiquinone intermediate. Each electron thus transfers from the FMNH2 to an Fe–S cluster, and from the Fe–S cluster to ubiquinone (Q). Transfer of the first electron results in the free-radical (semiquinone) form of Q, and transfer of the second electron reduces the semiquinone form to the ubiquinol form, QH2. During this process, four protons are translocated from the mitochondrial matrix to the intermembrane space. As the electrons move through the complex an electron current is produced along the 180 Angstrom width of the complex within the membrane. This current powers the active transport of four protons to the intermembrane space per two electrons from NADH.
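The Complex I chemistry described above is commonly summarized (a standard textbook form, added here for clarity) as:

NADH + Q + 5 H+ (matrix) -> NAD+ + QH2 + 4 H+ (intermembrane space)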
Complex II
In Complex II (succinate dehydrogenase or succinate-CoQ reductase) additional electrons are delivered into the quinone pool (Q) originating from succinate and transferred (via flavin adenine dinucleotide (FAD)) to Q. Complex II consists of four protein subunits: succinate dehydrogenase (SDHA); succinate dehydrogenase [ubiquinone] iron–sulfur subunit mitochondrial (SDHB); succinate dehydrogenase complex subunit C (SDHC); and succinate dehydrogenase complex subunit D (SDHD). Other electron donors (e.g., fatty acids and glycerol 3-phosphate) also direct electrons into Q (via FAD). Complex II is a parallel electron transport pathway to Complex I, but unlike Complex I, no protons are transported to the intermembrane space in this pathway. Therefore, the pathway through Complex II contributes less energy to the overall electron transport chain process.
Complex III
In Complex III (cytochrome bc1 complex or CoQH2-cytochrome c reductase), the Q-cycle contributes to the proton gradient by an asymmetric absorption/release of protons. Two electrons are removed from QH2 at the Qo site and sequentially transferred to two molecules of cytochrome c, a water-soluble electron carrier located within the intermembrane space. The two other electrons sequentially pass across the protein to the Qi site, where the quinone part of ubiquinone is reduced to quinol. A proton gradient is formed by quinol oxidation (releasing 2 H+ and 2 e-) at the Qo site and quinone reduction (consuming 2 H+ and 2 e-) at the Qi site. (In total, four protons are translocated: two protons reduce quinone to quinol and two protons are released from two ubiquinol molecules.)
QH2 + 2 Fe^{III}(cyt c) + 2 H+ (matrix) -> Q + 2 Fe^{II}(cyt c) + 4 H+ (intermembrane space)
When electron transfer is reduced (by a high membrane potential or respiratory inhibitors such as antimycin A), Complex III may leak electrons to molecular oxygen, resulting in superoxide formation.
This complex is inhibited by dimercaprol (British Anti-Lewisite, BAL), naphthoquinone and antimycin.
Complex IV
In Complex IV (cytochrome c oxidase), sometimes called cytochrome aa3, four electrons are removed from four molecules of cytochrome c and transferred to molecular oxygen (O2) and four protons, producing two molecules of water. The complex contains coordinated copper ions and several heme groups. At the same time, eight protons are removed from the mitochondrial matrix (although only four are translocated across the membrane), contributing to the proton gradient. The exact details of proton pumping in Complex IV are still under study. Cyanide is an inhibitor of Complex IV.
Coupling with oxidative phosphorylation
According to the chemiosmotic coupling hypothesis, proposed by Nobel Prize in Chemistry winner Peter D. Mitchell, the electron transport chain and oxidative phosphorylation are coupled by a proton gradient across the inner mitochondrial membrane. The efflux of protons from the mitochondrial matrix creates an electrochemical gradient (proton gradient). This gradient is used by the FoF1 ATP synthase complex to make ATP via oxidative phosphorylation. ATP synthase is sometimes described as Complex V of the electron transport chain. The Fo component of ATP synthase acts as an ion channel that provides for a proton flux back into the mitochondrial matrix. It is composed of a, b and c subunits. Protons in the inter-membrane space of mitochondria first enter the ATP synthase complex through an a subunit channel. Then protons move to the c subunits. The number of c subunits determines how many protons are required to make the Fo turn one full revolution. For example, in humans, there are 8 c subunits, thus 8 protons are required. After the c subunits, protons finally enter the matrix through an a subunit channel that opens into the mitochondrial matrix. This reflux releases free energy produced during the generation of the oxidized forms of the electron carriers (NAD+ and Q) with energy provided by O2. The free energy is used to drive ATP synthesis, catalyzed by the F1 component of the complex. Coupling with oxidative phosphorylation is a key step for ATP production. However, in specific cases, uncoupling the two processes may be biologically useful. The uncoupling protein, thermogenin—present in the inner mitochondrial membrane of brown adipose tissue—provides for an alternative flow of protons back to the inner mitochondrial matrix. Thyroxine is also a natural uncoupler. This alternative flow results in thermogenesis rather than ATP production.
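A worked example of the stoichiometry mentioned above, using the figure of 8 c subunits in humans together with the standard assumption of 3 ATP made per full rotation of F1 (the per-ATP figures are common textbook estimates, not values from this article):

8 H+ per c-ring rotation / 3 ATP per rotation ≈ 2.7 H+ per ATP at the synthase itself;
import of ADP and phosphate into the matrix is usually reckoned to cost about one additional H+ per ATP, giving roughly 3.7 H+ per ATP overall.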
Reverse electron flow
Reverse electron flow is the transfer of electrons through the electron transport chain through the reverse redox reactions. Usually requiring a significant amount of energy to be used, this can reduce the oxidized forms of electron donors. For example, NAD+ can be reduced to NADH by Complex I. There are several factors that have been shown to induce reverse electron flow. However, more work needs to be done to confirm this. One example is blockage of ATP synthase, resulting in a build-up of protons and therefore a higher proton-motive force, inducing reverse electron flow.
Prokaryotic electron transport chains
In eukaryotes, NADH is the most important electron donor. The associated electron transport chain is NADH → Complex I → Q → Complex III → cytochrome c → Complex IV → O2 where Complexes I, III and IV are proton pumps, while Q and cytochrome c are mobile electron carriers. The electron acceptor for this process is molecular oxygen.
In prokaryotes (bacteria and archaea) the situation is more complicated, because there are several different electron donors and several different electron acceptors. The generalized electron transport chain in bacteria is:
Donor Donor Donor
↓ ↓ ↓
dehydrogenase → quinone → bc1 → cytochrome
↓ ↓
oxidase(reductase) oxidase(reductase)
↓ ↓
Acceptor Acceptor
Electrons can enter the chain at three levels: at the level of a dehydrogenase, at the level of the quinone pool, or at the level of a mobile cytochrome electron carrier. These levels correspond to successively more positive redox potentials, or to successively decreased potential differences relative to the terminal electron acceptor. In other words, they correspond to successively smaller Gibbs free energy changes for the overall redox reaction.
Individual bacteria use multiple electron transport chains, often simultaneously. Bacteria can use a number of different electron donors, a number of different dehydrogenases, a number of different oxidases and reductases, and a number of different electron acceptors. For example, E. coli (when growing aerobically using glucose and oxygen as an energy source) uses two different NADH dehydrogenases and two different quinol oxidases, for a total of four different electron transport chains operating simultaneously.
A common feature of all electron transport chains is the presence of a proton pump to create an electrochemical gradient over a membrane. Bacterial electron transport chains may contain as many as three proton pumps, like mitochondria, or they may contain only one or two.
Electron donors
In the current biosphere, the most common electron donors are organic molecules. Organisms that use organic molecules as an electron source are called organotrophs. Chemoorganotrophs (animals, fungi, protists) and photolithotrophs (plants and algae) constitute the vast majority of all familiar life forms.
Some prokaryotes can use inorganic matter as an electron source. Such an organism is called a (chemo)lithotroph ("rock-eater"). Inorganic electron donors include hydrogen, carbon monoxide, ammonia, nitrite, sulfur, sulfide, manganese oxide, and ferrous iron. Lithotrophs have been found growing in rock formations thousands of meters below the surface of Earth. Because of their volume of distribution, lithotrophs may actually outnumber organotrophs and phototrophs in our biosphere.
The use of inorganic electron donors such as hydrogen as an energy source is of particular interest in the study of evolution. This type of metabolism must logically have preceded the use of organic molecules and oxygen as an energy source.
Dehydrogenases: equivalents to complexes I and II
Bacteria can use several different electron donors. When organic matter is the electron source, the donor may be NADH or succinate, in which case electrons enter the electron transport chain via NADH dehydrogenase (similar to Complex I in mitochondria) or succinate dehydrogenase (similar to Complex II). Other dehydrogenases may be used to process different energy sources: formate dehydrogenase, lactate dehydrogenase, glyceraldehyde-3-phosphate dehydrogenase, H2 dehydrogenase (hydrogenase), and so on. Some dehydrogenases are also proton pumps, while others funnel electrons into the quinone pool. Most dehydrogenases show induced expression in the bacterial cell in response to metabolic needs triggered by the environment in which the cells grow. In the case of lactate dehydrogenase in E. coli, the enzyme is used aerobically and in combination with other dehydrogenases. It is inducible and is expressed when the concentration of DL-lactate in the cell is high.
Quinone carriers
Quinones are mobile, lipid-soluble carriers that shuttle electrons (and protons) between large, relatively immobile macromolecular complexes embedded in the membrane. Bacteria use ubiquinone (Coenzyme Q, the same quinone that mitochondria use) and related quinones such as menaquinone (Vitamin K). Archaea in the genus Sulfolobus use caldariellaquinone. The use of different quinones is due to slight changes in redox potentials caused by changes in structure. The change in redox potentials of these quinones may be suited to changes in the electron acceptors or variations of redox potentials in bacterial complexes.
Proton pumps
A proton pump is any process that creates a proton gradient across a membrane. Protons can be physically moved across a membrane, as seen in mitochondrial Complexes I and IV. The same effect can be produced by moving electrons in the opposite direction. The result is the disappearance of a proton from the cytoplasm and the appearance of a proton in the periplasm. Mitochondrial Complex III is this second type of proton pump, which is mediated by a quinone (the Q cycle).
Some dehydrogenases are proton pumps, while others are not. Most oxidases and reductases are proton pumps, but some are not. Cytochrome bc1 is a proton pump found in many, but not all, bacteria (not in E. coli). As the name implies, bacterial bc1 is similar to mitochondrial bc1 (Complex III).
Cytochrome electron carriers
Cytochromes are proteins that contain iron. They are found in two very different environments.
Some cytochromes are water-soluble carriers that shuttle electrons to and from large, immobile macromolecular structures embedded in the membrane. The mobile cytochrome electron carrier in mitochondria is cytochrome c. Bacteria use a number of different mobile cytochrome electron carriers.
Other cytochromes are found within macromolecules such as Complex III and Complex IV. They also function as electron carriers, but in a very different, intramolecular, solid-state environment.
Electrons may enter an electron transport chain at the level of a mobile cytochrome or quinone carrier. For example, electrons from inorganic electron donors (nitrite, ferrous iron, and so on) enter the electron transport chain at the cytochrome level. When electrons enter at a redox level greater than NADH, the electron transport chain must operate in reverse to produce this necessary, higher-energy molecule.
Electron acceptors and terminal oxidase/reductase
As there are a number of different electron donors (organic matter in organotrophs, inorganic matter in lithotrophs), there are a number of different electron acceptors, both organic and inorganic. As with other steps of the ETC, an enzyme is required to help with the process.
If oxygen is available, it is most often used as the terminal electron acceptor in aerobic bacteria and facultative anaerobes. An oxidase reduces the O2 to water while oxidizing something else. In mitochondria, the terminal membrane complex (Complex IV) is cytochrome oxidase, which oxidizes the cytochrome. Aerobic bacteria use a number of different terminal oxidases. For example, E. coli (a facultative anaerobe) does not have a cytochrome oxidase or a bc1 complex. Under aerobic conditions, it uses two different terminal quinol oxidases (both proton pumps) to reduce oxygen to water.
Bacterial terminal oxidases can be split into classes according to the molecules that act as terminal electron acceptors. Class I oxidases are cytochrome oxidases and use oxygen as the terminal electron acceptor. Class II oxidases are quinol oxidases and can use a variety of terminal electron acceptors. Both of these classes can be subdivided into categories based on what redox-active components they contain. For example, heme aa3 Class I terminal oxidases are much more efficient than Class II terminal oxidases.
Mostly in anaerobic environments different electron acceptors are used, including nitrate, nitrite, ferric iron, sulfate, carbon dioxide, and small organic molecules such as fumarate. When bacteria grow in anaerobic environments, the terminal electron acceptor is reduced by an enzyme called a reductase. E. coli can use fumarate reductase, nitrate reductase, nitrite reductase, DMSO reductase, or trimethylamine-N-oxide reductase, depending on the availability of these acceptors in the environment.
Most terminal oxidases and reductases are inducible. They are synthesized by the organism as needed, in response to specific environmental conditions.
Photosynthetic
In oxidative phosphorylation, electrons are transferred from an electron donor such as NADH to an acceptor such as O2 through an electron transport chain, releasing energy. In photophosphorylation, the energy of sunlight is used to create a high-energy electron donor which can subsequently reduce oxidized components and couple to ATP synthesis via proton translocation by the electron transport chain.
Photosynthetic electron transport chains, like the mitochondrial chain, can be considered as a special case of the bacterial systems. They use mobile, lipid-soluble quinone carriers (phylloquinone and plastoquinone) and mobile, water-soluble carriers (cytochromes). They also contain a proton pump. The proton pump in all photosynthetic chains resembles mitochondrial Complex III. The commonly-held theory of symbiogenesis proposes that both organelles descended from bacteria.
See also
Charge-transfer complex
CoRR hypothesis
Electron equivalent
Hydrogen hypothesis
Respirasome
Electric bacteria
References
Further reading
External links
Khan Academy, video lecture
KEGG pathway: Oxidative phosphorylation, overlaid with genes found in Pseudomonas fluorescens Pf0-1. Click "help" for a how-to.
Cellular respiration
Integral membrane proteins
Biomolecule
A biomolecule or biological molecule is loosely defined as a molecule produced by a living organism and essential to one or more typically biological processes. Biomolecules include large macromolecules such as proteins, carbohydrates, lipids, and nucleic acids, as well as small molecules such as vitamins and hormones. A general name for this class of material is biological materials. Biomolecules are an important element of living organisms; they are often endogenous, i.e. produced within the organism, but organisms usually also need exogenous biomolecules, for example certain nutrients, to survive.
Biology and its subfields of biochemistry and molecular biology study biomolecules and their reactions. Most biomolecules are organic compounds, and just four elements—oxygen, carbon, hydrogen, and nitrogen—make up 96% of the human body's mass. But many other elements, such as the various biometals, are also present in small amounts.
The uniformity of both specific types of molecules (the biomolecules) and of certain metabolic pathways are invariant features among the wide diversity of life forms; thus these biomolecules and metabolic pathways are referred to as "biochemical universals" or "theory of material unity of the living beings", a unifying concept in biology, along with cell theory and evolution theory.
Types of biomolecules
A diverse range of biomolecules exist, including:
Small molecules:
Lipids, fatty acids, glycolipids, sterols, monosaccharides
Vitamins
Hormones, neurotransmitters
Metabolites
Monomers, oligomers and polymers:
Nucleosides and nucleotides
Nucleosides are molecules formed by attaching a nucleobase to a ribose or deoxyribose ring. Examples of these include cytidine (C), uridine (U), adenosine (A), guanosine (G), and thymidine (T).
Nucleosides can be phosphorylated by specific kinases in the cell, producing nucleotides.
Both DNA and RNA are polymers, consisting of long, linear molecules assembled by polymerase enzymes from repeating structural units, or monomers, of mononucleotides. DNA uses the deoxynucleotides C, G, A, and T, while RNA uses the ribonucleotides (which have an extra hydroxyl(OH) group on the pentose ring) C, G, A, and U. Modified bases are fairly common (such as with methyl groups on the base ring), as found in ribosomal RNA or transfer RNAs or for discriminating the new from old strands of DNA after replication.
Each nucleotide is made of a cyclic nitrogenous base, a pentose and one to three phosphate groups. They contain carbon, nitrogen, oxygen, hydrogen and phosphorus. They serve as sources of chemical energy (adenosine triphosphate and guanosine triphosphate), participate in cellular signaling (cyclic guanosine monophosphate and cyclic adenosine monophosphate), and are incorporated into important cofactors of enzymatic reactions (coenzyme A, flavin adenine dinucleotide, flavin mononucleotide, and nicotinamide adenine dinucleotide phosphate).
DNA and RNA structure
DNA structure is dominated by the well-known double helix formed by Watson-Crick base-pairing of C with G and A with T. This is known as B-form DNA, and is overwhelmingly the most favorable and common state of DNA; its highly specific and stable base-pairing is the basis of reliable genetic information storage. DNA can sometimes occur as single strands (often needing to be stabilized by single-strand binding proteins) or as A-form or Z-form helices, and occasionally in more complex 3D structures such as the crossover at Holliday junctions during DNA replication.
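The Watson-Crick pairing just described (C with G, A with T) can be illustrated with a short, self-contained Python sketch added here; the input sequence is arbitrary.

# Build the reverse complement of a DNA strand using Watson-Crick pairing
# (C pairs with G, A pairs with T); the input sequence is arbitrary.
PAIRING = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand):
    return "".join(PAIRING[base] for base in reversed(strand))

print(reverse_complement("ATGCGT"))  # ACGCAT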
RNA, in contrast, forms large and complex 3D tertiary structures reminiscent of proteins, as well as the loose single strands with locally folded regions that constitute messenger RNA molecules. Those RNA structures contain many stretches of A-form double helix, connected into definite 3D arrangements by single-stranded loops, bulges, and junctions. Examples are tRNA, ribosomes, ribozymes, and riboswitches. These complex structures are facilitated by the fact that RNA backbone has less local flexibility than DNA but a large set of distinct conformations, apparently because of both positive and negative interactions of the extra OH on the ribose. Structured RNA molecules can do highly specific binding of other molecules and can themselves be recognized specifically; in addition, they can perform enzymatic catalysis (when they are known as "ribozymes", as initially discovered by Tom Cech and colleagues).
Saccharides
Monosaccharides are the simplest form of carbohydrates with only one simple sugar. They essentially contain an aldehyde or ketone group in their structure. The presence of an aldehyde group in a monosaccharide is indicated by the prefix aldo-. Similarly, a ketone group is denoted by the prefix keto-. Examples of monosaccharides include the trioses, tetroses, pentoses (such as ribose and deoxyribose), hexoses (such as glucose, fructose, and galactose), and heptoses. Consumed fructose and glucose have different rates of gastric emptying, are differentially absorbed and have different metabolic fates, providing multiple opportunities for two different saccharides to differentially affect food intake. Most saccharides eventually provide fuel for cellular respiration.
Disaccharides are formed when two monosaccharides, or two single simple sugars, form a bond with removal of water. They can be hydrolyzed to yield their saccharide building blocks by boiling with dilute acid or reacting them with appropriate enzymes. Examples of disaccharides include sucrose, maltose, and lactose.
Polysaccharides are polymerized monosaccharides, or complex carbohydrates. They have multiple simple sugars. Examples are starch, cellulose, and glycogen. They are generally large and often have a complex branched connectivity. Because of their size, polysaccharides are not water-soluble, but their many hydroxy groups become hydrated individually when exposed to water, and some polysaccharides form thick colloidal dispersions when heated in water. Shorter polysaccharides, with 3 to 10 monomers, are called oligosaccharides.
A fluorescent indicator-displacement molecular imprinting sensor was developed for discriminating saccharides. It successfully discriminated three brands of orange juice beverage. The resulting change in fluorescence intensity of the sensing films is directly related to the saccharide concentration.
Lignin
Lignin is a complex polyphenolic macromolecule composed mainly of beta-O4-aryl linkages. After cellulose, lignin is the second most abundant biopolymer and is one of the primary structural components of most plants. It contains subunits derived from p-coumaryl alcohol, coniferyl alcohol, and sinapyl alcohol, and is unusual among biomolecules in that it is racemic. The lack of optical activity is due to the polymerization of lignin which occurs via free radical coupling reactions in which there is no preference for either configuration at a chiral center.
Lipid
Lipids (oleaginous) are chiefly fatty acid esters, and are the basic building blocks of biological membranes. Another biological role is energy storage (e.g., triglycerides). Most lipids consist of a polar or hydrophilic head (typically glycerol) and one to three nonpolar or hydrophobic fatty acid tails, and therefore they are amphiphilic. Fatty acids consist of unbranched chains of carbon atoms that are connected by single bonds alone (saturated fatty acids) or by both single and double bonds (unsaturated fatty acids). The chains are usually 14–24 carbon atoms long, and the number of carbons is almost always even.
For lipids present in biological membranes, the hydrophilic head is from one of three classes:
Glycolipids, whose heads contain an oligosaccharide with 1-15 saccharide residues.
Phospholipids, whose heads contain a positively charged group that is linked to the tail by a negatively charged phosphate group.
Sterols, whose heads contain a planar steroid ring, for example, cholesterol.
Other lipids include prostaglandins and leukotrienes which are both 20-carbon fatty acyl units synthesized from arachidonic acid.
These are classed as fatty acid derivatives (eicosanoids).
Amino acids
Amino acids contain both amino and carboxylic acid functional groups. (In biochemistry, the term amino acid is used when referring to those amino acids in which the amino and carboxylate functionalities are attached to the same carbon, plus proline which is not actually an amino acid).
Modified amino acids are sometimes observed in proteins; this is usually the result of enzymatic modification after translation (protein synthesis). For example, phosphorylation of serine by kinases and dephosphorylation by phosphatases is an important control mechanism in the cell cycle. Only two amino acids other than the standard twenty are known to be incorporated into proteins during translation, in certain organisms:
Selenocysteine is incorporated into some proteins at a UGA codon, which is normally a stop codon.
Pyrrolysine is incorporated into some proteins at a UAG codon; for instance, in some methanogens it occurs in enzymes that are used to produce methane.
Besides those used in protein synthesis, other biologically important amino acids include carnitine (used in lipid transport within a cell), ornithine, GABA and taurine.
Protein structure
The particular series of amino acids that form a protein is known as that protein's primary structure. This sequence is determined by the genetic makeup of the individual. It specifies the order of side-chain groups along the linear polypeptide "backbone".
Proteins have two types of well-classified, frequently occurring elements of local structure defined by a particular pattern of hydrogen bonds along the backbone: alpha helix and beta sheet. Their number and arrangement is called the secondary structure of the protein. Alpha helices are regular spirals stabilized by hydrogen bonds between the backbone CO group (carbonyl) of one amino acid residue and the backbone NH group (amide) of the i+4 residue. The spiral has about 3.6 amino acids per turn, and the amino acid side chains stick out from the cylinder of the helix. Beta pleated sheets are formed by backbone hydrogen bonds between individual beta strands each of which is in an "extended", or fully stretched-out, conformation. The strands may lie parallel or antiparallel to each other, and the side-chain direction alternates above and below the sheet. Hemoglobin contains only helices, natural silk is formed of beta pleated sheets, and many enzymes have a pattern of alternating helices and beta-strands. The secondary-structure elements are connected by "loop" or "coil" regions of non-repetitive conformation, which are sometimes quite mobile or disordered but usually adopt a well-defined, stable arrangement.
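The i → i+4 backbone hydrogen-bonding pattern of the alpha helix described above can be enumerated explicitly; the short Python sketch below is an added illustration and the helix length is arbitrary.

# List backbone hydrogen-bond partners in an ideal alpha helix:
# the C=O of residue i bonds to the N-H of residue i + 4.
def helix_hbond_pairs(n_residues):
    return [(i, i + 4) for i in range(1, n_residues - 3)]

print(helix_hbond_pairs(10))
# [(1, 5), (2, 6), (3, 7), (4, 8), (5, 9), (6, 10)]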
The overall, compact, 3D structure of a protein is termed its tertiary structure or its "fold". It is formed as a result of various attractive forces like hydrogen bonding, disulfide bridges, hydrophobic interactions, hydrophilic interactions, van der Waals forces, etc.
When two or more polypeptide chains (either of identical or of different sequence) cluster to form a protein, quaternary structure of protein is formed. Quaternary structure is an attribute of polymeric (same-sequence chains) or heteromeric (different-sequence chains) proteins like hemoglobin, which consists of two "alpha" and two "beta" polypeptide chains.
Apoenzymes
An apoenzyme (or, generally, an apoprotein) is the protein without any small-molecule cofactors, substrates, or inhibitors bound. It is often important as an inactive storage, transport, or secretory form of a protein. This is required, for instance, to protect the secretory cell from the activity of that protein.
Apoenzymes become active enzymes on addition of a cofactor. Cofactors can be either inorganic (e.g., metal ions and iron-sulfur clusters) or organic compounds (e.g., flavin and heme). Organic cofactors can be either prosthetic groups, which are tightly bound to an enzyme, or coenzymes, which are released from the enzyme's active site during the reaction.
Isoenzymes
Isoenzymes, or isozymes, are multiple forms of an enzyme, with slightly different protein sequence and closely similar but usually not identical functions. They are either products of different genes, or else different products of alternative splicing. They may either be produced in different organs or cell types to perform the same function, or several isoenzymes may be produced in the same cell type under differential regulation to suit the needs of changing development or environment. LDH (lactate dehydrogenase) has multiple isozymes, while fetal hemoglobin is an example of a developmentally regulated isoform of a non-enzymatic protein. The relative levels of isoenzymes in blood can be used to diagnose problems in the organ of secretion .
See also
Biomolecular engineering
List of biomolecules
Metabolism
Multi-state modeling of biomolecules
References
External links
Society for Biomolecular Sciences provider of a forum for education and information exchange among professionals within drug discovery and related disciplines.
Molecules
Biochemistry
Organic compounds
Assimilation (biology)
Assimilation is the process of absorption of vitamins, minerals, and other chemicals from food as part of the nutrition of an organism. In humans, this is always done with a chemical breakdown (enzymes and acids) and a physical breakdown (oral mastication and stomach churning), followed by the chemical alteration of substances in the bloodstream by the liver or by cellular secretions. Although a few similar compounds can be absorbed in digestion (bio-assimilation), the bioavailability of many compounds is dictated by this second process, since both the liver and cellular secretions can be very specific in their metabolic action (see chirality). This second process is where the absorbed food reaches the cells via the liver.
Most foods are composed of largely indigestible components, depending on the enzymes and effectiveness of an animal's digestive tract. The most well-known of these indigestible compounds is cellulose, the basic chemical polymer in the makeup of plant cell walls. Most animals, however, do not produce cellulase, the enzyme needed to digest cellulose. However, some animal species have developed symbiotic relationships with cellulase-producing bacteria (see termites and metamonads). This allows termites to use the energy-dense cellulose carbohydrate. Other such enzymes are known to significantly improve bio-assimilation of nutrients. Because of the use of bacterial derivatives, enzymatic dietary supplements now contain such enzymes as amylase, glucoamylase, protease, invertase, peptidase, lipase, lactase, phytase, and cellulase.
Examples of biological assimilation
Photosynthesis, a process whereby carbon dioxide and water are transformed into a number of organic molecules in plant cells.
Nitrogen fixation from the soil into organic molecules by symbiotic bacteria which live in the roots of certain plants, such as Leguminosae.
Magnesium supplements (orotate, oxide, sulfate, citrate, and glycerate) are all structurally similar. However, oxide and sulfate are not water-soluble and do not enter the bloodstream, while orotate and glycerate have normal exiguous liver conversion. Chlorophyll sources or magnesium citrate are highly bioassimilable.
The absorption of nutrients into the body after digestion in the intestine and its transformation in biological tissues and fluids.
See also
Anabolism
Biochemistry
Nutrition
Respiration
Transportation
Excretion
References
Biological processes
Metabolism
Biomedicine
Biomedicine (also referred to as Western medicine, mainstream medicine or conventional medicine) is a branch of medical science that applies biological and physiological principles to clinical practice. Biomedicine stresses standardized, evidence-based treatment validated through biological research, with treatment administered via formally trained doctors, nurses, and other such licensed practitioners.
Biomedicine also relates to many other categories in health and biology-related fields. It has been the dominant system of medicine in the Western world for more than a century.
It includes many biomedical disciplines and areas of specialty that typically contain the "bio-" prefix such as molecular biology, biochemistry, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, neuroscience, microbiology, virology, immunology, parasitology, physiology, pathology, anatomy, toxicology, and many others that generally concern life sciences as applied to medicine.
Overview
Biomedicine is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of HIV, from the understanding of molecular interactions to the study of carcinogenesis, from a single-nucleotide polymorphism (SNP) to gene therapy.
Biomedicine is based on molecular biology and combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome, physiome and metabolome with the particular point of view of devising new technologies for prediction, diagnosis and therapy.
Biomedicine involves the study of (patho-) physiological processes with methods from biology and physiology. Approaches range from understanding molecular interactions to the study of the consequences at the in vivo level. These processes are studied with the particular point of view of devising new strategies for diagnosis and therapy.
Depending on the severity of the disease, biomedicine pinpoints a problem within a patient and fixes the problem through medical intervention. Medicine focuses on curing diseases rather than improving one's health.
In social sciences biomedicine is described somewhat differently. Through an anthropological lens biomedicine extends beyond the realm of biology and scientific facts; it is a socio-cultural system which collectively represents reality. While biomedicine is traditionally thought to have no bias due to the evidence-based practices, Gaines & Davis-Floyd (2004) highlight that biomedicine itself has a cultural basis and this is because biomedicine reflects the norms and values of its creators.
Molecular biology
Molecular biology is the study of the synthesis and regulation of a cell's DNA, RNA, and proteins. It makes use of different techniques, including the polymerase chain reaction (PCR), gel electrophoresis, and macromolecule blotting, to manipulate and analyse DNA.
The polymerase chain reaction is performed by placing a mixture of the desired DNA, DNA polymerase, primers, and nucleotide bases into a machine. The machine heats up and cools down at various temperatures to break the hydrogen bonds binding the two DNA strands and to allow nucleotide bases to be added onto the two DNA templates after they have been separated.
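Because each cycle can at most double the number of templates, the yield of an idealized reaction grows geometrically; the cycle number below is only an illustrative choice, assuming perfect doubling efficiency (real reactions fall short of this ideal):

$N = N_0 \cdot 2^{n}$

For example, $n = 30$ cycles would turn a single template molecule into $2^{30} \approx 1.1 \times 10^{9}$ copies.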
Gel electrophoresis is a technique used to identify similar DNA between two unknown samples of DNA. This process is done by first preparing an agarose gel. This jelly-like sheet has wells into which DNA is loaded. An electric current is applied so that the DNA, which is negatively charged due to its phosphate groups, is attracted to the positive electrode. Different DNA fragments move at different speeds because some pieces are larger than others. Thus, if two DNA samples show a similar pattern on the gel, one can tell that these DNA samples match.
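The band-matching comparison described above can be sketched as a simple computation; the fragment lengths and tolerance in the following Python sketch are hypothetical illustration values, not part of any standard analysis software.

```python
# Illustrative sketch: compare band patterns (fragment lengths in base pairs)
# from two gel lanes. Fragment lengths and the matching tolerance are
# hypothetical values chosen only for demonstration.

def matching_bands(lane_a, lane_b, tolerance_bp=50):
    """Return pairs of bands from the two lanes that fall within the tolerance."""
    matches = []
    for a in lane_a:
        for b in lane_b:
            if abs(a - b) <= tolerance_bp:
                matches.append((a, b))
    return matches

sample_1 = [3000, 1500, 600, 250]   # band sizes observed in lane 1 (bp)
sample_2 = [2980, 1490, 610, 100]   # band sizes observed in lane 2 (bp)

shared = matching_bands(sample_1, sample_2)
print(f"{len(shared)} of {len(sample_1)} bands match:", shared)
```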
Macromolecule blotting is a process performed after gel electrophoresis. An alkaline solution is prepared in a container. A sponge is placed into the solution and an agarose gel is placed on top of the sponge. Next, nitrocellulose paper is placed on top of the agarose gel and paper towels are added on top of the nitrocellulose paper to apply pressure. The alkaline solution is drawn upwards towards the paper towels. During this process, the DNA denatures in the alkaline solution and is carried upwards to the nitrocellulose paper. The paper is then placed into a plastic bag and filled with a solution of DNA fragments, called the probe, derived from the desired sample of DNA. The probes anneal to the complementary DNA of the bands already found on the nitrocellulose sample. Afterwards, probes are washed off and the only ones present are the ones that have annealed to complementary DNA on the paper. Next the paper is exposed to an X-ray film. The radioactivity of the probes creates black bands on the film, called an autoradiograph. As a result, only patterns of DNA similar to that of the probe appear on the film. This allows the comparison of similar DNA sequences across multiple DNA samples. The overall process results in a precise reading of similarities between DNA samples.
Biochemistry
Biochemistry is the science of the chemical processes which take place within living organisms. Living organisms need essential elements to survive, among which are carbon, hydrogen, nitrogen, oxygen, calcium, and phosphorus. These elements make up the four macromolecules that living organisms need to survive: carbohydrates, lipids, proteins, and nucleic acids.
Carbohydrates, made up of carbon, hydrogen, and oxygen, are energy-storing molecules. One of the simplest carbohydrates is glucose, C6H12O6, which is used in cellular respiration to produce ATP (adenosine triphosphate), which supplies cells with energy.
Proteins are chains of amino acids that function, among other things, in contracting skeletal muscle, as catalysts, as transport molecules, and as storage molecules. Protein catalysts can facilitate biochemical processes by lowering the activation energy of a reaction. Hemoglobins are also proteins, carrying oxygen to an organism's cells.
Lipids, also known as fats, are small molecules derived from biochemical subunits from either the ketoacyl or isoprene groups. These subunits give rise to eight distinct categories: fatty acids, glycerolipids, glycerophospholipids, sphingolipids, saccharolipids, and polyketides (derived from condensation of ketoacyl subunits); and sterol lipids and prenol lipids (derived from condensation of isoprene subunits). Their primary purpose is to store energy over the long term. Due to their unique structure, lipids provide more than twice the amount of energy that carbohydrates do. Lipids can also be used as insulation. Moreover, lipids can be used in hormone production to maintain a healthy hormonal balance and provide structure to cell membranes.
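The "more than twice" figure follows from commonly quoted, approximate physiological energy densities:

$\frac{\text{lipids}}{\text{carbohydrates}} \approx \frac{37\ \mathrm{kJ/g}\ (9\ \mathrm{kcal/g})}{17\ \mathrm{kJ/g}\ (4\ \mathrm{kcal/g})} \approx 2.2$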
Nucleic acids include DNA, the main genetic information-storing substance, found oftentimes in the cell nucleus, which controls the metabolic processes of the cell. DNA consists of two complementary antiparallel strands consisting of varying patterns of nucleotides. RNA is a single-stranded nucleic acid, which is transcribed from DNA and used for translation, the process of making proteins from RNA sequences.
See also
References
External links
Branches of biology
Veterinary medicine
Western culture
Ionization
Ionization (or ionisation specifically in Britain, Ireland, Australia and New Zealand) is the process by which an atom or a molecule acquires a negative or positive charge by gaining or losing electrons, often in conjunction with other chemical changes. The resulting electrically charged atom or molecule is called an ion. Ionization can result from the loss of an electron after collisions with subatomic particles, collisions with other atoms, molecules, electrons, positrons, protons, antiprotons and ions, or through the interaction with electromagnetic radiation. Heterolytic bond cleavage and heterolytic substitution reactions can result in the formation of ion pairs. Ionization can occur through radioactive decay by the internal conversion process, in which an excited nucleus transfers its energy to one of the inner-shell electrons causing it to be ejected.
Uses
Everyday examples of gas ionization occur within a fluorescent lamp or other electrical discharge lamps. It is also used in radiation detectors such as the Geiger-Müller counter or the ionization chamber. The ionization process is widely used in a variety of equipment in fundamental science (e.g., mass spectrometry) and in medical treatment (e.g., radiation therapy). It is also widely used for air purification, though studies have shown harmful effects of this application.
Production of ions
Negatively charged ions are produced when a free electron collides with an atom and is subsequently trapped inside the electric potential barrier, releasing any excess energy. The process is known as electron capture ionization.
Positively charged ions are produced by transferring an amount of energy to a bound electron in a collision with charged particles (e.g. ions, electrons or positrons) or with photons. The threshold amount of the required energy is known as ionization potential. The study of such collisions is of fundamental importance with regard to the few-body problem, which is one of the major unsolved problems in physics. Kinematically complete experiments, i.e. experiments in which the complete momentum vector of all collision fragments (the scattered projectile, the recoiling target-ion, and the ejected electron) are determined, have contributed to major advances in the theoretical understanding of the few-body problem in recent years.
Adiabatic ionization
Adiabatic ionization is a form of ionization in which an electron is removed from or added to an atom or molecule in its lowest energy state to form an ion in its lowest energy state.
The Townsend discharge is a good example of the creation of positive ions and free electrons due to ion impact. It is a cascade reaction involving electrons in a region with a sufficiently high electric field in a gaseous medium that can be ionized, such as air. Following an original ionization event, due for instance to ionizing radiation, the positive ion drifts towards the cathode, while the free electron drifts towards the anode of the device. If the electric field is strong enough, the free electron gains sufficient energy to liberate a further electron when it next collides with another molecule. The two free electrons then travel towards the anode and gain sufficient energy from the electric field to cause impact ionization when the next collisions occur; and so on. This is effectively a chain reaction of electron generation, and is dependent on the free electrons gaining sufficient energy between collisions to sustain the avalanche.
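The growth of such an avalanche is commonly described with Townsend's first ionization coefficient $\alpha$, the number of ionizing collisions per electron per unit length, which depends strongly on the gas and the field; the numbers below are purely illustrative assumptions:

$n(d) = n_0\, e^{\alpha d}$

For example, with an assumed $\alpha = 5\ \mathrm{cm^{-1}}$ over a gap of $d = 1\ \mathrm{cm}$, each seed electron would produce on the order of $e^{5} \approx 150$ electrons.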
Ionization efficiency is the ratio of the number of ions formed to the number of electrons or photons used.
Ionization energy of atoms
The trend in the ionization energy of atoms is often used to demonstrate the periodic behavior of atoms with respect to the atomic number, as summarized by ordering atoms in Mendeleev's table. This is a valuable tool for establishing and understanding the ordering of electrons in atomic orbitals without going into the details of wave functions or the ionization process. An example is presented in the figure to the right. The periodic abrupt decrease in ionization potential after rare gas atoms, for instance, indicates the emergence of a new shell in alkali metals. In addition, the local maximums in the ionization energy plot, moving from left to right in a row, are indicative of s, p, d, and f sub-shells.
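The shell-structure argument can be illustrated with a few tabulated first ionization energies (approximate values from standard reference tables); the short Python sketch below simply prints them to show the sharp drop after each noble gas.

```python
# Approximate first ionization energies (eV), rounded to one decimal,
# chosen to illustrate the sharp drop after each noble gas
# (He -> Li, Ne -> Na, Ar -> K) that signals the start of a new shell.
first_ionization_eV = {
    "H": 13.6, "He": 24.6,   # end of period 1
    "Li": 5.4, "Ne": 21.6,   # start and end of period 2
    "Na": 5.1, "Ar": 15.8,   # start and end of period 3
    "K": 4.3,                # start of period 4
}

for element, energy in first_ionization_eV.items():
    print(f"{element:>2}: {energy:5.1f} eV")
```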
Semi-classical description of ionization
Classical physics and the Bohr model of the atom can qualitatively explain photoionization and collision-mediated ionization. In these cases, during the ionization process, the energy of the electron exceeds the energy difference of the potential barrier it is trying to pass. The classical description, however, cannot describe tunnel ionization since the process involves the passage of an electron through a classically forbidden potential barrier.
Quantum mechanical description of ionization
The interaction of atoms and molecules with sufficiently strong laser pulses or with other charged particles leads to the ionization to singly or multiply charged ions. The ionization rate, i.e. the ionization probability in unit time, can be calculated using quantum mechanics. (There are classical methods available also, like the Classical Trajectory Monte Carlo Method (CTMC), but it is not universally accepted and is often criticized by the community.) Two kinds of quantum mechanical methods exist: perturbative methods and non-perturbative methods such as time-dependent coupled-channel or time-independent close-coupling methods, in which the wave function is expanded in a finite basis set. There are numerous options available, e.g. B-splines or Coulomb wave packets. Another non-perturbative method is to solve the corresponding Schrödinger equation fully numerically on a lattice.
In general, the analytic solutions are not available, and the approximations required for manageable numerical calculations do not provide accurate enough results. However, when the laser intensity is sufficiently high, the detailed structure of the atom or molecule can be ignored and an analytic solution for the ionization rate is possible.
Tunnel ionization
Tunnel ionization is ionization due to quantum tunneling. In classical ionization, an electron must have enough energy to make it over the potential barrier, but quantum tunneling allows the electron simply to go through the potential barrier instead of going all the way over it because of the wave nature of the electron. The probability of an electron's tunneling through the barrier drops off exponentially with the width of the potential barrier. Therefore, an electron with a higher energy can make it further up the potential barrier, leaving a much thinner barrier to tunnel through and thus a greater chance to do so. In practice, tunnel ionization is observable when the atom or molecule is interacting with near-infrared strong laser pulses. This process can be understood as one by which a bound electron, through the absorption of more than one photon from the laser field, is ionized. This picture is generally known as multiphoton ionization (MPI).
Keldysh modeled the MPI process as a transition of the electron from the ground state of the atom to the Volkov states. In this model the perturbation of the ground state by the laser field is neglected and the details of atomic structure in determining the ionization probability are not taken into account. The major difficulty with Keldysh's model was its neglect of the effects of Coulomb interaction on the final state of the electron. As observed from the figure, the Coulomb field is not very small in magnitude compared to the potential of the laser at larger distances from the nucleus. This is in contrast to the approximation made by neglecting the potential of the laser at regions near the nucleus. Perelomov et al. included the Coulomb interaction at larger internuclear distances. Their model (which we call the PPT model) was derived for short-range potentials and includes the effect of the long-range Coulomb interaction through the first-order correction in the quasi-classical action. Larochelle et al. have compared the theoretically predicted ion versus intensity curves of rare gas atoms interacting with a Ti:Sapphire laser with experimental measurement. They have shown that the total ionization rate predicted by the PPT model fits the experimental ion yields very well for all rare gases in the intermediate regime of the Keldysh parameter.
The rate of MPI of an atom with ionization potential $E_i$ in a linearly polarized laser with frequency $\omega$ is given by the PPT formula, expressed in terms of the Keldysh parameter

$\gamma = \frac{\omega \sqrt{2 E_i}}{E}$ (in atomic units),

where $E$ is the peak electric field of the laser. The remaining coefficients of the PPT formula depend on $\gamma$, on the laser frequency, and on the effective quantum numbers of the ionizing state.
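As a representative numerical estimate (the intensity is chosen only for illustration, using standard atomic-unit conversions): for argon, $E_i = 15.76\ \mathrm{eV} \approx 0.58$ a.u.; for an 800 nm Ti:Sapphire laser, $\omega \approx 0.057$ a.u.; and an intensity of $1\times10^{14}\ \mathrm{W\,cm^{-2}}$ corresponds to a peak field $E \approx 0.053$ a.u. Then

$\gamma = \frac{0.057 \sqrt{2 \times 0.58}}{0.053} \approx 1.2,$

which lies in the intermediate regime between the multiphoton limit ($\gamma \gg 1$) and the tunnelling limit ($\gamma \ll 1$).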
Quasi-static tunnel ionization
The quasi-static tunneling (QST) is the ionization whose rate can be satisfactorily predicted by the ADK model, i.e. the limit of the PPT model when $\gamma$ approaches zero; the rate of QST is then given by the ADK formula.
Compared with the PPT rate, the absence in the QST rate of the summation over n, which represents different above threshold ionization (ATI) peaks, is remarkable.
Strong field approximation for the ionization rate
The calculations of PPT are done in the E-gauge, meaning that the laser field is taken as electromagnetic waves. The ionization rate can also be calculated in the A-gauge, which emphasizes the particle nature of light (absorbing multiple photons during ionization). This approach was adopted by the Krainov model, based on the earlier works of Faisal and Reiss. The resulting rate is expressed in terms of the ponderomotive energy $U_p$, the minimum number of photons necessary to ionize the atom, a double Bessel function, the angle between the momentum of the electron, p, and the electric field of the laser, F, a three-dimensional Fourier transform, and a factor that incorporates the Coulomb correction in the SFA model.
Population trapping
In calculating the rate of MPI of atoms only transitions to the continuum states are considered. Such an approximation is acceptable as long as there is no multiphoton resonance between the ground state and some excited states. However, in the real situation of interaction with pulsed lasers, during the evolution of laser intensity, due to different Stark shifts of the ground and excited states there is a possibility that some excited state goes into multiphoton resonance with the ground state. Within the dressed atom picture, the ground state dressed by photons and the resonant state undergo an avoided crossing at the resonance intensity. The minimum distance at the avoided crossing is proportional to the generalized Rabi frequency coupling the two states. According to Story et al., the probability of remaining in the ground state depends on the time-dependent energy difference between the two dressed states.
In interaction with a short pulse, if the dynamic resonance is reached in the rising or the falling part of the pulse, the population practically remains in the ground state and the effect of multiphoton resonances may be neglected. However, if the states go onto resonance at the peak of the pulse, then the excited state is populated. After being populated, since the ionization potential of the excited state is small, it is expected that the electron will be instantly ionized.
In 1992, de Boer and Muller showed that Xe atoms subjected to short laser pulses could survive in the highly excited states 4f, 5f, and 6f. These states were believed to have been excited by the dynamic Stark shift of the levels into multiphoton resonance with the field during the rising part of the laser pulse. Subsequent evolution of the laser pulse did not completely ionize these states, leaving behind some highly excited atoms. We shall refer to this phenomenon as "population trapping".
We mention the theoretical calculation that incomplete ionization occurs whenever there is parallel resonant excitation into a common level with ionization loss. We consider a state such as 6f of Xe which consists of 7 quasi-degenerate levels in the range of the laser bandwidth. These levels along with the continuum constitute a lambda system. The mechanism of the lambda type trapping is schematically presented in figure. At the rising part of the pulse (a) the excited state (with two degenerate levels 1 and 2) is not in multiphoton resonance with the ground state. The electron is ionized through multiphoton coupling with the continuum. As the intensity of the pulse is increased the excited state and the continuum are shifted in energy due to the Stark shift. At the peak of the pulse (b) the excited states go into multiphoton resonance with the ground state. As the intensity starts to decrease (c), the two states are coupled through the continuum and the population is trapped in a coherent superposition of the two states. Under subsequent action of the same pulse, due to interference in the transition amplitudes of the lambda system, the field cannot ionize the population completely and a fraction of the population will be trapped in a coherent superposition of the quasi-degenerate levels. According to this explanation the states with higher angular momentum – with more sublevels – would have a higher probability of trapping the population. In general the strength of the trapping will be determined by the strength of the two-photon coupling between the quasi-degenerate levels via the continuum. In 1996, using a very stable laser and by minimizing the masking effects of the focal region expansion with increasing intensity, Talebpour et al. observed structures on the curves of singly charged ions of Xe, Kr and Ar. These structures were attributed to electron trapping in the strong laser field. A more unambiguous demonstration of population trapping has been reported by T. Morishita and C. D. Lin.
Non-sequential multiple ionization
The phenomenon of non-sequential ionization (NSI) of atoms exposed to intense laser fields has been a subject of many theoretical and experimental studies since 1983. The pioneering work began with the observation of a "knee" structure on the Xe2+ ion signal versus intensity curve by L’Huillier et al. From the experimental point of view, the NS double ionization refers to processes which somehow enhance the rate of production of doubly charged ions by a huge factor at intensities below the saturation intensity of the singly charged ion. Many, on the other hand, prefer to define the NSI as a process by which two electrons are ionized nearly simultaneously. This definition implies that apart from the sequential channel there is another channel which is the main contribution to the production of doubly charged ions at lower intensities. The first observation of triple NSI in argon interacting with a 1 μm laser was reported by Augst et al. Later, in a systematic study of the NSI of all rare gas atoms, the quadruple NSI of Xe was observed. The most important conclusion of this study was the observation of a simple relation between the rate of NSI to any charge state and the rate of quasi-static tunnel ionization (predicted by the ADK formula) to the previous charge states, with coefficients that depend on the wavelength of the laser (but not on the pulse duration).
Two models have been proposed to explain the non-sequential ionization; the shake-off model and electron re-scattering model. The shake-off (SO) model, first proposed by Fittinghoff et al., is adopted from the field of ionization of atoms by X rays and electron projectiles where the SO process is one of the major mechanisms responsible for the multiple ionization of atoms. The SO model describes the NSI process as a mechanism where one electron is ionized by the laser field and the departure of this electron is so rapid that the remaining electrons do not have enough time to adjust themselves to the new energy states. Therefore, there is a certain probability that, after the ionization of the first electron, a second electron is excited to states with higher energy (shake-up) or even ionized (shake-off). We should mention that, until now, there has been no quantitative calculation based on the SO model, and the model is still qualitative.
The electron rescattering model was independently developed by Kuchiev, Schafer et al., Corkum, Becker and Faisal, and Faisal and Becker. The principal features of the model can be understood easily from Corkum's version. Corkum's model describes the NS ionization as a process whereby an electron is tunnel ionized. The electron then interacts with the laser field where it is accelerated away from the nuclear core. If the electron has been ionized at an appropriate phase of the field, it will pass by the position of the remaining ion half a cycle later, where it can free an additional electron by electron impact. Only half of the time is the electron released with the appropriate phase; the other half it never returns to the nuclear core. The maximum kinetic energy that the returning electron can have is 3.17 times the ponderomotive potential of the laser. Corkum's model places a cut-off limit on the minimum intensity (the ponderomotive potential is proportional to intensity) at which ionization due to re-scattering can occur.
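To put numbers on the 3.17 Up cut-off (the laser parameters here are chosen only for illustration), the ponderomotive potential in practical units is approximately

$U_p\,[\mathrm{eV}] \approx 9.33\times10^{-14}\; I\,[\mathrm{W\,cm^{-2}}]\; \lambda^2\,[\mathrm{\mu m^2}],$

so at 800 nm and $1\times10^{14}\ \mathrm{W\,cm^{-2}}$, $U_p \approx 6\ \mathrm{eV}$ and the maximum return energy is about $3.17\,U_p \approx 19\ \mathrm{eV}$.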
The re-scattering model in Kuchiev's version (Kuchiev's model) is quantum mechanical. The basic idea of the model is illustrated by Feynman diagrams in figure a. First both electrons are in the ground state of an atom. The lines marked a and b describe the corresponding atomic states. Then the electron a is ionized. The beginning of the ionization process is shown by the intersection with a sloped dashed line, where the MPI occurs. The propagation of the ionized electron in the laser field, during which it absorbs other photons (ATI), is shown by the full thick line. The collision of this electron with the parent atomic ion is shown by a vertical dotted line representing the Coulomb interaction between the electrons. The state marked with c describes the ion excitation to a discrete or continuum state. Figure b describes the exchange process. Kuchiev's model, contrary to Corkum's model, does not predict any threshold intensity for the occurrence of NS ionization.
Kuchiev did not include the Coulomb effects on the dynamics of the ionized electron. This resulted in the underestimation of the double ionization rate by a huge factor. Obviously, in the approach of Becker and Faisal (which is equivalent to Kuchiev's model in spirit), this drawback does not exist. In fact, their model is more exact and does not suffer from the large number of approximations made by Kuchiev. Their calculation results perfectly fit with the experimental results of Walker et al. Becker and Faisal have been able to fit the experimental results on the multiple NSI of rare gas atoms using their model. As a result, the electron re-scattering can be taken as the main mechanism for the occurrence of the NSI process.
Multiphoton ionization of inner-valence electrons and fragmentation of polyatomic molecules
The ionization of inner valence electrons is responsible for the fragmentation of polyatomic molecules in strong laser fields. According to a qualitative model the dissociation of the molecules occurs through a three-step mechanism:
MPI of electrons from the inner orbitals of the molecule which results in a molecular ion in ro-vibrational levels of an excited electronic state;
Rapid radiationless transition to the high-lying ro-vibrational levels of a lower electronic state; and
Subsequent dissociation of the ion to different fragments through various fragmentation channels.
The short pulse induced molecular fragmentation may be used as an ion source for high performance mass spectrometry. The selectivity provided by a short pulse based source is superior to that expected when using the conventional electron ionization based sources, in particular when the identification of optical isomers is required.
Kramers–Henneberger frame
The Kramers–Henneberger frame is the non-inertial frame moving with the free electron under the influence of the harmonic laser pulse, obtained by applying a translation to the laboratory frame equal to the quiver motion of a classical electron in the laboratory frame. In other words, in the Kramers–Henneberger frame the classical electron is at rest. Starting in the lab frame (velocity gauge), we may describe the electron with the Hamiltonian (in atomic units):

$H(t) = \tfrac{1}{2}\left(\hat{p} + A(t)\right)^2 + V(\hat{r}).$

In the dipole approximation, the quiver motion of a classical electron in the laboratory frame for an arbitrary field can be obtained from the vector potential of the electromagnetic field:

$\alpha(t) = \int^{t} A(t')\,dt',$

where $\alpha(t) = \alpha_0 \sin(\omega t)$, with $\alpha_0 = E_0/\omega^2$, for a monochromatic plane wave.

By applying a transformation to the laboratory frame equal to the quiver motion $\alpha(t)$ one moves to the ‘oscillating’ or ‘Kramers–Henneberger’ frame, in which the classical electron is at rest. By a phase factor transformation for convenience one obtains the ‘space-translated’ Hamiltonian, which is unitarily equivalent to the lab-frame Hamiltonian and which contains the original potential centered on the oscillating point $\alpha(t)$:

$H_{\mathrm{KH}}(t) = \tfrac{1}{2}\hat{p}^{2} + V\!\left(\hat{r} + \alpha(t)\right).$

The utility of the KH frame lies in the fact that in this frame the laser-atom interaction can be reduced to the form of an oscillating potential energy, where the natural parameters describing the electron dynamics are $\omega$ and $\alpha_0$ (sometimes called the “excursion amplitude”, obtained from $\alpha_0 = E_0/\omega^2$).

From here one can apply Floquet theory to calculate quasi-stationary solutions of the TDSE. In high frequency Floquet theory, to lowest order in $\omega^{-1}$ the system reduces to the so-called ‘structure equation’, which has the form of a typical energy-eigenvalue Schrödinger equation containing the ‘dressed potential’ $V_0(\alpha_0, r)$ (the cycle-average of the oscillating potential). The interpretation of the presence of $V_0$ is as follows: in the oscillating frame, the nucleus has an oscillatory motion of trajectory $-\alpha(t)$ and $V_0$ can be seen as the potential of the smeared out nuclear charge along its trajectory.
The KH frame is thus employed in theoretical studies of strong-field ionization and atomic stabilization (a predicted phenomenon in which the ionization probability of an atom in a high-intensity, high-frequency field actually decreases for intensities above a certain threshold) in conjunction with high-frequency Floquet theory.
Dissociation – distinction
A substance may dissociate without necessarily producing ions. As an example, the molecules of table sugar dissociate in water (sugar is dissolved) but exist as intact neutral entities. Another subtle event is the dissociation of sodium chloride (table salt) into sodium and chlorine ions. Although it may seem to be a case of ionization, in reality the ions already exist within the crystal lattice. When salt is dissociated, its constituent ions are simply surrounded by water molecules and their effects are visible (e.g. the solution becomes electrolytic). However, no transfer or displacement of electrons occurs.
See also
Above threshold ionization
Chemical ionization
Electron ionization
Ionization chamber – Instrument for detecting gaseous ionization, used in ionizing radiation measurements
Ion source
Photoionization
Thermal ionization
Townsend avalanche – The chain reaction of ionization occurring in a gas with an applied electric field
References
External links
Ions
Molecular physics
Atomic physics
Physical chemistry
Quantum chemistry
Mass spectrometry | 0.780493 | 0.99742 | 0.778479 |
Idiosyncrasy | An idiosyncrasy is a unique feature of something. The term is often used to express peculiarity.
Etymology
The term "idiosyncrasy" originates from Greek , "a peculiar temperament, habit of body" (from , "one's own", , "with" and , "blend of the four humors" (temperament)) or literally "particular mingling".
Idiosyncrasy is sometimes used as a synonym for eccentricity, as these terms "are not always clearly distinguished when they denote an act, a practice, or a characteristic that impresses the observer as strange or singular." Eccentricity, however, "emphasizes the idea of divergence from the usual or customary; idiosyncrasy implies a following of one's particular temperament or bent especially in trait, trick, or habit; the former often suggests mental aberration, the latter, strong individuality and independence of action".
Linguistics
The term can also be applied to symbols or words. Idiosyncratic symbols mean one thing for a particular person; a blade could mean war to one person, but to someone else it could symbolize surgery.
Idiosyncratic property
In phonology, an idiosyncratic property contrasts with a systematic regularity. While systematic regularities in the sound system of a language are useful for identifying phonological rules during analysis of the forms morphemes can take, idiosyncratic properties are those whose occurrence is not determined by those rules. For example, the fact that the English word cab starts with the sound /k/ is an idiosyncratic property; on the other hand, the fact that its vowel is longer than in the English word cap is a systematic regularity, as it arises from the fact that the final consonant is voiced rather than voiceless.
Medicine
Disease
Idiosyncrasy defined the way physicians conceived diseases in the 19th century. They considered each disease as a unique condition, related to each patient. This understanding began to change in the 1870s, when discoveries made by researchers in Europe permitted the advent of a "scientific medicine", a precursor to the evidence-based medicine that is the standard of practice today.
Pharmacology
The term idiosyncratic drug reaction denotes an aberrant or bizarre reaction or hypersensitivity to a substance, without connection to the pharmacology of the drug. It is what is known as a Type B reaction. Type B reactions have the following characteristics: they are usually unpredictable, might not be picked up by toxicological screening, and are not necessarily dose-related; their incidence and morbidity are low but their mortality is high. Type B reactions are most commonly immunological (e.g. penicillin allergy).
Psychiatry and psychology
The word is used for the personal way a given individual reacts, perceives and experiences: a certain dish made of meat may cause nostalgic memories in one person and disgust in another. These reactions are called idiosyncratic.
Economics
In portfolio theory, risks of price changes due to the unique circumstances of a specific security, as opposed to the overall market, are called "idiosyncratic risks". This specific risk, also called unsystematic, can be nulled out of a portfolio through diversification. Pooling multiple securities means the specific risks cancel out. In complete markets, there is no compensation for idiosyncratic risk—that is, a security's idiosyncratic risk does not matter for its price. For instance, in a complete market in which the capital asset pricing model holds, the price of a security is determined by the amount of systematic risk in its returns. Net income received, or losses suffered, by a landlord from renting of one or two properties is subject to idiosyncratic risk due to the numerous things that can happen to real property and variable behavior of tenants.
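Diversification's effect on idiosyncratic risk can be illustrated with a small simulation; the return volatilities and portfolio sizes in the Python sketch below are hypothetical choices, not calibrated to any market.

```python
# Illustrative Monte Carlo sketch (hypothetical parameters): equally weighted
# portfolios of N assets whose returns contain a common systematic shock plus
# an independent idiosyncratic shock. The idiosyncratic part averages away as
# N grows, while the systematic part does not.
import numpy as np

rng = np.random.default_rng(0)
n_periods = 100_000
systematic = rng.normal(0.0, 0.02, n_periods)           # shared market shock

for n_assets in (1, 10, 100):
    idiosyncratic = rng.normal(0.0, 0.05, (n_periods, n_assets))
    returns = systematic[:, None] + idiosyncratic        # each asset's return
    portfolio = returns.mean(axis=1)                     # equal weights
    print(f"N={n_assets:3d}  portfolio std = {portfolio.std():.4f}")

# The printed standard deviation approaches the systematic value (0.02) as N
# increases; the leftover idiosyncratic part shrinks roughly as 0.05 / sqrt(N).
```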
According to one macroeconomic model including a financial sector, hedging idiosyncratic risk can be self-defeating, because amid the apparent "risk reduction" experts are encouraged to increase their leverage. This works for small shocks but leads to higher vulnerability for larger shocks and makes the system less stable. Thus, while securitisation in principle reduces the costs of idiosyncratic shocks, it ends up amplifying systemic risks in equilibrium.
In econometrics, "idiosyncratic error" is used to describe error—that is, unobserved factors that impact the dependent variable—from panel data that both changes over time and across units (individuals, firms, cities, towns, etc.).
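In a standard textbook panel-data formulation (generic notation, not tied to a particular source), the idiosyncratic error appears as the component that varies over both units and time:

$y_{it} = x_{it}\beta + a_i + u_{it},$

where $a_i$ is an unobserved unit-specific effect that is constant over time and $u_{it}$ is the idiosyncratic error.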
See also
Humorism
Portfolio theory
References
External links
Allergology
Deviance (sociology)
Inborn errors of metabolism
Medical terminology
Effects of external causes | 0.780981 | 0.996792 | 0.778476 |
Adenylylation
Adenylylation, more commonly known as AMPylation, is a process in which an adenosine monophosphate (AMP) molecule is covalently attached to the amino acid side chain of a protein. This covalent addition of AMP to a hydroxyl side chain of the protein is a post-translational modification. Adenylylation involves a phosphodiester bond between a hydroxyl group of the molecule undergoing adenylylation, and the phosphate group of the adenosine monophosphate nucleotide (i.e. adenylic acid). Enzymes that are capable of catalyzing this process are called AMPylators.
The known amino acids to be targeted in the protein are tyrosine and threonine, and sometimes serine. When charges on a protein undergo a change, it affects the characteristics of the protein, normally by altering its shape via interactions of the amino acids which make up the protein. AMPylation can have various effects on the protein, such as its stability, enzymatic activity, co-factor binding, and many other functional capabilities. Another function of adenylylation is amino acid activation, which is catalyzed by aminoacyl-tRNA synthetases. The most commonly identified proteins to receive AMPylation are GTPases and glutamine synthetase.
Adenylylators
Enzymes responsible for AMPylation, called AMPylators or adenylyltransferases, fall into two different families, depending on their structural properties and the mechanism used. One such AMPylator is built from two catalytic homologous halves: one half catalyzes the adenylylation reaction, while the other half catalyzes the phosphorolytic deadenylylation reaction. These two families are the DNA-β-polymerase-like family and the Fic family.
The DNA-β-polymerase-like family is a family of nucleotidyltransferases. More specifically, it is known as the GlnE family. A specific motif is used to identify this particular family: it consists of a three-stranded β-sheet which takes part in magnesium ion coordination and phosphate binding. Aspartate is essential for the activity to occur in this family.
The Fic family, which belongs to the Fido (Fic/Doc) superfamily and is named for the filamentation-induced-by-cyclic-AMP (Fic) domain, is known to perform AMPylation. This term was coined when VopS from Vibrio parahaemolyticus was discovered to modify Rho GTPases with AMP on a threonine.
This family of proteins is found in all domains of life on earth. AMPylation is mediated via a mechanism involving an ATP-binding-site alpha helix motif. Infectious bacteria use this domain to interrupt phagocytosis and cause cell death. Fic domains are evolutionarily conserved domains in prokaryotes and eukaryotes that belong to the Fido domain superfamily.
AMPylators have been shown to be comparable to kinases due to their ATP hydrolysis activity and reversible transfer of the metabolite to a hydroxyl side chain of the protein substrate. However, AMPylators catalyze a nucleophilic attack on the α-phosphate group, while kinases in the phosphorylation reaction target the γ-phosphate. The nucleophilic attack in AMPylation leads to the release of pyrophosphate, and pyrophosphate and the AMP-modified protein are the products of the AMPylation reaction.
De-adenylylators
De-AMPylation is the reverse reaction, in which the AMP molecule is detached from the amino acid side chain of a protein.
There are three known mechanisms for this reaction.
The bacterial GS-ATase (GlnE) is a bipartite protein with separate N-terminal AMPylation and C-terminal de-AMPylation domains, whose activity is regulated by PII and associated posttranslational modifications. De-AMPylation of its substrate, AMPylated glutamine synthetase, proceeds by a phosphorolytic reaction between the adenyl-tyrosine of GS and orthophosphate, leading to the formation of ADP and unmodified glutamine synthetase.
SidD, a protein introduced into the host cell by the pathogenic bacterium Legionella pneumophila, de-AMPylates Rab1, a host protein AMPylated by a different Legionella pneumophila enzyme, the AMPylase SidM. Whilst the benefit to the pathogen of introducing these two antagonistic effectors into the host remains unclear, the biochemical reaction carried out by SidD involves the use of a phosphatase-like domain to catalyse the hydrolytic removal of the AMP from tyrosine 77 of the host's Rab1.
In animal cells the removal of AMP from threonine 518 of BiP/Grp78 is catalysed by the same enzyme, FICD, that AMPylates BiP. Unlike the bacterial GS-ATase, FICD carries out both reactions with the same catalytic domain.
Adenylylation in Prokaryotes
Bacterial homeostasis
AMPylation is involved in bacterial homeostasis. The most famous example is the AMPylator GS-ATase (GlnE), which contributes to the complex regulation of nitrogen metabolism through AMPylation of glutamine synthetase, as introduced in the sections on AMPylation and de-AMPylation above.
Another example of AMPylators that play a role in bacterial homeostasis is the class I Fic AMPylator FicT, which modifies the GyrB subunit of DNA gyrase and the ParE subunit of topoisomerase IV at the conserved tyrosine residue for ATP binding. This inactivation of DNA gyrase by AMPylation leads to the activation of the SOS response, which is the cellular response to DNA damage. The activity of FicT AMPylation is reversible and only leads to growth arrest, but not cell death. Therefore, FicT AMPylation plays a role in regulating cell stress, as shown in Wolbachia bacteria, where the level of FicT increases in response to doxycycline.
A class III Fic AMPylator, NmFic of N. meningitidis, is also found to AMPylate GyrB at the conserved tyrosine for ATP binding. This shows that Fic domains are highly conserved, which indicates the important role of AMPylation in regulating cellular stress in bacteria. The regulation of NmFic involves concentration-dependent monomerization and autoAMPylation for activation of NmFic activity.
Bacterial pathogenicity
Bacterial proteins, also known as effectors, have been shown to use AMPylation. Effectors such as VopS, IbpA, and DrrA have been shown to AMPylate host GTPases and cause actin cytoskeleton changes. GTPases are common targets of AMPylators. The Rho, Rab, and Arf GTPase families are involved in actin cytoskeleton dynamics and vesicular trafficking. They also play roles in cellular control mechanisms such as phagocytosis in the host cell.
The pathogen enhances or prevents its internalization by either inducing or inhibiting host cell phagocytosis. Vibrio parahaemolyticus is a Gram-negative bacterium that causes food poisoning as a result of raw or undercooked seafood consumption in humans. VopS, a type III effector found in Vibrio parahaemolyticus, contains a Fic domain that has a conserved HPFx(D/E)GN(G/K)R motif that contains a histidine residue essential for AMPylation. VopS blocks actin assembly by modifying threonine residue in the switch 1 region of Rho GTPases. The transfer of an AMP moiety using ATP to the threonine residue results in steric hindrance, and thus prevents Rho GTPases from interacting with downstream effectors. VopS also adenylates RhoA and cell division cycle 42 (CDC42), leading to a disaggregation of the actin filament network. As a result, the host cell's actin cytoskeleton control is disabled, leading to cell rounding.
IbpA is secreted into eukaryotic cells from H. somni, a Gram-negative bacterium in cattle that causes respiratory epithelium infection. This effector contains two Fic domains at the C-terminal region. AMPylation of Rho family GTPases by the IbpA Fic domains is responsible for its cytotoxicity. Both Fic domains have effects on the host cell's cytoskeleton similar to those of VopS. The AMPylation on a tyrosine residue of the switch 1 region blocks the interaction of the GTPases with downstream substrates such as PAK.
DrrA is a Dot/Icm type IV translocation system substrate from Legionella pneumophila. It is the effector secreted by L. pneumophila to modify GTPases of the host cells. This modification increases the survival of bacteria in host cells. DrrA is composed of a Rab1b-specific guanine nucleotide exchange factor (GEF) domain, a C-terminal lipid-binding domain, and an N-terminal domain with unclear cytotoxic properties. Research shows that N-terminal and full-length DrrA display AMPylator activity toward the host's Rab1b protein (Ras-related protein), which is also the substrate of the Rab1b GEF domain. Rab1b is a Rab GTPase that regulates vesicle transport and membrane fusion. Adenylylation by the bacterial AMPylator prolongs the GTP-bound state of Rab1b. Thus, the role of the effector DrrA is connected to the benefit of the bacterial vacuole for replication during the infection.
Adenylylation in Eukaryotes
Plants and yeasts have no known endogenous AMPylating enzymes, but animal genomes are endowed with a single copy of a gene encoding a Fic-domain AMPylase, which was likely acquired by an early ancestor of animals via horizontal gene transfer from a prokaryote. The human protein, referred to commonly as FICD, had been previously identified as Huntingtin associated protein E (HypE; an assignment arising from a yeast two-hybrid screen, but of questionable relevance, as Huntingtin and HypE/FICD are localised to different cellular compartments). Homologues in Drosophila melanogaster (CG9523) and C. elegans (Fic-1) have also received attention. In all animals FICD has a similar structure. It is a type II transmembrane domain protein, with a short cytoplasmic domain followed by a membrane anchor that holds the protein in the endoplasmic reticulum (ER) and a long C-terminal portion that resides in the ER and encompasses tetratricopeptide repeats (TPRs) followed by a catalytic Fic domain.
Endoplasmic reticulum
The discovery of an animal cell AMPylase, followed by the discovery of its ER localisation and that BiP is a prominent substrate for its activity, were important breakthroughs. BiP (also known as Grp78) had long been known to undergo an inactivating post-translational modification, but its nature remained elusive. Widely assumed to be ADP-ribosylation, it turned out to be FICD-mediated AMPylation, as inactivating the FICD gene in cells abolished all measurable post-translational modification of BiP.
BiP is an ER-localised protein chaperone whose activity is tightly regulated at the transcriptional level via a gene-expression program known as the Unfolded Protein Response (UPR). The UPR is a homeostatic process that couples the transcription rate of BiP (and many other proteins) to the burden of unfolded proteins in the ER (so-called ER stress) to help maintain ER proteostasis. AMPylation adds another rapid post-translational layer of control of BiP's activity, as modification of Thr518 of BiP's substrate-binding domain with an AMP locks the chaperone into an inactive conformation. This modification is selectively deployed as ER stress wanes, to inactivate surplus BiP. However, as ER stress rises again, the same enzyme, FICD, catalyses the opposite reaction, BiP de-AMPylation.
An understanding of the structural basis of BiP AMPylation and de-AMPylation is gradually emerging, as are clues to the allostery that might regulate the switch in FICD's activity but important details of this process as it occurs in cells remain to be discovered.
The role of FICD in BiP AMPylation (and de-AMPylation) on Thr518 is well supported by biochemical and structural studies. Evidence has also been presented that in some circumstances FICD may AMPylate a different residue, Thr366 in BiP's nucleotide binding domain.
Caenorhabditis elegans
Fic-1 is the only Fic protein encoded in the genome of C. elegans. It is primarily found in the ER and nuclear envelope of adult germline cells and embryonic cells, but small amounts may be found within the cytoplasm. This extra-ER pool of Fic-1 is credited with AMPylation of core histones and eEF1-A type translation factors within the nematode.
Though varying AMPylation levels did not create any noticeable effects on the nematode's behaviour or physiology, Fic-1 knockout worms were more susceptible to infection by Pseudomonas aeruginosa than counterparts with active Fic-1 domains, implying a link between AMPylation of cellular targets and immune responses within nematodes.
Drosophila melanogaster
Flies lacking FICD (CG9523) have been described as blind. Initially, this defect was attributed to a role for FICD on the cell surface of capitate projections, a putative site of neurotransmitter recycling; however, a later study implicated FICD-mediated AMPylation of BiP Thr366 in the visual problem.
Clinical significance
The presynaptic protein α-synuclein was found to be a target for FICD AMPylation. During HypE-mediated adenylylation of αSyn, aggregation of αSyn decreases, and both neurotoxicity and ER stress were found to decrease in vitro. Thus, adenylylation of αSyn is possibly a protective response to ER stress and αSyn aggregation. However, as αSyn and FICD reside in different compartments, further research needs to be done to confirm the significance of these claims.
Detection
Chemical handles
Chemical handles are used to detect post-translationally modified proteins. A recently developed probe, N6pATP, contains an alkynyl (propargyl) tag at the N6 position of the adenine of ATP; combined with the click reaction, it is used to detect AMPylated proteins. To detect unrecognized modified proteins and label VopS substrates, ATP derivatives with a fluorophore at the adenine N6 amine are utilized.
Antibody-based method
Antibodies are known for their high affinity and selectivity, so they are a good way to detect AMPylated proteins. Recently, α-AMP antibodies have been used to directly detect and isolate AMPylated proteins (especially AMPylated tyrosine and AMPylated threonine) from cells and cell lysates. AMPylation is a post-translational modification, so it modifies protein properties by contributing the polar character of AMP as well as hydrophobicity. Thus, instead of using antibodies that detect a whole peptide sequence, raising AMP antibodies directly targeted to specific amino acids is preferred.
Mass spectrometry
Previously, many studies used mass spectrometry (MS) in different fragmentation modes to detect AMPylated peptides. Depending on the fragmentation technique, AMPylated protein sequences disintegrate at different parts of the AMP moiety. While electron transfer dissociation (ETD) creates a minimum of fragments and less complicated spectra, collision-induced dissociation (CID) and high-energy collision (HCD) fragmentation generate characteristic ions suitable for the identification of AMPylated proteins by producing multiple AMP fragments. Due to AMP's stability, peptide fragmentation spectra are easy to read manually or with search engines.
Inhibitors
Inhibitors of protein AMPylation with inhibitory constants (Ki) ranging from 6 to 50 μM and at least 30-fold selectivity versus HypE have been discovered.
References
Biochemistry
Chemiosmosis
Chemiosmosis is the movement of ions across a semipermeable membrane bound structure, down their electrochemical gradient. An important example is the formation of adenosine triphosphate (ATP) by the movement of hydrogen ions (H+) across a membrane during cellular respiration or photosynthesis.
Hydrogen ions, or protons, will diffuse from a region of high proton concentration to a region of lower proton concentration, and an electrochemical concentration gradient of protons across a membrane can be harnessed to make ATP. This process is related to osmosis, the movement of water across a selective membrane, which is why it is called "chemiosmosis".
ATP synthase is the enzyme that makes ATP by chemiosmosis. It allows protons to pass through the membrane and uses the free energy difference to phosphorylate adenosine diphosphate (ADP), making ATP. The ATP synthase contains two parts: CF0 (present in the thylakoid membrane) and CF1 (protruding on the outer surface of the thylakoid membrane). The breakdown of the proton gradient leads to a conformational change in CF1, providing enough energy in the process to convert ADP to ATP. The generation of ATP by chemiosmosis occurs in mitochondria and chloroplasts, as well as in most bacteria and archaea. For instance, in chloroplasts during photosynthesis, an electron transport chain pumps H+ ions (protons) in the stroma (fluid) through the thylakoid membrane to the thylakoid spaces. The stored energy is used to photophosphorylate ADP, making ATP, as protons move through ATP synthase.
The chemiosmotic hypothesis
Peter D. Mitchell proposed the chemiosmotic hypothesis in 1961. In brief, the hypothesis was that most adenosine triphosphate (ATP) synthesis in respiring cells comes from the electrochemical gradient across the inner membranes of mitochondria by using the energy of NADH and FADH2 formed during the oxidative breakdown of energy-rich molecules such as glucose.
Molecules such as glucose are metabolized to produce acetyl CoA as a fairly energy-rich intermediate. The oxidation of acetyl coenzyme A (acetyl-CoA) in the mitochondrial matrix is coupled to the reduction of a carrier molecule such as nicotinamide adenine dinucleotide (NAD) and flavin adenine dinucleotide (FAD).
The carriers pass electrons to the electron transport chain (ETC) in the inner mitochondrial membrane, which in turn pass them to other proteins in the ETC. The energy at every redox transfer step is used to pump protons from the matrix into the intermembrane space, storing energy in the form of a transmembrane electrochemical gradient. The protons move back across the inner membrane through the enzyme ATP synthase. The flow of protons back into the matrix of the mitochondrion via ATP synthase provides enough energy for ADP to combine with inorganic phosphate to form ATP.
This was a radical proposal at the time, and was not well accepted. The prevailing view was that the energy of electron transfer was stored as a stable high potential intermediate, a chemically more conservative concept. The problem with the older paradigm is that no high energy intermediate was ever found, and the evidence for proton pumping by the complexes of the electron transfer chain grew too great to be ignored. Eventually the weight of evidence began to favor the chemiosmotic hypothesis, and in 1978 Peter D. Mitchell was awarded the Nobel Prize in Chemistry.
Chemiosmotic coupling is important for ATP production in mitochondria, chloroplasts and many bacteria and archaea.
Proton-motive force
The movement of ions across the membrane depends on a combination of two factors:
Diffusion force caused by a concentration gradient - all particles tend to diffuse from higher concentration to lower.
Electrostatic force caused by electrical potential gradient - cations like protons H+ tend to diffuse down the electrical potential, from the positive (P) side of the membrane to the negative (N) side. Anions diffuse spontaneously in the opposite direction.
These two gradients taken together can be expressed as an electrochemical gradient.
Lipid bilayers of biological membranes, however, are barriers for ions. This is why energy can be stored as a combination of these two gradients across the membrane. Only special membrane proteins like ion channels can sometimes allow ions to move across the membrane (see also: Membrane transport). In the chemiosmotic hypothesis a transmembrane ATP synthase is central to converting the energy of the spontaneous flow of protons through it into the chemical energy of ATP bonds.
Hence researchers created the term proton-motive force (PMF), derived from the electrochemical gradient mentioned earlier. It can be described as the measure of the potential energy stored (chemiosmotic potential) as a combination of proton and voltage (electrical potential) gradients across a membrane. The electrical gradient is a consequence of the charge separation across the membrane (when the protons H+ move without a counterion, such as chloride Cl−).
In most cases the proton-motive force is generated by an electron transport chain which acts as a proton pump, using the Gibbs free energy of redox reactions to pump protons (hydrogen ions) out across the membrane, separating the charge across the membrane. In mitochondria, energy released by the electron transport chain is used to move protons from the mitochondrial matrix (N side) to the intermembrane space (P side). Moving the protons out of the mitochondrion creates a lower concentration of positively charged protons inside it, resulting in excess negative charge on the inside of the membrane. The electrical potential gradient is about -170 mV, negative inside (N). These gradients, the charge difference and the proton concentration difference, together create a combined electrochemical gradient across the membrane, often expressed as the proton-motive force (PMF). In mitochondria, the PMF is almost entirely made up of the electrical component, but in chloroplasts the PMF is made up mostly of the pH gradient because the charge of protons H+ is neutralized by the movement of Cl− and other anions. In either case, the PMF needs to be greater than about 460 mV (45 kJ/mol) for the ATP synthase to be able to make ATP.
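The equivalence of the two figures quoted above follows directly from the Faraday constant:

$\Delta G = F \cdot \Delta p \approx 96.5\ \mathrm{kJ\,mol^{-1}\,V^{-1}} \times 0.460\ \mathrm{V} \approx 44\ \mathrm{kJ\,mol^{-1}},$

i.e. roughly the 45 kJ per mole of protons quoted in parentheses.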
Equations
The proton-motive force is derived from the Gibbs free energy. Let N denote the inside of a cell, and P denote the outside. Then
ΔG = zFΔψ + RT ln([Xz+]N / [Xz+]P)
where
ΔG is the Gibbs free energy change per unit amount of cations Xz+ transferred from P to N;
z is the charge number of the cation Xz+;
Δψ is the electric potential of N relative to P;
[Xz+]P and [Xz+]N are the cation concentrations at P and N, respectively;
F is the Faraday constant;
R is the gas constant; and
T is the temperature.
The molar Gibbs free energy change ΔG is frequently interpreted as a molar electrochemical ion potential Δμ.
For an electrochemical proton gradient z = 1, and as a consequence:
ΔμH+ = FΔψ + RT ln([H+]N / [H+]P) = FΔψ − 2.303 RT ΔpH
where
ΔpH = pHN − pHP.
Mitchell defined the proton-motive force (PMF) as
Δp = −ΔμH+ / F.
For example, a ΔμH+ of 1 kJ·mol−1 corresponds to a Δp of 10.4 mV. At 298 K (25 °C) this equation takes the form:
Δp = −Δψ + (59.1 mV) ΔpH.
Note that for spontaneous proton import from the P side (relatively more positive and acidic) to the N side (relatively more negative and alkaline), ΔμH+ is negative (similar to ΔG) whereas the PMF is positive (similar to the redox cell potential E).
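As a rough numerical illustration of the relation above, the following minimal sketch (in Python) evaluates Δp from an assumed Δψ and ΔpH. The input values are illustrative only and do not come from any particular measurement.

```python
R = 8.314     # gas constant, J mol^-1 K^-1
F = 96485.0   # Faraday constant, C mol^-1

def proton_motive_force_mV(delta_psi_mV, delta_pH, T=298.15):
    """Delta p = -Delta psi + (2.303*R*T/F) * Delta pH, returned in mV.
    delta_psi_mV is psi_N - psi_P in mV; delta_pH is pH_N - pH_P."""
    mv_per_pH_unit = 2.303 * R * T / F * 1000.0   # about 59.1 mV per pH unit at 25 degC
    return -delta_psi_mV + mv_per_pH_unit * delta_pH

# Illustrative inputs: an electrically dominated (mitochondrion-like) case and
# a pH-dominated (chloroplast-like) case.
print(round(proton_motive_force_mV(-170.0, 0.05), 1))  # ~173 mV, mostly from -Delta psi
print(round(proton_motive_force_mV(-10.0, 3.0), 1))    # ~187 mV, mostly from Delta pH
```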
It is worth noting that, as with any transmembrane transport process, the PMF is directional. The sign of the transmembrane electric potential difference is chosen to represent the change in potential energy per unit charge flowing into the cell as above. Furthermore, due to redox-driven proton pumping by coupling sites, the proton gradient is always inside-alkaline. For both of these reasons, protons flow in spontaneously, from the P side to the N side; the available free energy is used to synthesize ATP (see below). For this reason, PMF is defined for proton import, which is spontaneous. PMF for proton export, i.e., proton pumping as catalyzed by the coupling sites, is simply the negative of PMF(import).
The spontaneity of proton import (from the P to the N side) is universal in all bioenergetic membranes. This fact was not recognized before the 1990s, because the chloroplast thylakoid lumen was interpreted as an interior phase, but in fact it is topologically equivalent to the exterior of the chloroplast. Azzone et al. stressed that the inside phase (N side of the membrane) is the bacterial cytoplasm, mitochondrial matrix, or chloroplast stroma; the outside (P) side is the bacterial periplasmic space, mitochondrial intermembrane space, or chloroplast lumen. Furthermore, 3D tomography of the mitochondrial inner membrane shows its extensive invaginations to be stacked, similar to thylakoid disks; hence the mitochondrial intermembrane space is topologically quite similar to the chloroplast lumen.
The energy expressed here as Gibbs free energy, electrochemical proton gradient, or proton-motive force (PMF), is a combination of two gradients across the membrane:
the proton concentration gradient (via ΔpH) and
the electric potential gradient Δψ.
When a system reaches equilibrium, Δp = 0; nevertheless, the concentrations on either side of the membrane need not be equal. Spontaneous movement across the membrane is determined by both the concentration and electric potential gradients.
The molar Gibbs free energy of ATP synthesis from ADP and inorganic phosphate, ΔGp, is also called the phosphorylation potential. Comparing ΔGp with Δp gives the number of protons that must be imported per ATP synthesized at equilibrium, for example in the case of the mammalian mitochondrion:
H+ / ATP = ΔGp / (Δp converted to kJ·mol−1, using 10.4 mV per kJ·mol−1) = 40.2 kJ·mol−1 / (173.5 mV ÷ 10.4 mV per kJ·mol−1) = 40.2 / 16.7 = 2.4. The actual ratio of the proton-binding c-subunit to the ATP-synthesizing beta-subunit copy numbers is 8/3 = 2.67, showing that under these conditions, the mitochondrion functions at 90% (2.4/2.67) efficiency.
In fact, the thermodynamic efficiency is mostly lower in eukaryotic cells because ATP must be exported from the matrix to the cytoplasm, and ADP and phosphate must be imported from the cytoplasm. This "costs" one "extra" proton import per ATP, hence the actual efficiency is only 65% (= 2.4/3.67).
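The H+/ATP and efficiency arithmetic above can be reproduced directly. The short sketch below simply re-runs the quoted figures (40.2 kJ·mol−1, 173.5 mV, 10.4 mV per kJ·mol−1, an 8/3 c-to-beta subunit ratio, and one extra proton per ATP for transport); it is a minimal illustration, not a general model.

```python
dG_p_kJ = 40.2          # phosphorylation potential, kJ/mol (value quoted above)
delta_p_mV = 173.5      # proton-motive force, mV (value quoted above)
mV_per_kJ = 10.4        # 1 kJ/mol of proton electrochemical potential ~ 10.4 mV

energy_per_proton_kJ = delta_p_mV / mV_per_kJ       # ~16.7 kJ/mol per proton imported
protons_required = dG_p_kJ / energy_per_proton_kJ   # ~2.4 H+ per ATP (thermodynamic minimum)

protons_at_synthase = 8 / 3                          # ~2.67 H+ per ATP from the c/beta ratio
protons_with_transport = protons_at_synthase + 1     # ~3.67 including ATP/ADP/Pi transport

print(round(protons_required, 1))                            # 2.4
print(round(protons_required / protons_at_synthase, 2))      # ~0.90 (90% efficiency)
print(round(protons_required / protons_with_transport, 2))   # ~0.66 (the ~65% quoted above)
```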
In mitochondria
The complete breakdown of glucose to release its energy is called cellular respiration. The last steps of this process occur in mitochondria. The reduced molecules NADH and FADH2 are generated by the Krebs cycle, glycolysis, and pyruvate processing. These molecules pass electrons to an electron transport chain, which uses the energy released as the electrons are ultimately transferred to oxygen to create a proton gradient across the inner mitochondrial membrane. ATP synthase then uses the energy stored in this gradient to make ATP. This process is called oxidative phosphorylation because it uses energy released by the oxidation of NADH and FADH2 to phosphorylate ADP into ATP.
In plants
The light reactions of photosynthesis generate ATP by the action of chemiosmosis. The photons in sunlight are received by the antenna complex of Photosystem II, which excites electrons to a higher energy level. These electrons travel down an electron transport chain, causing protons to be actively pumped across the thylakoid membrane into the thylakoid lumen. These protons then flow down their electrochemical potential gradient through an enzyme called ATP synthase, creating ATP by the phosphorylation of ADP. The electrons from the initial light reaction reach Photosystem I, are raised to a higher energy level by light energy, and are then received by an electron acceptor, reducing NADP+ to NADPH. The electrons lost from Photosystem II get replaced by the oxidation of water, which is "split" into protons and oxygen by the oxygen-evolving complex (OEC, also known as WOC, or the water-oxidizing complex). To generate one molecule of diatomic oxygen, 10 photons must be absorbed by Photosystems I and II, four electrons must move through the two photosystems, and 2 NADPH are generated (later used for carbon dioxide fixation in the Calvin Cycle).
In prokaryotes
Bacteria and archaea also can use chemiosmosis to generate ATP. Cyanobacteria, green sulfur bacteria, and purple bacteria synthesize ATP by a process called photophosphorylation. These bacteria use the energy of light to create a proton gradient using a photosynthetic electron transport chain. Non-photosynthetic bacteria such as E. coli also contain ATP synthase. In fact, mitochondria and chloroplasts are the product of endosymbiosis and trace back to incorporated prokaryotes. This process is described in the endosymbiotic theory. The origin of the mitochondrion triggered the origin of eukaryotes, and the origin of the plastid the origin of the Archaeplastida, one of the major eukaryotic supergroups.
Chemiosmotic phosphorylation is the third pathway that produces ATP from inorganic phosphate and an ADP molecule. This process is part of oxidative phosphorylation.
Emergence of chemiosmosis
Thermal cycling model
A stepwise model for the emergence of chemiosmosis, a key element in the origin of life on earth, proposes that primordial organisms used thermal cycling as an energy source (thermosynthesis), functioning essentially as a heat engine:
self-organized convection in natural waters causing thermal cycling →
added β-subunit of F1 ATP Synthase
(generated ATP by thermal cycling of subunit during suspension in convection cell: thermosynthesis) →
added membrane and Fo ATP Synthase moiety
(generated ATP by change in electrical polarization of membrane during thermal cycling: thermosynthesis) →
added metastable, light-induced electric dipoles in membrane
(primitive photosynthesis) →
added quinones and membrane-spanning light-induced electric dipoles
(today's bacterial photosynthesis, which makes use of chemiosmosis).
External proton gradient model
Deep-sea hydrothermal vents, emitting hot acidic or alkaline water, would have created external proton gradients. These provided energy that primordial organisms could have exploited. To keep the flows separate, such an organism could have wedged itself in the rock of the hydrothermal vent, exposed to the hydrothermal flow on one side and the more alkaline water on the other. As long as the organism's membrane (or passive ion channels within it) is permeable to protons, the mechanism can function without ion pumps. Such a proto-organism could then have evolved further mechanisms such as ion pumps and ATP synthase.
Meteoritic quinones
A proposed alternative source of chemiosmotic energy developing across membranous structures involves quinones transported by carbonaceous meteorites. If an electron acceptor such as ferricyanide is within a vesicle and the electron donor is outside, these quinones pick up electrons and protons from the donor, diffuse across the lipid membrane, and release the electrons to the ferricyanide within the vesicle while releasing protons, which produces gradients above pH 2; the process is therefore conducive to the development of proton gradients.
See also
Cellular respiration
Citric acid cycle
Electrochemical gradient
Glycolysis
Oxidative phosphorylation
References
Further reading
Biochemistry textbook reference, from the NCBI bookshelf
A set of experiments aiming to test some tenets of the chemiosmotic theory
External links
Chemiosmosis (University of Wisconsin)
Biochemical reactions
Cell biology
Cellular respiration
Conservative replacement
A conservative replacement (also called a conservative mutation or a conservative substitution or a homologous replacement) is an amino acid replacement in a protein that changes a given amino acid to a different amino acid with similar biochemical properties (e.g. charge, hydrophobicity and size).
Conversely, a radical replacement, or radical substitution, is an amino acid replacement that exchanges an initial amino acid for a final amino acid with different physicochemical properties.
Description
There are 20 standard, naturally occurring amino acids; however, some of these share similar characteristics. For example, leucine and isoleucine are both aliphatic, branched hydrophobes. Similarly, aspartic acid and glutamic acid are both small, negatively charged residues.
Although there are many ways to classify amino acids, they are often sorted into six main classes on the basis of their structure and the general chemical characteristics of their side chains (R groups).
Physicochemical distances aim at quantifying the intra-class and inter-class dissimilarity between amino acids based on their measurable properties, and many such measures have been proposed in the literature. Owing to their simplicity, two of the most commonly used measures are those of Grantham (1974) and Miyata et al. (1979). A conservative replacement is therefore an exchange between two amino acids separated by a small physicochemical distance. Conversely, a radical replacement is an exchange between two amino acids separated by a large physicochemical distance.
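As a minimal sketch of how such a distance can be used in practice, the snippet below classifies a replacement as conservative or radical against a cutoff. The three Grantham (1974) distances shown are a small illustrative subset, and the 100-unit cutoff is only one of several conventions used in the literature.

```python
# Illustrative subset of Grantham (1974) physicochemical distances between
# amino acids (one-letter codes); real analyses use the full pairwise table.
GRANTHAM_SUBSET = {
    frozenset(("L", "I")): 5,    # leucine <-> isoleucine: very similar
    frozenset(("D", "E")): 45,   # aspartate <-> glutamate: similar
    frozenset(("C", "W")): 215,  # cysteine <-> tryptophan: highly dissimilar
}

def classify_replacement(aa1, aa2, cutoff=100):
    """Label an amino acid replacement 'conservative' or 'radical' by comparing
    its physicochemical distance to an (assumed) cutoff."""
    distance = GRANTHAM_SUBSET[frozenset((aa1, aa2))]
    return "conservative" if distance <= cutoff else "radical"

print(classify_replacement("L", "I"))  # conservative
print(classify_replacement("C", "W"))  # radical
```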
Impact on function
Conservative replacements in proteins often have a smaller effect on function than non-conservative replacements. The reduced effect of conservative replacements on function can also be seen in the occurrence of different replacements in nature. Non-conservative replacements between proteins are far more likely to be removed by natural selection due to their deleterious effects.
See also
Segregating site
Ultra-conserved element
Sequence alignment
Sequence alignment software
References
Biochemistry
Amino acids
Cell signaling
In biology, cell signaling (cell signalling in British English) is the process by which a cell interacts with itself, other cells, and the environment. Cell signaling is a fundamental property of all cellular life in prokaryotes and eukaryotes.
Typically, the signaling process involves three components: the signal, the receptor, and the effector.
In biology, signals are mostly chemical in nature, but can also be physical cues such as pressure, voltage, temperature, or light. Chemical signals are molecules with the ability to bind and activate a specific receptor. These molecules, also referred to as ligands, are chemically diverse, including ions (e.g. Na+, K+, Ca2+, etc.), lipids (e.g. steroid, prostaglandin), peptides (e.g. insulin, ACTH), carbohydrates, glycosylated proteins (proteoglycans), nucleic acids, etc. Peptide and lipid ligands are particularly important, as most hormones belong to these classes of chemicals. Peptides are usually polar, hydrophilic molecules. As such they are unable to diffuse freely across the lipid bilayer of the plasma membrane, so their action is mediated by a cell membrane bound receptor. On the other hand, liposoluble chemicals such as steroid hormones can diffuse passively across the plasma membrane and interact with intracellular receptors.
Cell signaling can occur over short or long distances, and can be further classified as autocrine, intracrine, juxtacrine, paracrine, or endocrine. Autocrine signaling occurs when the chemical signal acts on the same cell that produced the signaling chemical. Intracrine signaling occurs when the chemical signal produced by a cell acts on receptors located in the cytoplasm or nucleus of the same cell. Juxtacrine signaling occurs between physically adjacent cells. Paracrine signaling occurs between nearby cells. Endocrine interaction occurs between distant cells, with the chemical signal usually carried by the blood.
Receptors are complex proteins or tightly bound multimers of proteins, located in the plasma membrane or within the interior of the cell such as in the cytoplasm, organelles, and nucleus. Receptors have the ability to detect a signal either by binding to a specific chemical or by undergoing a conformational change when interacting with physical agents. It is the specificity of the chemical interaction between a given ligand and its receptor that confers the ability to trigger a specific cellular response. Receptors can be broadly classified into cell membrane receptors and intracellular receptors.
Cell membrane receptors can be further classified into ion channel linked receptors, G-Protein coupled receptors and enzyme linked receptors.
Ion channel receptors are large transmembrane proteins with a ligand-activated gate function. When these receptors are activated, they may allow or block passage of specific ions across the cell membrane. Most receptors activated by physical stimuli such as pressure or temperature belong to this category.
G-protein receptors are multimeric proteins embedded within the plasma membrane. These receptors have extracellular, trans-membrane and intracellular domains. The extracellular domain is responsible for the interaction with a specific ligand. The intracellular domain is responsible for the initiation of a cascade of chemical reactions which ultimately triggers the specific cellular function controlled by the receptor.
Enzyme-linked receptors are transmembrane proteins with an extracellular domain responsible for binding a specific ligand and an intracellular domain with enzymatic or catalytic activity. Upon activation the enzymatic portion is responsible for promoting specific intracellular chemical reactions.
Intracellular receptors have a different mechanism of action. They usually bind to lipid soluble ligands that diffuse passively through the plasma membrane such as steroid hormones. These ligands bind to specific cytoplasmic transporters that shuttle the hormone-transporter complex inside the nucleus where specific genes are activated and the synthesis of specific proteins is promoted.
The effector component of the signaling pathway begins with signal transduction. In this process, the signal, by interacting with the receptor, starts a series of molecular events within the cell leading to the final effect of the signaling process. Typically the final effect consists in the activation of an ion channel (ligand-gated ion channel) or the initiation of a second messenger system cascade that propagates the signal through the cell. Second messenger systems can amplify or modulate a signal, in which activation of a few receptors results in multiple secondary messengers being activated, thereby amplifying the initial signal (the first messenger). The downstream effects of these signaling pathways may include additional enzymatic activities such as proteolytic cleavage, phosphorylation, methylation, and ubiquitinylation.
Signaling molecules can be synthesized from various biosynthetic pathways and released through passive or active transports, or even from cell damage.
Each cell is programmed to respond to specific extracellular signal molecules, and this responsiveness is the basis of development, tissue repair, immunity, and homeostasis. Errors in signaling interactions may cause diseases such as cancer, autoimmunity, and diabetes.
Taxonomic range
In many small organisms such as bacteria, quorum sensing enables individuals to begin an activity only when the population is sufficiently large. This signaling between cells was first observed in the marine bacterium Aliivibrio fischeri, which produces light when the population is dense enough. The mechanism involves the production and detection of a signaling molecule, and the regulation of gene transcription in response. Quorum sensing operates in both gram-positive and gram-negative bacteria, and both within and between species.
In slime molds, individual cells aggregate together to form fruiting bodies and eventually spores, under the influence of a chemical signal, known as an acrasin. The individuals move by chemotaxis, i.e. they are attracted by the chemical gradient. Some species use cyclic AMP as the signal; others such as Polysphondylium violaceum use a dipeptide known as glorin.
In plants and animals, signaling between cells occurs either through release into the extracellular space, divided in paracrine signaling (over short distances) and endocrine signaling (over long distances), or by direct contact, known as juxtacrine signaling such as notch signaling. Autocrine signaling is a special case of paracrine signaling where the secreting cell has the ability to respond to the secreted signaling molecule. Synaptic signaling is a special case of paracrine signaling (for chemical synapses) or juxtacrine signaling (for electrical synapses) between neurons and target cells.
Extracellular signal
Synthesis and release
Many cell signals are carried by molecules that are released by one cell and move to make contact with another cell. Signaling molecules can belong to several chemical classes: lipids, phospholipids, amino acids, monoamines, proteins, glycoproteins, or gases. Signaling molecules binding surface receptors are generally large and hydrophilic (e.g. TRH, Vasopressin, Acetylcholine), while those entering the cell are generally small and hydrophobic (e.g. glucocorticoids, thyroid hormones, cholecalciferol, retinoic acid), but important exceptions to both are numerous, and the same molecule can act both via surface receptors or in an intracrine manner to different effects. In animal cells, specialized cells release these hormones and send them through the circulatory system to other parts of the body. They then reach target cells, which can recognize and respond to the hormones and produce a result. This is also known as endocrine signaling. Plant growth regulators, or plant hormones, move through cells or by diffusing through the air as a gas to reach their targets. Hydrogen sulfide is produced in small amounts by some cells of the human body and has a number of biological signaling functions. Only two other such gases are currently known to act as signaling molecules in the human body: nitric oxide and carbon monoxide.
Exocytosis
Exocytosis is the process by which a cell transports molecules such as neurotransmitters and proteins out of the cell. As an active transport mechanism, exocytosis requires the use of energy to transport material. Exocytosis and its counterpart, endocytosis, the process that brings substances into the cell, are used by all cells because most chemical substances important to them are large polar molecules that cannot pass through the hydrophobic portion of the cell membrane by passive transport. Exocytosis is the process by which a large amount of molecules are released; thus it is a form of bulk transport. Exocytosis occurs via secretory portals at the cell plasma membrane called porosomes. Porosomes are permanent cup-shaped lipoprotein structures at the cell plasma membrane, where secretory vesicles transiently dock and fuse to release intra-vesicular contents from the cell.
In exocytosis, membrane-bound secretory vesicles are carried to the cell membrane, where they dock and fuse at porosomes and their contents (i.e., water-soluble molecules) are secreted into the extracellular environment. This secretion is possible because the vesicle transiently fuses with the plasma membrane. In the context of neurotransmission, neurotransmitters are typically released from synaptic vesicles into the synaptic cleft via exocytosis; however, neurotransmitters can also be released via reverse transport through membrane transport proteins.
Forms of cell signaling
Autocrine
Autocrine signaling involves a cell secreting a hormone or chemical messenger (called the autocrine agent) that binds to autocrine receptors on that same cell, leading to changes in the cell itself. This can be contrasted with paracrine signaling, intracrine signaling, or classical endocrine signaling.
Intracrine
In intracrine signaling, the signaling chemicals are produced inside the cell and bind to cytosolic or nuclear receptors without ever being secreted from the cell. This retention of the signal within the producing cell is what sets intracrine signaling apart from other cell signaling mechanisms such as autocrine signaling. In both autocrine and intracrine signaling, the signal has an effect on the cell that produced it.
Juxtacrine
Juxtacrine signaling is a type of cell–cell or cell–extracellular matrix signaling in multicellular organisms that requires close contact. There are three types:
A membrane ligand (protein, oligosaccharide, lipid) and a membrane protein of two adjacent cells interact.
A communicating junction links the intracellular compartments of two adjacent cells, allowing transit of relatively small molecules.
An extracellular matrix glycoprotein and a membrane protein interact.
Additionally, in unicellular organisms such as bacteria, juxtacrine signaling means interactions by membrane contact. Juxtacrine signaling has been observed for some growth factors, cytokine and chemokine cellular signals, playing an important role in the immune response. Juxtacrine signaling via direct membrane contacts is also present between neuronal cell bodies and motile processes of microglia, both during development and in the adult brain.
Paracrine
In paracrine signaling, a cell produces a signal to induce changes in nearby cells, altering the behaviour of those cells. Signaling molecules known as paracrine factors diffuse over a relatively short distance (local action), as opposed to cell signaling by endocrine factors, hormones which travel considerably longer distances via the circulatory system; juxtacrine interactions; and autocrine signaling. Cells that produce paracrine factors secrete them into the immediate extracellular environment. Factors then travel to nearby cells in which the gradient of factor received determines the outcome. However, the exact distance that paracrine factors can travel is not certain.
Paracrine signals such as retinoic acid target only cells in the vicinity of the emitting cell. Neurotransmitters represent another example of a paracrine signal.
Some signaling molecules can function as both a hormone and a neurotransmitter. For example, epinephrine and norepinephrine can function as hormones when released from the adrenal gland and are transported to the heart by way of the blood stream. Norepinephrine can also be produced by neurons to function as a neurotransmitter within the brain. Estrogen can be released by the ovary and function as a hormone or act locally via paracrine or autocrine signaling.
Although paracrine signaling elicits a diverse array of responses in the induced cells, most paracrine factors utilize a relatively streamlined set of receptors and pathways. In fact, different organs in the body - even between different species - are known to utilize similar sets of paracrine factors in differential development. The highly conserved receptors and pathways can be organized into four major families based on similar structures: fibroblast growth factor (FGF) family, Hedgehog family, Wnt family, and TGF-β superfamily. Binding of a paracrine factor to its respective receptor initiates signal transduction cascades, eliciting different responses.
Endocrine
Endocrine signals are called hormones. Hormones are produced by endocrine cells and they travel through the blood to reach all parts of the body. Specificity of signaling can be controlled if only some cells can respond to a particular hormone. Endocrine signaling involves the release of hormones by internal glands of an organism directly into the circulatory system, regulating distant target organs. In vertebrates, the hypothalamus is the neural control center for all endocrine systems. In humans, the major endocrine glands are the thyroid gland and the adrenal glands. The study of the endocrine system and its disorders is known as endocrinology.
Receptors
Cells receive information from their neighbors through a class of proteins known as receptors. Receptors may bind with some molecules (ligands) or may interact with physical agents like light, temperature, mechanical pressure, etc. Reception occurs when the target cell (any cell with a receptor protein specific to the signal molecule) detects a signal, usually in the form of a small, water-soluble molecule, via binding to a receptor protein on the cell surface; alternatively, once inside the cell, the signaling molecule can bind to intracellular receptors or other elements, or stimulate enzyme activity (e.g. gasses), as in intracrine signaling.
Signaling molecules interact with a target cell as a ligand to cell surface receptors, and/or by entering into the cell through its membrane or endocytosis for intracrine signaling. This generally results in the activation of second messengers, leading to various physiological effects. In many mammals, early embryo cells exchange signals with cells of the uterus. In the human gastrointestinal tract, bacteria exchange signals with each other and with human epithelial and immune system cells. For the yeast Saccharomyces cerevisiae during mating, some cells send a peptide signal (mating factor pheromones) into their environment. The mating factor peptide may bind to a cell surface receptor on other yeast cells and induce them to prepare for mating.
Cell surface receptors
Cell surface receptors play an essential role in the biological systems of single- and multi-cellular organisms and malfunction or damage to these proteins is associated with cancer, heart disease, and asthma. These trans-membrane receptors are able to transmit information from outside the cell to the inside because they change conformation when a specific ligand binds to it. There are three major types: Ion channel linked receptors, G protein–coupled receptors, and enzyme-linked receptors.
Ion channel linked receptors
Ion channel linked receptors are a group of transmembrane ion-channel proteins which open to allow ions such as Na+, K+, Ca2+, and/or Cl− to pass through the membrane in response to the binding of a chemical messenger (i.e. a ligand), such as a neurotransmitter.
When a presynaptic neuron is excited, it releases a neurotransmitter from vesicles into the synaptic cleft. The neurotransmitter then binds to receptors located on the postsynaptic neuron. If these receptors are ligand-gated ion channels, a resulting conformational change opens the ion channels, which leads to a flow of ions across the cell membrane. This, in turn, results in either a depolarization, for an excitatory receptor response, or a hyperpolarization, for an inhibitory response.
These receptor proteins are typically composed of at least two different domains: a transmembrane domain which includes the ion pore, and an extracellular domain which includes the ligand binding location (an allosteric binding site). This modularity has enabled a 'divide and conquer' approach to finding the structure of the proteins (crystallising each domain separately). The function of such receptors located at synapses is to convert the chemical signal of presynaptically released neurotransmitter directly and very quickly into a postsynaptic electrical signal. Many ligand-gated ion channels (LICs) are additionally modulated by allosteric ligands, by channel blockers, ions, or the membrane potential. LICs are classified into three superfamilies which lack evolutionary relationship: cys-loop receptors, ionotropic glutamate receptors and ATP-gated channels.
G protein–coupled receptors
G protein-coupled receptors are a large group of evolutionarily related cell surface receptors that detect molecules outside the cell and activate cellular responses. Because they couple with G proteins and pass through the cell membrane seven times, they are also called seven-transmembrane receptors. The G protein acts as a "middle man", transferring the signal from its activated receptor to its target, and therefore indirectly regulates that target protein. Ligands can bind either to the extracellular N-terminus and loops (e.g. glutamate receptors) or to a binding site within the transmembrane helices (Rhodopsin-like family). They are all activated by agonists, although spontaneous auto-activation of an empty receptor can also be observed.
G protein-coupled receptors are found only in eukaryotes, including yeast, choanoflagellates, and animals. The ligands that bind and activate these receptors include light-sensitive compounds, odors, pheromones, hormones, and neurotransmitters, and vary in size from small molecules to peptides to large proteins. G protein-coupled receptors are involved in many diseases.
There are two principal signal transduction pathways involving the G protein-coupled receptors: cAMP signal pathway and phosphatidylinositol signal pathway. When a ligand binds to the GPCR it causes a conformational change in the GPCR, which allows it to act as a guanine nucleotide exchange factor (GEF). The GPCR can then activate an associated G protein by exchanging the GDP bound to the G protein for a GTP. The G protein's α subunit, together with the bound GTP, can then dissociate from the β and γ subunits to further affect intracellular signaling proteins or target functional proteins directly depending on the α subunit type (Gαs, Gαi/o, Gαq/11, Gα12/13).
G protein-coupled receptors are an important drug target and approximately 34% of all Food and Drug Administration (FDA) approved drugs target 108 members of this family. The global sales volume for these drugs is estimated to be 180 billion US dollars. It is estimated that GPCRs are targets for about 50% of drugs currently on the market, mainly due to their involvement in signaling pathways related to many diseases, i.e. mental and metabolic (including endocrinological) disorders, immunological disorders (including viral infections), cardiovascular and inflammatory diseases, disorders of the senses, and cancer. The long-known association between GPCRs and many endogenous and exogenous substances, resulting in e.g. analgesia, is another dynamically developing field of pharmaceutical research.
Enzyme-linked receptors
Enzyme-linked receptors (or catalytic receptors) are transmembrane receptors that, upon activation by an extracellular ligand, cause enzymatic activity on the intracellular side. Hence a catalytic receptor is an integral membrane protein possessing both catalytic and receptor functions.
They have two important domains, an extra-cellular ligand binding domain and an intracellular domain, which has a catalytic function; and a single transmembrane helix. The signaling molecule binds to the receptor on the outside of the cell and causes a conformational change on the catalytic function located on the receptor inside the cell. Examples of the enzymatic activity include:
Receptor tyrosine kinase, as in fibroblast growth factor receptor. Most enzyme-linked receptors are of this type.
Serine/threonine-specific protein kinase, as in bone morphogenetic protein
Guanylate cyclase, as in atrial natriuretic factor receptor
Intracellular receptors
Intracellular receptors exist freely in the cytoplasm or nucleus, or can be bound to organelles or membranes. For example, the presence of nuclear and mitochondrial receptors is well documented. The binding of a ligand to the intracellular receptor typically induces a response in the cell. Intracellular receptors often have a level of specificity, which allows them to initiate certain responses when bound to a corresponding ligand. Intracellular receptors typically act on lipid-soluble molecules. The receptors bind to a group of DNA binding proteins. Upon binding, the receptor-ligand complex translocates to the nucleus where it can alter patterns of gene expression.
Steroid hormone receptor
Steroid hormone receptors are found in the nucleus, cytosol, and also on the plasma membrane of target cells. They are generally intracellular receptors (typically cytoplasmic or nuclear) and initiate signal transduction for steroid hormones which lead to changes in gene expression over a time period of hours to days. The best studied steroid hormone receptors are members of the nuclear receptor subfamily 3 (NR3) that include receptors for estrogen (group NR3A) and 3-ketosteroids (group NR3C). In addition to nuclear receptors, several G protein-coupled receptors and ion channels act as cell surface receptors for certain steroid hormones.
Mechanisms of receptor down-regulation
Receptor-mediated endocytosis is a common way of turning receptors "off". Endocytic down-regulation is regarded as a means for reducing receptor signaling. The process involves the binding of a ligand to the receptor, which then triggers the formation of coated pits; the coated pits transform into coated vesicles and are transported to the endosome.
Receptor phosphorylation is another type of receptor down-regulation. Such biochemical changes can reduce receptor affinity for a ligand.
Reduced receptor sensitivity results from receptors being occupied for a long time, leading to receptor adaptation in which the receptor no longer responds to the signaling molecule. Many receptors have the ability to change in response to ligand concentration.
Signal transduction pathways
When binding to the signaling molecule, the receptor protein changes in some way and starts the process of transduction, which can occur in a single step or as a series of changes in a sequence of different molecules (called a signal transduction pathway). The molecules that compose these pathways are known as relay molecules. The multistep process of the transduction stage is often composed of the activation of proteins by addition or removal of phosphate groups, or even the release of other small molecules or ions that can act as messengers. The amplification of the signal is one of the benefits of this multistep sequence. Other benefits include more opportunities for regulation than simpler systems offer and the fine-tuning of the response, in both unicellular and multicellular organisms.
In some cases, receptor activation caused by ligand binding to a receptor is directly coupled to the cell's response to the ligand. For example, the neurotransmitter GABA can activate a cell surface receptor that is part of an ion channel. GABA binding to a GABAA receptor on a neuron opens a chloride-selective ion channel that is part of the receptor. GABAA receptor activation allows negatively charged chloride ions to move into the neuron, which inhibits the ability of the neuron to produce action potentials. However, for many cell surface receptors, ligand-receptor interactions are not directly linked to the cell's response. The activated receptor must first interact with other proteins inside the cell before the ultimate physiological effect of the ligand on the cell's behavior is produced. Often, the behavior of a chain of several interacting cell proteins is altered following receptor activation. The entire set of cell changes induced by receptor activation is called a signal transduction mechanism or pathway.
A more complex signal transduction pathway is the MAPK/ERK pathway, which involves changes of protein–protein interactions inside the cell, induced by an external signal. Many growth factors bind to receptors at the cell surface and stimulate cells to progress through the cell cycle and divide. Several of these receptors are kinases that start to phosphorylate themselves and other proteins when binding to a ligand. This phosphorylation can generate a binding site for a different protein and thus induce protein–protein interaction. In this case, the ligand (called epidermal growth factor, or EGF) binds to the receptor (called EGFR). This activates the receptor to phosphorylate itself. The phosphorylated receptor binds to an adaptor protein (GRB2), which couples the signal to further downstream signaling processes. For example, one of the signal transduction pathways that are activated is called the mitogen-activated protein kinase (MAPK) pathway. The signal transduction component labeled as "MAPK" in the pathway was originally called "ERK," so the pathway is called the MAPK/ERK pathway. The MAPK protein is an enzyme, a protein kinase that can attach phosphate to target proteins such as the transcription factor MYC and, thus, alter gene transcription and, ultimately, cell cycle progression. Many cellular proteins are activated downstream of the growth factor receptors (such as EGFR) that initiate this signal transduction pathway.
Some signaling transduction pathways respond differently, depending on the amount of signaling received by the cell. For instance, the hedgehog protein activates different genes, depending on the amount of hedgehog protein present.
Complex multi-component signal transduction pathways provide opportunities for feedback, signal amplification, and interactions inside one cell between multiple signals and signaling pathways.
A specific cellular response is the result of the transduced signal in the final stage of cell signaling. This response can essentially be any cellular activity that is present in a body, such as rearrangement of the cytoskeleton or catalysis by an enzyme. These three steps of cell signaling all ensure that the right cells are behaving as told, at the right time, and in synchronization with other cells and their own functions within the organism. Ultimately, the end of a signaling pathway leads to the regulation of a cellular activity. This response can take place in the nucleus or in the cytoplasm of the cell. A majority of signaling pathways control protein synthesis by turning certain genes on and off in the nucleus.
In unicellular organisms such as bacteria, signaling can be used to 'activate' peers from a dormant state, enhance virulence, defend against bacteriophages, etc. In quorum sensing, which is also found in social insects, the multiplicity of individual signals has the potentiality to create a positive feedback loop, generating coordinated response. In this context, the signaling molecules are called autoinducers. This signaling mechanism may have been involved in evolution from unicellular to multicellular organisms. Bacteria also use contact-dependent signaling, notably to limit their growth.
Signaling molecules used by multicellular organisms are often called pheromones. They can have such purposes as alerting against danger, indicating food supply, or assisting in reproduction.
Short-term cellular responses
Regulating gene activity
Notch signaling pathway
Notch is a cell surface protein that functions as a receptor. Animals have a small set of genes that code for signaling proteins that interact specifically with Notch receptors and stimulate a response in cells that express Notch on their surface. Molecules that activate (or, in some cases, inhibit) receptors can be classified as hormones, neurotransmitters, cytokines, and growth factors, in general called receptor ligands. Ligand-receptor interactions such as the Notch receptor interaction are known to be the main interactions responsible for cell signaling mechanisms and communication. Notch acts as a receptor for ligands that are expressed on adjacent cells. While some receptors are cell-surface proteins, others are found inside cells. For example, estrogen is a hydrophobic molecule that can pass through the lipid bilayer of the membranes. As part of the endocrine system, intracellular estrogen receptors from a variety of cell types can be activated by estrogen produced in the ovaries.
In the case of Notch-mediated signaling, the signal transduction mechanism can be relatively simple. The activation of Notch can cause the Notch protein to be altered by a protease. Part of the Notch protein is released from the cell surface membrane and takes part in gene regulation. Cell signaling research involves studying the spatial and temporal dynamics of both receptors and the components of signaling pathways that are activated by receptors in various cell types. Emerging methods for single-cell mass-spectrometry analysis promise to enable studying signal transduction with single-cell resolution.
In notch signaling, direct contact between cells allows for precise control of cell differentiation during embryonic development. In the worm Caenorhabditis elegans, two cells of the developing gonad each have an equal chance of terminally differentiating or becoming a uterine precursor cell that continues to divide. The choice of which cell continues to divide is controlled by competition of cell surface signals. One cell will happen to produce more of a cell surface protein that activates the Notch receptor on the adjacent cell. This activates a feedback loop or system that reduces Notch expression in the cell that will differentiate and that increases Notch on the surface of the cell that continues as a stem cell.
See also
Scaffold protein
Biosemiotics
Molecular cellular cognition
Crosstalk (biology)
Bacterial outer membrane vesicles
Membrane vesicle trafficking
Host–pathogen interaction
Retinoic acid
JAK-STAT signaling pathway
Imd pathway
Localisation signal
Oscillation
Protein dynamics
Systems biology
Lipid signaling
Redox signaling
Signaling cascade
Cell Signaling Technology – an antibody development and production company
Netpath – a curated resource of signal transduction pathways in humans
Synthetic Biology Open Language
Nanoscale networking – leveraging biological signaling to construct ad hoc in vivo communication networks
Soliton model in neuroscience – physical communication via sound waves in membranes
Temporal feedback
References
Further reading
"The Inside Story of Cell Communication". learn.genetics.utah.edu. Retrieved 2018-10-20.
"When Cell Communication Goes Wrong". learn.genetics.utah.edu. Retrieved 2018-10-24.
External links
NCI-Nature Pathway Interaction Database: authoritative information about signaling pathways in human cells.
Signaling Pathways Project: cell signaling hypothesis generation knowledgebase constructed using biocurated archived transcriptomic and ChIP-Seq datasets
Cell biology
Cell communication
Systems biology
Human female endocrine system
Metaphysics
Metaphysics is the branch of philosophy that examines the basic structure of reality. It is traditionally seen as the study of mind-independent features of the world, but some modern theorists view it as an inquiry into the fundamental categories of human understanding. It is sometimes characterized as first philosophy to suggest that it is more fundamental than other forms of philosophical inquiry.
Metaphysics encompasses a wide range of general and abstract topics. It investigates the nature of existence, the features all entities have in common, and their division into categories of being. An influential division is between particulars and universals. Particulars are individual unique entities, like a specific apple. Universals are general repeatable entities that characterize particulars, like the color red. Modal metaphysics examines what it means for something to be possible or necessary. Metaphysicians also explore the concepts of space, time, and change, and their connection to causality and the laws of nature. Other topics include how mind and matter are related, whether everything in the world is predetermined, and whether there is free will.
Metaphysicians use various methods to conduct their inquiry. Traditionally, they rely on rational intuitions and abstract reasoning but have more recently also included empirical approaches associated with scientific theories. Due to the abstract nature of its topic, metaphysics has received criticisms questioning the reliability of its methods and the meaningfulness of its theories. Metaphysics is relevant to many fields of inquiry that often implicitly rely on metaphysical concepts and assumptions.
The roots of metaphysics lie in antiquity with speculations about the nature and origin of the universe, like those found in the Upanishads in ancient India, Daoism in ancient China, and pre-Socratic philosophy in ancient Greece. During the subsequent medieval period in the West, discussions about the nature of universals were influenced by the philosophies of Plato and Aristotle. The modern period saw the emergence of various comprehensive systems of metaphysics, many of which embraced idealism. In the 20th century, a "revolt against idealism" began; metaphysics was for a time declared meaningless, then revived with various criticisms of earlier theories and new approaches to metaphysical inquiry.
Definition
Metaphysics is the study of the most general features of reality, including existence, objects and their properties, possibility and necessity, space and time, change, causation, and the relation between matter and mind. It is one of the oldest branches of philosophy.
The precise nature of metaphysics is disputed and its characterization has changed in the course of history. Some approaches see metaphysics as a unified field and give a wide-sweeping definition by understanding it as the study of "fundamental questions about the nature of reality" or as an inquiry into the essences of things. Another approach doubts that the different areas of metaphysics share a set of underlying features and provides instead a fine-grained characterization by listing all the main topics investigated by metaphysicians. Some definitions are descriptive by providing an account of what metaphysicians do while others are normative and prescribe what metaphysicians ought to do.
Two historically influential definitions in ancient and medieval philosophy understand metaphysics as the science of the first causes and as the study of being qua being, that is, the topic of what all beings have in common and to what fundamental categories they belong. In the modern period, the scope of metaphysics expanded to include topics such as the distinction between mind and body and free will. Some philosophers follow Aristotle in describing metaphysics as "first philosophy", suggesting that it is the most basic inquiry upon which all other branches of philosophy depend in some way.
Metaphysics is traditionally understood as a study of mind-independent features of reality. Starting with Immanuel Kant's critical philosophy, an alternative conception gained prominence that focuses on conceptual schemes rather than external reality. Kant distinguishes transcendent metaphysics, which aims to describe the objective features of reality beyond sense experience, from critical metaphysics, which outlines the aspects and principles underlying all human thought and experience. Philosopher P. F. Strawson further explored the role of conceptual schemes, contrasting descriptive metaphysics, which articulates conceptual schemes commonly used to understand the world, with revisionary metaphysics, which aims to produce better conceptual schemes.
Metaphysics differs from the individual sciences by studying the most general and abstract aspects of reality. The individual sciences, by contrast, examine more specific and concrete features and restrict themselves to certain classes of entities, such as the focus on physical things in physics, living entities in biology, and cultures in anthropology. It is disputed to what extent this contrast is a strict dichotomy rather than a gradual continuum.
Etymology
The word metaphysics has its origin in the ancient Greek words metá (μετά, meaning "after", "above", and "beyond") and phusiká (φυσικά, "natural things"), as a short form of ta metá ta phusiká, meaning "what comes after the physics". This is often interpreted to mean that metaphysics discusses topics that, due to their generality and comprehensiveness, lie beyond the realm of physics and its focus on empirical observation. Metaphysics got its name by a historical accident when Aristotle's book on this subject was published. Aristotle did not use the term metaphysics but his editor (likely Andronicus of Rhodes) may have coined it for its title to indicate that this book should be studied after Aristotle's book published on physics: literally after physics. The term entered the English language through the Latin word metaphysica.
Branches
The nature of metaphysics can also be characterized in relation to its main branches. An influential division from early modern philosophy distinguishes between general and special or specific metaphysics. General metaphysics, also called ontology, takes the widest perspective and studies the most fundamental aspects of being. It investigates the features that all entities share and how entities can be divided into different categories. Categories are the most general kinds, such as substance, property, relation, and fact. Ontologists research which categories there are, how they depend on one another, and how they form a system of categories that provides a comprehensive classification of all entities.
Special metaphysics considers being from more narrow perspectives and is divided into subdisciplines based on the perspective they take. Metaphysical cosmology examines changeable things and investigates how they are connected to form a world as a totality extending through space and time. Rational psychology focuses on metaphysical foundations and problems concerning the mind, such as its relation to matter and the freedom of the will. Natural theology studies the divine and its role as the first cause. The scope of special metaphysics overlaps with other philosophical disciplines, making it unclear whether a topic belongs to it or to areas like philosophy of mind and theology.
Applied metaphysics is a relatively young subdiscipline. It belongs to applied philosophy and studies the applications of metaphysics, both within philosophy and other fields of inquiry. In areas like ethics and philosophy of religion, it addresses topics like the ontological foundations of moral claims and religious doctrines. Beyond philosophy, its applications include the use of ontologies in artificial intelligence, economics, and sociology to classify entities. In psychiatry and medicine, it examines the metaphysical status of diseases.
Meta-metaphysics is the metatheory of metaphysics and investigates the nature and methods of metaphysics. It examines how metaphysics differs from other philosophical and scientific disciplines and assesses its relevance to them. Even though discussions of these topics have a long history in metaphysics, meta-metaphysics has only recently developed into a systematic field of inquiry.
Topics
Existence and categories of being
Metaphysicians often regard existence or being as one of the most basic and general concepts. To exist means to form part of reality, distinguishing real entities from imaginary ones. According to the orthodox view, existence is a property of properties: if an entity exists then its properties are instantiated. A different position states that existence is a property of individuals, meaning that it is similar to other properties, such as shape or size. It is controversial whether all entities have this property. According to Alexius Meinong, there are nonexistent objects, including merely possible objects like Santa Claus and Pegasus. A related question is whether existence is the same for all entities or whether there are different modes or degrees of existence. For instance, Plato held that Platonic forms, which are perfect and immutable ideas, have a higher degree of existence than matter, which can only imperfectly reflect Platonic forms.
Another key concern in metaphysics is the division of entities into distinct groups based on underlying features they share. Theories of categories provide a system of the most fundamental kinds or the highest genera of being by establishing a comprehensive inventory of everything. One of the earliest theories of categories was proposed by Aristotle, who outlined a system of 10 categories. He argued that substances (e.g. man and horse) are the most important category since all other categories like quantity (e.g. four), quality (e.g. white), and place (e.g. in Athens) are said of substances and depend on them. Kant understood categories as fundamental principles underlying human understanding and developed a system of 12 categories, divided into the four classes quantity, quality, relation, and modality. More recent theories of categories were proposed by C. S. Peirce, Edmund Husserl, Samuel Alexander, Roderick Chisholm, and E. J. Lowe. Many philosophers rely on the contrast between concrete and abstract objects. According to a common view, concrete objects, like rocks, trees, and human beings, exist in space and time, undergo changes, and impact each other as cause and effect, whereas abstract objects, like numbers and sets, exist outside space and time, are immutable, and do not engage in causal relations.
Particulars
Particulars are individual entities and include both concrete objects, like Aristotle, the Eiffel Tower, or a specific apple, and abstract objects, like the number 2 or a specific set in mathematics. Also called individuals, they are unique, non-repeatable entities and contrast with universals, like the color red, which can at the same time exist in several places and characterize several particulars. A widely held view is that particulars instantiate universals but are not themselves instantiated by something else, meaning that they exist in themselves while universals exist in something else. Substratum theory analyzes each particular as a substratum, also called bare particular, together with various properties. The substratum confers individuality to the particular while the properties express its qualitative features or what it is like. This approach is rejected by bundle theorists, who state that particulars are only bundles of properties without an underlying substratum. Some bundle theorists include in the bundle an individual essence, called haecceity, to ensure that each bundle is unique. Another proposal for concrete particulars is that they are individuated by their space-time location.
Concrete particulars encountered in everyday life, like rocks, tables, and organisms, are complex entities composed of various parts. For example, a table is made up of a tabletop and legs, each of which is itself made up of countless particles. The relation between parts and wholes is studied by mereology. The problem of the many is about which groups of entities form mereological wholes, for instance, whether a dust particle on the tabletop is part of the table. According to mereological universalists, every collection of entities forms a whole, meaning that the parts of the table without the dust particle form one whole while they together with it form a second whole. Mereological moderatists hold that certain conditions must be met for a group of entities to compose a whole, for example, that the entities touch one another. Mereological nihilists reject the idea of wholes altogether, claiming that there are no tables and chairs but only particles that are arranged table-wise and chair-wise. A related mereological problem is whether there are simple entities that have no parts, as atomists claim, or not, as continuum theorists contend.
Universals
Universals are general entities, encompassing both properties and relations, that express what particulars are like and how they resemble one another. They are repeatable, meaning that they are not limited to a unique existent but can be instantiated by different particulars at the same time. For example, the particulars Nelson Mandela and Mahatma Gandhi instantiate the universal humanity, similar to how a strawberry and a ruby instantiate the universal red.
A topic discussed since ancient philosophy, the problem of universals consists in the challenge of characterizing the ontological status of universals. Realists argue that universals are real, mind-independent entities that exist in addition to particulars. According to Platonic realists, universals exist independently of particulars, which implies that the universal red would continue to exist even if there were no red things. A more moderate form of realism, inspired by Aristotle, states that universals depend on particulars, meaning that they are only real if they are instantiated. Nominalists reject the idea that universals exist in either form. For them, the world is composed exclusively of particulars. Conceptualists offer an intermediate position, stating that universals exist, but only as concepts in the mind used to order experience by classifying entities.
Natural and social kinds are often understood as special types of universals. Entities belonging to the same natural kind share certain fundamental features characteristic of the structure of the natural world. In this regard, natural kinds are not an artificially constructed classification but are discovered, usually by the natural sciences, and include kinds like electrons, water, and tigers. Scientific realists and anti-realists disagree about whether natural kinds exist. Social kinds, like money and baseball, are studied by social metaphysics and characterized as useful social constructions that, while not purely fictional, do not reflect the fundamental structure of mind-independent reality.
Possibility and necessity
The concepts of possibility and necessity convey what can or must be the case, expressed in statements like "it is possible to find a cure for cancer" and "it is necessary that two plus two equals four". They belong to modal metaphysics, which investigates the metaphysical principles underlying them, in particular, why some modal statements are true while others are false. Some metaphysicians hold that modality is a fundamental aspect of reality, meaning that besides facts about what is the case, there are additional facts about what could or must be the case. A different view argues that modal truths are not about an independent aspect of reality but can be reduced to non-modal characteristics, for example, to facts about what properties or linguistic descriptions are compatible with each other or to fictional statements.
Borrowing a term from German philosopher Gottfried Wilhelm Leibniz's theodicy, many metaphysicians use the concept of possible worlds to analyze the meaning and ontological ramifications of modal statements. A possible world is a complete and consistent way things could have been. For example, the dinosaurs were wiped out in the actual world but there are possible worlds in which they are still alive. According to possible world semantics, a statement is possibly true if it is true in at least one possible world, whereas it is necessarily true if it is true in all possible worlds. Modal realists argue that possible worlds exist as concrete entities in the same sense as the actual world, with the main difference being that the actual world is the world we live in while other possible worlds are inhabited by counterparts. This view is controversial and various alternatives have been suggested, for example, that possible worlds only exist as abstract objects or are similar to stories told in works of fiction.
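The evaluation rule of possible world semantics just described can be made concrete with a small sketch; the following toy model, in which each world is simply the set of atomic statements true at it, uses invented world names and statements for illustration only.

```python
# A minimal sketch of possible world semantics, assuming a toy model in which
# each world is just the set of atomic statements true at it. World names and
# statements are hypothetical illustrations, not drawn from the article.

worlds = {
    "actual": {"dinosaurs are extinct", "2 + 2 = 4"},
    "w1":     {"dinosaurs are alive",   "2 + 2 = 4"},
    "w2":     {"dinosaurs are extinct", "2 + 2 = 4", "cancer is curable"},
}

def possibly(statement: str) -> bool:
    """True if the statement holds in at least one possible world."""
    return any(statement in facts for facts in worlds.values())

def necessarily(statement: str) -> bool:
    """True if the statement holds in every possible world."""
    return all(statement in facts for facts in worlds.values())

print(possibly("dinosaurs are alive"))   # True: holds in w1
print(necessarily("2 + 2 = 4"))          # True: holds in every world
print(necessarily("cancer is curable"))  # False: fails in the actual world and w1
```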
Space, time, and change
Space and time are dimensions that entities occupy. Spacetime realists state that space and time are fundamental aspects of reality and exist independently of the human mind. Spacetime idealists, by contrast, hold that space and time are constructs of the human mind, created to organize and make sense of reality. Spacetime absolutism or substantivalism understands spacetime as a distinct object, with some metaphysicians conceptualizing it as a container that holds all other entities within it. Spacetime relationism sees spacetime not as an object but as a network of relations between objects, such as the spatial relation of being next to and the temporal relation of coming before.
In the metaphysics of time, an important contrast is between the A-series and the B-series. According to the A-series theory, the flow of time is real, meaning that events are categorized into the past, present, and future. The present continually moves forward in time and events that are in the present now will eventually change their status and lie in the past. From the perspective of the B-series theory, time is static, and events are ordered by the temporal relations earlier-than and later-than without any essential difference between past, present, and future. Eternalism holds that past, present, and future are equally real, whereas presentism asserts that only entities in the present exist.
Material objects persist through time and change in the process, like a tree that grows or loses leaves. The main ways of conceptualizing persistence through time are endurantism and perdurantism. According to endurantism, material objects are three-dimensional entities that are wholly present at each moment. As they change, they gain or lose properties but otherwise remain the same. Perdurantists see material objects as four-dimensional entities that extend through time and are made up of different temporal parts. At each moment, only one part of the object is present, not the object as a whole. Change means that an earlier part is qualitatively different from a later part. For example, when a banana ripens, there is an unripe part followed by a ripe part.
Causality
Causality is the relation between cause and effect whereby one entity produces or affects another entity. For instance, if a person bumps a glass and spills its contents then the bump is the cause and the spill is the effect. Besides the single-case causation between particulars in this example, there is also general-case causation expressed in statements such as "smoking causes cancer". The term agent causation is used when people and their actions cause something. Causation is usually interpreted deterministically, meaning that a cause always brings about its effect. This view is rejected by probabilistic theories, which claim that the cause merely increases the probability that the effect occurs. This view can explain that smoking causes cancer even though this does not happen in every single case.
The regularity theory of causation, inspired by David Hume's philosophy, states that causation is nothing but a constant conjunction in which the mind apprehends that one phenomenon, like putting one's hand in a fire, is always followed by another phenomenon, like a feeling of pain. According to nomic regularity theories, regularities manifest as laws of nature studied by science. Counterfactual theories focus not on regularities but on how effects depend on their causes. They state that effects owe their existence to their causes and would not occur without them. According to primitivism, causation is a basic concept that cannot be analyzed in terms of non-causal concepts, such as regularities or dependence relations. One form of primitivism identifies causal powers inherent in entities as the underlying mechanism. Eliminativists reject the above theories by holding that there is no causation.
Mind and free will
Mind encompasses phenomena like thinking, perceiving, feeling, and desiring as well as the underlying faculties responsible for these phenomena. The mind–body problem is the challenge of clarifying the relation between physical and mental phenomena. According to Cartesian dualism, minds and bodies are distinct substances. They causally interact with each other in various ways but can, at least in principle, exist on their own. This view is rejected by monists, who argue that reality is made up of only one kind. According to idealism, everything is mental, including physical objects, which may be understood as ideas or perceptions of conscious minds. Materialists, by contrast, state that all reality is at its core material. Some deny that mind exists but the more common approach is to explain mind in terms of certain aspects of matter, such as brain states, behavioral dispositions, or functional roles. Neutral monists argue that reality is fundamentally neither material nor mental and suggest that matter and mind are both derivative phenomena. A key aspect of the mind–body problem is the hard problem of consciousness or how to explain that physical systems like brains can produce phenomenal consciousness.
The status of free will as the ability of a person to choose their actions is a central aspect of the mind–body problem. Metaphysicians are interested in the relation between free will and causal determinism, the view that everything in the universe, including human behavior, is determined by preceding events and laws of nature. It is controversial whether causal determinism is true, and, if so, whether this would imply that there is no free will. According to incompatibilism, free will cannot exist in a deterministic world since there is no true choice or control if everything is determined. Hard determinists infer from this that there is no free will, whereas libertarians conclude that determinism must be false. Compatibilists offer a third perspective, arguing that determinism and free will do not exclude each other, for instance, because a person can still act in tune with their motivation and choices even if they are determined by other forces. Free will plays a key role in ethics regarding the moral responsibility people have for what they do.
Others
Identity is a relation that every entity has to itself as a form of sameness. It refers to numerical identity when the very same entity is involved, as in the statement "the morning star is the evening star" (both are the planet Venus). In a slightly different sense, it encompasses qualitative identity, also called exact similarity and indiscernibility, which occurs when two distinct entities are exactly alike, such as perfect identical twins. The principle of the indiscernibility of identicals is widely accepted and holds that numerically identical entities exactly resemble one another. The converse principle, known as identity of indiscernibles or Leibniz's Law, is more controversial and states that two entities are numerically identical if they exactly resemble one another. Another distinction is between synchronic and diachronic identity. Synchronic identity relates an entity to itself at the same time, whereas diachronic identity is about the same entity at different times, as in statements like "the table I bought last year is the same as the table in my dining room now". Personal identity is a related topic in metaphysics that uses the term identity in a slightly different sense and concerns questions like what personhood is or what makes someone a person.
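In standard second-order notation, these two principles are commonly formalized as shown below; this rendering is a textbook gloss added for illustration rather than a formulation taken from the sources discussed above.

```latex
\begin{aligned}
&\text{Indiscernibility of identicals:} \quad \forall x\,\forall y\,\bigl(x = y \rightarrow \forall P\,(Px \leftrightarrow Py)\bigr)\\
&\text{Identity of indiscernibles:} \quad \forall x\,\forall y\,\bigl(\forall P\,(Px \leftrightarrow Py) \rightarrow x = y\bigr)
\end{aligned}
```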
Various contemporary metaphysicians rely on the concepts of truth, truth-bearer, and truthmaker to conduct their inquiry. Truth is a property of being in accord with reality. Truth-bearers are entities that can be true or false, such as linguistic statements and mental representations. A truthmaker of a statement is the entity whose existence makes the statement true. For example, the statement "a tomato is red" is true because there exists a red tomato as its truthmaker. Based on this observation, it is possible to pursue metaphysical research by asking what the truthmakers of statements are, with different areas of metaphysics being dedicated to different types of statements. According to this view, modal metaphysics asks what makes statements about what is possible and necessary true while the metaphysics of time is interested in the truthmakers of temporal statements about the past, present, and future.
Methodology
Metaphysicians employ a variety of methods to develop metaphysical theories and formulate arguments for and against them. Traditionally, a priori methods have been the dominant approach. They rely on rational intuition and abstract reasoning from general principles rather than sensory experience. A posteriori approaches, by contrast, ground metaphysical theories in empirical observations and scientific theories. Some metaphysicians incorporate perspectives from fields such as physics, psychology, linguistics, and history into their inquiry. The two approaches are not mutually exclusive: it is possible to combine elements from both. The method a metaphysician chooses often depends on their understanding of the nature of metaphysics, for example, whether they see it as an inquiry into the mind-independent structure of reality, as metaphysical realists claim, or the principles underlying thought and experience, as some metaphysical anti-realists contend.
A priori approaches often rely on intuitions, non-inferential impressions about the correctness of specific claims or general principles. For example, arguments for the A-theory of time, which states that time flows from the past through the present and into the future, often rely on pre-theoretical intuitions associated with the sense of the passage of time. Some approaches use intuitions to establish a small set of self-evident fundamental principles, known as axioms, and employ deductive reasoning to build complex metaphysical systems by drawing conclusions from these axioms. Intuition-based approaches can be combined with thought experiments, which help evoke and clarify intuitions by linking them to imagined situations. They use counterfactual thinking to assess the possible consequences of these situations. For example, to explore the relation between matter and consciousness, some theorists compare humans to philosophical zombies, hypothetical creatures identical to humans but without conscious experience. A related method relies on commonly accepted beliefs instead of intuitions to formulate arguments and theories. The common-sense approach is often used to criticize metaphysical theories that deviate significantly from how the average person thinks about an issue. For example, common-sense philosophers have argued that mereological nihilism is false since it implies that commonly accepted things, like tables, do not exist.
Conceptual analysis, a method particularly prominent in analytic philosophy, aims to decompose metaphysical concepts into component parts to clarify their meaning and identify essential relations. In phenomenology, the method of eidetic variation is used to investigate essential structures underlying phenomena. This method involves imagining an object and varying its features to determine which ones are essential and cannot be changed. The transcendental method is a further approach and examines the metaphysical structure of reality by observing what entities there are and studying the conditions of possibility without which these entities could not exist.
Some approaches give less importance to a priori reasoning and view metaphysics as a practice continuous with the empirical sciences that generalizes their insights while making their underlying assumptions explicit. This approach is known as naturalized metaphysics and is closely associated with the work of Willard Van Orman Quine. He relies on the idea that true sentences from the sciences and other fields have ontological commitments, that is, they imply that certain entities exist. For example, if the sentence "some electrons are bonded to protons" is true then it can be used to justify that electrons and protons exist. Quine used this insight to argue that one can learn about metaphysics by closely analyzing scientific claims to understand what kind of metaphysical picture of the world they presuppose.
In addition to methods of conducting metaphysical inquiry, there are various methodological principles used to decide between competing theories by comparing their theoretical virtues. Ockham's Razor is a well-known principle that gives preference to simple theories, in particular, those that assume that few entities exist. Other principles consider explanatory power, theoretical usefulness, and proximity to established beliefs.
Criticism
Despite its status as one of the main branches of philosophy, metaphysics has received numerous criticisms questioning its legitimacy as a field of inquiry. One criticism argues that metaphysical inquiry is impossible because humans lack the cognitive capacities needed to access the ultimate nature of reality. This line of thought leads to skepticism about the possibility of metaphysical knowledge. Empiricists often follow this idea, like Hume, who argued that there is no good source of metaphysical knowledge since metaphysics lies outside the field of empirical knowledge and relies on dubious intuitions about the realm beyond sensory experience. A related argument favoring the unreliability of metaphysical theorizing points to the deep and lasting disagreements about metaphysical issues, suggesting a lack of overall progress.
Another criticism holds that the problem lies not with human cognitive abilities but with metaphysical statements themselves, which some claim are neither true nor false but meaningless. According to logical positivists, for instance, the meaning of a statement is given by the procedure used to verify it, usually through the observations that would confirm it. Based on this controversial assumption, they argue that metaphysical statements are meaningless since they make no testable predictions about experience.
A slightly weaker position allows metaphysical statements to have meaning while holding that metaphysical disagreements are merely verbal disputes about different ways to describe the world. According to this view, the disagreement in the metaphysics of composition about whether there are tables or only particles arranged table-wise is a trivial debate about linguistic preferences without any substantive consequences for the nature of reality. The position that metaphysical disputes have no meaning or no significant point is called metaphysical or ontological deflationism. This view is opposed by so-called serious metaphysicians, who contend that metaphysical disputes are about substantial features of the underlying structure of reality. A closely related debate between ontological realists and anti-realists concerns the question of whether there are any objective facts that determine which metaphysical theories are true. A different criticism, formulated by pragmatists, sees the fault of metaphysics not in its cognitive ambitions or the meaninglessness of its statements, but in its practical irrelevance and lack of usefulness.
Martin Heidegger criticized traditional metaphysics, saying that it fails to distinguish between individual entities and being as their ontological ground. His attempt to reveal the underlying assumptions and limitations in the history of metaphysics to "overcome metaphysics" influenced Jacques Derrida's method of deconstruction. Derrida employed this approach to criticize metaphysical texts for relying on opposing terms, like presence and absence, which he thought were inherently unstable and contradictory.
There is no consensus about the validity of these criticisms and whether they affect metaphysics as a whole or only certain issues or approaches in it. For example, it could be the case that certain metaphysical disputes are merely verbal while others are substantive.
Relation to other disciplines
Metaphysics is related to many fields of inquiry by investigating their basic concepts and relation to the fundamental structure of reality. For example, the natural sciences rely on concepts such as law of nature, causation, necessity, and spacetime to formulate their theories and predict or explain the outcomes of experiments. While scientists primarily focus on applying these concepts to specific situations, metaphysics examines their general nature and how they depend on each other. For instance, physicists formulate laws of nature, like laws of gravitation and thermodynamics, to describe how physical systems behave under various conditions. Metaphysicians, by contrast, examine what all laws of nature have in common, asking whether they merely describe contingent regularities or express necessary relations. New scientific discoveries have also influenced existing and inspired new metaphysical theories. Einstein's theory of relativity, for instance, prompted various metaphysicians to conceive space and time as a unified dimension rather than as independent dimensions. Empirically focused metaphysicians often rely on scientific theories to ground their theories about the nature of reality in empirical observations.
Similar issues arise in the social sciences where metaphysicians investigate their basic concepts and analyze their metaphysical implications. This includes questions like whether social facts emerge from non-social facts, whether social groups and institutions have mind-independent existence, and how they persist through time. Metaphysical assumptions and topics in psychology and psychiatry include the questions about the relation between body and mind, whether the nature of the human mind is historically fixed, and what the metaphysical status of diseases is.
Metaphysics is similar to both physical cosmology and theology in its exploration of the first causes and the universe as a whole. Key differences are that metaphysics relies on rational inquiry while physical cosmology gives more weight to empirical observations and theology incorporates divine revelation and other faith-based doctrines. Historically, cosmology and theology were considered subfields of metaphysics.
Metaphysics in the form of ontology plays a central role in computer science to classify objects and formally represent information about them. Unlike metaphysicians, computer scientists are usually not interested in providing a single all-encompassing characterization of reality as a whole. Instead, they employ many different ontologies, each one concerned only with a limited domain of entities. For instance, an organization may use an ontology with categories such as person, company, address, and name to represent information about clients and employees. Ontologies provide standards or conceptualizations for encoding and storing information in a structured way, enabling computational processes to use and transform their information for a variety of purposes. Some knowledge bases integrate information from various domains, which brings with it the challenge of handling data that was formulated using diverse ontologies. They address this by providing an upper ontology that defines concepts at a higher level of abstraction, applicable to all domains. Influential upper ontologies include Suggested Upper Merged Ontology and Basic Formal Ontology.
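A minimal sketch of how such a domain ontology might be encoded is given below, assuming plain Python dataclasses as the representation; the classes Person, Company, and Address mirror the illustrative categories mentioned above and are not taken from any particular upper ontology or standard.

```python
# A minimal sketch of a domain ontology for client and employee records,
# assuming plain Python dataclasses as the encoding. The categories mirror
# the illustrative example above (person, company, address, name); a real
# system would more likely use a standard such as OWL or RDF.
from dataclasses import dataclass

@dataclass
class Address:
    street: str
    city: str

@dataclass
class Person:
    name: str
    address: Address

@dataclass
class Company:
    name: str
    employees: list[Person]

acme = Company(
    name="Acme Ltd.",
    employees=[Person(name="Ada Example", address=Address("1 Main St", "Springfield"))],
)
print(acme.employees[0].name)  # structured data that downstream processes can query
```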
Logic as the study of correct reasoning is often used by metaphysicians as a tool to engage in their inquiry and express insights through precise logical formulas. Another relation between the two fields concerns the metaphysical assumptions associated with logical systems. Many logical systems like first-order logic rely on existential quantifiers to express existential statements. For instance, in the logical formula ∃x Horse(x), the existential quantifier ∃ is applied to the predicate Horse to express that there are horses. Following Quine, various metaphysicians assume that existential quantifiers carry ontological commitments, meaning that existential statements imply that the entities over which one quantifies are part of reality.
History
The history of metaphysics examines how the inquiry into the basic structure of reality has evolved in the course of history. Metaphysics originated in the ancient period from speculations about the nature and origin of the cosmos. In ancient India, starting in the 7th century BCE, the Upanishads were written as religious and philosophical texts that examine how ultimate reality constitutes the ground of all being. They further explore the nature of the self and how it can reach liberation by understanding ultimate reality. This period also saw the emergence of Buddhism in the 6th century BCE, which denies the existence of an independent self and understands the world as a cyclic process. At about the same time in ancient China, the school of Daoism was formed and explored the natural order of the universe, known as Dao, and how it is characterized by the interplay of yin and yang as two correlated forces.
In ancient Greece, metaphysics emerged in the 6th century BCE with the pre-Socratic philosophers, who gave rational explanations of the cosmos as a whole by examining the first principles from which everything arises. Building on their work, Plato (427–347 BCE) formulated his theory of forms, which states that eternal forms or ideas possess the highest kind of reality while the material world is only an imperfect reflection of them. Aristotle (384–322 BCE) accepted Plato's idea that there are universal forms but held that they cannot exist on their own but depend on matter. He also proposed a system of categories and developed a comprehensive framework of the natural world through his theory of the four causes. Starting in the 4th century BCE, Hellenistic philosophy explored the rational order underlying the cosmos and the idea that it is made up of indivisible atoms. Neoplatonism emerged towards the end of the ancient period in the 3rd century CE and introduced the idea of "the One" as the transcendent and ineffable source of all creation.
Meanwhile, in Indian Buddhism, the Madhyamaka school developed the idea that all phenomena are inherently empty without a permanent essence. The consciousness-only doctrine of the Yogācāra school stated that experienced objects are mere transformations of consciousness and do not reflect external reality. The Hindu school of Samkhya philosophy introduced a metaphysical dualism with pure consciousness and matter as its fundamental categories. In China, the school of Xuanxue explored metaphysical problems such as the contrast between being and non-being.
Medieval Western philosophy was profoundly shaped by ancient Greek philosophy. Boethius (477–524 CE) sought to reconcile Plato's and Aristotle's theories of universals, proposing that universals can exist both in matter and mind. His theory inspired the development of nominalism and conceptualism, as in the thought of Peter Abelard (1079–1142 CE). Thomas Aquinas (1224–1274 CE) understood metaphysics as the discipline investigating different meanings of being, such as the contrast between substance and accident, and principles applying to all beings, such as the principle of identity. William of Ockham (1285–1347 CE) proposed Ockham's razor, a methodological principle for choosing between competing metaphysical theories. Arabic–Persian philosophy flourished from the early 9th century CE to the late 12th century CE, integrating ancient Greek philosophies to interpret and clarify the teachings of the Quran. Avicenna (980–1037 CE) developed a comprehensive philosophical system that examined the contrast between existence and essence and distinguished between contingent and necessary existence. Medieval India saw the emergence of the monist school of Advaita Vedanta in the 8th century CE, which holds that everything is one and that the idea of many entities existing independently is an illusion. In China, Neo-Confucianism arose in the 9th century CE and explored the concept of li as the rational principle that is the ground of being and reflects the order of the universe.
In the early modern period, René Descartes (1596–1650) developed a substance dualism according to which body and mind exist as independent entities that causally interact. This idea was rejected by Baruch Spinoza (1632–1677), who formulated a monist philosophy suggesting that there is only one substance with both physical and mental attributes that develop side-by-side without interacting. Gottfried Wilhelm Leibniz (1646–1716) introduced the concept of possible worlds and articulated a metaphysical system known as monadology, which views the universe as a collection of simple substances synchronized without causal interaction. Christian Wolff (1679–1754) conceptualized the scope of metaphysics by distinguishing between general and special metaphysics. According to the idealism of George Berkeley (1685–1753), everything is mental, including material objects, which are ideas perceived by the mind. David Hume (1711–1776) made various contributions to metaphysics, including the regularity theory of causation and the idea that there are no necessary connections between distinct entities. His empiricist outlook led him to criticize metaphysical theories that seek ultimate principles inaccessible to sensory experience. This skeptical outlook was embraced by Immanuel Kant (1724–1804), who tried to reconceptualize metaphysics as an inquiry into the basic principles and categories of thought and understanding rather than seeing it as an attempt to comprehend mind-independent reality.
Many developments in the later modern period were shaped by Kant's philosophy. German idealists adopted his idealistic outlook in their attempt to find a unifying principle as the foundation of all reality. Georg Wilhelm Friedrich Hegel (1770–1831) developed a comprehensive system of philosophy that examines how absolute spirit manifests itself. He inspired the British idealism of Francis Herbert Bradley (1846–1924), who interpreted absolute spirit as the all-inclusive totality of being. Arthur Schopenhauer (1788–1860) was a strong critic of German idealism and articulated a different metaphysical vision, positing a blind and irrational will as the underlying principle of reality. Pragmatists like C. S. Peirce (1839–1914) and John Dewey (1859–1952) conceived metaphysics as an observational science of the most general features of reality and experience.
At the turn of the 20th century in analytic philosophy, philosophers such as Bertrand Russell (1872–1970) and G. E. Moore (1873–1958) led a "revolt against idealism". Logical atomists, like Russell and the early Ludwig Wittgenstein (1889–1951), conceived the world as a multitude of atomic facts, which later inspired metaphysicians such as D. M. Armstrong (1926–2014). Alfred North Whitehead (1861–1947) developed process metaphysics as an attempt to provide a holistic description of both the objective and the subjective realms.
Rudolf Carnap (1891–1970) and other logical positivists formulated a wide-ranging criticism of metaphysical statements, arguing that they are meaningless because there is no way to verify them. Other criticisms of traditional metaphysics identified misunderstandings of ordinary language as the source of many traditional metaphysical problems or challenged complex metaphysical deductions by appealing to common sense.
The decline of logical positivism led to a revival of metaphysical theorizing. Willard Van Orman Quine (1908–2000) tried to naturalize metaphysics by connecting it to the empirical sciences. His student David Lewis (1941–2001) employed the concept of possible worlds to formulate his modal realism. Saul Kripke (1940–2022) helped revive discussions of identity and essentialism, distinguishing necessity as a metaphysical notion from the epistemic notion of a priori.
In continental philosophy, Edmund Husserl (1859–1938) engaged in ontology through a phenomenological description of experience, while his student Martin Heidegger (1889–1976) developed fundamental ontology to clarify the meaning of being. Heidegger's philosophy inspired general criticisms of metaphysics by postmodern thinkers like Jacques Derrida (1930–2004). Gilles Deleuze's (1925–1995) approach to metaphysics challenged traditionally influential concepts like substance, essence, and identity by reconceptualizing the field through alternative notions such as multiplicity, event, and difference.
See also
Computational metaphysics
Doctor of Metaphysics
Enrico Berti's classification of metaphysics
Feminist metaphysics
Fundamental question of metaphysics
List of metaphysicians
Metaphysical grounding
External links
Metaphysics at Encyclopædia Britannica
Electrophoresis
Electrophoresis is the motion of charged dispersed particles or dissolved charged molecules relative to a fluid under the influence of a spatially uniform electric field. Dissolved molecules of biological interest, such as amino acids and proteins, are typically zwitterions, carrying both positively and negatively charged groups, so their net charge depends on the pH of the medium.
Electrophoresis is used in laboratories to separate macromolecules based on their charges. The technique applies an electric field between a negatively charged electrode (the cathode) and a positively charged electrode (the anode); negatively charged macromolecules, such as most proteins in common running buffers, migrate toward the anode, while positively charged species migrate toward the cathode. Accordingly, electrophoresis of positively charged particles or molecules (cations) is sometimes called cataphoresis, while electrophoresis of negatively charged particles or molecules (anions) is sometimes called anaphoresis.
Electrophoresis is the basis for analytical techniques used in biochemistry for separating particles, molecules, or ions by size, charge, or binding affinity either freely or through a supportive medium using a one-directional flow of electrical charge. It is used extensively in DNA, RNA and protein analysis.
Liquid droplet electrophoresis is significantly different from the classic particle electrophoresis because of droplet characteristics such as a mobile surface charge and the nonrigidity of the interface. Also, the liquid–liquid system, where there is an interplay between the hydrodynamic and electrokinetic forces in both phases, adds to the complexity of electrophoretic motion.
Theory
Suspended particles have an electric surface charge, strongly affected by surface adsorbed species, on which an external electric field exerts an electrostatic Coulomb force. According to the double layer theory, all surface charges in fluids are screened by a diffuse layer of ions, which has the same absolute charge but opposite sign with respect to that of the surface charge. The electric field also exerts a force on the ions in the diffuse layer which has direction opposite to that acting on the surface charge. This latter force is not actually applied to the particle, but to the ions in the diffuse layer located at some distance from the particle surface, and part of it is transferred all the way to the particle surface through viscous stress. This part of the force is also called electrophoretic retardation force, or ERF in short.
When the electric field is applied and the charged particle to be analyzed is at steady movement through the diffuse layer, the total resulting force is zero:
Fel + Ff + Fret = 0,
where Fel is the electrostatic force on the surface charge, Ff is the hydrodynamic friction (drag) force, and Fret is the electrophoretic retardation force transmitted through the diffuse layer.
Considering the drag on the moving particles due to the viscosity of the dispersant, in the case of low Reynolds number and moderate electric field strength E, the drift velocity of a dispersed particle v is simply proportional to the applied field, which leaves the electrophoretic mobility μe defined as:
μe = v / E.
The most well known and widely used theory of electrophoresis was developed in 1903 by Marian Smoluchowski:
μe = εr ε0 ζ / η,
where εr is the dielectric constant of the dispersion medium, ε0 is the permittivity of free space (C2 N−1 m−2), η is dynamic viscosity of the dispersion medium (Pa s), and ζ is zeta potential (i.e., the electrokinetic potential of the slipping plane in the double layer, units mV or V).
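As a numerical illustration of the Smoluchowski expression above, the short sketch below converts a zeta potential into an electrophoretic mobility and a drift velocity; the particular values (ζ = 30 mV, E = 100 V/m, water at room temperature) are assumed for illustration and are not taken from the text.

```python
# A minimal sketch of the Smoluchowski relation mu_e = eps_r * eps_0 * zeta / eta,
# with illustrative values for an aqueous dispersion at room temperature.
EPS_0 = 8.854e-12   # permittivity of free space, C^2 N^-1 m^-2
EPS_R = 78.5        # relative permittivity (dielectric constant) of water
ETA = 0.89e-3       # dynamic viscosity of water, Pa s
zeta = 30e-3        # assumed zeta potential, V (30 mV)
E_field = 100.0     # assumed applied field strength, V/m

mu_e = EPS_R * EPS_0 * zeta / ETA   # electrophoretic mobility, m^2 V^-1 s^-1
v_drift = mu_e * E_field            # drift velocity v = mu_e * E, m/s

print(f"mobility       = {mu_e:.2e} m^2/(V s)")   # roughly 2.3e-8 m^2/(V s)
print(f"drift velocity = {v_drift:.2e} m/s")
```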
The Smoluchowski theory is very powerful because it works for dispersed particles of any shape at any concentration, but it has limits to its validity. For instance, it does not include the Debye length κ−1 (units m). The Debye length must, however, be important for electrophoresis, as the retardation mechanism illustrates (Figure 2, "Illustration of electrophoretic retardation"): increasing the thickness of the double layer (DL) moves the point at which the retardation force acts further from the particle surface, so the thicker the DL, the smaller the retardation force must be.
Detailed theoretical analysis proved that the Smoluchowski theory is valid only for a sufficiently thin DL, when the particle radius a is much greater than the Debye length:
aκ ≫ 1, i.e. a ≫ κ−1.
This model of "thin double layer" offers tremendous simplifications not only for electrophoresis theory but for many other electrokinetic theories. This model is valid for most aqueous systems, where the Debye length is usually only a few nanometers. It only breaks for nano-colloids in solution with ionic strength close to water.
The Smoluchowski theory also neglects the contributions from surface conductivity. This is expressed in modern theory as the condition of a small Dukhin number:
Du ≪ 1.
In the effort to expand the range of validity of electrophoretic theories, the opposite asymptotic case was considered, when the Debye length is larger than the particle radius:
aκ < 1.
Under this condition of a "thick double layer", Erich Hückel predicted the following relation for electrophoretic mobility:
μe = 2 εr ε0 ζ / (3 η).
This model can be useful for some nanoparticles and non-polar fluids, where Debye length is much larger than in the usual cases.
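The choice between the Smoluchowski and Hückel limits hinges on the product aκ, so estimating the Debye length is often the first step; the sketch below does this for an assumed 1:1 aqueous electrolyte, with an illustrative particle radius and ionic strength that are assumptions rather than values from the text.

```python
# A minimal sketch: estimate the Debye length of a 1:1 aqueous electrolyte and
# use a*kappa to judge whether the thin (Smoluchowski) or thick (Hückel)
# double-layer limit is the better approximation. All values are illustrative.
import math

EPS_0, EPS_R = 8.854e-12, 78.5       # F/m and dimensionless (water, 25 °C)
K_B, T = 1.381e-23, 298.15           # J/K and K
E_CHARGE, N_A = 1.602e-19, 6.022e23

def debye_length(ionic_strength_mol_per_L: float) -> float:
    """Debye length (m) of a symmetric 1:1 electrolyte at 25 °C."""
    n = ionic_strength_mol_per_L * 1e3 * N_A              # ions per m^3
    kappa_sq = 2 * n * E_CHARGE**2 / (EPS_R * EPS_0 * K_B * T)
    return 1.0 / math.sqrt(kappa_sq)

a = 100e-9                    # assumed particle radius, m
lam = debye_length(1e-3)      # about 9.6 nm for 1 mM salt
ratio = a / lam               # a * kappa

regime = "thin double layer (Smoluchowski)" if ratio > 10 else \
         "thick double layer (Hückel)" if ratio < 0.1 else "intermediate (Henry regime)"
print(f"Debye length = {lam*1e9:.1f} nm, a*kappa = {ratio:.1f} -> {regime}")
```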
There are several analytical theories that incorporate surface conductivity and eliminate the restriction of a small Dukhin number, pioneered by Theodoor Overbeek and F. Booth. Modern, rigorous theories valid for any zeta potential, and often for any aκ, stem mostly from the Dukhin–Semenikhin theory.
In the thin double layer limit, these theories confirm the numerical solution to the problem provided by Richard W. O'Brien and Lee R. White.
For modeling more complex scenarios, these simplifications become inaccurate, and the electric field must be modeled spatially, tracking its magnitude and direction. Poisson's equation can be used to model this spatially varying electric field. Its influence on fluid flow can be modeled with the Stokes equations, while the transport of the different ionic species can be modeled using the Nernst–Planck equation. This combined approach is referred to as the Poisson–Nernst–Planck–Stokes equations. It has been validated for the electrophoresis of particles.
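Written out, one common steady-state formulation of this coupled system, for ionic species i with concentration ci, valence zi, and diffusivity Di, is sketched below; the exact form varies between authors, so this should be read as a representative statement rather than the notation of any specific study.

```latex
\begin{aligned}
\text{Poisson:}\qquad & -\varepsilon_r \varepsilon_0\, \nabla^2 \phi = \rho_e = \sum_i z_i e\, c_i \\
\text{Nernst--Planck:}\qquad & \nabla \cdot \Bigl(-D_i \nabla c_i - \frac{z_i e D_i}{k_B T}\, c_i \nabla \phi + c_i \mathbf{u}\Bigr) = 0 \\
\text{Stokes:}\qquad & \eta\, \nabla^2 \mathbf{u} - \nabla p - \rho_e \nabla \phi = \mathbf{0}, \qquad \nabla \cdot \mathbf{u} = 0
\end{aligned}
```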
External links
List of relative mobilities
Leaching (chemistry)
Leaching is the process of a solute becoming detached or extracted from its carrier substance by way of a solvent.
Leaching is a naturally occurring process which scientists have adapted for a variety of applications with a variety of methods. Specific extraction methods depend on the characteristics of the solute relative to the sorbent material, such as concentration, distribution, nature, and size. Leaching occurs naturally in plant substances (inorganic and organic), in solute transport through soil, and in the decomposition of organic materials. Leaching can also be applied deliberately, to enhance water quality and contaminant removal, for the disposal of hazardous waste products such as fly ash, and for the recovery of rare earth elements (REEs). Understanding leaching characteristics is important in preventing or encouraging the leaching process and preparing for it in cases where it is inevitable.
In an ideal leaching equilibrium stage, all the solute is dissolved by the solvent, leaving the carrier of the solute unchanged. The process of leaching, however, is not always ideal; it can be quite complex to understand and replicate, and different methodologies often produce different results.
Leaching processes
There are many types of leaching scenarios; therefore, the extent of this topic is vast. In general, however, a leaching system can be described in terms of three substances:
a carrier, substance A;
a solute, substance B;
and a solvent, substance C.
Substances A and B are somewhat homogeneous in a system prior to the introduction of substance C. At the beginning of the leaching process, substance C dissolves the surficial substance B at a fairly high rate. The rate of dissolution decreases substantially once the solvent needs to penetrate the pores of substance A in order to continue reaching substance B. This penetration can often lead to dissolution of substance A, or to the extraction of more than one solute, both of which are unsatisfactory if selective leaching is desired. The physicochemical and biological properties of the carrier and solute should be considered when observing the leaching process, and certain properties may be more important depending on the material, the solvent, and their availability. These specific properties can include, but are not limited to:
Particle size
Solvent
Temperature
Agitation
Surface area
Homogeneity of the carrier and solute
Microorganism activity
Mineralogy
Intermediate products
Crystal structure
The general process is typically broken up and summarized into three parts:
Dissolution of surficial solute by solvent
Diffusion of inner-solute through the pores of the carrier to reach the solvent
Transfer of dissolved solute out of the system
Leaching processes for biological substances
Biological substances can experience leaching themselves, as well as be used for leaching as part of the solvent substance to recover heavy metals. Many plants experience leaching of phenolics, carbohydrates, and amino acids, and can lose as much as 30% of their mass to leaching, just from sources of water such as rain, dew, mist, and fog. These sources of water act as the solvent in the leaching process and can also leach organic nutrients from plants, such as free sugars, pectic substances, and sugar alcohols. This can in turn lead to more diversity among plant species that have more direct access to water. When this type of leaching removes an undesirable component from the solid by water, the process is called washing. A major concern with leaching from plants is that pesticides may be leached and carried away in stormwater runoff; controlling this is important not only for plant health but also because pesticides can be toxic to human and animal health.
Bioleaching is a term that describes the removal of metal cations from insoluble ores by biological oxidation and complexation processes. This process is used for the most part to extract copper, cobalt, nickel, zinc, and uranium from insoluble sulfides or oxides. Bioleaching processes can also be used in the re-use of fly ash by recovering aluminum using sulfuric acid.
Leaching processes for fly ash
Coal fly ash is a product that experiences heavy amounts of leaching during disposal. Though the re-use of fly ash in other materials such as concrete and bricks is encouraged, much of it in the United States is still disposed of in holding ponds, lagoons, landfills, and slag heaps. These disposal sites all contain water where washing effects can cause leaching of many different major elements, depending on the type of fly ash and the location where it originated. The leaching of fly ash is only concerning if the fly ash has not been disposed of properly, such as in the case of the Kingston Fossil Plant in Roane County, Tennessee. The structural failure at the Tennessee Valley Authority's Kingston Fossil Plant led to massive destruction throughout the area and serious levels of contamination downstream in both the Emory River and the Clinch River.
Leaching processes in soil
Leaching in soil is highly dependent on the characteristics of the soil, which makes modeling efforts difficult. Most leaching comes from infiltration of water, a washing effect much like that described for the leaching of biological substances. Leaching is typically described by solute transport models, such as Darcy's law, mass-flow expressions, and diffusion–dispersion relations. Leaching is controlled largely by the hydraulic conductivity of the soil, which depends on particle size and on the relative density to which the soil has been consolidated under stress. Diffusion is controlled by other factors such as pore size and soil skeleton, the tortuosity of the flow path, and the distribution of the solvent (water) and solutes.
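As a simple illustration of the Darcy's-law component of such transport models, the sketch below estimates the advective (washing) flux of water through a soil column; the hydraulic conductivity, head gradient, and porosity are assumed illustrative values, not measurements from the text.

```python
# A minimal sketch of Darcy's law, q = K * dh/dL, used to estimate how fast
# infiltrating water (and the solute it carries) moves through a soil column.
# The numbers are illustrative assumptions for a silty soil.
K = 1e-6           # hydraulic conductivity, m/s
head_drop = 0.5    # hydraulic head difference across the column, m
length = 1.0       # column length, m

q = K * head_drop / length        # Darcy flux (specific discharge), m/s
porosity = 0.4
v_seepage = q / porosity          # average linear (seepage) velocity, m/s

seconds_per_day = 86_400
print(f"Darcy flux       = {q * seconds_per_day * 1000:.1f} mm/day")
print(f"Seepage velocity = {v_seepage * seconds_per_day * 1000:.1f} mm/day")
```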
Leaching for mineral extraction
Leaching can sometimes be used to extract valuable materials from wastewater or raw materials. In the field of mineralogy, acid leaching is commonly used to extract metals such as vanadium, cobalt, nickel, manganese, and iron from raw or recycled materials. In recent years, more attention has been given to metal leaching as a way to recover precious metals from waste materials, for example the extraction of valuable metals from wastewater.
Leaching mechanisms
Because of the assortment of leaching processes, there are many variations in the data collected through laboratory methods and modeling, which makes the data hard to interpret. Not only is the specific leaching process important, but so is the focus of the experiment itself: it may be directed toward the mechanisms causing leaching, toward the mineralogy as a group or of individual minerals, or toward the solvent that causes the leaching. Most tests are done by evaluating mass loss due to a reagent, heat, or simply washing with water, and each leaching process is typically paired with its own set of standard laboratory tests.
Environmentally friendly leaching
Some recent work has been done to see whether organic acids can be used to leach lithium and cobalt from spent batteries, with some success. Experiments performed with varying temperatures and concentrations of malic acid show that the optimal conditions are a 2.0 mol/L solution of the organic acid at a temperature of 90 °C. The reaction had an overall efficiency exceeding 90% with no harmful byproducts.
4 LiCoO2(solid) + 12 C4H6O5(liquid) → 4 LiC4H5O5(liquid) + 4 Co(C4H5O5)2(liquid) + 6 H2O(liquid) + O2(gas)
The same analysis with citric acid showed similar results, with optimal conditions of 90 °C and a 1.5 mol/L solution of citric acid.
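From the balanced equation above, 3 moles of malic acid are consumed per mole of LiCoO2 (12:4); the short sketch below turns that stoichiometry into an acid demand per kilogram of cathode material, using standard molar masses (the 1 kg batch size is an assumed illustration, and real leaching runs typically use an excess of acid).

```python
# A minimal sketch of the stoichiometric acid demand implied by the balanced
# equation above: 12 mol malic acid per 4 mol LiCoO2, i.e. 3 mol acid per mol.
M_LICOO2 = 97.87       # g/mol (Li 6.94 + Co 58.93 + 2 x 16.00)
M_MALIC = 134.09       # g/mol of malic acid, C4H6O5
ACID_PER_LICOO2 = 12 / 4

mass_cathode_g = 1000.0                     # assumed 1 kg of spent LiCoO2
mol_licoo2 = mass_cathode_g / M_LICOO2
mol_acid = ACID_PER_LICOO2 * mol_licoo2
mass_acid_g = mol_acid * M_MALIC
volume_solution_L = mol_acid / 2.0          # at the reported 2.0 mol/L concentration

print(f"malic acid required: {mass_acid_g / 1000:.2f} kg")          # about 4.1 kg
print(f"solution volume at 2.0 mol/L: {volume_solution_L:.1f} L")   # about 15.3 L
```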
See also
Extraction
Leachate
Parboiling
Surfactant leaching
Sorption
Weathering
Abiogenesis
Abiogenesis is the natural process by which life arises from non-living matter, such as simple organic compounds. The prevailing scientific hypothesis is that the transition from non-living to living entities on Earth was not a single event, but a process of increasing complexity involving the formation of a habitable planet, the prebiotic synthesis of organic molecules, molecular self-replication, self-assembly, autocatalysis, and the emergence of cell membranes. The transition from non-life to life has never been observed experimentally, but many proposals have been made for different stages of the process.
The study of abiogenesis aims to determine how pre-life chemical reactions gave rise to life under conditions strikingly different from those on Earth today. It primarily uses tools from biology and chemistry, with more recent approaches attempting a synthesis of many sciences. Life functions through the specialized chemistry of carbon and water, and builds largely upon four key families of chemicals: lipids for cell membranes, carbohydrates such as sugars, amino acids for protein metabolism, and the nucleic acids DNA and RNA for the mechanisms of heredity. Any successful theory of abiogenesis must explain the origins and interactions of these classes of molecules.
Many approaches to abiogenesis investigate how self-replicating molecules, or their components, came into existence. Researchers generally think that current life descends from an RNA world, although other self-replicating and self-catalyzing molecules may have preceded RNA. Other approaches ("metabolism-first" hypotheses) focus on understanding how catalysis in chemical systems on the early Earth might have provided the precursor molecules necessary for self-replication. The classic 1952 Miller–Urey experiment demonstrated that most amino acids, the chemical constituents of proteins, can be synthesized from inorganic compounds under conditions intended to replicate those of the early Earth. External sources of energy may have triggered these reactions, including lightning, radiation, atmospheric entries of micro-meteorites and implosion of bubbles in sea and ocean waves.
While the last universal common ancestor of all modern organisms (LUCA) is thought to have been quite different from the origin of life, investigations into LUCA can guide research into early universal characteristics. A genomics approach has sought to characterise LUCA by identifying the genes shared by Archaea and Bacteria, members of the two major branches of life (with Eukaryotes included in the archaean branch in the two-domain system). It appears there are 355 genes common to all life; their functions imply that the LUCA was anaerobic with the Wood–Ljungdahl pathway, deriving energy by chemiosmosis, and maintaining its hereditary material with DNA, the genetic code, and ribosomes. Although the LUCA lived over 4 billion years ago (4 Gya), researchers believe it was far from the first form of life. Earlier cells might have had a leaky membrane and been powered by a naturally occurring proton gradient near a deep-sea white smoker hydrothermal vent.
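At its core, the genomics approach described above looks for gene families represented across both prokaryotic domains; a toy sketch of that intersection logic follows. The genome and gene-family names and the tiny scale are invented for illustration, and real analyses such as the 355-gene study apply phylogenetic criteria far more stringent than simple set intersection.

```python
# A toy sketch of the "shared gene families" idea behind reconstructions of LUCA.
# Genome contents here are invented; real studies compare thousands of genomes and
# require that each family's phylogeny be consistent with vertical inheritance.
archaea = {
    "Methanococcus": {"ribosomal_protein_S12", "CODH_ACS", "ATP_synthase_A"},
    "Sulfolobus":    {"ribosomal_protein_S12", "ATP_synthase_A", "reverse_gyrase"},
}
bacteria = {
    "Clostridium":   {"ribosomal_protein_S12", "CODH_ACS", "ATP_synthase_A"},
    "Thermotoga":    {"ribosomal_protein_S12", "ATP_synthase_A"},
}

def core_families(genomes: dict[str, set[str]]) -> set[str]:
    """Gene families present in every genome of the group."""
    families = iter(genomes.values())
    core = set(next(families))
    for g in families:
        core &= g
    return core

shared = core_families(archaea) & core_families(bacteria)
print(shared)   # families present in all sampled archaea and all sampled bacteria
```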
Earth remains the only place in the universe known to harbor life. Geochemical and fossil evidence from the Earth informs most studies of abiogenesis. The Earth was formed at 4.54 Gya, and the earliest evidence of life on Earth dates from at least 3.8 Gya from Western Australia. Some studies have suggested that fossil micro-organisms may have lived within hydrothermal vent precipitates dated 3.77 to 4.28 Gya from Quebec, soon after ocean formation 4.4 Gya during the Hadean.
Overview
Life consists of reproduction with (heritable) variations. NASA defines life as "a self-sustaining chemical system capable of Darwinian [i.e., biological] evolution." Such a system is complex; the last universal common ancestor (LUCA), presumably a single-celled organism which lived some 4 billion years ago, already had hundreds of genes encoded in the DNA genetic code that is universal today. That in turn implies a suite of cellular machinery including messenger RNA, transfer RNA, and ribosomes to translate the code into proteins. Those proteins included enzymes to operate its anaerobic respiration via the Wood–Ljungdahl metabolic pathway, and a DNA polymerase to replicate its genetic material.
The challenge for abiogenesis (origin of life) researchers is to explain how such a complex and tightly interlinked system could develop by evolutionary steps, as at first sight all its parts are necessary to enable it to function. For example, a cell, whether the LUCA or in a modern organism, copies its DNA with the DNA polymerase enzyme, which is in turn produced by translating the DNA polymerase gene in the DNA. Neither the enzyme nor the DNA can be produced without the other. The evolutionary process could have involved molecular self-replication, self-assembly such as of cell membranes, and autocatalysis via RNA ribozymes. Nonetheless, the transition of non-life to life has never been observed experimentally, nor has there been a satisfactory chemical explanation.
The preconditions to the development of a living cell like the LUCA are clear enough, though disputed in their details: a habitable world is formed with a supply of minerals and liquid water. Prebiotic synthesis creates a range of simple organic compounds, which are assembled into polymers such as proteins and RNA. On the other side, the process after the LUCA is readily understood: biological evolution caused the development of a wide range of species with varied forms and biochemical capabilities. However, the derivation of living things such as LUCA from simple components is far from understood.
Although Earth remains the only place where life is known, the science of astrobiology seeks evidence of life on other planets. The 2015 NASA strategy on the origin of life aimed to solve the puzzle by identifying interactions, intermediary structures and functions, energy sources, and environmental factors that contributed to the diversity, selection, and replication of evolvable macromolecular systems, and mapping the chemical landscape of potential primordial informational polymers. The advent of polymers that could replicate, store genetic information, and exhibit properties subject to selection was, it suggested, most likely a critical step in the emergence of prebiotic chemical evolution. Those polymers derived, in turn, from simple organic compounds such as nucleobases, amino acids, and sugars that could have been formed by reactions in the environment. A successful theory of the origin of life must explain how all these chemicals came into being.
Pre-1960s conceptual history
Spontaneous generation
One ancient view of the origin of life, from Aristotle until the 19th century, is of spontaneous generation. This theory held that "lower" animals such as insects were generated by decaying organic substances, and that life arose by chance. This was questioned from the 17th century, in works like Thomas Browne's Pseudodoxia Epidemica. In 1665, Robert Hooke published the first drawings of a microorganism. In 1676, Antonie van Leeuwenhoek drew and described microorganisms, probably protozoa and bacteria. Van Leeuwenhoek disagreed with spontaneous generation, and by the 1680s convinced himself, using experiments ranging from sealed and open meat incubation and the close study of insect reproduction, that the theory was incorrect. In 1668 Francesco Redi showed that no maggots appeared in meat when flies were prevented from laying eggs. By the middle of the 19th century, spontaneous generation was considered disproven.
Panspermia
Another ancient idea dating back to Anaxagoras in the 5th century BC is panspermia, the idea that life exists throughout the universe, distributed by meteoroids, asteroids, comets and planetoids. It does not attempt to explain how life itself originated, but shifts the origin of life on Earth to another heavenly body. The advantage is that life is not required to have formed on each planet it occurs on, but rather in a more limited set of locations, or even a single location, and then to have spread about the galaxy to other star systems via cometary or meteorite impact. Panspermia has not received much scientific support because it largely defers the question of life's origin rather than explaining observable phenomena. Although interest in panspermia grew when the study of meteorites found traces of organic materials in them, it is currently accepted that life started locally on Earth.
"A warm little pond": primordial soup
The idea that life originated from non-living matter in slow stages appeared in Herbert Spencer's 1864–1867 book Principles of Biology, and in William Turner Thiselton-Dyer's 1879 paper "On spontaneous generation and evolution". On 1 February 1871 Charles Darwin wrote about these publications to Joseph Hooker, and set out his own speculation, suggesting that the original spark of life may have begun in a "warm little pond, with all sorts of ammonia and phosphoric salts, light, heat, electricity, &c., present, that a compound was chemically formed ready to undergo still more complex changes." Darwin went on to explain that "at the present day such matter would be instantly devoured or absorbed, which would not have been the case before living creatures were formed."
Alexander Oparin in 1924 and J. B. S. Haldane in 1929 proposed that the first molecules constituting the earliest cells slowly self-organized from a primordial soup, and this theory is called the Oparin–Haldane hypothesis. Haldane suggested that the Earth's prebiotic oceans consisted of a "hot dilute soup" in which organic compounds could have formed. J. D. Bernal showed that such mechanisms could form most of the necessary molecules for life from inorganic precursors. In 1967, he suggested three "stages": the origin of biological monomers; the origin of biological polymers; and the evolution from molecules to cells.
Miller–Urey experiment
In 1952, Stanley Miller and Harold Urey carried out a chemical experiment to demonstrate how organic molecules could have formed spontaneously from inorganic precursors under prebiotic conditions like those posited by the Oparin–Haldane hypothesis. It used a highly reducing (lacking oxygen) mixture of gases—methane, ammonia, and hydrogen, as well as water vapor—to form simple organic monomers such as amino acids. Bernal said of the Miller–Urey experiment that "it is not enough to explain the formation of such molecules, what is necessary, is a physical-chemical explanation of the origins of these molecules that suggests the presence of suitable sources and sinks for free energy." However, current scientific consensus describes the primitive atmosphere as weakly reducing or neutral, diminishing the amount and variety of amino acids that could be produced. The addition of iron and carbonate minerals, present in early oceans, however, produces a diverse array of amino acids. Later work has focused on two other potential reducing environments: outer space and deep-sea hydrothermal vents.
Producing a habitable Earth
Evolutionary history
Early universe with first stars
Soon after the Big Bang, which occurred roughly 14 Gya, the only chemical elements present in the universe were hydrogen, helium, and lithium, the three lightest atoms in the periodic table. These elements gradually accreted and began orbiting in disks of gas and dust. Gravitational accretion of material at the hot and dense centers of these protoplanetary disks formed stars by the fusion of hydrogen. Early stars were massive and short-lived, producing all the heavier elements through stellar nucleosynthesis. Element formation through stellar nucleosynthesis proceeds up to the most stable nucleus, iron-56. Heavier elements were formed during supernovae at the end of a star's lifecycle. Carbon, currently the fourth most abundant chemical element in the universe (after hydrogen, helium, and oxygen), was formed mainly in white dwarf stars, particularly those bigger than twice the mass of the sun. As these stars reached the end of their lifecycles, they ejected these heavier elements, among them carbon and oxygen, throughout the universe. These heavier elements allowed for the formation of new objects, including rocky planets and other bodies. According to the nebular hypothesis, the formation and evolution of the Solar System began 4.6 Gya with the gravitational collapse of a small part of a giant molecular cloud. Most of the collapsing mass collected in the center, forming the Sun, while the rest flattened into a protoplanetary disk out of which the planets, moons, asteroids, and other small Solar System bodies formed.
Emergence of Earth
The age of the Earth is 4.54 Gya as found by radiometric dating of calcium-aluminium-rich inclusions in carbonaceous chondrite meteorites, the oldest material in the Solar System. The Hadean Earth (from its formation until 4 Gya) was at first inhospitable to any living organisms. During its formation, the Earth lost a significant part of its initial mass, and consequently lacked the gravity to hold molecular hydrogen and the bulk of the original inert gases. Soon after initial accretion of Earth at 4.48 Ga, its collision with Theia, a hypothesised impactor, is thought to have created the ejected debris that would eventually form the Moon. This impact would have removed the Earth's primary atmosphere, leaving behind clouds of viscous silicates and carbon dioxide. This unstable atmosphere was short-lived and condensed shortly after to form the bulk silicate Earth, leaving behind an atmosphere largely consisting of water vapor, nitrogen, and carbon dioxide, with smaller amounts of carbon monoxide, hydrogen, and sulfur compounds. The solution of carbon dioxide in water is thought to have made the seas slightly acidic, with a pH of about 5.5.
Condensation to form liquid oceans is theorised to have occurred as early as the Moon-forming impact. This scenario has found support from the dating of 4.404 Gya zircon crystals with high δ18O values from metamorphosed quartzite of Mount Narryer in Western Australia. The Hadean atmosphere has been characterized as a "gigantic, productive outdoor chemical laboratory," similar to volcanic gases today which still support some abiotic chemistry. Despite the likely increased volcanism from early plate tectonics, the Earth may have been a predominantly water world between 4.4 and 4.3 Gya. It is debated whether crust was exposed above this ocean, owing to uncertainties about what early plate tectonics looked like. For early life to have developed, it is generally thought that a land setting is required, so this question is essential to determining when in Earth's history life evolved. The post-Moon-forming impact Earth likely existed with little if any continental crust, a turbulent atmosphere, and a hydrosphere subject to intense ultraviolet light from a T Tauri stage Sun, from cosmic radiation, and from continued asteroid and comet impacts. Despite all this, niche environments conducive to life likely existed in the late Hadean to early Archaean.
The Late Heavy Bombardment hypothesis posits that a period of intense impact occurred at ~3.9 Gya during the Hadean. A cataclysmic impact event would have had the potential to sterilise all life on Earth by volatilising liquid oceans and blocking the Sun needed for photosynthesising primary producers, pushing back the earliest possible emergence of life to after the Late Heavy Bombardment. Recent research questions both the intensity of the Late Heavy Bombardment as well as its potential for sterilisation. Uncertainty as to whether the Late Heavy Bombardment was one giant impact or a period of elevated impact rates greatly changes the implications for its destructive power. The 3.9 Ga date arises from dating of Apollo mission sample returns collected mostly near the Imbrium Basin, biasing the age of recorded impacts. Impact modelling of the lunar surface reveals that rather than a cataclysmic event at 3.9 Ga, multiple small-scale, short-lived periods of bombardment likely occurred. Terrestrial data back this idea by showing multiple periods of ejecta in the rock record both before and after the 3.9 Ga marker, suggesting that the early Earth was subject to continuous impacts that would not have been as destructive to life as previously thought. If the Late Heavy Bombardment did not take place, this would allow the emergence of life to have taken place far before 3.9 Ga.
If life evolved in the ocean at depths of more than ten meters, it would have been shielded both from late impacts and the then high levels of ultraviolet radiation from the sun. Geothermically heated oceanic crust could have yielded far more organic compounds through deep hydrothermal vents than the Miller–Urey experiments indicated. The available energy is maximized at 100–150 °C, the temperatures at which hyperthermophilic bacteria and thermoacidophilic archaea live.
Earliest evidence of life
The exact timing at which life emerged on Earth is unknown. Minimum age estimates are based on evidence from the geologic rock record. The earliest physical evidence of life so far found consists of microbialites in the Nuvvuagittuq Greenstone Belt of Northern Quebec, in banded iron formation rocks at least 3.77 and possibly as old as 4.32 Gya. The micro-organisms lived within hydrothermal vent precipitates, soon after the 4.4 Gya formation of oceans during the Hadean. The microbes resembled modern hydrothermal vent bacteria, supporting the view that abiogenesis began in such an environment.
Biogenic graphite has been found in 3.7 Gya metasedimentary rocks from southwestern Greenland and in microbial mat fossils from 3.49 Gya cherts in the Pilbara region of Western Australia. Evidence of early life in rocks from Akilia Island, near the Isua supracrustal belt in southwestern Greenland, dating to 3.7 Gya, has shown biogenic carbon isotopes. In other parts of the Isua supracrustal belt, graphite inclusions trapped within garnet crystals are connected to the other elements of life: oxygen, nitrogen, and possibly phosphorus in the form of phosphate, providing further evidence for life 3.7 Gya. In the Pilbara region of Western Australia, compelling evidence of early life was found in pyrite-bearing sandstone in a fossilized beach, with rounded tubular cells that oxidized sulfur by photosynthesis in the absence of oxygen. Carbon isotope ratios on graphite inclusions from the Jack Hills zircons suggest that life could have existed on Earth from 4.1 Gya.
The Pilbara region of Western Australia contains the Dresser Formation, with rocks dated to 3.48 Gya, including layered structures called stromatolites. Their modern counterparts are created by photosynthetic micro-organisms including cyanobacteria. These lie within undeformed hydrothermal-sedimentary strata; their texture indicates a biogenic origin. Parts of the Dresser Formation preserve hot springs on land, but other regions seem to have been shallow seas. A molecular clock analysis suggests the LUCA emerged prior to the Late Heavy Bombardment (3.9 Gya).
Producing molecules: prebiotic synthesis
All chemical elements except for hydrogen and helium derive from stellar nucleosynthesis. The basic chemical ingredients of life – the carbon-hydrogen molecule (CH), the carbon-hydrogen positive ion (CH+) and the carbon ion (C+) – were produced by ultraviolet light from stars. Complex molecules, including organic molecules, form naturally both in space and on planets. Organic molecules on the early Earth could have had either terrestrial origins, with organic molecule synthesis driven by impact shocks or by other energy sources, such as ultraviolet light, redox coupling, or electrical discharges; or extraterrestrial origins (pseudo-panspermia), with organic molecules formed in interstellar dust clouds raining down on to the planet.
Observed extraterrestrial organic molecules
An organic compound is a chemical whose molecules contain carbon. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets. Organic compounds are relatively common in space, formed by "factories of complex molecular synthesis" which occur in molecular clouds and circumstellar envelopes, and chemically evolve after reactions are initiated mostly by ionizing radiation. Purine and pyrimidine nucleobases including guanine, adenine, cytosine, uracil, and thymine have been found in meteorites. These could have provided the materials for DNA and RNA to form on the early Earth. The amino acid glycine was found in material ejected from comet Wild 2; it had earlier been detected in meteorites. Comets are encrusted with dark material, thought to be a tar-like organic substance formed from simple carbon compounds under ionizing radiation. A rain of material from comets could have brought such complex organic molecules to Earth. It is estimated that during the Late Heavy Bombardment, meteorites may have delivered up to five million tons of organic prebiotic elements to Earth per year.
PAH world hypothesis
Polycyclic aromatic hydrocarbons (PAH) are the most common and abundant polyatomic molecules in the observable universe, and are a major store of carbon. They seem to have formed shortly after the Big Bang, and are associated with new stars and exoplanets. They are a likely constituent of Earth's primordial sea. PAHs have been detected in nebulae, and in the interstellar medium, in comets, and in meteorites.
The PAH world hypothesis posits PAHs as precursors to the RNA world. A star, HH 46-IR, resembling the sun early in its life, is surrounded by a disk of material which contains molecules including cyanide compounds, hydrocarbons, and carbon monoxide. PAHs in the interstellar medium can be transformed through hydrogenation, oxygenation, and hydroxylation to more complex organic compounds used in living cells.
Nucleobases and nucleotides
The majority of organic compounds introduced on Earth by interstellar dust particles have helped to form complex molecules, thanks to their peculiar surface-catalytic activities. Studies of the 12C/13C isotopic ratios of organic compounds in the Murchison meteorite suggest that the RNA component uracil and related molecules, including xanthine, were formed extraterrestrially. NASA studies of meteorites suggest that all four DNA nucleobases (adenine, guanine and related organic molecules) have been formed in outer space. The cosmic dust permeating the universe contains complex organics ("amorphous organic solids with a mixed aromatic–aliphatic structure") that could be created rapidly by stars. Glycolaldehyde, a sugar molecule and RNA precursor, has been detected in regions of space including around protostars and on meteorites.
Laboratory synthesis
As early as the 1860s, experiments demonstrated that biologically relevant molecules can be produced from interaction of simple carbon sources with abundant inorganic catalysts. The spontaneous formation of complex polymers from abiotically generated monomers under the conditions posited by the "soup" theory is not straightforward. Besides the necessary basic organic monomers, compounds that would have prohibited the formation of polymers were also formed in high concentration during the Miller–Urey and Joan Oró experiments. Biology uses essentially 20 amino acids for its coded protein enzymes, representing a very small subset of the structurally possible products. Since life tends to use whatever is available, an explanation is needed for why the set used is so small. Formamide is attractive as a medium that potentially provided a source of amino acid derivatives from simple aldehyde and nitrile feedstocks.
Sugars
Alexander Butlerov showed in 1861 that the formose reaction created sugars including tetroses, pentoses, and hexoses when formaldehyde is heated under basic conditions with divalent metal ions like calcium. R. Breslow proposed that the reaction was autocatalytic in 1959.
Nucleobases
Nucleobases, such as guanine and adenine, can be synthesized from simple carbon and nitrogen sources, such as hydrogen cyanide (HCN) and ammonia. Formamide produces all four ribonucleotides when warmed with terrestrial minerals. Formamide is ubiquitous in the Universe, produced by the reaction of water and HCN. It can be concentrated by the evaporation of water. HCN is poisonous only to aerobic organisms (eukaryotes and aerobic bacteria), which did not yet exist. It can play roles in other chemical processes such as the synthesis of the amino acid glycine.
DNA and RNA components including uracil, cytosine and thymine can be synthesized under outer space conditions, using starting chemicals such as pyrimidine found in meteorites. Pyrimidine may have been formed in red giant stars or in interstellar dust and gas clouds. All four RNA-bases may be synthesized from formamide in high-energy density events like extraterrestrial impacts.
Other pathways for synthesizing bases from inorganic materials have been reported. Freezing temperatures are advantageous for the synthesis of purines, due to the concentrating effect for key precursors such as hydrogen cyanide. However, while adenine and guanine require freezing conditions for synthesis, cytosine and uracil may require boiling temperatures. Seven amino acids and eleven types of nucleobases formed in ice when ammonia and cyanide were left in a freezer for 25 years. S-triazines (alternative nucleobases), pyrimidines including cytosine and uracil, and adenine can be synthesized by subjecting a urea solution to freeze-thaw cycles under a reductive atmosphere, with spark discharges as an energy source. The explanation given for the unusual speed of these reactions at such a low temperature is eutectic freezing, which crowds impurities in microscopic pockets of liquid within the ice, causing the molecules to collide more often.
Peptides
Prebiotic peptide synthesis is proposed to have occurred through a number of possible routes. Some center on high temperature/concentration conditions in which condensation becomes energetically favorable, while others focus on the availability of plausible prebiotic condensing agents.
Experimental evidence for the formation of peptides in uniquely concentrated environments is bolstered by work suggesting that wet-dry cycles and the presence of specific salts can greatly increase spontaneous condensation of glycine into poly-glycine chains. Other work suggests that while mineral surfaces, such as those of pyrite, calcite, and rutile catalyze peptide condensation, they also catalyze their hydrolysis. The authors suggest that additional chemical activation or coupling would be necessary to produce peptides at sufficient concentrations. Thus, mineral surface catalysis, while important, is not sufficient alone for peptide synthesis.
Many prebiotically plausible condensing/activating agents have been identified, including the following: cyanamide, dicyanamide, dicyandiamide, diaminomaleonitrile, urea, trimetaphosphate, NaCl, CuCl2, (Ni,Fe)S, CO, carbonyl sulfide (COS), carbon disulfide (CS2), SO2, and diammonium phosphate (DAP).
An experiment reported in 2024 used a sapphire substrate with a web of thin cracks under a heat flow, similar to the environment of deep-ocean vents, as a mechanism to separate and concentrate prebiotically relevant building blocks from a dilute mixture, increasing their concentration by up to three orders of magnitude. The authors propose this as a plausible model for the origin of complex biopolymers. This presents another physical process that allows for concentrated peptide precursors to combine in the right conditions. A similar role of increasing amino acid concentration has been suggested for clays as well.
While all of these scenarios involve the condensation of amino acids, the prebiotic synthesis of peptides from simpler molecules such as CO, NH3 and C, skipping the step of amino acid formation, is very efficient.
Producing suitable vesicles
The largest unanswered question in evolution is how simple protocells first arose and differed in reproductive contribution to the following generation, thus initiating the evolution of life. The lipid world theory postulates that the first self-replicating object was lipid-like. Phospholipids form lipid bilayers in water while under agitation—the same structure as in cell membranes. These molecules were not present on early Earth, but other amphiphilic long-chain molecules also form membranes. These bodies may expand by insertion of additional lipids, and may spontaneously split into two offspring of similar size and composition. Lipid bodies may have provided sheltering envelopes for information storage, allowing the evolution and preservation of polymers like RNA that store information. Only one or two types of amphiphiles have been studied which might have led to the development of vesicles. There is an enormous number of possible arrangements of lipid bilayer membranes, and those with the best reproductive characteristics would have converged toward a hypercycle reaction, a positive feedback composed of two mutual catalysts represented by a membrane site and a specific compound trapped in the vesicle. Such site/compound pairs are transmissible to the daughter vesicles leading to the emergence of distinct lineages of vesicles, which would have allowed natural selection.
A protocell is a self-organized, self-ordered, spherical collection of lipids proposed as a stepping-stone to the origin of life. A functional protocell has (as of 2014) not yet been achieved in a laboratory setting. Self-assembled vesicles are essential components of primitive cells. The theory of classical irreversible thermodynamics treats self-assembly under a generalized chemical potential within the framework of dissipative systems. The second law of thermodynamics requires that overall entropy increases, yet life is distinguished by its great degree of organization. Therefore, a boundary is needed to separate ordered life processes from chaotic non-living matter.
Irene Chen and Jack W. Szostak suggest that elementary protocells can give rise to cellular behaviors including primitive forms of differential reproduction, competition, and energy storage. Competition for membrane molecules would favor stabilized membranes, suggesting a selective advantage for the evolution of cross-linked fatty acids and even the phospholipids of today. Such micro-encapsulation would allow for metabolism within the membrane and the exchange of small molecules, while retaining large biomolecules inside. Such a membrane is needed for a cell to create its own electrochemical gradient to store energy by pumping ions across the membrane. Fatty acid vesicles in conditions relevant to alkaline hydrothermal vents can be stabilized by isoprenoids which are synthesized by the formose reaction; the advantages and disadvantages of isoprenoids incorporated within the lipid bilayer in different microenvironments might have led to the divergence of the membranes of archaea and bacteria.
Laboratory experiments have shown that vesicles can undergo an evolutionary process under pressure cycling conditions. Simulating the systemic environment in tectonic fault zones within the Earth's crust, pressure cycling leads to the periodic formation of vesicles. Under the same conditions, random peptide chains form and are continuously selected for their ability to integrate into the vesicle membrane. A further selection of the vesicles for their stability potentially leads to the development of functional peptide structures, associated with an increase in the survival rate of the vesicles.
Producing biology
Energy and entropy
Life requires a loss of entropy, or disorder, as molecules organize themselves into living matter. At the same time, the emergence of life is associated with the formation of structures beyond a certain threshold of complexity. The emergence of life with increasing order and complexity does not contradict the second law of thermodynamics, which states that overall entropy never decreases, since a living organism creates order in some places (e.g. its living body) at the expense of an increase of entropy elsewhere (e.g. heat and waste production).
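In thermodynamic terms (a standard bookkeeping argument, added here for illustration rather than taken from the sources above), the second law constrains only the total entropy change of the organism plus its surroundings:

    \Delta S_{\mathrm{total}} = \Delta S_{\mathrm{organism}} + \Delta S_{\mathrm{surroundings}} \ge 0

so the entropy of the organism can decrease (order can increase) as long as the entropy exported to the surroundings, for example as heat and waste, is at least as large.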
Multiple sources of energy were available for chemical reactions on the early Earth. Heat from geothermal processes is a standard energy source for chemistry. Other examples include sunlight, lightning, atmospheric entries of micro-meteorites, and implosion of bubbles in sea and ocean waves. Experiments and simulations confirm that such energy sources can drive prebiotic chemical reactions.
Unfavorable reactions can be driven by highly favorable ones, as in the case of iron-sulfur chemistry. For example, this was probably important for carbon fixation. Carbon fixation by reaction of CO2 with H2S via iron-sulfur chemistry is favorable, and occurs at neutral pH and 100 °C. Iron-sulfur surfaces, which are abundant near hydrothermal vents, can drive the production of small amounts of amino acids and other biomolecules.
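One representative reaction often cited in the iron-sulfur world literature is the formation of pyrite from iron monosulfide and hydrogen sulfide, proposed by Wächtershäuser as a primordial source of reducing power; the reaction and the approximate free-energy value quoted for it are given here for illustration only:

    FeS + H2S → FeS2 + H2        (ΔG°' ≈ −38 kJ/mol)

The hydrogen released, or the favorable free energy of the reaction itself, could in principle be coupled to otherwise unfavorable syntheses such as carbon fixation.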
Chemiosmosis
In 1961, Peter Mitchell proposed chemiosmosis as a cell's primary system of energy conversion. The mechanism, now ubiquitous in living cells, powers energy conversion in micro-organisms and in the mitochondria of eukaryotes, making it a likely candidate for early life. Mitochondria produce adenosine triphosphate (ATP), the energy currency of the cell used to drive cellular processes such as chemical syntheses. The mechanism of ATP synthesis involves a closed membrane in which the ATP synthase enzyme is embedded. The energy required to release strongly bound ATP has its origin in protons that move across the membrane. In modern cells, those proton movements are caused by the pumping of ions across the membrane, maintaining an electrochemical gradient. In the first organisms, the gradient could have been provided by the difference in chemical composition between the flow from a hydrothermal vent and the surrounding seawater, or, in a terrestrial setting, perhaps by meteoritic quinones conducive to the development of chemiosmotic energy across lipid membranes.
The RNA world
The RNA world hypothesis describes an early Earth with self-replicating and catalytic RNA but no DNA or proteins. Many researchers concur that an RNA world must have preceded the DNA-based life that now dominates. However, RNA-based life may not have been the first to exist. Another model echoes Darwin's "warm little pond" with cycles of wetting and drying.
RNA is central to the translation process. Small RNAs can catalyze the chemical reactions and information transfers required for life. RNA both expresses and maintains genetic information in modern organisms; and the chemical components of RNA are easily synthesized under conditions approximating those of the early Earth, which were very different from those that prevail today. The structure of the ribosome has been called the "smoking gun", with a central core of RNA and no amino acid side chains within 18 Å of the active site that catalyzes peptide bond formation.
The concept of the RNA world was proposed in 1962 by Alexander Rich, and the term was coined by Walter Gilbert in 1986. There were initial difficulties in the explanation of the abiotic synthesis of the nucleotides cytosine and uracil. Subsequent research has shown possible routes of synthesis; for example, formamide produces all four ribonucleotides and other biological molecules when warmed in the presence of various terrestrial minerals.
RNA replicase can function as both code and catalyst for further RNA replication, i.e. it can be autocatalytic. Jack Szostak has shown that certain catalytic RNAs can join smaller RNA sequences together, creating the potential for self-replication. The RNA replication systems, which include two ribozymes that catalyze each other's synthesis, showed a doubling time of the product of about one hour, and were subject to natural selection under the experimental conditions. If such conditions were present on early Earth, then natural selection would favor the proliferation of such autocatalytic sets, to which further functionalities could be added. Self-assembly of RNA may occur spontaneously in hydrothermal vents. A preliminary form of tRNA could have assembled into such a replicator molecule.
Possible precursors to protein synthesis include the synthesis of short peptide cofactors or the self-catalysing duplication of RNA. It is likely that the ancestral ribosome was composed entirely of RNA, although some roles have since been taken over by proteins. Major remaining questions on this topic include identifying the selective force for the evolution of the ribosome and determining how the genetic code arose.
Eugene Koonin has argued that "no compelling scenarios currently exist for the origin of replication and translation, the key processes that together comprise the core of biological systems and the apparent pre-requisite of biological evolution. The RNA World concept might offer the best chance for the resolution of this conundrum but so far cannot adequately account for the emergence of an efficient RNA replicase or the translation system."
From RNA to directed protein synthesis
In line with the RNA world hypothesis, much of modern biology's templated protein biosynthesis is done by RNA molecules—namely tRNAs and the ribosome (consisting of both protein and rRNA components). The most central reaction of peptide bond synthesis is understood to be carried out by base catalysis by the 23S rRNA domain V. Experimental evidence has demonstrated successful di- and tripeptide synthesis with a system consisting of only aminoacyl phosphate adaptors and RNA guides, which could be a possible stepping stone between an RNA world and modern protein synthesis. Aminoacylation ribozymes that can charge tRNAs with their cognate amino acids have also been selected in in vitro experimentation. The authors also extensively mapped fitness landscapes within their selection to find that chance emergence of active sequences was more important than sequence optimization.
Early functional peptides
The first proteins would have had to arise without a fully-fledged system of protein biosynthesis. As discussed above, numerous mechanisms for the prebiotic synthesis of polypeptides exist. However, these random sequence peptides would not have likely had biological function. Thus, significant study has gone into exploring how early functional proteins could have arisen from random sequences. First, some evidence on hydrolysis rates shows that abiotically plausible peptides likely contained significant "nearest-neighbor" biases. This could have had some effect on early protein sequence diversity. In other work by Anthony Keefe and Jack Szostak, mRNA display selection on a library of 6×10^12 80-mers was used to search for sequences with ATP binding activity. They concluded that approximately 1 in 10^11 random sequences had ATP binding function. While this is a single example of functional frequency in the random sequence space, the methodology can serve as a powerful simulation tool for understanding early protein evolution.
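For scale (simple arithmetic on the figures just quoted, not an additional result from the study), the reported frequency implies that the starting library should have contained on the order of

    6 \times 10^{12} \times 10^{-11} \approx 60

independent ATP-binding sequences, illustrating that functional sequences can be rare yet still present in a sufficiently large random pool.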
Phylogeny and LUCA
Starting with the work of Carl Woese from 1977, genomics studies have placed the last universal common ancestor (LUCA) of all modern life-forms between Bacteria and a clade formed by Archaea and Eukaryota in the phylogenetic tree of life. It lived over 4 Gya. A minority of studies have placed the LUCA in Bacteria, proposing that Archaea and Eukaryota are evolutionarily derived from within Eubacteria; Thomas Cavalier-Smith suggested in 2006 that the phenotypically diverse bacterial phylum Chloroflexota contained the LUCA.
In 2016, a set of 355 genes likely present in the LUCA was identified. A total of 6.1 million prokaryotic genes from Bacteria and Archaea were sequenced, identifying 355 protein clusters from among 286,514 protein clusters that were probably common to the LUCA. The results suggest that the LUCA was anaerobic, with a Wood–Ljungdahl (reductive acetyl-CoA) pathway, nitrogen- and carbon-fixing, and thermophilic. Its cofactors suggest dependence upon an environment rich in hydrogen, carbon dioxide, iron, and transition metals. Its genetic material was probably DNA, requiring the 4-nucleotide genetic code, messenger RNA, transfer RNA, and ribosomes to translate the code into proteins such as enzymes. LUCA likely inhabited an anaerobic hydrothermal vent setting in a geochemically active environment. It was evidently already a complex organism, and must have had precursors; it was not the first living thing. The physiology of LUCA has been in dispute.
Leslie Orgel argued that early translation machinery for the genetic code would be susceptible to error catastrophe. Geoffrey Hoffmann however showed that such machinery can be stable in function against "Orgel's paradox". Metabolic reactions that have also been inferred in LUCA are the incomplete reverse Krebs cycle, gluconeogenesis, the pentose phosphate pathway, glycolysis, reductive amination, and transamination.
Suitable geological environments
A variety of geologic and environmental settings have been proposed for an origin of life. These theories are often in competition with one another as there are many differing views of prebiotic compound availability, geophysical setting, and early life characteristics. The first organism on Earth likely looked different from LUCA. Between the first appearance of life and where all modern phylogenies began branching, an unknown amount of time passed, with unknown gene transfers, extinctions, and evolutionary adaptation to various environmental niches. One major shift is believed to be from the RNA world to an RNA-DNA-protein world. Modern phylogenies provide more pertinent genetic evidence about LUCA than about its precursors.
The most popular hypotheses for settings for the origin of life are deep sea hydrothermal vents and surface bodies of water. Surface waters can be classified into hot springs, moderate temperature lakes and ponds, and cold settings.
Deep sea hydrothermal vents
Hot fluids
Early micro-fossils may have come from a hot world of gases such as methane, ammonia, carbon dioxide, and hydrogen sulfide, toxic to much current life. Analysis of the tree of life places thermophilic and hyperthermophilic bacteria and archaea closest to the root, suggesting that life may have evolved in a hot environment. The deep sea or alkaline hydrothermal vent theory posits that life began at submarine hydrothermal vents. William Martin and Michael Russell have suggested "that life evolved in structured iron monosulphide precipitates in a seepage site hydrothermal mound at a redox, pH, and temperature gradient between sulphide-rich hydrothermal fluid and iron(II)-containing waters of the Hadean ocean floor. The naturally arising, three-dimensional compartmentation observed within fossilized seepage-site metal sulphide precipitates indicates that these inorganic compartments were the precursors of cell walls and membranes found in free-living prokaryotes. The known capability of FeS and NiS to catalyze the synthesis of the acetyl-methylsulphide from carbon monoxide and methylsulphide, constituents of hydrothermal fluid, indicates that pre-biotic syntheses occurred at the inner surfaces of these metal-sulphide-walled compartments".
These form where hydrogen-rich fluids emerge from below the sea floor, as a result of serpentinization of ultra-mafic olivine with seawater and a pH interface with carbon dioxide-rich ocean water. The vents form a sustained chemical energy source derived from redox reactions, in which electron donors (molecular hydrogen) react with electron acceptors (carbon dioxide); see iron–sulfur world theory. These are exothermic reactions.
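Serpentinization is often summarized with a schematic reaction for the iron-rich (fayalite) end-member of olivine; this simplified equation is given for illustration, since natural olivines also contain magnesium:

    3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 2 H2

The molecular hydrogen generated in this way is the electron donor invoked in the vent scenario.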
Chemiosmotic gradient
Russell demonstrated that alkaline vents created an abiogenic chemiosmotic gradient (a proton motive force) ideal for abiogenesis. Their microscopic compartments, composed of iron-sulfur minerals such as mackinawite, "provide a natural means of concentrating organic molecules" and endowed these mineral cells with the catalytic properties envisaged by Günter Wächtershäuser. This movement of ions across the membrane depends on a combination of two factors:
Diffusion force caused by concentration gradient—all particles including ions tend to diffuse from higher concentration to lower.
Electrostatic force caused by electrical potential gradient—cations like protons H+ tend to diffuse down the electrical potential, anions in the opposite direction.
These two gradients taken together can be expressed as an electrochemical gradient, providing energy for abiogenic synthesis. The proton motive force can be described as the measure of the potential energy stored as a combination of proton and voltage gradients across a membrane (differences in proton concentration and electrical potential).
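In bioenergetics the proton motive force is conventionally written as (a standard textbook expression, included here for clarity):

    \Delta p = \Delta\psi - \frac{2.303\,RT}{F}\,\Delta\mathrm{pH} \approx \Delta\psi - 59\ \mathrm{mV} \times \Delta\mathrm{pH} \quad \text{(at 25 °C)}

where Δψ is the electrical potential difference across the membrane and ΔpH the difference in pH between the two sides.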
The surfaces of mineral particles inside deep-ocean hydrothermal vents have catalytic properties similar to those of enzymes and can create simple organic molecules, such as methanol (CH3OH) and formic, acetic, and pyruvic acids out of the dissolved CO2 in the water, if driven by an applied voltage or by reaction with H2 or H2S.
The research reported by Martin in 2016 supports the thesis that life arose at hydrothermal vents, that spontaneous chemistry in the Earth's crust, driven by rock–water interactions at thermodynamic disequilibrium, underpinned life's origin, and that the founding lineages of the archaea and bacteria were H2-dependent autotrophs that used CO2 as their terminal acceptor in energy metabolism. Martin suggests, based upon this evidence, that the LUCA "may have depended heavily on the geothermal energy of the vent to survive". Pores at deep sea hydrothermal vents are suggested to have been occupied by membrane-bound compartments which promoted biochemical reactions. Metabolic intermediates of the Krebs cycle, gluconeogenesis, amino acid biosynthetic pathways, glycolysis, and the pentose phosphate pathway, as well as sugars like ribose and lipid precursors, can form non-enzymatically under conditions relevant to deep-sea alkaline hydrothermal vents.
If the deep marine hydrothermal setting was the site for the origin of life, then abiogenesis could have happened as early as 4.0-4.2 Gya. If life evolved in the ocean at depths of more than ten meters, it would have been shielded both from impacts and the then high levels of ultraviolet radiation from the sun. The available energy in hydrothermal vents is maximized at 100–150 °C, the temperatures at which hyperthermophilic bacteria and thermoacidophilic archaea live. Arguments against a hydrothermal origin of life state that hyperthermophily was a result of convergent evolution in bacteria and archaea, and that a mesophilic environment would have been more likely. This hypothesis, suggested in 1999 by Galtier, was proposed one year before the discovery of the Lost City Hydrothermal Field, where white-smoker hydrothermal vents average ~45-90 °C. Moderate temperatures and alkaline seawater at Lost City are now the favoured hydrothermal vent setting in contrast to acidic, high temperature (~350 °C) black-smokers.
Arguments against a vent setting
Production of prebiotic organic compounds at hydrothermal vents is estimated to be 1×10^8 kg yr^−1. While a large amount of key prebiotic compounds, such as methane, are found at vents, they are in far lower concentrations than estimates of a Miller–Urey experiment environment. In the case of methane, the production rate at vents is around 2-4 orders of magnitude lower than predicted amounts in a Miller–Urey experiment surface atmosphere.
Other arguments against an oceanic vent setting for the origin of life include the inability to concentrate prebiotic materials due to strong dilution from seawater. This open system cycles compounds through the minerals that make up the vents, leaving them little residence time in which to accumulate. All modern cells rely on phosphates and potassium for nucleotide backbone and protein formation respectively, making it likely that the first life forms also shared these functions. These elements were not available in high quantities in the Archaean oceans as both primarily come from the weathering of continental rocks on land, far from vent settings. Submarine hydrothermal vents are not conducive to condensation reactions needed for polymerisation to form macromolecules.
An older argument was that key polymers were encapsulated in vesicles after condensation, which supposedly would not happen in saltwater because of the high concentrations of ions. However, while it is true that salinity inhibits vesicle formation from low-diversity mixtures of fatty acids, vesicle formation from a broader, more realistic mix of fatty-acid and 1-alkanol species is more resilient.
Surface bodies of water
Surface bodies of water provide environments able to dry out and be rewetted. Continued wet-dry cycles allow the concentration of prebiotic compounds and condensation reactions to polymerise macromolecules. Moreover, lakes and ponds on land allow for detrital input from the weathering of continental rocks which contain apatite, the most common source of phosphates needed for nucleotide backbones. The amount of exposed continental crust in the Hadean is unknown, but models of early ocean depths and rates of ocean island and continental crust growth make it plausible that there was exposed land. Another line of evidence for a surface start to life is the role of UV light in prebiotic chemistry. UV is necessary for the formation of the U+C nucleotide base pair by partial hydrolysis and nucleobase loss. Simultaneously, UV can be harmful and sterilising to life, especially for simple early lifeforms with little ability to repair radiation damage. Radiation levels from a young Sun were likely greater, and, with no ozone layer, harmful shortwave UV rays would reach the surface of Earth. For life to begin, a shielded environment with influx from UV-exposed sources is necessary to both benefit and protect from UV. Shielding under ice, liquid water, mineral surfaces (e.g. clay) or regolith is possible in a range of surface water settings. While deep sea vents may have input from raining down of surface exposed materials, the likelihood of concentration is lessened by the ocean's open system.
Hot springs
The deepest-branching lineages in most phylogenies are thermophilic or hyperthermophilic, making it possible that the last universal common ancestor (LUCA) and preceding lifeforms were similarly thermophilic. Hot springs are formed from the heating of groundwater by geothermal activity. This intersection allows for influxes of material from deep penetrating waters and from surface runoff that transports eroded continental sediments. Interconnected groundwater systems create a mechanism for the distribution of life to a wider area.
Mulkidjanian and co-authors argue that marine environments did not provide the ionic balance and composition universally found in cells, or the ions required by essential proteins and ribozymes, especially with respect to high K+/Na+ ratio, Mn2+, Zn2+ and phosphate concentrations. They argue that the only environments that mimic the needed conditions on Earth are hot springs similar to ones at Kamchatka. Mineral deposits in these environments under an anoxic atmosphere would have a suitable pH (while current pools in an oxygenated atmosphere would not), contain precipitates of photocatalytic sulfide minerals that absorb harmful ultraviolet radiation, and undergo wet-dry cycles that concentrate substrate solutions to levels amenable to the spontaneous formation of biopolymers, created both by chemical reactions in the hydrothermal environment and by exposure to UV light during transport from vents to adjacent pools. The hypothesized pre-biotic environments are similar to hydrothermal vents, with additional components that help explain peculiarities of the LUCA.
A phylogenomic and geochemical analysis of proteins plausibly traced to the LUCA shows that the ionic composition of its intracellular fluid is identical to that of hot springs. The LUCA likely was dependent upon synthesized organic matter for its growth. Experiments show that RNA-like polymers can be synthesized in wet-dry cycling and UV light exposure. These polymers were encapsulated in vesicles after condensation. Potential sources of organics at hot springs might have been transport by interplanetary dust particles, extraterrestrial projectiles, or atmospheric or geochemical synthesis. Hot springs could have been abundant in volcanic landmasses during the Hadean.
Temperate surface bodies of water
The hypothesis of a mesophilic start in surface bodies of water developed from Darwin's concept of a 'warm little pond' and the Oparin-Haldane hypothesis. Freshwater bodies under temperate climates can accumulate prebiotic materials while providing suitable environmental conditions conducive to simple life forms. The climate during the Archaean is still a highly debated topic, as there is uncertainty about what continents, oceans, and the atmosphere looked like then. Atmospheric reconstructions of the Archaean from geochemical proxies and models state that sufficient greenhouse gases were present to maintain surface temperatures between 0 and 40 °C. Under this assumption, there is a greater abundance of moderate temperature niches in which life could begin.
Strong lines of evidence for mesophily from biomolecular studies include Galtier's G+C nucleotide thermometer. G+C pairs are more abundant in thermophiles due to the added stability of an additional hydrogen bond not present between A+T pairs. rRNA sequencing on a diverse range of modern lifeforms shows that LUCA's reconstructed G+C content was likely representative of moderate temperatures.
Although most modern phylogenies are thermophilic or hyperthermophilic, it is possible that their widespread diversity today is a product of convergent evolution and horizontal gene transfer rather than an inherited trait from LUCA. The reverse gyrase topoisomerase is found exclusively in thermophiles and hyperthermophiles as it allows for coiling of DNA. The reverse gyrase enzyme requires ATP to function, both of which are complex biomolecules. If an origin of life is hypothesised to involve a simple organism that had not yet evolved a membrane, let alone ATP, this would make the existence of reverse gyrase improbable. Moreover, phylogenetic studies show that reverse gyrase had an archaeal origin, and that it was transferred to bacteria by horizontal gene transfer. This implies that reverse gyrase was not present in the LUCA.
Icy surface bodies of water
Cold-start origin of life theories stem from the idea there may have been cold enough regions on the early Earth that large ice cover could be found. Stellar evolution models predict that the Sun's luminosity was ~25% weaker than it is today. Feulner states that although this significant decrease in solar energy would have formed an icy planet, there is strong evidence that liquid water was present, possibly driven by a greenhouse effect. This would create an early Earth with both liquid oceans and icy poles.
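A rough radiative-balance estimate (not part of the cited work, added for illustration) shows why a 25% fainter Sun does not by itself imply a frozen planet: a planet's equilibrium temperature scales with the fourth root of the stellar luminosity,

    T_{\mathrm{eq}} \propto L^{1/4}, \qquad 0.75^{1/4} \approx 0.93

so the reduced luminosity alone lowers the equilibrium temperature by only about 7%, a gap that a modest greenhouse effect can close.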
Meltwater from ice sheets or glaciers creates freshwater pools, another niche capable of experiencing wet-dry cycles. While these pools that exist on the surface would be exposed to intense UV radiation, bodies of water within and under ice are sufficiently shielded while remaining connected to UV-exposed areas through ice cracks. Impact melting of ice could pair freshwater with meteoritic input, a popular source of prebiotic components. Near-seawater levels of sodium chloride are found to destabilize fatty acid membrane self-assembly, making freshwater settings appealing for early membranous life.
Icy environments would trade the faster reaction rates that occur in warm environments for increased stability and accumulation of larger polymers. Experiments simulating Europa-like conditions of ~−20 °C have synthesised amino acids and adenine, showing that Miller-Urey type syntheses can still occur at cold temperatures. In an RNA world, the ribozyme would have had even more functions than in a later DNA-RNA-protein-world. For RNA to function, it must be able to fold, a process that is hindered by temperatures above 30 °C. While RNA folding in psychrophilic organisms is slower, the process is more successful as hydrolysis is also slower. Shorter nucleotides would not suffer from higher temperatures.
Inside the continental crust
An alternative geological environment has been proposed by the geologist Ulrich Schreiber and the physical chemist Christian Mayer: the continental crust. Tectonic fault zones could present a stable and well-protected environment for long-term prebiotic evolution. Inside these systems of cracks and cavities, water and carbon dioxide present the bulk solvents. Their phase state would depend on the local temperature and pressure conditions and could vary between liquid, gaseous and supercritical. When forming two separate phases (e.g., liquid water and supercritical carbon dioxide in depths of little more than 1 km), the system provides optimal conditions for phase transfer reactions. Concurrently, the tectonic fault zones are supplied with a multitude of inorganic reactants (e.g., carbon monoxide, hydrogen, ammonia, hydrogen cyanide, nitrogen, and even phosphate from dissolved apatite) and simple organic molecules formed by hydrothermal chemistry (e.g. amino acids, long-chain amines, fatty acids, long-chain aldehydes). Finally, the abundant mineral surfaces provide a rich choice of catalytic activity.
An especially interesting section of the tectonic fault zones is located at a depth of approximately 1000 m. For the carbon dioxide part of the bulk solvent, it provides temperature and pressure conditions near the phase transition point between the supercritical and the gaseous state. This leads to a natural accumulation zone for lipophilic organic molecules that dissolve well in supercritical CO2, but not in its gaseous state, leading to their local precipitation. Periodic pressure variations such as caused by geyser activity or tidal influences result in periodic phase transitions, keeping the local reaction environment in a constant non-equilibrium state. In the presence of amphiphilic compounds (such as the long-chain amines and fatty acids mentioned above), successive generations of vesicles form and are constantly and efficiently selected for their stability. The resulting structures could provide hydrothermal vents as well as hot springs with raw material for further development.
Homochirality
Homochirality is the geometric uniformity of materials composed of chiral (non-mirror-symmetric) units. Living organisms use molecules that have the same chirality (handedness): with almost no exceptions, amino acids are left-handed while nucleotides and sugars are right-handed. Chiral molecules can be synthesized, but in the absence of a chiral source or a chiral catalyst, they are formed in a 50/50 (racemic) mixture of both forms. Known mechanisms for the production of non-racemic mixtures from racemic starting materials include: asymmetric physical laws, such as the electroweak interaction; asymmetric environments, such as those caused by circularly polarized light, quartz crystals, or the Earth's rotation; statistical fluctuations during racemic synthesis; and spontaneous symmetry breaking.
Once established, chirality would be selected for. A small bias (enantiomeric excess) in the population can be amplified into a large one by asymmetric autocatalysis, such as in the Soai reaction. In asymmetric autocatalysis, the catalyst is a chiral molecule, which means that a chiral molecule is catalyzing its own production. An initial enantiomeric excess, such as can be produced by polarized light, then allows the more abundant enantiomer to outcompete the other.
Homochirality may have started in outer space, as on the Murchison meteorite the amino acid L-alanine (left-handed) is more than twice as frequent as its D (right-handed) form, and L-glutamic acid is more than three times as abundant as its D counterpart. Amino acids from meteorites show a left-handed bias, whereas sugars show a predominantly right-handed bias: this is the same preference found in living organisms, suggesting an abiogenic origin of these compounds.
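Such a bias is usually quantified as an enantiomeric excess; applying the standard definition to the ratio reported above for Murchison alanine (a worked illustration, not an additional measurement) gives

    ee = \frac{[\mathrm{L}] - [\mathrm{D}]}{[\mathrm{L}] + [\mathrm{D}]}, \qquad \frac{2x - x}{2x + x} \approx 33\%

so an L-form that is more than twice as abundant as its D-form corresponds to an enantiomeric excess of more than about 33%.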
In a 2010 experiment by Robert Root-Bernstein, "two D-RNA-oligonucleotides having inverse base sequences (D-CGUA and D-AUGC) and their corresponding L-RNA-oligonucleotides (L-CGUA and L-AUGC) were synthesized and their affinity determined for Gly and eleven pairs of L- and D-amino acids". The results suggest that homochirality, including codon directionality, might have "emerged as a function of the origin of the genetic code".
See also
Autopoiesis
Manganese metallic nodules
Notes
References
External links
Making headway with the mysteries of life's origins – Adam Mann (PNAS; 14 April 2021)
Exploring Life's Origins a virtual exhibit at the Museum of Science (Boston)
How life began on Earth – Marcia Malory (Earth Facts; 2015)
The Origins of Life – Richard Dawkins et al. (BBC Radio; 2004)
Life in the Universe – Essay by Stephen Hawking (1996)
Structure and Interpretation of Computer Programs | Structure and Interpretation of Computer Programs (SICP) is a computer science textbook by Massachusetts Institute of Technology professors Harold Abelson and Gerald Jay Sussman with Julie Sussman. It is known as the "Wizard Book" in hacker culture. It teaches fundamental principles of computer programming, including recursion, abstraction, modularity, and programming language design and implementation.
MIT Press published the first edition in 1984, and the second edition in 1996. It was formerly used as the textbook for MIT's introductory course in computer science. SICP focuses on discovering general patterns for solving specific problems, and building software systems that make use of those patterns.
MIT Press published the JavaScript edition in 2022.
Content
The book describes computer science concepts using Scheme, a dialect of Lisp. It also uses a virtual register machine and assembler to implement Lisp interpreters and compilers.
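For flavor, the following short Scheme sketch is in the spirit of the higher-order procedures developed in Chapter 1 (a paraphrase of the book's sum abstraction, not a verbatim excerpt):

    ; sum abstracts the pattern "add up (term k) for k from a to b",
    ; where next produces the following value of k.
    (define (sum term a next b)
      (if (> a b)
          0
          (+ (term a)
             (sum term (next a) next b))))

    (define (inc n) (+ n 1))
    (define (square x) (* x x))

    ; Summing the squares of the integers from 1 to 3: 1 + 4 + 9 = 14.
    (define (sum-of-squares a b) (sum square a inc b))
    (sum-of-squares 1 3)   ; => 14

Abstractions of this kind let the same sum procedure compute integrals, series approximations, and other accumulations simply by passing different term and next procedures.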
Topics in the book are:
Chapter 1: Building Abstractions with Procedures
The Elements of Programming
Procedures and the Processes They Generate
Formulating Abstractions with Higher-Order Procedures
Chapter 2: Building Abstractions with Data
Introduction to Data Abstraction
Hierarchical Data and the Closure Property
Symbolic Data
Multiple Representations for Abstract Data
Systems with Generic Operations
Chapter 3: Modularity, Objects, and State
Assignment and Local State
The Environment Model of Evaluation
Modeling with Mutable Data
Concurrency: Time Is of the Essence
Streams
Chapter 4: Metalinguistic Abstraction
The Metacircular Evaluator
Variations on a Scheme – Lazy Evaluation
Variations on a Scheme – Nondeterministic Computing
Logic Programming
Chapter 5: Computing with Register Machines
Designing Register Machines
A Register-Machine Simulator
Storage Allocation and Garbage Collection
The Explicit-Control Evaluator
Compilation
Characters
Several fictional characters appear in the book:
Alyssa P. Hacker, a Lisp hacker
Ben Bitdiddle
Cy D. Fect, a "reformed C programmer"
Eva Lu Ator
Lem E. Tweakit
Louis Reasoner, a loose reasoner
License
The book is licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.
Coursework
The book was used as the textbook for MIT's former introductory programming course, 6.001, from fall 1984 through its last semester, in fall 2007. Other schools also made use of the book as a course textbook.
Reception
Byte recommended SICP in 1986 "for professional programmers who are really interested in their profession". The magazine said that the book was not easy to read, but that it would expose experienced programmers to both old and new topics.
Influence
SICP has been influential in computer science education, and several later books have been inspired by its style.
Structure and Interpretation of Classical Mechanics (SICM), another book that uses Scheme as an instructional element, by Gerald Jay Sussman and Jack Wisdom
Software Design for Flexibility, by Chris Hanson and Gerald Jay Sussman
How to Design Programs (HtDP), which intends to be a more accessible book for introductory Computer Science, and to address perceived incongruities in SICP
Essentials of Programming Languages (EoPL), a book for Programming Languages courses
See also
Compilers: Principles, Techniques, and Tools also known as The Dragon Book
References
External links
Video lectures
Book compiled from TeX source
Structure and Interpretation of Computer Programs. Interactive Version
Autotroph | An autotroph is an organism that can convert abiotic sources of energy into energy stored in organic compounds, which can be used by other organisms. Autotrophs produce complex organic compounds (such as carbohydrates, fats, and proteins) using carbon from simple substances such as carbon dioxide, generally using energy from light or inorganic chemical reactions. Autotrophs do not need a living source of carbon or energy and are the producers in a food chain, such as plants on land or algae in water. Autotrophs can reduce carbon dioxide to make organic compounds for biosynthesis and as stored chemical fuel. Most autotrophs use water as the reducing agent, but some can use other hydrogen compounds such as hydrogen sulfide.
The primary producers can convert the energy in the light (phototroph and photoautotroph) or the energy in inorganic chemical compounds (chemotrophs or chemolithotrophs) to build organic molecules, which is usually accumulated in the form of biomass and used as a carbon and energy source by other organisms (e.g. heterotrophs and mixotrophs). The photoautotrophs are the main primary producers, converting the energy of the light into chemical energy through photosynthesis, ultimately building organic molecules from carbon dioxide, an inorganic carbon source. Examples of chemolithotrophs are some archaea and bacteria (unicellular organisms) that produce biomass from the oxidation of inorganic chemical compounds; these organisms, called chemoautotrophs, are frequently found in hydrothermal vents in the deep ocean. Primary producers are at the lowest trophic level, and are the reason why Earth sustains life to this day.
Most chemoautotrophs are lithotrophs, using inorganic electron donors such as hydrogen sulfide, hydrogen gas, elemental sulfur, ammonium and ferrous iron as reducing agents and hydrogen sources for biosynthesis and chemical energy release. Autotrophs use a portion of the ATP produced during photosynthesis or the oxidation of chemical compounds to reduce NADP+ to NADPH to form organic compounds.
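As a representative example (a textbook-style illustration, not tied to a particular organism discussed here), sulfide-oxidizing chemolithotrophs can obtain energy from the overall reaction

    H2S + 2 O2 → SO4^2− + 2 H+

and use the energy released to fix CO2 into organic compounds.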
History
The term autotroph was coined by the German botanist Albert Bernhard Frank in 1892. It stems from the ancient Greek word τροφή (trophḗ), meaning "nourishment" or "food". The first autotrophic organisms likely evolved early in the Archean but proliferated across Earth's Great Oxidation Event with an increase to the rate of oxygenic photosynthesis by cyanobacteria. Photoautotrophs evolved from heterotrophic bacteria by developing photosynthesis. The earliest photosynthetic bacteria used hydrogen sulphide. Due to the scarcity of hydrogen sulphide, some photosynthetic bacteria evolved to use water in photosynthesis, leading to cyanobacteria.
Variants
Some organisms rely on organic compounds as a source of carbon, but are able to use light or inorganic compounds as a source of energy. Such organisms are mixotrophs. An organism that obtains carbon from organic compounds but obtains energy from light is called a photoheterotroph, while an organism that obtains carbon from organic compounds and energy from the oxidation of inorganic compounds is termed a chemolithoheterotroph.
Evidence suggests that some fungi may also obtain energy from ionizing radiation: Such radiotrophic fungi were found growing inside a reactor of the Chernobyl nuclear power plant.
Examples
There are many different types of autotrophs in Earth's ecosystems. Lichens located in tundra climates are an exceptional example of a primary producer that, by mutualistic symbiosis, combines photosynthesis by algae (or additionally nitrogen fixation by cyanobacteria) with the protection of a decomposer fungus. Plant-like primary producers (trees, algae) use the sun as their energy source and release oxygen into the air for other organisms. There are also aquatic primary producers, including some bacteria and phytoplankton. Among the many examples of primary producers, two dominant types are coral and kelp, one of the many kinds of brown algae.
Photosynthesis
Gross primary production occurs by photosynthesis, which is also the main way that primary producers capture energy and make it available to the rest of the ecosystem. Plants, coral, bacteria, and algae do this. During photosynthesis, primary producers take energy from the sun and convert it into chemical energy stored in sugars, releasing oxygen. Primary producers also require nutrients, such as nitrogen, in order to carry out these conversions.
Ecology
Without primary producers, organisms that are capable of producing energy on their own, the biological systems of Earth would be unable to sustain themselves. Plants, along with other primary producers, produce the energy that other living beings consume, and the oxygen that they breathe. It is thought that the first organisms on Earth were primary producers located on the ocean floor.
Autotrophs are fundamental to the food chains of all ecosystems in the world. They take energy from the environment in the form of sunlight or inorganic chemicals and use it to create fuel molecules such as carbohydrates. This mechanism is called primary production. Other organisms, called heterotrophs, take in autotrophs as food to carry out functions necessary for their life. Thus, heterotrophs – all animals, almost all fungi, as well as most bacteria and protozoa – depend on autotrophs, or primary producers, for the raw materials and fuel they need. Heterotrophs obtain energy by breaking down carbohydrates or oxidizing organic molecules (carbohydrates, fats, and proteins) obtained in food. Carnivorous organisms rely on autotrophs indirectly, as the nutrients obtained from their heterotrophic prey come from autotrophs they have consumed.
Most ecosystems are supported by the autotrophic primary production of plants and cyanobacteria that capture photons initially released by the sun. Plants can only use a fraction (approximately 1%) of this energy for photosynthesis. The process of photosynthesis splits a water molecule (H2O), releasing oxygen (O2) into the atmosphere, and reducing carbon dioxide (CO2) to release the hydrogen atoms that fuel the metabolic process of primary production. Plants convert and store the energy of the photon into the chemical bonds of simple sugars during photosynthesis. These plant sugars are polymerized for storage as long-chain carbohydrates, including other sugars, starch, and cellulose; glucose is also used to make fats and proteins. When autotrophs are eaten by heterotrophs, i.e., consumers such as animals, the carbohydrates, fats, and proteins contained in them become energy sources for the heterotrophs. Proteins can be made using nitrates, sulfates, and phosphates in the soil.
Primary production in tropical streams and rivers
Aquatic algae are a significant contributor to food webs in tropical rivers and streams. This is reflected in net primary production, a fundamental ecological process that measures the amount of carbon synthesized within an ecosystem; this carbon ultimately becomes available to consumers. Measurements of net primary production show that the rates of in-stream primary production in tropical regions are at least an order of magnitude greater than in similar temperate systems.
Origin of autotrophs
Researchers believe that the first cellular lifeforms were not heterotrophs, which would have had to rely on autotrophs, since organic substrates delivered from space were either too heterogeneous to support microbial growth or too reduced to be fermented. Instead, they consider that the first cells were autotrophs. These autotrophs might have been thermophilic and anaerobic chemolithoautotrophs that lived at deep-sea alkaline hydrothermal vents. Catalytic Fe(Ni)S minerals in these environments have been shown to catalyse the formation of biomolecules like RNA. This view is supported by phylogenetic evidence, as the physiology and habitat of the last universal common ancestor (LUCA) were inferred to be those of a thermophilic anaerobe with a Wood–Ljungdahl pathway; its biochemistry was replete with FeS clusters and radical reaction mechanisms, and it was dependent upon Fe, H2, and CO2. The high concentration of K+ within the cytosol of most life forms suggests that early cellular life had Na+/H+ antiporters or possibly symporters. Autotrophs possibly evolved into heterotrophs at low H2 partial pressures, where the first forms of heterotrophy were likely amino acid and clostridial-type purine fermentations, and photosynthesis emerged in the presence of long-wavelength geothermal light emitted by hydrothermal vents. The first photochemically active pigments are inferred to have been Zn-tetrapyrroles.
See also
Electrolithoautotroph
Electrotroph
Heterotrophic nutrition
Organotroph
Primary nutritional groups
References
External links
Trophic ecology
Microbial growth and nutrition
Biology terminology
Plant nutrition
Intermolecular force
An intermolecular force (IMF; also secondary force) is the force that mediates interaction between molecules, including the electromagnetic forces of attraction or repulsion which act between atoms and other types of neighbouring particles, e.g. atoms or ions. Intermolecular forces are weak relative to intramolecular forces – the forces which hold a molecule together. For example, the covalent bond, involving sharing electron pairs between atoms, is much stronger than the forces present between neighboring molecules. Both sets of forces are essential parts of force fields frequently used in molecular mechanics.
The first reference to the nature of microscopic forces is found in Alexis Clairaut's work Théorie de la figure de la Terre, published in Paris in 1743. Other scientists who have contributed to the investigation of microscopic forces include: Laplace, Gauss, Maxwell, Boltzmann and Pauling.
Attractive intermolecular forces are categorized into the following types:
Hydrogen bonding
Ion–dipole forces and ion–induced dipole force
Cation–π, σ–π and π–π bonding
Van der Waals forces – Keesom force, Debye force, and London dispersion force
Cation–cation bonding
Salt bridge (protein and supramolecular)
Information on intermolecular forces is obtained by macroscopic measurements of properties like viscosity, pressure, volume, temperature (PVT) data. The link to microscopic aspects is given by virial coefficients and intermolecular pair potentials, such as the Mie potential, Buckingham potential or Lennard-Jones potential.
In the broadest sense, it can be understood as such interactions between any particles (molecules, atoms, ions and molecular ions) in which the formation of chemical bonds – that is, ionic, covalent or metallic bonds – does not occur. In other words, these interactions are significantly weaker than covalent ones and do not lead to a significant restructuring of the electronic structure of the interacting particles. (This is only partially true. For example, all enzymatic and catalytic reactions begin with a weak intermolecular interaction between a substrate and an enzyme, or between a molecule and a catalyst, but several such weak interactions in the required spatial configuration of the enzyme's active center lead to significant restructuring that changes the energy state of the molecules or substrate, ultimately leading to the breaking of some covalent chemical bonds and the formation of others. Strictly speaking, all enzymatic reactions begin with intermolecular interactions between the substrate and the enzyme, so the importance of these interactions is especially great in biochemistry and molecular biology, and they form the basis of enzymology.)
Hydrogen bonding
A hydrogen bond is an extreme form of dipole-dipole bonding, referring to the attraction between a hydrogen atom that is bonded to an element with high electronegativity, usually nitrogen, oxygen, or fluorine. The hydrogen bond is often described as a strong electrostatic dipole–dipole interaction. However, it also has some features of covalent bonding: it is directional, stronger than a van der Waals force interaction, produces interatomic distances shorter than the sum of their van der Waals radii, and usually involves a limited number of interaction partners, which can be interpreted as a kind of valence. The number of hydrogen bonds formed between molecules is equal to the number of active pairs. The molecule which donates its hydrogen is termed the donor molecule, while the molecule containing the lone pair participating in the hydrogen bond is termed the acceptor molecule. The number of active pairs is given by the smaller of the number of hydrogens the donor has and the number of lone pairs the acceptor has.
A water molecule can take part in four hydrogen bonds: the oxygen atom's two lone pairs each interact with a hydrogen of a neighbouring molecule, and each of its two hydrogen atoms interacts with a neighbouring oxygen. Intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group 16 hydrides, which have little capability to hydrogen bond. Intramolecular hydrogen bonding is partly responsible for the secondary, tertiary, and quaternary structures of proteins and nucleic acids. It also plays an important role in the structure of polymers, both synthetic and natural.
Salt bridge
The attraction between cationic and anionic sites is a noncovalent, or intermolecular interaction which is usually referred to as ion pairing or salt bridge.
It is essentially due to electrostatic forces, although in aqueous medium the association is driven by entropy and is often even endothermic. Most salts form crystals with characteristic distances between the ions; in contrast to many other noncovalent interactions, salt bridges are not directional and, in the solid state, usually show contacts determined only by the van der Waals radii of the ions.
In water at moderate ionic strength I, inorganic as well as organic ions display similar salt-bridge association ΔG values of around 5 to 6 kJ/mol for a 1:1 combination of anion and cation, almost independent of the nature (size, polarizability, etc.) of the ions. The ΔG values are additive and approximately a linear function of the charges; the interaction of, for example, a doubly charged phosphate anion with a singly charged ammonium cation accounts for about 2 × 5 = 10 kJ/mol. The ΔG values depend on the ionic strength I of the solution, as described by the Debye–Hückel equation; at zero ionic strength one observes ΔG = 8 kJ/mol.
Dipole–dipole and similar interactions
Dipole–dipole interactions (or Keesom interactions) are electrostatic interactions between molecules which have permanent dipoles. This interaction is stronger than the London forces but is weaker than ion-ion interaction because only partial charges are involved. These interactions tend to align the molecules to increase attraction (reducing potential energy). An example of a dipole–dipole interaction can be seen in hydrogen chloride (HCl): the positive end of a polar molecule will attract the negative end of the other molecule and influence its position. Polar molecules have a net attraction between them. Examples of polar molecules include hydrogen chloride (HCl) and chloroform (CHCl3).
Often molecules contain dipolar groups of atoms, but have no overall dipole moment on the molecule as a whole. This occurs if there is symmetry within the molecule that causes the dipoles to cancel each other out. This occurs in molecules such as tetrachloromethane and carbon dioxide. The dipole–dipole interaction between two individual atoms is usually zero, since atoms rarely carry a permanent dipole.
The Keesom interaction is a van der Waals force. It is discussed further in the section "Van der Waals forces".
Ion–dipole and ion–induced dipole forces
Ion–dipole and ion–induced dipole forces are similar to dipole–dipole and dipole–induced dipole interactions but involve ions, instead of only polar and non-polar molecules. Ion–dipole and ion–induced dipole forces are stronger than dipole–dipole interactions because the charge of any ion is much greater than the charge of a dipole moment. Ion–dipole bonding is stronger than hydrogen bonding.
An ion–dipole force consists of an ion and a polar molecule interacting. They align so that the positive and negative groups are next to one another, allowing maximum attraction. An important example of this interaction is the hydration of ions in water: polar water molecules surround the ions, and the energy released during this process is known as the hydration enthalpy. The interaction is of immense importance in explaining the stability of various ions (like Cu2+) in water.
An ion–induced dipole force consists of an ion and a non-polar molecule interacting. Like a dipole–induced dipole force, the charge of the ion causes distortion of the electron cloud on the non-polar molecule.
Van der Waals forces
The van der Waals forces arise from interaction between uncharged atoms or molecules, leading not only to such phenomena as the cohesion of condensed phases and physical absorption of gases, but also to a universal force of attraction between macroscopic bodies.
Keesom force (permanent dipole – permanent dipole)
The first contribution to van der Waals forces is due to electrostatic interactions between rotating permanent dipoles, quadrupoles (all molecules with symmetry lower than cubic), and multipoles. It is termed the Keesom interaction, named after Willem Hendrik Keesom. These forces originate from the attraction between permanent dipoles (dipolar molecules) and are temperature dependent.
They consist of attractive interactions between dipoles that are ensemble averaged over different rotational orientations of the dipoles. It is assumed that the molecules are constantly rotating and never get locked into place. This is a good assumption, but at some point molecules do get locked into place. The energy of a Keesom interaction depends on the inverse sixth power of the distance, unlike the interaction energy of two spatially fixed dipoles, which depends on the inverse third power of the distance. The Keesom interaction can only occur among molecules that possess permanent dipole moments, i.e., two polar molecules. Also Keesom interactions are very weak van der Waals interactions and do not occur in aqueous solutions that contain electrolytes. The angle averaged interaction is given by the following equation:
U_{\text{Keesom}}(r) = -\frac{d_1^2 d_2^2}{24 \pi^2 \varepsilon_0^2 \varepsilon_r^2 k_\mathrm{B} T r^6}
where d1 and d2 are the electric dipole moments, ε0 the permittivity of free space, εr the dielectric constant of the surrounding material, T the temperature, kB the Boltzmann constant, and r the distance between the molecules.
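As a rough numerical illustration of the strong distance dependence, the expression can be evaluated directly; the dipole moment, separation and temperature below are assumed, water-like values chosen for illustration, not data from this article.

```python
import math

# Physical constants (SI units)
EPS0 = 8.854e-12      # vacuum permittivity, F/m
KB = 1.381e-23        # Boltzmann constant, J/K
DEBYE = 3.336e-30     # 1 debye in C*m

def keesom_energy(d1, d2, r, T=298.0, eps_r=1.0):
    """Angle-averaged Keesom (dipole-dipole) energy in joules."""
    return -(d1**2 * d2**2) / (24 * math.pi**2 * EPS0**2 * eps_r**2 * KB * T * r**6)

# Two water-like dipoles (~1.85 D) separated by 0.3 nm in vacuum (assumed values)
u = keesom_energy(1.85 * DEBYE, 1.85 * DEBYE, 0.3e-9)
print(f"U = {u:.2e} J  (~{u * 6.022e23 / 1000:.1f} kJ/mol)")
# Roughly -2.6e-20 J (about -16 kJ/mol) under these assumptions; doubling r
# to 0.6 nm weakens the interaction by a factor of 2**6 = 64.
```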
Debye force (permanent dipoles–induced dipoles)
The second contribution is the induction (also termed polarization) or Debye force, arising from interactions between rotating permanent dipoles and from the polarizability of atoms and molecules (induced dipoles). These induced dipoles occur when one molecule with a permanent dipole repels another molecule's electrons. A molecule with permanent dipole can induce a dipole in a similar neighboring molecule and cause mutual attraction. Debye forces cannot occur between atoms. The forces between induced and permanent dipoles are not as temperature dependent as Keesom interactions because the induced dipole is free to shift and rotate around the polar molecule. The Debye induction effects and Keesom orientation effects are termed polar interactions.
The induced dipole forces appear from the induction (also termed polarization), which is the attractive interaction between a permanent multipole on one molecule and a multipole it induces on another. This interaction is called the Debye force, named after Peter J. W. Debye.
One example of an induction interaction between permanent dipole and induced dipole is the interaction between HCl and Ar. In this system, Ar experiences a dipole as its electrons are attracted (to the H side of HCl) or repelled (from the Cl side) by HCl. The angle averaged interaction is given by the following equation:
U_{\text{Debye}}(r) = -\frac{d_1^2 \alpha_2}{16 \pi^2 \varepsilon_0^2 \varepsilon_r^2 r^6}
where α2 = polarizability of the non-polar molecule, and the remaining symbols are as defined for the Keesom interaction.
This kind of interaction can be expected between any polar molecule and non-polar/symmetrical molecule. The induction-interaction force is far weaker than dipole–dipole interaction, but stronger than the London dispersion force.
London dispersion force (fluctuating dipole–induced dipole interaction)
The third and dominant contribution is the dispersion or London force (fluctuating dipole–induced dipole), which arises due to the non-zero instantaneous dipole moments of all atoms and molecules. Such polarization can be induced either by a polar molecule or by the repulsion of negatively charged electron clouds in non-polar molecules. Thus, London interactions are caused by random fluctuations of electron density in an electron cloud. An atom with a large number of electrons will have a greater associated London force than an atom with fewer electrons. The dispersion (London) force is the most important component because all materials are polarizable, whereas Keesom and Debye forces require permanent dipoles. The London interaction is universal and is present in atom-atom interactions as well. For various reasons, London interactions (dispersion) have been considered relevant for interactions between macroscopic bodies in condensed systems. Hamaker developed the theory of van der Waals forces between macroscopic bodies in 1937 and showed that the additivity of these interactions renders them considerably more long-range.
Relative strength of forces
This comparison is approximate; the actual relative strengths will vary depending on the molecules involved. For instance, the presence of water creates competing interactions that greatly weaken the strength of both ionic and hydrogen bonds. For static systems, ionic bonding and covalent bonding will always be stronger than intermolecular forces in any given substance. But this is not so for large dynamic systems such as enzyme molecules interacting with substrate molecules. Here the numerous intermolecular bonds (most often hydrogen bonds) form an active intermediate state in which they cause some covalent bonds to be broken while others are formed; in this way proceed the thousands of enzymatic reactions that are so important for living organisms.
Effect on the behavior of gases
Intermolecular forces are repulsive at short distances and attractive at long distances (see the Lennard-Jones potential). In a gas, the repulsive force chiefly has the effect of keeping two molecules from occupying the same volume. This gives a real gas a tendency to occupy a larger volume than an ideal gas at the same temperature and pressure. The attractive force draws molecules closer together and gives a real gas a tendency to occupy a smaller volume than an ideal gas. Which interaction is more important depends on temperature and pressure (see compressibility factor).
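To make the short-range repulsion and long-range attraction concrete, here is a minimal sketch of the Lennard-Jones 12-6 potential; the epsilon and sigma values are illustrative assumptions, roughly argon-like, not parameters taken from this article.

```python
# Lennard-Jones 12-6 potential: repulsive at short range, attractive at long range.
def lennard_jones(r, epsilon=1.65e-21, sigma=3.4e-10):
    """Pair potential in joules; epsilon/sigma are assumed, roughly argon-like."""
    sr6 = (sigma / r) ** 6
    return 4 * epsilon * (sr6**2 - sr6)

for r in (3.0e-10, 3.8e-10, 5.0e-10, 8.0e-10):
    print(f"r = {r:.1e} m  ->  U = {lennard_jones(r):+.2e} J")
# U is positive (repulsive) below ~sigma, reaches a minimum near 1.12*sigma,
# and decays toward zero from below (weakly attractive) at larger separations.
```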
In a gas, the distances between molecules are generally large, so intermolecular forces have only a small effect. The attractive force is not overcome by the repulsive force, but by the thermal energy of the molecules. Temperature is the measure of thermal energy, so increasing temperature reduces the influence of the attractive force. In contrast, the influence of the repulsive force is essentially unaffected by temperature.
When a gas is compressed to increase its density, the influence of the attractive force increases. If the gas is made sufficiently dense, the attractions can become large enough to overcome the tendency of thermal motion to cause the molecules to disperse. Then the gas can condense to form a solid or liquid, i.e., a condensed phase. Lower temperature favors the formation of a condensed phase. In a condensed phase, there is very nearly a balance between the attractive and repulsive forces.
Quantum mechanical theories
Intermolecular forces observed between atoms and molecules can be described phenomenologically as occurring between permanent and instantaneous dipoles, as outlined above. Alternatively, one may seek a fundamental, unifying theory that is able to explain the various types of interactions such as hydrogen bonding, van der Waals force and dipole–dipole interactions. Typically, this is done by applying the ideas of quantum mechanics to molecules, and Rayleigh–Schrödinger perturbation theory has been especially effective in this regard. When applied to existing quantum chemistry methods, such a quantum mechanical explanation of intermolecular interactions provides an array of approximate methods that can be used to analyze intermolecular interactions. One of the most helpful quantum chemistry methods for visualizing these intermolecular interactions is the non-covalent interaction index, which is based on the electron density of the system; London dispersion forces play a large role in the interactions it describes.
Concerning electron density topology, methods based on electron density gradients have emerged recently, notably with the development of the IBSI (Intrinsic Bond Strength Index), which relies on the IGM (Independent Gradient Model) methodology.
See also
Ionic bonding
Salt bridges
Coomber's relationship
Force field (chemistry)
Hydrophobic effect
Intramolecular force
Molecular solid
Polymer
Quantum chemistry computer programs
van der Waals force
Comparison of software for molecular mechanics modeling
Non-covalent interactions
Solvation
References
Intermolecular forces
Chemical bonding
Johannes Diderik van der Waals
Model of computation
In computer science, and more specifically in computability theory and computational complexity theory, a model of computation is a model which describes how an output of a mathematical function is computed given an input. A model describes how units of computations, memories, and communications are organized. The computational complexity of an algorithm can be measured given a model of computation. Using a model allows studying the performance of algorithms independently of the variations that are specific to particular implementations and specific technology.
Models
Models of computation can be classified into three categories: sequential models, functional models, and concurrent models.
Sequential models
Sequential models include:
Finite state machines
Post machines (Post–Turing machines and tag machines).
Pushdown automata
Register machines
Random-access machines
Turing machines
Decision tree model
Functional models
Functional models include:
Abstract rewriting systems
Combinatory logic
General recursive functions
Lambda calculus
Concurrent models
Concurrent models include:
Actor model
Cellular automaton
Interaction nets
Kahn process networks
Logic gates and digital circuits
Petri nets
Process calculus
Synchronous Data Flow
Some of these models have both deterministic and nondeterministic variants. Nondeterministic models correspond to limits of certain sequences of finite computers, but do not correspond to any subset of finite computers; they are used in the study of computational complexity of algorithms.
Models differ in their expressive power; for example, each function that can be computed by a finite state machine can also be computed by a Turing machine, but not vice versa.
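As an illustration of this gap in expressive power, here is a minimal, assumed example (not from the source) of a deterministic finite state machine that checks whether a binary string contains an even number of 1s; by contrast, a property such as "the parentheses are balanced" cannot be decided by any finite state machine but is easy for a Turing machine.

```python
# A deterministic finite state machine with two states ("even", "odd")
# that accepts binary strings containing an even number of 1s.
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def accepts(word: str) -> bool:
    state = "even"                      # start state
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state == "even"              # accepting state

print(accepts("1011"))   # False: three 1s
print(accepts("1001"))   # True: two 1s
```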
Uses
In the field of runtime analysis of algorithms, it is common to specify a computational model in terms of primitive operations allowed which have unit cost, or simply unit-cost operations. A commonly used example is the random-access machine, which has unit cost for read and write access to all of its memory cells. In this respect, it differs from the above-mentioned Turing machine model.
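For instance, under a unit-cost RAM-style assumption one can estimate running time simply by counting primitive operations; the toy instrumentation below is an assumed illustration of that accounting, not an API from any particular library.

```python
# Counting unit-cost operations for linear search, where each memory read
# and each comparison is charged one unit.
def linear_search(items, target):
    ops = 0
    for value in items:        # one read per element
        ops += 1               # charge the read
        ops += 1               # charge the comparison
        if value == target:
            return True, ops
    return False, ops

found, cost = linear_search(list(range(1000)), 999)
print(found, cost)  # True, 2000 units: the cost grows linearly with input size
```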
See also
Stack machine (0-operand machine)
Accumulator machine (1-operand machine)
Register machine (2,3,... operand machine)
Random-access machine
Abstract machine
Cell-probe model
Robertson–Webb query model
Chomsky hierarchy
Turing completeness
References
Further reading
Computational complexity theory
Computability theory
Etiology
Etiology (alternatively spelled aetiology or ætiology) is the study of causation or origination. The word is derived from the Greek word αἰτιολογία (aitiología), meaning "giving a reason for". More completely, etiology is the study of the causes, origins, or reasons behind the way that things are, or the way they function, or it can refer to the causes themselves. The word is commonly used in medicine (pertaining to causes of disease or illness) and in philosophy, but also in physics, biology, psychology, political science, geography, cosmology, spatial analysis and theology in reference to the causes or origins of various phenomena.
In the past, when many physical phenomena were not well understood or when histories were not recorded, myths often arose to provide etiologies. Thus, an etiological myth, or origin myth, is a myth that has arisen, been told over time or written to explain the origins of various social or natural phenomena. For example, Virgil's Aeneid is a national myth written to explain and glorify the origins of the Roman Empire. In theology, many religions have creation myths explaining the origins of the world or its relationship to believers.
Medicine
In medicine, the etiology of an illness or condition refers to the study of one or more factors that come together to cause the illness. Relatedly, when disease is widespread, epidemiological studies investigate what associated factors, such as location, sex, exposure to chemicals, and many others, make a population more or less likely to have an illness, condition, or disease, thus helping determine its etiology. Sometimes determining etiology is an imprecise process. In the past, the etiology of a common sailor's disease, scurvy, was long unknown. When large, ocean-going ships were built, sailors began to put to sea for long periods of time, and often lacked fresh fruit and vegetables. Without knowing the precise cause, Captain James Cook suspected scurvy was caused by the lack of vegetables in the diet. Based on his suspicion, he forced his crew to eat sauerkraut, a cabbage preparation, every day, and based upon the positive outcomes, he inferred that it prevented scurvy, even though he did not know precisely why. It took about another two hundred years to discover the precise etiology: the lack of vitamin C in a sailor's diet.
The following are examples of intrinsic factors:
Inherited conditions, or conditions that are passed down from one's parents. An example of this is hemophilia, a disorder that leads to excessive bleeding.
Metabolic and endocrine, or hormone, disorders. These are abnormalities in the chemical signaling and interaction in the body. For example, diabetes mellitus is an endocrine disease that causes high blood sugar.
Neoplastic disorders or cancer where the cells of the body grow out of control.
Problems with immunity, such as allergies, which are an overreaction of the immune system.
Mythology
An etiological myth, or origin myth, is a myth intended to explain the origins of cult practices, natural phenomena, proper names and the like. For example, the name Delphi and its associated deity, Apollon Delphinios, are explained in the Homeric Hymn which tells of how Apollo, in the shape of a dolphin, propelled Cretans over the seas to make them his priests. While Delphi is actually related to the word δελφύς ("womb"), many etiological myths are similarly based on folk etymology (the term "Amazon", for example). In the Aeneid, Virgil claims the descent of Augustus Caesar's Julian clan from the hero Aeneas through his son Ascanius, also called Iulus. The story of Prometheus' sacrifice trick at Mecone in Hesiod's Theogony relates how Prometheus tricked Zeus into choosing the bones and fat of the first sacrificial animal rather than the meat, to justify why, after a sacrifice, the Greeks offered the bones wrapped in fat to the gods while keeping the meat for themselves. In Ovid's Pyramus and Thisbe, the origin of the color of mulberries is explained, as the white berries become stained red from the blood gushing forth from their double suicide.
See also
Backstory
Bradford Hill criteria
Correlation does not imply causation
Creation myth
Just-so story
Just So Stories
Pathology
Pourquoi story
Problem of causation
Involution (esoterism)
References
External links
Causes of conditions
Origin myths
Mythography
Mythology
Origins
Organophosphate
In organic chemistry, organophosphates (also known as phosphate esters, or OPEs) are a class of organophosphorus compounds with the general structure O=P(OR)3, a central phosphate group with alkyl or aromatic substituents. They can be considered as esters of phosphoric acid. Organophosphates are best known for their use as pesticides.
Like most functional groups, organophosphates occur in a diverse range of forms, with important examples including key biomolecules such as DNA, RNA and ATP, as well as many insecticides, herbicides, nerve agents and flame retardants. OPEs have been widely used in various products as flame retardants, plasticizers, and performance additives to engine oil. Their low production cost and compatibility with diverse polymers have made OPEs widely used in industries including textiles, furniture and electronics, as plasticizers and flame retardants. These compounds are added to the final product physically rather than by chemical bonding. Because of this, OPEs leak into the environment more readily through volatilization, leaching, and abrasion, and they have been detected at high frequencies and concentrations in diverse environmental compartments such as air, dust, water, sediment, soil and biota samples.
OPEs became popular as flame retardants as substitutes for the highly regulated brominated flame retardants.
Forms
Organophosphates are a class of compounds encompassing a number of distinct but closely related functional groups. These are primarily the esters of phosphoric acid and can be mono-esters, di-esters or tri-esters depending on the number of attached organic groups (abbreviated as 'R'). In general, man-made organophosphates are most often triesters, while biological organophosphates are usually mono- or di-esters. The hydrolysis of triesters can form diesters and monoesters.
In the context of pesticides, derivatives of organophosphates such as organothiophosphates (P=S) or phosphorodiamidates (P-N) are included as being organophosphates. The reason is that these compounds are converted into organophosphates biologically.
In biology, the esters of diphosphoric acid and triphosphoric acid are generally included as organophosphates. The reason is again a practical one, as many cellular processes involve the mono-, di- and tri-phosphates of the same compound. For instance, the phosphates of adenosine (AMP, ADP, ATP) play a key role in many metabolic processes.
Synthesis
Alcoholysis of POCl3
Phosphorus oxychloride reacts readily with alcohols to give organophosphates. This is the dominant industrial route and is responsible for almost all organophosphate production.
When aliphatic alcohols are used the HCl by-product can react with the phosphate esters to give organochlorides and a lower ester.
This reaction is usually undesirable and is exacerbated by high reaction temperatures. It can be inhibited by the use of a base or the removal of HCl through sparging.
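Schematically, the two reactions described above can be summarised as follows, written with a generic alcohol ROH for illustration rather than any specific reagent from the source:

OPCl3 + 3 ROH → OP(OR)3 + 3 HCl
OP(OR)3 + HCl → OP(OR)2(OH) + RCl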
Esterification of phosphoric acid and P2O5
Esterifications of phosphoric acid with alcohols proceed less readily than the more common carboxylic acid esterifications, with the reactions rarely proceeding much further than the phosphate mono-ester. The reaction requires high temperatures, under which the phosphoric acid can dehydrate to form poly-phosphoric acids. These are exceedingly viscous and their linear polymeric structure renders them less reactive than phosphoric acid. Despite these limitations the reaction does see industrial use for the formation of monoalkyl phosphates, which are used as surfactants. A major appeal of this route is the low cost of phosphoric acid compared to phosphorus oxychloride.
P2O5 is the anhydride of phosphoric acid and acts similarly. The reaction yields equimolar amounts of di- and monoesters with no phosphoric acid. The process is mostly limited to primary alcohols, as secondary alcohols are prone to undesirable side reactions such as dehydration.
Oxidation of phosphite and phosphonate esters
Organophosphites can be easily oxidised to give organophosphates. This is not a common industrial route; however, large quantities of organophosphites are manufactured as antioxidant stabilisers for plastics. The gradual oxidation of these generates organophosphates in the human environment.
A more specialised alternative is the Atherton-Todd reaction, which converts a dialkyl phosphite to a phosphoryl chloride. This can then react with an alcohol to give an organophosphate and HCl.
Phosphorylation
The formation of organophosphates is an important part of biochemistry and living systems achieve this using a variety of enzymes. Phosphorylation is essential to the processes of both anaerobic and aerobic respiration, which involve the production of adenosine triphosphate (ATP), the "high-energy" exchange medium in the cell. Protein phosphorylation is the most abundant post-translational modification in eukaryotes. Many enzymes and receptors are switched "on" or "off" by phosphorylation and dephosphorylation.
Properties
Bonding
The bonding in organophosphates has been a matter of prolonged debate; the phosphorus atom is classically hypervalent, as it possesses more bonds than the octet rule should allow. The focus of debate is usually on the nature of the phosphoryl P=O bond, which displays (in spite of the common depiction) non-classical bonding, with a bond order somewhere between 1 and 2. Early papers explained the hypervalence in terms of d-orbital hybridisation, with the energy penalty of promoting electrons into the higher energy orbitals being off-set by the stabilisation of additional bonding. Later advances in computational chemistry showed that d-orbitals played little significant role in bonding. Current models rely on either negative hyperconjugation, or a more complex arrangement with a dative-type bond from P to O, combined with back-donation from an oxygen 2p orbital. These models agree with the experimental observations that the phosphoryl bond is shorter than P-OR bonds and much more polarised. It has been argued that a more accurate depiction is dipolar (i.e. (RO)3P+-O-), which is similar to the depiction of phosphorus ylides such as methylenetriphenylphosphorane. However, in contrast to ylides, the phosphoryl group is unreactive and organophosphates are poor nucleophiles, despite the high concentration of charge on the phosphoryl oxygen. The polarisation accounts in part for the higher melting points of phosphates when compared to their corresponding phosphites. The bonding in penta-coordinate phosphoranes (i.e. P(OR)5) is entirely different and involves three-center four-electron bonds.
Acidity
Phosphate esters bearing P-OH groups are acidic. The pKa of the first OH group is typically between 1 and 2, while the second OH deprotonates at a pKa between 6 and 7. As such, phosphate mono- and di-esters are negatively charged at physiological pH. This is of great practical importance, as it makes these compounds far more resistant to degradation by hydrolysis or other forms of nucleophilic attack, due to electrostatic repulsion between negative charges. This affects nearly all organophosphate biomolecules, such as DNA and RNA, and accounts in part for their high stability. The presence of this negative charge also makes these compounds much more water soluble.
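As a rough illustration of why these esters carry negative charge at physiological pH, the Henderson–Hasselbalch relation can be evaluated with pKa values assumed from the ranges quoted above (illustrative numbers, not measured data from the source):

```python
# Fraction of an acidic P-OH group that is deprotonated at a given pH,
# from the Henderson-Hasselbalch relation: f = 1 / (1 + 10**(pKa - pH)).
def fraction_deprotonated(pka: float, ph: float = 7.4) -> float:
    return 1.0 / (1.0 + 10 ** (pka - ph))

print(f"first O-H  (pKa ~1.5): {fraction_deprotonated(1.5):.4f}")  # ~1.0000
print(f"second O-H (pKa ~6.5): {fraction_deprotonated(6.5):.2f}")  # ~0.89
# So a phosphate monoester is essentially fully ionised once, and mostly
# ionised twice, at pH 7.4 -- consistent with the negative charge described above.
```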
Water solubility
The water solubility of organophosphates is an important factor in biological, industrial and environmental settings. The wide variety of substituents used in organophosphate esters results in great variations in physical properties. OPEs exhibit a wide range of octanol/water partition coefficients, with log Kow values ranging from -0.98 up to 10.6. Mono- and di-esters are usually water soluble, particularly biomolecules. Tri-esters such as flame retardants and plasticisers have positive log Kow values ranging between 1.44 and 9.49, signifying hydrophobicity. Hydrophobic OPEs are more likely to be bioaccumulated and biomagnified in aquatic ecosystems. Halogenated organophosphates tend to be denser than water and sink, causing them to accumulate in sediments.
Industrial materials
Pesticides
Organophosphates are best known for their use as pesticides. The vast majority are insecticides and are used either to protect crops or as vector control agents to reduce the transmission of diseases spread by insects, such as mosquitoes. Health concerns have seen their use decrease significantly since the turn of the century. Glyphosate is sometimes called an organophosphate, but is in fact a phosphonate. Its chemistry, mechanism of toxicity and end use as a herbicide are different from those of the organophosphate insecticides.
The development of organophosphate insecticides dates back to the 1930s and is generally credited to Gerhard Schrader. At the time pesticides were largely limited to arsenic salts (calcium arsenate, lead arsenate and Paris green) or pyrethrin plant extracts, all of which had major problems. Schrader was seeking more effective agents; however, while some organophosphates were found to be far more dangerous to insects than to higher animals, the potential effectiveness of others as chemical weapons did not go unnoticed. The development of organophosphate insecticides and the earliest nerve agents was conjoined, with Schrader also developing the nerve agents tabun and sarin. Organophosphate pesticides were not commercialised until after WWII. Parathion was among the first marketed, followed by malathion and azinphosmethyl. Although organophosphates were used in considerable quantities, they were originally less important than organochlorine insecticides such as DDT, dieldrin, and heptachlor. When many of the organochlorines were banned in the 1970s, following the publishing of Silent Spring, organophosphates became the most important class of insecticides globally. Nearly 100 were commercialised, with the following being a varied selection:
Acephate
Azinphos-methyl
Bensulide
Chlorethoxyfos
Coumaphos
Diazinon
Dichlorvos
Dicrotophos
Dimethoate
Disulfoton
Ethion
Ethoprop
Ethyl parathion
Fenamiphos
Fenitrothion
Fonofos
Isoxathion
Malathion
Methamidophos
Methidathion
Mevinphos
Naled
Phosmet
Profenofos
Propetamphos
Quinalphos
Sulfotep
Tebupirimfos
Temephos
Terbufos
Tetrachlorvinphos
Triazofos
Organophosphate insecticides are acetylcholinesterase inhibitors, which disrupt the transmission of nerve signals in exposed organisms, with fatal results. The risk of human death through organophosphate poisoning was obvious from the start and led to efforts to lower toxicity against mammals while not reducing efficacy against insects.
The majority of organophosphate insecticides are organothiophosphates (P=S) or phosphorodiamidates (P-N), both of which are significantly weaker acetylcholinesterase inhibitors than the corresponding phosphates (P=O). They are 'activated' biologically by the exposed organism, via oxidative conversion of P=S to P=O, hydroxylation, or other related process which see them transformed into organophosphates. In mammals these transformations occur almost exclusively in the liver, while in insects they take place in the gut and fat body. As the transformations are handled by different enzymes in different classes of organism it is possible to find compounds which activate more rapidly and completely in insects, and thus display more targeted lethal action.
This selectivity is far from perfect and organophosphate insecticides remain acutely toxic to humans, with many thousands estimated to be killed each year due to intentional (suicide) or unintentional poisoning. Beyond their acute toxicity, long-term exposure to organophosphates is associated with a number of health risks, including organophosphate-induced delayed neuropathy (muscle weakness) and developmental neurotoxicity. There is limited evidence that certain compounds cause cancer, including malathion and diazinon. Children and farmworkers are considered to be at greater risk.
Pesticide regulation in the United States and the regulation of pesticides in the European Union have both been increasing restrictions on organophosphate pesticides since the 1990s, particularly when used for crop protection. The use of organophosphates has decreased considerably since that time, having been replaced by pyrethroids and neonicotinoids, which are effective at much lower levels. Reported cases of organophosphate poisoning in the US have decreased during this period. Regulation in the global south can be less extensive.
In 2015, only 3 of the 50 most common crop-specific pesticides used in the US were organophosphates (chlorpyrifos, bensulide, acephate); of these, chlorpyrifos was banned in 2021. No new organophosphate pesticides have been commercialised in the 21st century. The situation in vector control is fairly similar, despite different risk trade-offs, with the global use of organophosphate insecticides falling by nearly half between 2010 and 2019. Pirimiphos-methyl, malathion and temefos are still important, primarily for the control of malaria in the Asia-Pacific region. The continued use of these agents is being challenged by the emergence of insecticide resistance.
Flame retardants
Flame retardants are added to materials to prevent combustion and to delay the spread of fire after ignition. Organophosphate flame retardants are part of a wider family of phosphorus-based agents which include organic phosphonate and phosphinate esters, in addition to inorganic salts. When some prominent brominated flame retardants were banned in the early 2000s, phosphorus-based agents were promoted as safer replacements. This has led to a large increase in their use, with an estimated 1 million tonnes of organophosphate flame retardants produced in 2018. Safety concerns have subsequently been raised about some of these reagents, with several under regulatory scrutiny.
Organophosphate flame retardants were first developed in the first half of the twentieth century in the form of triphenyl phosphate, tricresyl phosphate and tributyl phosphate, for use in plastics like cellulose nitrate and cellulose acetate. Use in cellulose products is still significant, but the largest area of application is now in plasticized vinyl polymers, principally PVC. The more modern organophosphate flame retardants come in two major types: chlorinated aliphatic compounds and aromatic diphosphates. The chlorinated compounds TDCPP, TCPP and TCEP are all involatile liquids, of which TCPP is perhaps the most important. They are used in polyurethane (insulation, soft furnishings), PVC (wire and cable), phenolic resins and epoxy resins (varnishes, coatings and adhesives). The most important of the diphosphates is bisphenol-A bis(diphenyl phosphate), with related analogues based around resorcinol and hydroquinone. These are used in polymer blends of engineering plastics, such as PPO/HIPS and PC/ABS, which are commonly used to make casings for electrical items like TVs, computers and home appliances.
Organophosphates act multifunctionally to retard fire in both the gas phase and condensed (solid) phase. Halogenated organophosphates are more active overall as their degradation products interfere with combustion directly in the gas phase. All organophosphates have activity in the condensed phase, by forming phosphorus acids which promote char formation, insulating the surface from heat and air.
Organophosphates were originally thought to be safe replacements for brominated flame retardants; however, many are now coming under regulatory pressure due to their apparent health risks. The chlorinated organophosphates may be carcinogenic, while others such as tricresyl phosphate have neurotoxic properties.
Bisphenol-A bis(diphenyl phosphate) can hydrolyse to form Bisphenol-A which is under significant scrutiny as potential endocrine-disrupting chemical. Although their names imply that they are a single chemical, some (but not all) are produced as complex mixtures. For instance, commercial grade TCPP can contain 7 different isomers, while tricresyl phosphate can contain up to 10. This makes their safety profiles harder to ascertain, as material from different producers can have different compositions.
Plasticisers
Plasticisers are added to polymers and plastics to improve their flexibility and processability, giving a softer more easily deformable material. In this way brittle polymers can be made more durable. Organophosphates find use because they are multifunctional; primarily plasticising but also imparting flame resistance. The most frequently plasticised polymers are the vinyls (PVC, PVB, PVA and PVCA), as well as cellulose plastics (cellulose acetate, nitrocellulose and cellulose acetate butyrate). PVC dominates the market, consuming 80-90% of global plasticiser production. PVC can accept large amounts of plasticiser; in extreme cases an item may be 70-80% plasticiser by mass, but loadings of between 0-50% are more common. The main applications of these products are in wire and cable insulation, flexible pipe, automotive interiors, plastic sheeting, vinyl flooring, and toys.
Pure PVC is more than 60% chlorine by mass and difficult to burn, but its flammability increases the more it is plasticised. Organophosphates can act as both plasticisers and flame retardants. The compounds used are typically triaryl or alkyl diaryl phosphates, with cresyl diphenyl phosphate and 2-ethylhexyl diphenyl phosphate being important respective examples. These are both liquids with high boiling points. Organophosphates are more expensive than traditional plasticisers and so tend to be used in combination with other plasticisers and flame retardants.
Hydraulic fluids and lubricant additives
Similar to their use as plasticisers, organophosphates are well suited to use as hydraulic fluids due to their low freezing points and high boiling points, fire resistance, non-corrosiveness, excellent boundary lubrication properties and good general chemical stability. The triaryl phosphates are the most important group, with tricresyl phosphate being the first to be commercialised in the 1940s and trixylyl phosphate following shortly after. Butylphenyl diphenyl phosphate and propylphenyl diphenyl phosphate became available after 1960.
In addition to their use as hydraulic base stock, organophosphates (tricresyl phosphate) and metal organothiophosphates (zinc dithiophosphate) are used as both antiwear additives and extreme-pressure additives in lubricants, where they remain effective even at high temperatures.
Metal extractants
Organophosphates have long been used in the field of extractive metallurgy to liberate valuable rare earths from their ores. Di(2-ethylhexyl)phosphoric acid and tributyl phosphate are used for the liquid–liquid extraction of these elements from the acidic mixtures formed by the leaching of mineral deposits. These compounds are also used in nuclear reprocessing, as part of the PUREX process.
Surfactants
Mono- and di- phosphate esters of alcohols (or alcohol ethoxylates) act as surfactants (detergents). Although they are very common in biology as phospholipids, their industrial use is largely limited to certain niche areas. Compared to the more common sulfur-based anionic surfactants (such as LAS or SLES), phosphate ester surfactants are more expensive and generate less foam. Benefits include high stability at extremes of pH, low skin irritation and a high tolerance to dissolved salts.
In agricultural settings monoesters of fatty alcohol ethoxylates are used, which are able to disperse poorly miscible or insoluble pesticides into water. As they are low-foaming these mixtures can be sprayed effectively onto fields, while a high salt tolerance allows co-spraying of pesticides and inorganic fertilisers.
Low levels of phosphate mono-esters, such as potassium cetyl phosphate, find use in cosmetic creams and lotions. These oil-in-water formulations are primarily based on non-ionic surfactants, with the anionic phosphate acting as an emulsion stabiliser. Phosphate tri-esters such as tributyl phosphate are used as anti-foaming agents in paints and concrete.
Nerve agents
Although the first phosphorus compounds observed to act as cholinesterase inhibitors were organophosphates, the vast majority of nerve agents are instead phosphonates containing a P-C bond. Only a handful of organophosphate nerve agents were developed between the 1930s and 1960s, including diisopropylfluorophosphate, VG and NPF. Between 1971 and 1993 the Soviet Union developed many new potential nerve agents, commonly known as the Novichok agents. Some of these can be considered organophosphates (in a broad sense), being derivatives of fluorophosphoric acid. Examples include A-232, A-234, A-262, C01-A035 and C01-A039. The most notable of these is A-234, which was claimed to be responsible for the poisoning of Sergei and Yulia Skripal in Salisbury (UK) 2018.
In nature
The detection of OPEs in the air as far away as Antarctica, at concentrations around 1 ng/m3, suggests their persistence in air and their potential for long-range transport. OPEs have been measured at high frequency in air and water and are widely distributed across the northern hemisphere. Chlorinated OPEs (TCEP, TCIPP, TDCIPP) were frequently measured at urban sampling sites, and non-halogenated OPEs such as TBOEP at rural sites. In the Laurentian Great Lakes, total OPE concentrations were found to be 2–3 orders of magnitude higher than concentrations of brominated flame retardants measured in similar air. Waters from rivers in Germany, Austria, and Spain have consistently recorded TBOEP and TCIPP at the highest concentrations. From these studies, it is clear that OPE concentrations in both air and water samples are often orders of magnitude higher than those of other flame retardants, and that concentrations are largely dependent on sampling location, with higher concentrations in more urban, polluted locations.
References
Phosphorus(V) compounds
Anticholinesterases
Analytical skill
Analytical skill is the ability to deconstruct information into smaller categories in order to draw conclusions. Analytical skill consists of categories that include logical reasoning, critical thinking, communication, research, data analysis and creativity. Analytical skill is taught in contemporary education with the intention of fostering the appropriate practices for future professions. The professions that adopt analytical skill include educational institutions, public institutions, community organisations and industry.
Richards J. Heuer Jr. explained that analytical thinking is a skill that, like carpentry or driving a car, can be taught, learned, and improved with practice. In the article by Freed, the need for programs within the educational system to help students develop these skills is demonstrated. Workers "will need more than elementary basic skills to maintain the standard of living of their parents. They will have to think for a living, analyse problems and solutions, and work cooperatively in teams".
Logical Reasoning
Logical reasoning is a process consisting of inferences, where premises and hypotheses are formulated to arrive at a probable conclusion. It is a broad term covering three sub-classifications: deductive reasoning, inductive reasoning and abductive reasoning.
Deductive Reasoning
‘Deductive reasoning is a basic form of valid reasoning, commencing with a general statement or hypothesis, then examines the possibilities to reach a specific, logical conclusion’. This scientific method utilises deductions to test hypotheses and theories and to predict whether possible observations are correct.
A logical deductive reasoning sequence can be executed by establishing an assumption, followed by another assumption, and finally drawing an inference. For example, ‘All men are mortal. Harold is a man. Therefore, Harold is mortal.’
For deductive reasoning to be upheld, the premises must be correct, which ensures that the conclusion is logical and true. It is possible for the conclusion of a deductive argument to be entirely inaccurate or incorrect even though the reasoning itself is logical. For example, ‘All bald men are grandfathers. Harold is bald. Therefore, Harold is a grandfather.’ is a valid and logical argument, but its conclusion is not true because the original assumption is incorrect. Deductive reasoning is an analytical skill used in many professions such as management, as the management team delegates tasks for day-to-day business operations.
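As a toy illustration (an assumed example, not from the source), the syllogism above can be encoded as simple set membership, making explicit that the conclusion follows mechanically from the premises:

```python
# Premise 1 (rule): all men are mortal.
# Premise 2 (fact): Harold is a man.
men = {"Harold", "Socrates"}
mortals = set()

def apply_rule():
    # "Every member of `men` is a member of `mortals`."
    mortals.update(men)

apply_rule()
print("Harold" in mortals)  # True: the conclusion follows necessarily from the premises
```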
Inductive Reasoning
Inductive reasoning compiles information and data to establish a general assumption that is suitable to the situation. It commences with an assumption based on reliable data and leads to a generalised conclusion. For example, ‘All the swans I have seen are white. (Premise) Therefore all swans are white. (Conclusion)’. The conclusion may clearly be incorrect, which makes this a weak argument. To strengthen it, the conclusion is made probabilistic, for example, ‘All the swans I have seen are white. (Premise) Therefore most swans are probably white. (Conclusion)’. Inductive reasoning is an analytical skill common in many professions such as the corporate environment, where statistics and data are constantly analysed.
The 6 types of inductive reasoning
Generalised: This manner utilises a premise on a sample set to extract a conclusion about a population.
Statistical: This is a method that utilises statistics based on a large and viable random sample set that is quantifiable to strengthen conclusions and observations.
Bayesian: This form adapts statistical reasoning to account for additional or new data, as shown in the sketch after this list.
Analogical: This is a method that records on the foundations of shared properties between two groups, leading to a conclusion that they are also likely to share further properties.
Predictive: This form of reasoning extrapolates a conclusion about the future based on a current or past sample.
Causal inference: This method of reasoning is formed around a causal link between the premise and the conclusion.
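To make the Bayesian item above concrete, the sketch below applies Bayesian updating to the swan example: a prior belief about the proportion of white swans is revised as new sightings arrive. The prior and the observations are assumptions chosen for illustration only.

```python
# Bayesian updating with a Beta prior over the proportion of white swans.
# Beta(a, b) starts as a weak prior; each sighting updates the counts.
def update(a: float, b: float, sightings: list) -> tuple:
    for is_white in sightings:
        if is_white:
            a += 1   # one more white swan observed
        else:
            b += 1   # one more non-white swan observed
    return a, b

a, b = update(1.0, 1.0, [True] * 20)          # 20 white swans in a row
print(f"mean belief: {a / (a + b):.2f}")       # ~0.95: "most swans are probably white"

a, b = update(a, b, [False])                   # a single black swan appears
print(f"mean belief: {a / (a + b):.2f}")       # ~0.91: the generalisation is weakened, not abandoned
```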
Abductive reasoning
Abductive reasoning commences with a set of hypotheses, which may be supported by insufficient evidence, and leads to the conclusion that most likely explains the problem. It is a form of reasoning in which the reasoner chooses the hypothesis that best fits the given data. For example, when a patient is ill, the doctor forms a hypothesis from the patient's symptoms, or other evidence, that they deem factual and appropriate. The doctor will then go through a list of possible illnesses and attempt to assign the appropriate one. Abductive reasoning is characterised by its lack of completeness in evidence, explanation or both. This form of reasoning can be creative, intuitive and revolutionary due to its instinctive design.
Critical Thinking
Critical thinking is a skill used to interpret and explain the data given. It is the ability to think carefully and rationally to resolve problems, achieved by supporting conclusions without bias, using reliable evidence and sound reasoning, and drawing on appropriate data and information. Critical thinking is an important skill because it underpins contemporary living in areas such as education and professional careers, but it is not restricted to any specific area.
Critical thinking is used to solve problems, assess likelihoods, make decisions and formulate inferences. It requires examining information, reflecting on it, applying appropriate skills, and having confidence in the quality of the information given in order to reach a conclusion or plan. It also includes being willing to change position if better information becomes available. Critical thinkers do not accept assumptions without questioning their reliability, researching further and analysing the results.
Developing Critical Thinking
Critical thinking can be developed by establishing personal beliefs and values. It is important that individuals are able to question authoritative sources: teachers, specialists, textbooks, books, newspapers, television and so on. Questioning these authorities develops critical thinking, as the individual gains the freedom and autonomy to reach their own judgements about reality and contemporary society.
Developing Critical Thinking through Probability Models
Critical thinking can be developed through probability models, in which individuals build a logical, conceptual understanding of mathematics with an emphasis on investigation, problem-solving, mathematical literacy and mathematical discourse. The student actively constructs their own knowledge and understanding, while the teacher acts as a mediator, questioning and challenging the student and assigning investigative tasks, ultimately allowing the student to think more deeply about a range of concepts, ideas and mathematical contexts.
Communication
Communication is the process by which individuals transfer information to one another. It is a complex system in which a listener receives information, interprets and understands it, and then responds or passes it on. As an analytical skill, communication includes speaking with confidence and clarity and staying on the point being communicated. It consists of verbal and non-verbal communication. Communication is an important component of analytical skill because it allows individuals to develop relationships, contribute to group decisions and organisational communication, and influence media and culture.
Verbal Communication
Verbal communication is interaction through words in linguistic form. It consists of oral communication, written communication and sign language. Oral communication is a particularly effective form because the sender and receiver are present at the same time, allowing immediate responses. In this form of communication, the sender uses words, spoken or written, to convey the message to the receiver.
Verbal communication is an essential analytical skill because it supports the development of positive relationships among individuals: it fosters depth of understanding, empathy and versatility, with each party giving the other more attention. It is commonly required in professions such as the health sector, where healthcare workers are expected to possess strong interpersonal skills; verbal communication has been linked to patient satisfaction. An effective strategy for improving verbal communication is debating, as it fosters both communication and critical thinking.
Non-verbal Communication
Non-verbal communication is commonly described as the unspoken dialogue between individuals. It is a significant analytical skill because it helps individuals to distinguish true feelings, opinions and behaviours; people are more likely to believe non-verbal cues than verbal expressions. Non-verbal communication can also cross communication barriers such as race, ethnicity and sexual orientation.
A frequently cited (and often over-generalised) estimate attributes 93% of a message's emotional meaning to non-verbal cues and only 7% to the words themselves. Non-verbal communication is a critical analytical skill because it allows individuals to look beyond the literal content of a message and to analyse another person's perceptions, expressions and social beliefs. Individuals who excel at reading non-verbal communication are better able to analyse how relationships, social beliefs and expectations interconnect.
Communication Theories
A communication theory is an abstract account of how information is transferred between individuals. Many communication theories have been developed to capture the dynamic and evolving nature of how people communicate. Early models were simple: Aristotle's model of communication, for example, consists of a speaker delivering a speech to an audience, producing an effect. It treats communication as a linear process in which no information is relayed back.
Modern theories include Schramm's model, in which multiple individuals each encode, interpret and decode messages, and messages pass back and forth between them. Schramm also added experience as a factor, expressing that each individual's experience influences their ability to interpret a message. Communication theories continue to be developed and adapted to particular organisations or individuals, and it is important to adopt a communication theory suited to the organisation so that it can function as intended. For example, traditional corporate hierarchies commonly adopt a linear model of communication such as Aristotle's.
Research
Research is the use of tools and techniques to deconstruct and solve problems. While researching, it is important to distinguish information that is relevant from excess, irrelevant data. Research involves the collection and analysis of information and data with the intention of establishing new knowledge and/or deriving a new understanding of existing data. Research ability is an analytical skill because it allows individuals to comprehend social implications, and it is valuable because it fosters transferable, employment-related skills. Research is primarily employed in academia and higher education; it is a profession pursued by many graduates, by individuals intending to supervise or teach research students, and by those in pursuit of a PhD.
Research in Academia
In higher education, new research provides the most desired quality of evidence; if it is not available, existing forms of evidence should be used. It is accepted that research provides the strongest form of knowledge, whether as quantitative or qualitative data.
Research students are highly desired by various industries because of the breadth of their skills. They are commonly sought after for their analysis and problem-solving ability, interpersonal and leadership skills, project management and organisation, research and information management, and written and oral communication.
Data Analysis
Data analysis is the systematic process of cleaning, transforming and modelling data, applying statistical or logical techniques to describe and evaluate it. Using data analysis as an analytical skill means being able to examine large volumes of data and identify trends within them. It is critical to be able to look at the data and determine which information is important and should be kept and which is irrelevant and can be discarded. Data analysis includes finding patterns within the information, which allows the analyst to narrow the question and reach a better-supported conclusion. It is a tool for discovering and deciphering useful information for business decision-making, and it is essential for inferring information from data and arriving at a conclusion or decision. Data analysis can be applied to historical data or used to anticipate future outcomes. It is an analytical skill commonly adopted in business, as it allows organisations to become more efficient internally and externally, solve complex problems and innovate.
Text Analysis
Text analysis is the discovery and understanding of valuable information in unstructured or large datasets. It is a method for transforming raw data into business information, allowing strategic business decisions by providing a way to extract and examine data, derive patterns and, finally, interpret the data.
Statistical Analysis
Statistical analysis involves the collection, analysis and presentation of data to decipher trends and patterns. It is common in research, industry and government, where it strengthens the scientific basis of decisions that need to be made. It consists of descriptive analysis and inferential analysis.
Descriptive Analysis
Descriptive analysis provides information about a sample that reflects the population by summarising relevant aspects of the dataset, i.e. uncovering patterns. It reports measures of central tendency and measures of spread, such as the mean, standard deviation, proportion and frequency.
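As a small illustration, the descriptive measures named above can be computed directly with the Python standard library; the sample values are hypothetical and serve only to show the mechanics.

```python
# Descriptive analysis of a hypothetical sample (standard library only).
from statistics import mean, median, stdev
from collections import Counter

sample = [4, 7, 7, 8, 10, 12, 12, 12, 15]

print("mean:", round(mean(sample), 2))                   # central tendency
print("median:", median(sample))                         # central tendency, robust to outliers
print("standard deviation:", round(stdev(sample), 2))    # spread
print("frequencies:", Counter(sample))                   # frequency of each value
print("proportion >= 10:", sum(v >= 10 for v in sample) / len(sample))
```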
Inferential Analysis
Inferential analysis uses a sample drawn from the complete data to, for example, compare differences between treatment groups. Because different samples lead to different estimates, multiple conclusions can be constructed from the same population. Inferential analysis can provide evidence that, with a certain level of confidence, there is a relationship between two variables. It is accepted that the sample will differ from the population, so a degree of uncertainty is also accepted.
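A minimal sketch of the idea, assuming two hypothetical treatment groups: the difference between the sample means is reported together with an approximate confidence interval that expresses the sampling uncertainty mentioned above.

```python
# Inferential analysis on two hypothetical treatment groups (standard library only).
from statistics import mean, stdev
from math import sqrt

control   = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9]
treatment = [13.0, 12.7, 13.4, 12.9, 13.1, 12.6, 13.3, 12.8]

diff = mean(treatment) - mean(control)
# Standard error of the difference between the two sample means.
se = sqrt(stdev(control)**2 / len(control) + stdev(treatment)**2 / len(treatment))

# Approximate 95% confidence interval (normal approximation, +/- 1.96 SE).
low, high = diff - 1.96 * se, diff + 1.96 * se
print(f"estimated difference: {diff:.2f}, 95% CI: ({low:.2f}, {high:.2f})")
```

If the interval excludes zero, the data provide evidence, at roughly the 95% confidence level, of a real difference between the groups, while the width of the interval quantifies the acknowledged uncertainty.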
Diagnostic Analysis
Diagnostic analysis identifies the origin of a problem by finding its cause in the insights produced by statistical analysis. This form of analysis is useful for identifying behavioural patterns in data.
Predictive Analysis
Predictive analysis is an advanced form of analytics that forecasts future activity, behaviour, trends and patterns from new and historical data. Its accuracy depends on how much reliable data is available and on how much can legitimately be inferred from it.
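A minimal sketch of the simplest case, assuming hypothetical monthly sales figures: a least-squares trend line fitted to past observations is extrapolated one step into the future.

```python
# Predictive analysis: extrapolate a linear trend from hypothetical past data.
months = [1, 2, 3, 4, 5, 6]
sales  = [100, 108, 115, 123, 131, 138]

n = len(months)
mean_x = sum(months) / n
mean_y = sum(sales) / n

# Ordinary least-squares slope and intercept.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, sales))
         / sum((x - mean_x) ** 2 for x in months))
intercept = mean_y - slope * mean_x

next_month = 7
forecast = intercept + slope * next_month
print(f"forecast for month {next_month}: {forecast:.1f}")
```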
Prescriptive Analysis
Prescriptive analytics provides firms with optimal recommendations for solving complex decisions. It is used in many industries; in aviation, for example, it is used to optimise schedule selection for airline crews.
Creativity
Creativity is important for solving problems as they arise. Creative thinking works best for problems that can have multiple solutions, or where no single answer applies to every situation and the best response varies from case to case. It includes being able to put the pieces of a problem together and to work out which pieces may be missing; brainstorming with all the pieces and deciding which are important and which can be discarded; and then analysing the pieces judged to be of worth and using them to reach a logical conclusion about how best to solve the problem. There may be several workable answers. Creative thinking is popularly, if loosely, referred to as right-brain thinking. Creativity is an analytical skill because it allows individuals to use innovative methods to solve problems; individuals who adopt it are able to perceive problems from varying perspectives, and the skill is highly transferable among professions.
References
Further references
Problem solving skills
Learning
Intelligence
Scientific theory
A scientific theory is an explanation of an aspect of the natural world and universe that can be (or a fortiori, that has been) repeatedly tested and corroborated in accordance with the scientific method, using accepted protocols of observation, measurement, and evaluation of results. Where possible, theories are tested under controlled conditions in an experiment. In circumstances not amenable to experimental testing, theories are evaluated through principles of abductive reasoning. Established scientific theories have withstood rigorous scrutiny and embody scientific knowledge.
A scientific theory differs from a scientific fact or scientific law in that a theory seeks to explain "why" or "how", whereas a fact is a simple, basic observation and a law is an empirical description of a relationship between facts and/or other laws. For example, Newton's Law of Gravity is a mathematical equation that can be used to predict the attraction between bodies, but it is not a theory to explain how gravity works. Stephen Jay Gould wrote that "...facts and theories are different things, not rungs in a hierarchy of increasing certainty. Facts are the world's data. Theories are structures of ideas that explain and interpret facts."
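For reference, the equation referred to is Newton's law of universal gravitation, which predicts the magnitude of the attractive force between two bodies without explaining why masses attract:

```latex
F \;=\; G\,\frac{m_1 m_2}{r^2}
```

where F is the force, m1 and m2 are the masses, r is the distance between their centres, and G is the gravitational constant.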
The meaning of the term scientific theory (often contracted to theory for brevity) as used in the disciplines of science is significantly different from the common vernacular usage of theory. In everyday speech, theory can imply an explanation that represents an unsubstantiated and speculative guess, whereas in a scientific context it most often refers to an explanation that has already been tested and is widely accepted as valid.
The strength of a scientific theory is related to the diversity of phenomena it can explain and its simplicity. As additional scientific evidence is gathered, a scientific theory may be modified and ultimately rejected if it cannot be made to fit the new findings; in such circumstances, a more accurate theory is then required. Some theories are so well-established that they are unlikely ever to be fundamentally changed (for example, scientific theories such as evolution, heliocentric theory, cell theory, theory of plate tectonics, germ theory of disease, etc.). In certain cases, a scientific theory or scientific law that fails to fit all data can still be useful (due to its simplicity) as an approximation under specific conditions. An example is Newton's laws of motion, which are a highly accurate approximation to special relativity at velocities that are small relative to the speed of light.
Scientific theories are testable and make verifiable predictions. They describe the causes of a particular natural phenomenon and are used to explain and predict aspects of the physical universe or specific areas of inquiry (for example, electricity, chemistry, and astronomy). As with other forms of scientific knowledge, scientific theories are both deductive and inductive, aiming for predictive and explanatory power. Scientists use theories to further scientific knowledge, as well as to facilitate advances in technology or medicine. A scientific hypothesis can never be "proven", because scientists cannot fully confirm that the hypothesis is true. Instead, scientists say that a study "supports" or is consistent with the hypothesis.
Types
Albert Einstein described two different types of scientific theories: "Constructive theories" and "principle theories". Constructive theories are constructive models for phenomena: for example, kinetic theory. Principle theories are empirical generalisations, one such example being Newton's laws of motion.
Characteristics
Essential criteria
For a theory to be accepted within most of academia there is usually one simple criterion: the phenomena the theory describes must be observable and its tests repeatable. This essential criterion exists to prevent fraud and to sustain science itself.
The defining characteristic of all scientific knowledge, including theories, is the ability to make falsifiable or testable predictions. The relevance and specificity of those predictions determine how potentially useful the theory is. A would-be theory that makes no observable predictions is not a scientific theory at all. Predictions not sufficiently specific to be tested are similarly not useful. In both cases, the term "theory" is not applicable.
A body of descriptions of knowledge can be called a theory if it fulfills the following criteria:
It makes falsifiable predictions with consistent accuracy across a broad area of scientific inquiry (such as mechanics).
It is well-supported by many independent strands of evidence, rather than a single foundation.
It is consistent with preexisting experimental results and at least as accurate in its predictions as are any preexisting theories.
These qualities are certainly true of such established theories as special and general relativity, quantum mechanics, plate tectonics, the modern evolutionary synthesis, etc.
Other criteria
In addition, most scientists prefer to work with a theory that meets the following qualities:
It can be subjected to minor adaptations to account for new data that do not fit it perfectly, as they are discovered, thus increasing its predictive capability over time.
It is among the most parsimonious explanations, economical in the use of proposed entities or explanatory steps as per Occam's razor. This is because for each accepted explanation of a phenomenon, there may be an extremely large, perhaps even incomprehensible, number of possible and more complex alternatives, because one can always burden failing explanations with ad hoc hypotheses to prevent them from being falsified; therefore, simpler theories are preferable to more complex ones because they are more testable.
Definitions from scientific organizations
The United States National Academy of Sciences defines scientific theories as follows:
The formal scientific definition of theory is quite different from the everyday meaning of the word. It refers to a comprehensive explanation of some aspect of nature that is supported by a vast body of evidence. Many scientific theories are so well established that no new evidence is likely to alter them substantially. For example, no new evidence will demonstrate that the Earth does not orbit around the Sun (heliocentric theory), or that living things are not made of cells (cell theory), that matter is not composed of atoms, or that the surface of the Earth is not divided into solid plates that have moved over geological timescales (the theory of plate tectonics)...One of the most useful properties of scientific theories is that they can be used to make predictions about natural events or phenomena that have not yet been observed.
From the American Association for the Advancement of Science:
A scientific theory is a well-substantiated explanation of some aspect of the natural world, based on a body of facts that have been repeatedly confirmed through observation and experiment. Such fact-supported theories are not "guesses" but reliable accounts of the real world. The theory of biological evolution is more than "just a theory". It is as factual an explanation of the universe as the atomic theory of matter or the germ theory of disease. Our understanding of gravity is still a work in progress. But the phenomenon of gravity, like evolution, is an accepted fact.
Note that the term theory would not be appropriate for describing untested but intricate hypotheses or even scientific models.
Formation
The scientific method involves the proposal and testing of hypotheses, by deriving predictions from the hypotheses about the results of future experiments, then performing those experiments to see whether the predictions are valid. This provides evidence either for or against the hypothesis. When enough experimental results have been gathered in a particular area of inquiry, scientists may propose an explanatory framework that accounts for as many of these as possible. This explanation is also tested, and if it fulfills the necessary criteria (see above), then the explanation becomes a theory. This can take many years, as it can be difficult or complicated to gather sufficient evidence.
Once all of the criteria have been met, it will be widely accepted by scientists (see scientific consensus) as the best available explanation of at least some phenomena. It will have made predictions of phenomena that previous theories could not explain or could not predict accurately, and it will have many repeated bouts of testing. The strength of the evidence is evaluated by the scientific community, and the most important experiments will have been replicated by multiple independent groups.
Theories do not have to be perfectly accurate to be scientifically useful. For example, the predictions made by classical mechanics are known to be inaccurate in the relativistic realm, but they are almost exactly correct at the comparatively low velocities of common human experience. In chemistry, there are many acid-base theories providing highly divergent explanations of the underlying nature of acidic and basic compounds, but they are very useful for predicting their chemical behavior. Like all knowledge in science, no theory can ever be completely certain, since it is possible that future experiments might conflict with the theory's predictions. However, theories supported by the scientific consensus have the highest level of certainty of any scientific knowledge; for example, that all objects are subject to gravity or that life on Earth evolved from a common ancestor.
Acceptance of a theory does not require that all of its major predictions be tested, if it is already supported by sufficiently strong evidence. For example, certain tests may be unfeasible or technically difficult. As a result, theories may make predictions that have not yet been confirmed or proven incorrect; in this case, the predicted results may be described informally with the term "theoretical". These predictions can be tested at a later time, and if they are incorrect, this may lead to the revision or rejection of the theory. As Feynman puts it: "It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with experiment, it's wrong."
Modification and improvement
If experimental results contrary to a theory's predictions are observed, scientists first evaluate whether the experimental design was sound, and if so they confirm the results by independent replication. A search for potential improvements to the theory then begins. Solutions may require minor or major changes to the theory, or none at all if a satisfactory explanation is found within the theory's existing framework. Over time, as successive modifications build on top of each other, theories consistently improve and greater predictive accuracy is achieved. Since each new version of a theory (or a completely new theory) must have more predictive and explanatory power than the last, scientific knowledge consistently becomes more accurate over time.
If modifications to the theory or other explanations seem to be insufficient to account for the new results, then a new theory may be required. Since scientific knowledge is usually durable, this occurs much less commonly than modification. Furthermore, until such a theory is proposed and accepted, the previous theory will be retained. This is because it is still the best available explanation for many other phenomena, as verified by its predictive power in other contexts. For example, it has been known since 1859 that the observed perihelion precession of Mercury violates Newtonian mechanics, but the theory remained the best explanation available until relativity was supported by sufficient evidence. Also, while new theories may be proposed by a single person or by many, the cycle of modifications eventually incorporates contributions from many different scientists.
After the changes, the accepted theory will explain more phenomena and have greater predictive power (if it did not, the changes would not be adopted); this new explanation will then be open to further replacement or modification. If a theory does not require modification despite repeated tests, this implies that the theory is very accurate. This also means that accepted theories continue to accumulate evidence over time, and the length of time that a theory (or any of its principles) remains accepted often indicates the strength of its supporting evidence.
Unification
In some cases, two or more theories may be replaced by a single theory that explains the previous theories as approximations or special cases, analogous to the way a theory is a unifying explanation for many confirmed hypotheses; this is referred to as unification of theories. For example, electricity and magnetism are now known to be two aspects of the same phenomenon, referred to as electromagnetism.
When the predictions of different theories appear to contradict each other, this is also resolved by either further evidence or unification. For example, physical theories in the 19th century implied that the Sun could not have been burning long enough to allow certain geological changes as well as the evolution of life. This was resolved by the discovery of nuclear fusion, the main energy source of the Sun. Contradictions can also be explained as the result of theories approximating more fundamental (non-contradictory) phenomena. For example, atomic theory is an approximation of quantum mechanics. Current theories describe three separate fundamental phenomena of which all other theories are approximations; the potential unification of these is sometimes called the Theory of Everything.
Example: Relativity
In 1905, Albert Einstein published the principle of special relativity, which soon became a theory. Special relativity predicted the alignment of the Newtonian principle of Galilean invariance, also termed Galilean relativity, with the electromagnetic field. By omitting from special relativity the luminiferous aether, Einstein stated that time dilation and length contraction measured in an object in relative motion is inertial—that is, the object exhibits constant velocity, which is speed with direction, when measured by its observer. He thereby duplicated the Lorentz transformation and the Lorentz contraction that had been hypothesized to resolve experimental riddles and inserted into electrodynamic theory as dynamical consequences of the aether's properties. An elegant theory, special relativity yielded its own consequences, such as the equivalence of mass and energy transforming into one another and the resolution of the paradox that an excitation of the electromagnetic field could be viewed in one reference frame as electricity, but in another as magnetism.
Einstein sought to generalize the invariance principle to all reference frames, whether inertial or accelerating. Rejecting Newtonian gravitation—a central force acting instantly at a distance—Einstein presumed a gravitational field. In 1907, Einstein's equivalence principle implied that a free fall within a uniform gravitational field is equivalent to inertial motion. By extending special relativity's effects into three dimensions, general relativity extended length contraction into space contraction, conceiving of 4D space-time as the gravitational field that alters geometrically and sets all local objects' pathways. Even massless energy exerts gravitational motion on local objects by "curving" the geometrical "surface" of 4D space-time. Yet unless the energy is vast, its relativistic effects of contracting space and slowing time are negligible when merely predicting motion. Although general relativity is embraced as the more explanatory theory via scientific realism, Newton's theory remains successful as merely a predictive theory via instrumentalism. To calculate trajectories, engineers and NASA still use Newton's equations, which are simpler to operate.
Theories and laws
Both scientific laws and scientific theories are produced from the scientific method through the formation and testing of hypotheses, and can predict the behavior of the natural world. Both are also typically well-supported by observations and/or experimental evidence. However, scientific laws are descriptive accounts of how nature will behave under certain conditions. Scientific theories are broader in scope, and give overarching explanations of how nature works and why it exhibits certain characteristics. Theories are supported by evidence from many different sources, and may contain one or several laws.
A common misconception is that scientific theories are rudimentary ideas that will eventually graduate into scientific laws when enough data and evidence have been accumulated. A theory does not change into a scientific law with the accumulation of new or better evidence. A theory will always remain a theory; a law will always remain a law. Both theories and laws could potentially be falsified by countervailing evidence.
Theories and laws are also distinct from hypotheses. Unlike hypotheses, theories and laws may be simply referred to as scientific fact.
However, in science, theories are different from facts even when they are well supported. For example, evolution is both a theory and a fact.
About theories
Theories as axioms
The logical positivists thought of scientific theories as statements in a formal language. First-order logic is an example of a formal language. The logical positivists envisaged a similar scientific language. In addition to scientific theories, the language also included observation sentences ("the sun rises in the east"), definitions, and mathematical statements. The phenomena explained by the theories, if they could not be directly observed by the senses (for example, atoms and radio waves), were treated as theoretical concepts. In this view, theories function as axioms: predicted observations are derived from the theories much like theorems are derived in Euclidean geometry. However, the predictions are then tested against reality to verify the predictions, and the "axioms" can be revised as a direct result.
The phrase "the received view of theories" is used to describe this approach. Terms commonly associated with it are "linguistic" (because theories are components of a language) and "syntactic" (because a language has rules about how symbols can be strung together). Problems in defining this kind of language precisely, e.g., are objects seen in microscopes observed or are they theoretical objects, led to the effective demise of logical positivism in the 1970s.
Theories as models
The semantic view of theories, which identifies scientific theories with models rather than propositions, has replaced the received view as the dominant position in theory formulation in the philosophy of science. A model is a logical framework intended to represent reality (a "model of reality"), similar to the way that a map is a graphical model that represents the territory of a city or country.
In this approach, theories are a specific category of models that fulfill the necessary criteria (see above). One can use language to describe a model; however, the theory is the model (or a collection of similar models), and not the description of the model. A model of the solar system, for example, might consist of abstract objects that represent the sun and the planets. These objects have associated properties, e.g., positions, velocities, and masses. The model parameters, e.g., Newton's Law of Gravitation, determine how the positions and velocities change with time. This model can then be tested to see whether it accurately predicts future observations; astronomers can verify that the positions of the model's objects over time match the actual positions of the planets. For most planets, the Newtonian model's predictions are accurate; for Mercury, it is slightly inaccurate and the model of general relativity must be used instead.
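A minimal Python sketch of such a model follows. It uses round physical constants and approximate Sun-Earth values purely for illustration; the point is only that the model consists of objects with positions, velocities and masses, plus a rule (Newtonian gravity) for how they change in time.

```python
# A toy "theory as model": objects with positions, velocities and masses,
# evolved under Newtonian gravity with a simple Euler time step.
# Values are rounded and the integrator is crude; this is not an ephemeris.
from dataclasses import dataclass

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

@dataclass
class Body:
    mass: float   # kg
    x: float      # position, m
    y: float
    vx: float     # velocity, m/s
    vy: float

sun   = Body(mass=1.99e30, x=0.0, y=0.0, vx=0.0, vy=0.0)
earth = Body(mass=5.97e24, x=1.496e11, y=0.0, vx=0.0, vy=29_780.0)

def step(planet: Body, star: Body, dt: float) -> None:
    """Advance the planet one time step under the star's gravity."""
    dx, dy = star.x - planet.x, star.y - planet.y
    r = (dx * dx + dy * dy) ** 0.5
    a = G * star.mass / r**2          # acceleration toward the star
    planet.vx += a * dx / r * dt
    planet.vy += a * dy / r * dt
    planet.x  += planet.vx * dt
    planet.y  += planet.vy * dt

for _ in range(365):                  # one step per day for about a year
    step(earth, sun, dt=86_400.0)
print(f"position after ~1 year: ({earth.x:.3e}, {earth.y:.3e}) m")
```

Comparing the model's predicted positions with observed ones is the test of the model; for Mercury the small residual discrepancy is precisely what eventually required general relativity.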
The word "semantic" refers to the way that a model represents the real world. The representation (literally, "re-presentation") describes particular aspects of a phenomenon or the manner of interaction among a set of phenomena. For instance, a scale model of a house or of a solar system is clearly not an actual house or an actual solar system; the aspects of an actual house or an actual solar system represented in a scale model are, only in certain limited ways, representative of the actual entity. A scale model of a house is not a house; but to someone who wants to learn about houses, analogous to a scientist who wants to understand reality, a sufficiently detailed scale model may suffice.
Differences between theory and model
Several commentators have stated that the distinguishing characteristic of theories is that they are explanatory as well as descriptive, while models are only descriptive (although still predictive in a more limited sense). Philosopher Stephen Pepper also distinguished between theories and models, and said in 1948 that general models and theories are predicated on a "root" metaphor that constrains how scientists theorize and model a phenomenon and thus arrive at testable hypotheses.
Engineering practice makes a distinction between "mathematical models" and "physical models"; the cost of fabricating a physical model can be minimized by first creating a mathematical model using a computer software package, such as a computer aided design tool. The component parts are each themselves modelled, and the fabrication tolerances are specified. An exploded view drawing is used to lay out the fabrication sequence. Simulation packages for displaying each of the subassemblies allow the parts to be rotated, magnified, in realistic detail. Software packages for creating the bill of materials for construction allows subcontractors to specialize in assembly processes, which spreads the cost of manufacturing machinery among multiple customers. See: Computer-aided engineering, Computer-aided manufacturing, and 3D printing
Assumptions in formulating theories
An assumption (or axiom) is a statement that is accepted without evidence. For example, assumptions can be used as premises in a logical argument. Isaac Asimov described assumptions as follows:
...it is incorrect to speak of an assumption as either true or false, since there is no way of proving it to be either (If there were, it would no longer be an assumption). It is better to consider assumptions as either useful or useless, depending on whether deductions made from them corresponded to reality...Since we must start somewhere, we must have assumptions, but at least let us have as few assumptions as possible.
Certain assumptions are necessary for all empirical claims (e.g. the assumption that reality exists). However, theories do not generally make assumptions in the conventional sense (statements accepted without evidence). While assumptions are often incorporated during the formation of new theories, these are either supported by evidence (such as from previously existing theories) or the evidence is produced in the course of validating the theory. This may be as simple as observing that the theory makes accurate predictions, which is evidence that any assumptions made at the outset are correct or approximately correct under the conditions tested.
Conventional assumptions, without evidence, may be used if the theory is only intended to apply when the assumption is valid (or approximately valid). For example, the special theory of relativity assumes an inertial frame of reference. The theory makes accurate predictions when the assumption is valid, and does not make accurate predictions when the assumption is not valid. Such assumptions are often the point with which older theories are succeeded by new ones (the general theory of relativity works in non-inertial reference frames as well).
The term "assumption" is actually broader than its standard use, etymologically speaking. The Oxford English Dictionary (OED) and online Wiktionary indicate its Latin source as assumere ("accept, to take to oneself, adopt, usurp"), which is a conjunction of ad- ("to, towards, at") and sumere (to take). The root survives, with shifted meanings, in the Italian assumere and Spanish sumir. The first sense of "assume" in the OED is "to take unto (oneself), receive, accept, adopt". The term was originally employed in religious contexts as in "to receive up into heaven", especially "the reception of the Virgin Mary into heaven, with body preserved from corruption", (1297 CE) but it was also simply used to refer to "receive into association" or "adopt into partnership". Moreover, other senses of assumere included (i) "investing oneself with (an attribute)", (ii) "to undertake" (especially in Law), (iii) "to take to oneself in appearance only, to pretend to possess", and (iv) "to suppose a thing to be" (all senses from OED entry on "assume"; the OED entry for "assumption" is almost perfectly symmetrical in senses). Thus, "assumption" connotes other associations than the contemporary standard sense of "that which is assumed or taken for granted; a supposition, postulate" (only the 11th of 12 senses of "assumption", and the 10th of 11 senses of "assume").
Descriptions
From philosophers of science
Karl Popper described the characteristics of a scientific theory as follows:
It is easy to obtain confirmations, or verifications, for nearly every theory—if we look for confirmations.
Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory—an event which would have refuted the theory.
Every "good" scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of "corroborating evidence".)
Some genuinely testable theories, when found to be false, might still be upheld by their admirers—for example by introducing post hoc (after the fact) some auxiliary hypothesis or assumption, or by reinterpreting the theory post hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status, by tampering with evidence. The temptation to tamper can be minimized by first taking the time to write down the testing protocol before embarking on the scientific work.
Popper summarized these statements by saying that the central criterion of the scientific status of a theory is its "falsifiability, or refutability, or testability". Echoing this, Stephen Hawking states, "A theory is a good theory if it satisfies two requirements: It must accurately describe a large class of observations on the basis of a model that contains only a few arbitrary elements, and it must make definite predictions about the results of future observations." He also discusses the "unprovable but falsifiable" nature of theories, which is a necessary consequence of inductive logic, and that "you can disprove a theory by finding even a single observation that disagrees with the predictions of the theory".
Several philosophers and historians of science have, however, argued that Popper's definition of theory as a set of falsifiable statements is wrong because, as Philip Kitcher has pointed out, if one took a strictly Popperian view of "theory", observations of Uranus when first discovered in 1781 would have "falsified" Newton's celestial mechanics. Rather, people suggested that another planet influenced Uranus' orbit—and this prediction was indeed eventually confirmed.
Kitcher agrees with Popper that "There is surely something right in the idea that a science can succeed only if it can fail." He also says that scientific theories include statements that cannot be falsified, and that good theories must also be creative. He insists we view scientific theories as an "elaborate collection of statements", some of which are not falsifiable, while others—those he calls "auxiliary hypotheses", are.
According to Kitcher, good scientific theories must have three features:
Unity: "A science should be unified.... Good theories consist of just one problem-solving strategy, or a small family of problem-solving strategies, that can be applied to a wide range of problems."
Fecundity: "A great scientific theory, like Newton's, opens up new areas of research.... Because a theory presents a new way of looking at the world, it can lead us to ask new questions, and so to embark on new and fruitful lines of inquiry.... Typically, a flourishing science is incomplete. At any time, it raises more questions than it can currently answer. But incompleteness is not vice. On the contrary, incompleteness is the mother of fecundity.... A good theory should be productive; it should raise new questions and presume those questions can be answered without giving up its problem-solving strategies."
Auxiliary hypotheses that are independently testable: "An auxiliary hypothesis ought to be testable independently of the particular problem it is introduced to solve, independently of the theory it is designed to save." (For example, the evidence for the existence of Neptune is independent of the anomalies in Uranus's orbit.)
Like other definitions of theories, including Popper's, Kitcher makes it clear that a theory must include statements that have observational consequences. But, like the observation of irregularities in the orbit of Uranus, falsification is only one possible consequence of observation. The production of new hypotheses is another possible and equally important result.
Analogies and metaphors
The concept of a scientific theory has also been described using analogies and metaphors. For example, the logical empiricist Carl Gustav Hempel likened the structure of a scientific theory to a "complex spatial network:"
Its terms are represented by the knots, while the threads connecting the latter correspond, in part, to the definitions and, in part, to the fundamental and derivative hypotheses included in the theory. The whole system floats, as it were, above the plane of observation and is anchored to it by the rules of interpretation. These might be viewed as strings which are not part of the network but link certain points of the latter with specific places in the plane of observation. By virtue of these interpretive connections, the network can function as a scientific theory: From certain observational data, we may ascend, via an interpretive string, to some point in the theoretical network, thence proceed, via definitions and hypotheses, to other points, from which another interpretive string permits a descent to the plane of observation.
Michael Polanyi made an analogy between a theory and a map:
A theory is something other than myself. It may be set out on paper as a system of rules, and it is the more truly a theory the more completely it can be put down in such terms. Mathematical theory reaches the highest perfection in this respect. But even a geographical map fully embodies in itself a set of strict rules for finding one's way through a region of otherwise uncharted experience. Indeed, all theory may be regarded as a kind of map extended over space and time.
A scientific theory can also be thought of as a book that captures the fundamental information about the world, a book that must be researched, written, and shared. In 1623, Galileo Galilei wrote:
Philosophy [i.e. physics] is written in this grand book—I mean the universe—which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth.
The book metaphor could also be applied in the following passage, by the contemporary philosopher of science Ian Hacking:
I myself prefer an Argentine fantasy. God did not write a Book of Nature of the sort that the old Europeans imagined. He wrote a Borgesian library, each book of which is as brief as possible, yet each book of which is inconsistent with every other. No book is redundant. For every book there is some humanly accessible bit of Nature such that that book, and no other, makes possible the comprehension, prediction and influencing of what is going on...Leibniz said that God chose a world which maximized the variety of phenomena while choosing the simplest laws. Exactly so: but the best way to maximize phenomena and have simplest laws is to have the laws inconsistent with each other, each applying to this or that but none applying to all.
In physics
In physics, the term theory is generally used for a mathematical framework—derived from a small set of basic postulates (usually symmetries—like equality of locations in space or in time, or identity of electrons, etc.)—that is capable of producing experimental predictions for a given category of physical systems. A good example is classical electromagnetism, which encompasses results derived from gauge symmetry (sometimes called gauge invariance) in a form of a few equations called Maxwell's equations. The specific mathematical aspects of classical electromagnetic theory are termed "laws of electromagnetism", reflecting the level of consistent and reproducible evidence that supports them. Within electromagnetic theory generally, there are numerous hypotheses about how electromagnetism applies to specific situations. Many of these hypotheses are already considered to be adequately tested, with new ones always in the making and perhaps untested. An example of the latter might be the radiation reaction force. As of 2009, its effects on the periodic motion of charges are detectable in synchrotrons, but only as averaged effects over time. Some researchers are now considering experiments that could observe these effects at the instantaneous level (i.e. not averaged over time).
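For reference, the equations in question, written in SI differential form, are:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```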
Examples
Note that many fields of inquiry do not have specific named theories, e.g. developmental biology. Scientific knowledge outside a named theory can still have a high level of certainty, depending on the amount of evidence supporting it. Also note that since theories draw evidence from many fields, the categorization is not absolute.
Biology: cell theory, theory of evolution (modern evolutionary synthesis), abiogenesis, germ theory, particulate inheritance theory, dual inheritance theory, Young–Helmholtz theory, opponent process, cohesion-tension theory
Chemistry: collision theory, kinetic theory of gases, Lewis theory, molecular theory, molecular orbital theory, transition state theory, valence bond theory
Physics: atomic theory, Big Bang theory, Dynamo theory, perturbation theory, theory of relativity (successor to classical mechanics), quantum field theory
Earth science: Climate change theory (from climatology), plate tectonics theory (from geology), theories of the origin of the Moon, theories for the Moon illusion
Astronomy: Self-gravitating system, Stellar evolution, solar nebular model, stellar nucleosynthesis
Explanatory notes
References
Further reading
Essay by a British/American meteorologist and NASA astronaut on anthropogenic global warming and "theory".
Epistemology of science
Scientific method
Steric effects
Steric effects arise from the spatial arrangement of atoms. When atoms come close together there is generally a rise in the energy of the molecule. Steric effects are nonbonding interactions that influence the shape (conformation) and reactivity of ions and molecules. Steric effects complement electronic effects, which dictate the shape and reactivity of molecules. Steric repulsive forces between overlapping electron clouds result in structured groupings of molecules stabilized by the way that opposites attract and like charges repel.
Steric hindrance
Steric hindrance is a consequence of steric effects. Steric hindrance is the slowing of chemical reactions due to steric bulk. It is usually manifested in intermolecular reactions, whereas discussion of steric effects often focuses on intramolecular interactions. Steric hindrance is often exploited to control selectivity, such as slowing unwanted side-reactions.
Steric hindrance between adjacent groups can also affect torsional bond angles. Steric hindrance is responsible for the observed shape of rotaxanes and the low rates of racemization of 2,2'-disubstituted biphenyl and binaphthyl derivatives.
Measures of steric properties
Because steric effects have profound impact on properties, the steric properties of substituents have been assessed by numerous methods.
Rate data
Relative rates of chemical reactions provide useful insights into the effects of the steric bulk of substituents. Under standard conditions, methyl bromide solvolyzes 10^7 times faster than does neopentyl bromide. The difference reflects the inhibition of attack on the compound with the sterically bulky (CH3)3C group.
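One common way to put such a rate ratio on an energy scale, assuming transition-state theory, is the relation ΔΔG‡ = RT ln(k_fast/k_slow). The short sketch below applies it to the 10^7 ratio quoted above; the temperature is an assumed room-temperature value.

```python
# Convert a relative rate into a difference in activation free energy,
# assuming transition-state theory: ddG = R * T * ln(k_fast / k_slow).
from math import log

R = 8.314          # gas constant, J mol^-1 K^-1
T = 298.15         # assumed temperature, K
rate_ratio = 1e7   # methyl bromide vs neopentyl bromide solvolysis (from the text)

ddG = R * T * log(rate_ratio)        # J/mol
print(f"activation energy difference ~ {ddG / 1000:.0f} kJ/mol")   # roughly 40 kJ/mol
```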
A-values
A-values provide another measure of the bulk of substituents. A-values are derived from equilibrium measurements of monosubstituted cyclohexanes. The extent that a substituent favors the equatorial position gives a measure of its bulk.
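Quantitatively, and stated here only for orientation using the standard convention, the A-value of a substituent is minus the standard free-energy change of the axial-to-equatorial equilibrium:

```latex
A \;=\; -\Delta G^{\circ}_{\mathrm{ax}\rightarrow\mathrm{eq}} \;=\; RT \ln \frac{[\text{equatorial}]}{[\text{axial}]}
```

A bulkier substituent favors the equatorial position more strongly and therefore has a larger A-value (about 1.7 kcal/mol for a methyl group, for example).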
Ceiling temperatures
Ceiling temperature (Tc) is a measure of the steric properties of the monomers that comprise a polymer. Tc is the temperature at which the rates of polymerization and depolymerization are equal. Sterically hindered monomers give polymers with low Tc values, which are usually not useful.
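In thermodynamic terms the ceiling temperature is commonly written, for a monomer concentration [M], as:

```latex
T_c \;=\; \frac{\Delta H_p}{\Delta S_p^{\circ} + R \ln [M]}
```

Because steric strain in the resulting polymer makes the enthalpy of polymerization ΔHp less negative, bulky monomers have low Tc values, consistent with the statement above.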
Cone angles
The ligand cone angle is a measure of the size of a ligand in coordination chemistry. It is defined as the solid angle formed with the metal at the vertex and the hydrogen atoms at the perimeter of the cone.
Significance and applications
Steric effects are critical to chemistry, biochemistry, and pharmacology. In organic chemistry, steric effects are nearly universal and affect the rates and activation energies of most chemical reactions to varying degrees.
In biochemistry, steric effects are often exploited in naturally occurring molecules such as enzymes, where the catalytic site may be buried within a large protein structure. In pharmacology, steric effects determine how and at what rate a drug will interact with its target bio-molecules.
See also
Collision theory
Intramolecular force
Sterically induced reduction
Reaction rate acceleration as a result of steric hindrance in the Thorpe–Ingold effect
Van der Waals strain, also known as steric strain
References
External links
Stereochemistry
Physical organic chemistry | 0.783761 | 0.992066 | 0.777542 |